hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
542a62b48d45febc53b82e238fe6ed286841ea91 | 454 | py | Python | src/pyuwds3/utils/egocentric_spatial_relations.py | LAAS-HRI/uwds3 | 42390f62ed5701a32710341b01faa10efc448078 | [
"MIT"
] | 2 | 2020-08-19T06:15:14.000Z | 2021-05-23T09:55:18.000Z | src/pyuwds3/utils/egocentric_spatial_relations.py | LAAS-HRI/uwds3 | 42390f62ed5701a32710341b01faa10efc448078 | [
"MIT"
] | 5 | 2021-01-06T09:00:35.000Z | 2021-01-20T13:22:19.000Z | src/pyuwds3/utils/egocentric_spatial_relations.py | LAAS-HRI/uwds3 | 42390f62ed5701a32710341b01faa10efc448078 | [
"MIT"
] | 2 | 2020-11-18T17:34:43.000Z | 2021-05-23T16:14:17.000Z |
import math
from scipy.spatial.distance import euclidean
from ..types.bbox import BoundingBox
def is_left_of(bb1, bb2):
    _, _, bb1_max, _, _ = bb1
    bb2_min, _, _, _, _ = bb2
    return bb1_max < bb2_min


def is_right_of(bb1, bb2):
    bb1_min, _, _, _, _ = bb1
    _, _, bb2_max, _, _ = bb2
    return bb1_min > bb2_max


def is_behind(bb1, bb2):
    _, _, _, _, bb1_depth = bb1
    _, _, _, _, bb2_depth = bb2
    return bb1_depth > bb2_depth
| 19.73913 | 44 | 0.634361 | 63 | 454 | 3.920635 | 0.349206 | 0.145749 | 0.109312 | 0.089069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071006 | 0.255507 | 454 | 22 | 45 | 20.636364 | 0.659763 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
542b4553e4da40bd25e9c35ead38f8985d1d5c31 | 2,883 | py | Python | machine_replacement_action_probs.py | dsbrown1331/broil | 3c06e15c560db3242c0e331a2b16cc578a843606 | [
"MIT"
] | 1 | 2021-03-29T09:53:53.000Z | 2021-03-29T09:53:53.000Z | machine_replacement_action_probs.py | dsbrown1331/broil | 3c06e15c560db3242c0e331a2b16cc578a843606 | [
"MIT"
] | 1 | 2020-11-22T15:05:48.000Z | 2020-11-25T00:10:17.000Z | machine_replacement_action_probs.py | dsbrown1331/broil | 3c06e15c560db3242c0e331a2b16cc578a843606 | [
"MIT"
] | null | null | null | import bayesian_irl
import mdp_worlds
import utils
import mdp
import numpy as np
import scipy
import random
import generate_efficient_frontier
from machine_replacement import generate_posterior_samples
if __name__=="__main__":
    seed = 1234
    np.random.seed(seed)
    scipy.random.seed(seed)
    random.seed(seed)

    num_states = 4
    num_samples = 2000
    gamma = 0.95
    alpha = 0.99
    posterior = generate_posterior_samples(num_samples)
    r_sa = np.mean(posterior, axis=1)
    init_distribution = np.ones(num_states)/num_states #uniform distribution
    mdp_env = mdp.MachineReplacementMDP(num_states, r_sa, gamma, init_distribution)
    print("mean MDP reward", r_sa)
    u_sa = mdp.solve_mdp_lp(mdp_env, debug=True)
    print("mean policy from posterior")
    utils.print_stochastic_policy_action_probs(u_sa, mdp_env)
    print("MAP/Mean policy from posterior")
    utils.print_policy_from_occupancies(u_sa, mdp_env)
    print("rewards")
    print(mdp_env.r_sa)
    print("expected value = ", np.dot(u_sa, r_sa))
    stoch_pi = utils.get_optimal_policy_from_usa(u_sa, mdp_env)
    print("expected return", mdp.get_policy_expected_return(stoch_pi, mdp_env))
    print("values", mdp.get_state_values(u_sa, mdp_env))
    print('q-values', mdp.get_q_values(u_sa, mdp_env))

    #run CVaR optimization, just the robust version
    u_expert = np.zeros(mdp_env.num_actions * mdp_env.num_states)
    posterior_probs = np.ones(num_samples) / num_samples #uniform dist since samples from MCMC

    #generate efficient frontier
    lambda_range = [0.0, 0.3, 0.75, 0.95, 1.0]
    import matplotlib.pyplot as plt
    from matplotlib.pyplot import cm
    bar_width = 0.15
    opacity = 0.9
    color = iter(cm.rainbow(np.linspace(0, 1, 6)))
    cnt = 0
    index = np.arange(num_states)
    for i, lamda in enumerate(lambda_range):
        print("lambda = ", lamda)
        cvar_opt_usa, cvar, exp_ret = mdp.solve_max_cvar_policy(mdp_env, u_expert, posterior, posterior_probs, alpha, False, lamda)
        print('action probs')
        utils.print_stochastic_policy_action_probs(cvar_opt_usa, mdp_env)
        stoch_pi = utils.get_optimal_policy_from_usa(cvar_opt_usa, mdp_env)
        print(stoch_pi[:, 1])
        c = next(color)
        plt.figure(1)
        label = r"$\lambda={}$".format(lamda)
        rects1 = plt.bar(index + cnt * bar_width, stoch_pi[:, 0], bar_width,
                         alpha=opacity, label=label, color=c)
        cnt += 1
    plt.figure(1)
    plt.axis([-1, 5, 0, 1])
    plt.yticks(fontsize=18)
    plt.xticks(index + 2*bar_width, ('1', '2', '3', '4'), fontsize=18)
    plt.legend(loc='best', fontsize=16)
    plt.xlabel('State', fontsize=20)
    plt.ylabel('Pr(Do Nothing $\mid$ State)', fontsize=20)
    plt.tight_layout()
    plt.savefig("./figs/machine_replacement/action_probs_machine_replacement.png")
    plt.show() | 27.990291 | 131 | 0.687825 | 432 | 2,883 | 4.326389 | 0.331019 | 0.044944 | 0.019262 | 0.024077 | 0.165329 | 0.107009 | 0.037453 | 0.037453 | 0 | 0 | 0 | 0.026782 | 0.197017 | 2,883 | 103 | 132 | 27.990291 | 0.780562 | 0.044745 | 0 | 0.029412 | 1 | 0 | 0.097419 | 0.022901 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.161765 | 0 | 0.161765 | 0.220588 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
580d445ca9f82fbb66ddc5c165290139ca728a53 | 2,795 | py | Python | meet/migrations/0001_initial.py | bjones-tech/speedy-meety | a7d557788a544b69fd6ad454d921d9cf02cfa636 | [
"MIT"
] | null | null | null | meet/migrations/0001_initial.py | bjones-tech/speedy-meety | a7d557788a544b69fd6ad454d921d9cf02cfa636 | [
"MIT"
] | null | null | null | meet/migrations/0001_initial.py | bjones-tech/speedy-meety | a7d557788a544b69fd6ad454d921d9cf02cfa636 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-03-17 02:58
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import meet.models
class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Caller',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(default='none', max_length=200)),
                ('session_id', models.CharField(default='none', max_length=200)),
            ],
        ),
        migrations.CreateModel(
            name='Meeting',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('room_name', models.CharField(default='none', max_length=200)),
                ('room_id', models.CharField(default='none', max_length=200)),
                ('voice_id', models.CharField(default=meet.models.get_voice_id, max_length=200)),
                ('voice_used', models.BooleanField(default=False)),
                ('state', models.IntegerField(choices=[(0, 'Staged'), (1, 'In Progress'), (2, 'Completed')], default=0)),
                ('length', models.IntegerField(default=0)),
                ('topic_time_limit', models.IntegerField(default=0)),
                ('queue_next_topic', models.BooleanField(default=False)),
                ('complete_id', models.CharField(default='none', max_length=200)),
            ],
        ),
        migrations.CreateModel(
            name='Topic',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(default='none', max_length=200)),
                ('message_id', models.CharField(default='none', max_length=200)),
                ('time_left', models.IntegerField(default=0)),
                ('recording', models.BooleanField(default=False)),
                ('transcription', models.TextField(blank=True, null=True)),
                ('meeting', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='meet.Meeting')),
            ],
        ),
        migrations.AddField(
            model_name='meeting',
            name='current_topic',
            field=models.OneToOneField(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='meet.Topic'),
        ),
        migrations.AddField(
            model_name='caller',
            name='meeting',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='meet.Meeting'),
        ),
    ]
| 43.671875 | 131 | 0.586047 | 288 | 2,795 | 5.534722 | 0.309028 | 0.040151 | 0.110414 | 0.114178 | 0.461104 | 0.461104 | 0.442284 | 0.442284 | 0.365747 | 0.365747 | 0 | 0.022871 | 0.264759 | 2,795 | 63 | 132 | 44.365079 | 0.752798 | 0.023971 | 0 | 0.381818 | 1 | 0 | 0.112294 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.072727 | 0 | 0.145455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5810e3bb40adfc4d345436082de3af836eeff704 | 14,812 | py | Python | utils/github/query.py | malkfilipp/ClickHouse | 79a206b092cd465731020f331bc41f6951dbe751 | [
"Apache-2.0"
] | 1 | 2019-09-16T11:07:32.000Z | 2019-09-16T11:07:32.000Z | utils/github/query.py | malkfilipp/ClickHouse | 79a206b092cd465731020f331bc41f6951dbe751 | [
"Apache-2.0"
] | null | null | null | utils/github/query.py | malkfilipp/ClickHouse | 79a206b092cd465731020f331bc41f6951dbe751 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import requests
class Query:
    '''Implements queries to the Github API using GraphQL
    '''

    def __init__(self, token, max_page_size=100, min_page_size=5):
        self._token = token
        self._max_page_size = max_page_size
        self._min_page_size = min_page_size
        self.api_costs = {}

    _MEMBERS = '''
    organization(login: "{organization}") {{
        team(slug: "{team}") {{
            members(first: {max_page_size} {next}) {{
                pageInfo {{
                    hasNextPage
                    endCursor
                }}
                nodes {{
                    login
                }}
            }}
        }}
    }}
    '''
    def get_members(self, organization, team):
        '''Get all team members for organization

        Returns:
            logins: a list of members' logins
        '''
        logins = []
        not_end = True
        query = Query._MEMBERS.format(organization=organization,
                                      team=team,
                                      max_page_size=self._max_page_size,
                                      next='')

        while not_end:
            result = self._run(query)['organization']['team']
            if result is None:
                break
            result = result['members']
            not_end = result['pageInfo']['hasNextPage']
            query = Query._MEMBERS.format(organization=organization,
                                          team=team,
                                          max_page_size=self._max_page_size,
                                          next=f'after: "{result["pageInfo"]["endCursor"]}"')

            logins += [node['login'] for node in result['nodes']]

        return logins

    _LABELS = '''
    repository(owner: "yandex" name: "ClickHouse") {{
        pullRequest(number: {number}) {{
            labels(first: {max_page_size} {next}) {{
                pageInfo {{
                    hasNextPage
                    endCursor
                }}
                nodes {{
                    name
                    color
                }}
            }}
        }}
    }}
    '''
    def get_labels(self, pull_request):
        '''Fetchs all labels for given pull-request

        Args:
            pull_request: JSON object returned by `get_pull_requests()`

        Returns:
            labels: a list of JSON nodes with the name and color fields
        '''
        labels = [label for label in pull_request['labels']['nodes']]
        not_end = pull_request['labels']['pageInfo']['hasNextPage']
        query = Query._LABELS.format(number = pull_request['number'],
                                     max_page_size = self._max_page_size,
                                     next=f'after: "{pull_request["labels"]["pageInfo"]["endCursor"]}"')

        while not_end:
            result = self._run(query)['repository']['pullRequest']['labels']
            not_end = result['pageInfo']['hasNextPage']
            query = Query._LABELS.format(number=pull_request['number'],
                                         max_page_size=self._max_page_size,
                                         next=f'after: "{result["pageInfo"]["endCursor"]}"')

            labels += [label for label in result['nodes']]

        return labels

    _TIMELINE = '''
    repository(owner: "yandex" name: "ClickHouse") {{
        pullRequest(number: {number}) {{
            timeline(first: {max_page_size} {next}) {{
                pageInfo {{
                    hasNextPage
                    endCursor
                }}
                nodes {{
                    ... on CrossReferencedEvent {{
                        isCrossRepository
                        source {{
                            ... on PullRequest {{
                                number
                                baseRefName
                                merged
                                labels(first: {max_page_size}) {{
                                    pageInfo {{
                                        hasNextPage
                                        endCursor
                                    }}
                                    nodes {{
                                        name
                                        color
                                    }}
                                }}
                            }}
                        }}
                        target {{
                            ... on PullRequest {{
                                number
                            }}
                        }}
                    }}
                }}
            }}
        }}
    }}
    '''
    def get_timeline(self, pull_request):
        '''Fetchs all cross-reference events from pull-request's timeline

        Args:
            pull_request: JSON object returned by `get_pull_requests()`

        Returns:
            events: a list of JSON nodes for CrossReferenceEvent
        '''
        events = [event for event in pull_request['timeline']['nodes'] if event and event['source']]
        not_end = pull_request['timeline']['pageInfo']['hasNextPage']
        query = Query._TIMELINE.format(number = pull_request['number'],
                                       max_page_size = self._max_page_size,
                                       next=f'after: "{pull_request["timeline"]["pageInfo"]["endCursor"]}"')

        while not_end:
            result = self._run(query)['repository']['pullRequest']['timeline']
            not_end = result['pageInfo']['hasNextPage']
            query = Query._TIMELINE.format(number=pull_request['number'],
                                           max_page_size=self._max_page_size,
                                           next=f'after: "{result["pageInfo"]["endCursor"]}"')

            events += [event for event in result['nodes'] if event and event['source']]

        return events

    _PULL_REQUESTS = '''
    repository(owner: "yandex" name: "ClickHouse") {{
        defaultBranchRef {{
            name
            target {{
                ... on Commit {{
                    history(first: {max_page_size} {next}) {{
                        pageInfo {{
                            hasNextPage
                            endCursor
                        }}
                        nodes {{
                            oid
                            associatedPullRequests(first: {min_page_size}) {{
                                totalCount
                                nodes {{
                                    ... on PullRequest {{
                                        number
                                        author {{
                                            login
                                        }}
                                        mergedBy {{
                                            login
                                        }}
                                        url
                                        baseRefName
                                        baseRepository {{
                                            nameWithOwner
                                        }}
                                        mergeCommit {{
                                            oid
                                        }}
                                        labels(first: {min_page_size}) {{
                                            pageInfo {{
                                                hasNextPage
                                                endCursor
                                            }}
                                            nodes {{
                                                name
                                                color
                                            }}
                                        }}
                                        timeline(first: {min_page_size}) {{
                                            pageInfo {{
                                                hasNextPage
                                                endCursor
                                            }}
                                            nodes {{
                                                ... on CrossReferencedEvent {{
                                                    isCrossRepository
                                                    source {{
                                                        ... on PullRequest {{
                                                            number
                                                            baseRefName
                                                            merged
                                                            labels(first: 0) {{
                                                                nodes {{
                                                                    name
                                                                }}
                                                            }}
                                                        }}
                                                    }}
                                                    target {{
                                                        ... on PullRequest {{
                                                            number
                                                        }}
                                                    }}
                                                }}
                                            }}
                                        }}
                                    }}
                                }}
                            }}
                        }}
                    }}
                }}
            }}
        }}
    }}
    '''
    def get_pull_requests(self, before_commit, login):
        '''Get all merged pull-requests from the HEAD of default branch to the last commit (excluding)

        Args:
            before_commit (string-convertable): commit sha of the last commit (excluding)
            login (string): filter pull-requests by user login

        Returns:
            pull_requests: a list of JSON nodes with pull-requests' details
        '''
        pull_requests = []
        not_end = True
        query = Query._PULL_REQUESTS.format(max_page_size=self._max_page_size,
                                            min_page_size=self._min_page_size,
                                            next='')

        while not_end:
            result = self._run(query)['repository']['defaultBranchRef']
            default_branch_name = result['name']
            result = result['target']['history']
            not_end = result['pageInfo']['hasNextPage']
            query = Query._PULL_REQUESTS.format(max_page_size=self._max_page_size,
                                                min_page_size=self._min_page_size,
                                                next=f'after: "{result["pageInfo"]["endCursor"]}"')

            for commit in result['nodes']:
                if str(commit['oid']) == str(before_commit):
                    not_end = False
                    break

                # TODO: fetch all pull-requests that were merged in a single commit.
                assert commit['associatedPullRequests']['totalCount'] <= self._min_page_size, \
                    f'there are {commit["associatedPullRequests"]["totalCount"]} pull-requests merged in commit {commit["oid"]}'

                for pull_request in commit['associatedPullRequests']['nodes']:
                    if(pull_request['baseRepository']['nameWithOwner'] == 'yandex/ClickHouse' and
                       pull_request['baseRefName'] == default_branch_name and
                       pull_request['mergeCommit']['oid'] == commit['oid'] and
                       (not login or pull_request['author']['login'] == login)):
                        pull_requests.append(pull_request)

        return pull_requests

    _DEFAULT = '''
    repository(owner: "yandex", name: "ClickHouse") {
        defaultBranchRef {
            name
        }
    }
    '''
    def get_default_branch(self):
        '''Get short name of the default branch

        Returns:
            name (string): branch name
        '''
        return self._run(Query._DEFAULT)['repository']['defaultBranchRef']['name']

    def _run(self, query):
        from requests.adapters import HTTPAdapter
        from urllib3.util.retry import Retry

        def requests_retry_session(
            retries=3,
            backoff_factor=0.3,
            status_forcelist=(500, 502, 504),
            session=None,
        ):
            session = session or requests.Session()
            retry = Retry(
                total=retries,
                read=retries,
                connect=retries,
                backoff_factor=backoff_factor,
                status_forcelist=status_forcelist,
            )
            adapter = HTTPAdapter(max_retries=retry)
            session.mount('http://', adapter)
            session.mount('https://', adapter)
            return session

        headers = {'Authorization': f'bearer {self._token}'}
        query = f'''
        {{
            {query}
            rateLimit {{
                cost
                remaining
            }}
        }}
        '''
        request = requests_retry_session().post('https://api.github.com/graphql', json={'query': query}, headers=headers)
        if request.status_code == 200:
            result = request.json()
            if 'errors' in result:
                raise Exception(f'Errors occured: {result["errors"]}')

            import inspect
            caller = inspect.getouterframes(inspect.currentframe(), 2)[1][3]
            if caller not in self.api_costs.keys():
                self.api_costs[caller] = 0
            self.api_costs[caller] += result['data']['rateLimit']['cost']

            return result['data']
        else:
            import json
            raise Exception(f'Query failed with code {request.status_code}:\n{json.dumps(request.json(), indent=4)}')
| 41.96034 | 128 | 0.369498 | 975 | 14,812 | 5.424615 | 0.190769 | 0.05294 | 0.049915 | 0.028361 | 0.458877 | 0.425411 | 0.387408 | 0.35035 | 0.297599 | 0.263755 | 0 | 0.004018 | 0.546314 | 14,812 | 352 | 129 | 42.079545 | 0.783036 | 0.068188 | 0 | 0.585965 | 0 | 0.003509 | 0.588123 | 0.030613 | 0 | 0 | 0 | 0.002841 | 0.003509 | 1 | 0.02807 | false | 0 | 0.017544 | 0 | 0.091228 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
581495ab37cf4df801b88c86040220d6464bbc32 | 4,141 | py | Python | ref_rna.py | entn-at/warp-rna | f6bf19634564068f23f9906373754e04f9b653a3 | [
"MIT"
] | 39 | 2019-08-11T09:06:55.000Z | 2022-03-30T03:24:34.000Z | ref_rna.py | entn-at/warp-rna | f6bf19634564068f23f9906373754e04f9b653a3 | [
"MIT"
] | null | null | null | ref_rna.py | entn-at/warp-rna | f6bf19634564068f23f9906373754e04f9b653a3 | [
"MIT"
] | 6 | 2019-12-11T03:02:48.000Z | 2021-11-29T09:01:51.000Z | """
Python reference implementation of the Recurrent Neural Aligner.
Author: Ivan Sorokin
Based on the papers:
- "Recurrent Neural Aligner: An Encoder-Decoder Neural Network Model for Sequence to Sequence Mapping"
Hasim Sak, et al., 2017
- "Extending Recurrent Neural Aligner for Streaming End-to-End Speech Recognition in Mandarin"
Linhao Dong, et al., 2018
"""
import numpy as np
NEG_INF = -float("inf")
def logsumexp(*args):
    """
    Stable log sum exp.
    """
    if all(a == NEG_INF for a in args):
        return NEG_INF
    a_max = max(args)
    lsp = np.log(sum(np.exp(a - a_max) for a in args))
    return a_max + lsp


def log_softmax(acts, axis):
    """
    Log softmax over the last axis of the 3D array.
    """
    acts = acts - np.max(acts, axis=axis, keepdims=True)
    probs = np.sum(np.exp(acts), axis=axis, keepdims=True)
    log_probs = acts - np.log(probs)
    return log_probs


def forward_pass(log_probs, labels, blank):
    T, U, _ = log_probs.shape
    S = T-U+2
    alphas = np.zeros((S, U))

    for u in range(1, U):
        alphas[0, u] = alphas[0, u-1] + log_probs[u-1, u-1, labels[u-1]]
    for t in range(1, S):
        alphas[t, 0] = alphas[t-1, 0] + log_probs[t-1, 0, blank]

    for t in range(1, S):
        for u in range(1, U):
            skip = alphas[t-1, u] + log_probs[t+u-1, u, blank]
            emit = alphas[t, u-1] + log_probs[t+u-1, u-1, labels[u-1]]
            alphas[t, u] = logsumexp(emit, skip)

    return alphas, alphas[S-1, U-1]


def backward_pass(log_probs, labels, blank):
    T, U, _ = log_probs.shape
    S = T-U+2
    S1 = S-1
    U1 = U-1
    betas = np.zeros((S, U))

    for i in range(1, U):
        u = U1-i
        betas[S1, u] = betas[S1, u+1] + log_probs[T-i, u, labels[u]]
    for i in range(1, S):
        t = S1-i
        betas[t, U1] = betas[t+1, U1] + log_probs[T-i, U1, blank]

    for i in range(1, S):
        t = S1-i
        for j in range(1, U):
            u = U1-j
            skip = betas[t+1, u] + log_probs[T-i-j, u, blank]
            emit = betas[t, u+1] + log_probs[T-i-j, u, labels[u]]
            betas[t, u] = logsumexp(emit, skip)

    return betas, betas[0, 0]


def analytical_gradient(log_probs, alphas, betas, labels, blank):
    T, U, _ = log_probs.shape
    S = T-U+2
    log_like = betas[0, 0]
    grads = np.full(log_probs.shape, NEG_INF)

    for t in range(S-1):
        for u in range(U):
            grads[t+u, u, blank] = alphas[t, u] + betas[t+1, u] + log_probs[t+u, u, blank] - log_like
    for t in range(S):
        for u, l in enumerate(labels):
            grads[t+u, u, l] = alphas[t, u] + betas[t, u+1] + log_probs[t+u, u, l] - log_like

    return -np.exp(grads)


def numerical_gradient(log_probs, labels, neg_loglike, blank):
    epsilon = 1e-5
    T, U, V = log_probs.shape
    grads = np.zeros_like(log_probs)
    for t in range(T):
        for u in range(U):
            for v in range(V):
                log_probs[t, u, v] += epsilon
                alphas, ll_forward = forward_pass(log_probs, labels, blank)
                grads[t, u, v] = (-ll_forward - neg_loglike) / epsilon
                log_probs[t, u, v] -= epsilon
    return grads


def test():
    np.random.seed(0)

    blank = 0
    vocab_size = 4
    input_len = 5
    output_len = 3

    inputs = np.random.rand(input_len, output_len + 1, vocab_size)
    labels = np.random.randint(1, vocab_size, output_len)

    log_probs = log_softmax(inputs, axis=2)

    alphas, ll_forward = forward_pass(log_probs, labels, blank)
    betas, ll_backward = backward_pass(log_probs, labels, blank)
    assert np.allclose(ll_forward, ll_backward, atol=1e-12, rtol=1e-12), \
        "Log-likelihood from forward and backward pass mismatch."

    neg_loglike = -ll_forward
    analytical_grads = analytical_gradient(log_probs, alphas, betas, labels, blank)
    numerical_grads = numerical_gradient(log_probs, labels, neg_loglike, blank)
    assert np.allclose(analytical_grads, numerical_grads, atol=1e-6, rtol=1e-6), \
        "Analytical and numerical computation of gradient mismatch."


if __name__ == "__main__":
    test()
| 26.544872 | 103 | 0.59744 | 682 | 4,141 | 3.501466 | 0.190616 | 0.103853 | 0.041457 | 0.025126 | 0.389866 | 0.321608 | 0.223618 | 0.173786 | 0.099665 | 0.047739 | 0 | 0.027429 | 0.269259 | 4,141 | 155 | 104 | 26.716129 | 0.761732 | 0.105047 | 0 | 0.197802 | 0 | 0 | 0.033815 | 0 | 0 | 0 | 0 | 0 | 0.021978 | 1 | 0.076923 | false | 0.065934 | 0.010989 | 0 | 0.164835 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
581774fbaaecfebcc97c105cd9ba5717bc57c3de | 5,396 | py | Python | SONOS/sonos-fadein-alarm.py | tksunw/IoT | 2148c49e9a90822400f195be7b1de3f8e8b8ba2a | [
"MIT"
] | 1 | 2018-01-30T23:30:27.000Z | 2018-01-30T23:30:27.000Z | SONOS/sonos-fadein-alarm.py | tksunw/IoT | 2148c49e9a90822400f195be7b1de3f8e8b8ba2a | [
"MIT"
] | 1 | 2018-02-14T19:58:56.000Z | 2018-02-14T19:58:56.000Z | SONOS/sonos-fadein-alarm.py | tksunw/IoT | 2148c49e9a90822400f195be7b1de3f8e8b8ba2a | [
"MIT"
] | 2 | 2018-02-13T18:52:09.000Z | 2021-09-29T14:27:49.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
'''
sonos-fadein-alarm.py - a gentle alarm using Sonos Favorites.
This module allows a user to choose a SONOS favorite channel to
play for a gentle alarm. Select the maximum desired volume, the
number of minutes over which to ramp volume from 0 to the chosen
maxium, and choose a favorite to use (by title), and the script
will do the rest.
2017-01-21 my new alarm clock.
2017-09-15 added ability to group a second speaker to the main speaker
also aded the ability to specify 'all' to group all
available speakers to the main speaker.
'''
import argparse
import datetime
import time
import os.path
import soco
# Set some default values. These are mine. The channel is listed
# by name, and comes from the Sonos players 'favorites'. Volume
# on the player(s) specified will ramp up from 0 to MAXVOL over
# the number of minutes specified. For me, I like a 30 minute
# ramp from 0 to 12. So the volume will increase by 1 every 2.5
# minutes.
# Set _WEEKEND days to skip certain days of the week, if you want
# to skip your days off work.
_SPEAKER = 'master bedroom'
_CHANNEL = 'Everybody Talks Radio'
_MINUTES = 30
_MAXVOL = 12
_WEEKEND = ('Saturday', 'Sunday')
def get_sonos_favorites(from_speaker):
    ''' get_sonos_favorites: gets the saved "favorites" from a Sonos speaker.

    Args:
        from_speaker (soco.core.Soco object): the speaker to pull favorites from.

    Returns:
        favs (list): a list of Sonos Favorites (title, meta, uri)
    '''
    favs = from_speaker.get_sonos_favorites()['favorites']
    return favs


def main():
    ''' main function:

    Args:
        None

    Returns:
        None

    Process command line arguments, and turn a Sonos speaker into an alarm
    clock, with the flexibility to ramp the volume slowly over a defined
    time period, to a "max vol" limit.
    '''
    parser = argparse.ArgumentParser(description='Sonos/Favorites ramping alarm.')
    parser.add_argument('-S', '--speaker', type=str,
                        help='The Sonos speaker to use for the alarm',
                        default=_SPEAKER)
    parser.add_argument('-s', '--slave', type=str,
                        help='The Sonos speaker(s) to join to a group for the alarm. Use the word "all" to join all available players.')
    parser.add_argument('-c', '--channel', type=str,
                        help='The Sonos Favorite Channel to use for the alarm',
                        default=_CHANNEL)
    parser.add_argument('-m', '--minutes', type=int,
                        help='The number of minutes the alarm will ramp up over',
                        default=_MINUTES)
    parser.add_argument('-v', '--volume', type=int,
                        help='Set the maximum volume for the alarm',
                        default=_MAXVOL)
    parser.add_argument('-p', '--pause',
                        help='Pause a speaker that is playing.',
                        action='store_true')
    parser.epilog = "The channel you select must be a Sonos Favorite. Because\n"
    parser.epilog += "I'm lazy and didn't feel like figuring out SoCo to get\n"
    parser.epilog += "it working directly with Pandora, which SoCo doesn't seem\n"
    parser.epilog += "to work with yet."

    args = parser.parse_args()

    speakers = soco.discover()
    player = [x for x in speakers if x.player_name.lower() == args.speaker.lower()][0]
    if args.slave:
        if args.slave.lower() == 'all':
            [x.join(player) for x in speakers if x.player_name.lower() != player.player_name.lower()]
        else:
            slave = [x for x in speakers if x.player_name.lower() == args.slave.lower()][0]
            slave.join(player)

    if args.pause:
        ''' this will stop the indicated sonos speaker. even if the alarm is
        still running.
        '''
        player.stop()
    else:
        favorites = get_sonos_favorites(player)
        for favorite in favorites:
            if args.channel.lower() in favorite['title'].lower():
                my_choice = favorite
                break

        print "Playing {} on {}".format(my_choice['title'], player.player_name)
        player.play_uri(uri=my_choice['uri'], meta=my_choice['meta'], start=True)

        if args.minutes == 0:
            player.volume = args.volume
        else:
            player.volume = 0
            seconds = args.minutes * 60
            ramp_interval = seconds / args.volume
            for _ in xrange(args.volume):
                player.volume += 1
                time.sleep(ramp_interval)


if __name__ == "__main__":
    today = datetime.datetime.today().strftime('%A')
    date = datetime.datetime.today().strftime('%Y-%m-%d')
    holidays = set(line.strip() for line in open('holidays.txt'))
    if today in _WEEKEND:
        print today, 'is a scheduled weekend day. Not running.'
    elif date in holidays:
        print date, 'is a scheduled holiday. Not running.'
    elif os.path.isfile('/tmp/holiday'):
        ''' /tmp/holiday allows us to mark when we don't want the alarm to run
        tomorrow. Especially when we're using cron. Just touch the file.
        '''
        print "Today is marked as a holiday via /tmp/holiday, not running the alarm"
    else:
        main()
else:
    print "This file is not intended to be included by other scripts."

# --- simfin/revenue/personal_taxes.py (CREEi-models/simfin, MIT) ---
from simfin.tools import account


class personal_taxes(account):
    '''
    Class integrating personal income taxes.
    '''

    def set_align(self, pop, eco):
        earnings = pop.multiply(eco['emp'] * eco['earn_c'] + eco['taxinc'], fill_value=0.0)
        value = earnings.multiply(eco['personal_taxes'], fill_value=0.0).sum()
        self.align = self.value / value
        return

    def grow(self, macro, pop, eco, others):
        earnings = pop.multiply(eco['emp'] * eco['earn_c'] + eco['taxinc'], fill_value=0.0)
        self.value = earnings.multiply(eco['personal_taxes'], fill_value=0.0).sum() * self.align
        return

# --- explain.py (jcsalterego/gh-contest, BSD-3-Clause) ---
#!/usr/bin/env python
from pprint import pprint
from matchmaker.database import *
import sys


def main(argv):
    if len(argv) == 1:
        return
    line = argv[1]
    if line[0] in '+-':
        line = line[1:]
    user, repos = line.split(":")
    user = int(user)
    repos = [int(r) for r in repos.split(",")]
    print("Loading database...")
    db = Database("data")
    print("original watchlist")
    watching = sorted(db.u_watching[user])
    for r in watching:
        print("%6d" % r, end=' ')
        if r in db.r_info:
            print("%18s - %20s - %10s"
                  % tuple([x[:20] for x in db.r_info[r]]))
        else:
            print("")
    print("")
    print("new additions")
    watching = sorted(repos)
    for r in watching:
        print("%6d" % r, end=' ')
        if r in db.r_info:
            print("%18s - %20s - %10s"
                  % tuple([x[:20] for x in db.r_info[r]]))
        else:
            print("")


if __name__ == '__main__':
    sys.exit(main(sys.argv))

# --- createbag.py (axfelix/moveit, Unlicense) ---
"""
GUI tool to create a Bag from a filesystem folder.
"""
import sys
import os
import shutil
import bagit
import platform
import random
import string
import re
from time import strftime
import subprocess
from paramiko import SSHClient
from paramiko import AutoAddPolicy
from paramiko import AuthenticationException
from scp import SCPClient
from distutils.dir_util import copy_tree
import zipfile
import hashlib
import tempfile
from zipfile import ZipFile

pyversion = platform.python_version_tuple()[0]
if pyversion == "2":
    from urllib import urlencode
    import urllib2
else:
    from urllib.parse import urlencode
    import urllib.request as urllib2

# These are toggled at build time. TODO: switch to an argument parser.
# Toggle this if depositing to an Active Directory server.
internalDepositor = 0
# Toggle this if depositing to SFU Library.
radar = 0
# Toggle this if bypassing the BagIt step.
nobag = 0
# Toggle this if bypassing the transfer and only creating a Bag on the desktop.
ziponly = 1

bagit_checksum_algorithms = ['md5']

confirmation_message_win = "The transfer package will be created and placed on your\n desktop after this; large packages may take a moment.\n\nAre all the transfer details correct?\n\n"
#confirmation_message_mac = "The transfer package will be created and placed on your desktop after this; large packages may take a moment.\n\nAre all the transfer details correct?\n\n"
confirmation_message_mac = "The transfer package will be created and placed on your desktop after this; large packages may take a moment.\n\n"
session_message = "Session Number"
session_message_final_win = "The transfer package will be created and placed on your\n desktop after this; large packages may take a moment.\n\nSession Number"
session_message_final_mac = "The transfer package will be created and placed on your desktop after this; large packages may take a moment.\n\nSession Number"
transfer_message = "Transfer Number"

if internalDepositor == 0:
    username_message = "Username"
    password_message = "Password"
else:
    username_message = "SFU Computing ID"
    password_message = "SFU Computing password"

close_session_message = "Is this the final session for this transfer?\nThe transfer will begin in the background after this \nand let you know when it is complete."
close_session_osx_title = "Is this the final session for this transfer?"
close_session_osx_informative = "The transfer will begin in the background and let you know when it is complete."

if radar == 0:
    sfu_success_message = "Files have been successfully transferred to SFU Archives. \nAn archivist will be in contact with you if further attention is needed."
    bag_success_message = "Files have been successfully packaged and placed in a new folder on your desktop for transfer."
else:
    sfu_success_message = "Files have been successfully transferred to SFU Library. \nA librarian will be in contact with you if further attention is needed."
    password_message = "Please input your SFU Computing password. \nTransfer will commence after clicking OK and you will be notified when it is complete."

sfu_failure_message = "Transfer did not complete successfully. \nPlease contact moveit@sfu.ca for help."
if platform.system() != 'Darwin' and platform.system() != 'Windows':
    # The Linux/Gtk config has been removed for now.
    from gi.repository import Gtk
elif platform.system() == 'Windows':
    from PyQt4 import QtGui, QtCore
elif platform.system() == 'Darwin':
    # Set up CocoaDialog for message popups on OS X.
    CD_PATH = os.path.join("~/.createbag/", "CocoaDialog.app/Contents/MacOS/CocoaDialog")

    def cocoaPopup(boxtype, title, texttype, message, button, buttontext):
        template = CD_PATH + " %s --title '%s' '%s' '%s' '%s' '%s'"
        cocoa_process = subprocess.Popen(template % (boxtype, title, texttype, message, button, buttontext),
                                         shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                         universal_newlines=False)
        cocoa_output = cocoa_process.communicate()
        cocoa_result = cocoa_output[0].splitlines()
        return cocoa_result

    def cocoaError():
        if __name__ == "__main__":
            popup = cocoaPopup("msgbox", "Error", "--text", "Sorry, you can't create a bag here -- you may want to change the config file so that bags are always created in a different output directory, rather than in situ.", "--button1", "OK")
            if popup == "1":
                sys.exit()

    def cocoaSuccess(bag_dir):
        if __name__ == "__main__":
            popup = cocoaPopup("msgbox", "Success!", "--text", "Bag created at %s" % bag_dir, "--button1", "OK")

    def cocoaTransferSuccess(success_type):
        if __name__ == "__main__":
            popup = cocoaPopup("msgbox", "SFU MoveIt", "--informative-text", success_type, "--button1", "OK")

    def cocoaTransferError(failure_message=sfu_failure_message):
        if __name__ == "__main__":
            popup = cocoaPopup("msgbox", "SFU MoveIt", "--informative-text", failure_message, "--button1", "OK")
            if popup == "1":
                sys.exit()

    def cocoaSessionNo():
        if __name__ == "__main__":
            popup = cocoaPopup("standard-inputbox", "Session Number", "--informative-text", session_message, "", "")
            if popup[0] == "2":
                sys.exit()
            return popup[1]

    def cocoaTransferNo():
        if __name__ == "__main__":
            popup = cocoaPopup("standard-inputbox", "Transfer Number", "--informative-text", transfer_message, "", "")
            if popup[0] == "2":
                sys.exit()
            return popup[1]

    def cocoaUsername():
        if __name__ == "__main__":
            popup = cocoaPopup("standard-inputbox", "Username", "--informative-text", username_message, "", "")
            if popup[0] == "2":
                sys.exit()
            return popup[1]

    def cocoaPassword():
        if __name__ == "__main__":
            popup = cocoaPopup("secure-standard-inputbox", "Password", "--informative-text", password_message, "", "")
            if popup[0] == "2":
                sys.exit()
            return popup[1]

    # Dummied temporarily because of issues with CocoaDialog under High Sierra.
    def cocoaConfirmation(confirmation_mac):
        if __name__ == "__main__":
            #popup = cocoaPopup("yesno-msgbox", "SFU MoveIt", "--text", "Confirm Transfer", "--informative-text", confirmation_mac)
            #if popup[0] == "3" or popup[0] == "2":
            #    sys.exit()
            popup = cocoaPopup("msgbox", "SFU MoveIt", "--informative-text", confirmation_mac, "--button1", "OK")
            if popup == "1":
                sys.exit()
            return

    def cocoaCloseSession():
        if __name__ == "__main__":
            popup = cocoaPopup("yesno-msgbox", "SFU MoveIt", "--text", close_session_osx_title, "--informative-text", close_session_osx_informative)
            if popup[0] == "3":
                sys.exit()
            # "No" will equal 2 rather than 0 in Cocoa, but "yes" still equals 1.
            return popup[0]

def make_bag(chosen_folder):
    if nobag == 0:
        bag_dir_parent = tempfile.mkdtemp()
        if os.path.isdir(bag_dir_parent):
            shutil.rmtree(bag_dir_parent)
        bag_dir = os.path.join(bag_dir_parent, 'bag')
        os.makedirs(bag_dir)
        copy_tree(chosen_folder, bag_dir)
        # Create the Bag.
        try:
            bag = bagit.make_bag(bag_dir, None, 1, bagit_checksum_algorithms)
        except (bagit.BagError, Exception) as e:
            if platform.system() == 'Darwin':
                cocoaError()
            elif platform.system() == 'Windows':
                QtChooserWindow.qt_error(ex)
            return
        return bag_dir_parent
    else:
        return chosen_folder


def transfer_manifest(bag_dir, sessionno, transferno, archivesUsername, checksum, metafilename, filelist):
    current_time = strftime("%Y-%m-%d %H:%M:%S")
    transfer_metadata = ("Transfer Number: " + transferno + "-" + sessionno + "\nUser: " + archivesUsername
                         + "\nChecksum: " + checksum + "\nTime Received: " + current_time + "\n" + filelist)
    with open(metafilename, 'w') as transfer_metafile:
        transfer_metafile.write(transfer_metadata)


def generate_password():
    length = 13
    chars = string.ascii_letters + string.digits + '!@#$%^&*()'
    # Seed the generator from the OS entropy pool. (The original assigned
    # "random.seed = (...)", which shadowed the function instead of calling it.)
    random.seed(os.urandom(1024))
    passwordString = ''.join(random.choice(chars) for i in range(length))
    return passwordString


def generate_file_md5(zipname, blocksize=2**20):
    m = hashlib.md5()
    with open(zipname, "rb") as f:
        while True:
            buf = f.read(blocksize)
            if not buf:
                break
            m.update(buf)
    return m.hexdigest()
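As a quick sanity check on the chunked-read logic above (not part of the original script; the file path below is illustrative), the digest computed block-by-block should match hashlib's one-shot digest, even when the file spans more than one block:

```python
import hashlib
import os
import tempfile


def generate_file_md5(zipname, blocksize=2**20):
    # Same chunked-read logic as the script: hash the file one block at a time.
    m = hashlib.md5()
    with open(zipname, "rb") as f:
        while True:
            buf = f.read(blocksize)
            if not buf:
                break
            m.update(buf)
    return m.hexdigest()


# Write just over one block so the read loop runs more than once.
payload = b"x" * (2**20 + 17)
path = os.path.join(tempfile.mkdtemp(), "sample.bin")
with open(path, "wb") as f:
    f.write(payload)

assert generate_file_md5(path) == hashlib.md5(payload).hexdigest()
```

Reading in fixed-size blocks keeps memory use constant regardless of the size of the zip being checksummed.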

def check_zip_and_send(bag_dir_parent, sessionno, transferno, archivesUsername, archivesPassword, close_session, parent_path):
    if nobag == 0:
        bag_dir = os.path.join(str(bag_dir_parent), 'bag')
        numbered_bag_dir = os.path.join(str(bag_dir_parent), (transferno + "-" + sessionno))
        metafilename = numbered_bag_dir + "-meta.txt"
        zipname = shutil.make_archive(numbered_bag_dir, 'zip', bag_dir)
        checksum = generate_file_md5(zipname)
        with open(os.path.join(bag_dir, 'manifest-md5.txt'), 'r') as manifestmd5:
            bagit_manifest_txt = manifestmd5.read()
        filelist = re.sub(r"\r?\n\S*?\s+data", ("\n" + parent_path), bagit_manifest_txt)
        filelist = filelist.split(' ', 1)[1]
        passwordString = generate_password()
        # Passwording uploaded files is disabled for now.
        #with ZipFile(zipname, 'a') as transferZip:
        #    transferZip.setpassword(passwordString)
        shutil.rmtree(bag_dir)
        # Check the transfer number blacklist and post back if OK.
        get_req = urllib2.Request("http://arbutus.archives.sfu.ca:8008/blacklist")
        try:
            get_response = urllib2.urlopen(get_req, timeout=2)
            blacklist = get_response.read()
            blacklist_entries = blacklist.split()
            if transferno in blacklist_entries:
                if platform.system() == 'Darwin':
                    cocoaTransferError()
                elif platform.system() == 'Windows':
                    QtChooserWindow.qt_transfer_failure(ex)
                return
        except:
            pass
        values = {'transfer': transferno, 'session': sessionno, 'username': archivesUsername, 'checksum': checksum}
        postdata = urlencode(values)
        post_req = urllib2.Request("http://arbutus.archives.sfu.ca:8008/blacklist", postdata)
    else:
        filelist = ""
    transfer_manifest(bag_dir, sessionno, transferno, archivesUsername, checksum, metafilename, filelist)
    if ziponly == 1:
        desktopPath = os.path.expanduser("~/Desktop/")
        outputPath = desktopPath + os.path.splitext(os.path.basename(zipname))[0]
        os.mkdir(outputPath)
        shutil.move(zipname, (outputPath + "/" + os.path.basename(zipname)))
        shutil.move(metafilename, (outputPath + "/" + os.path.basename(metafilename)))
        return "bagged"
    try:
        ssh = SSHClient()
        ssh.set_missing_host_key_policy(AutoAddPolicy())
        if internalDepositor == 0:
            ssh.connect("142.58.136.69", username=archivesUsername, password=archivesPassword, look_for_keys=False)
            scp = SCPClient(ssh.get_transport())
            remote_path = '~/deposit_here/' + transferno + "-" + sessionno
            scp.put(bag_dir_parent, remote_path, recursive=True)
            if close_session == 1:
                try:
                    urllib2.urlopen(post_req, timeout=2)
                except:
                    pass
        elif radar == 1:
            ssh.connect("researchdata.sfu.ca", username=archivesUsername, password=archivesPassword, look_for_keys=False)
            scp = SCPClient(ssh.get_transport())
            remote_zip_path = '~/.pydiodata/' + os.path.basename(os.path.normpath(bag_dir))
            try:
                scp.put(os.path.normpath(bag_dir), remote_zip_path, recursive=True)
            except:
                ssh.exec_command('mkdir .pydiodata')
                scp.put(os.path.normpath(bag_dir), remote_zip_path, recursive=True)
        else:
            ssh.connect("pine.archives.sfu.ca", username=archivesUsername, password=archivesPassword, look_for_keys=False)
            scp = SCPClient(ssh.get_transport())
            remote_path = '~/' + transferno + "-" + sessionno
            scp.put(bag_dir_parent, remote_path, recursive=True)
            if close_session == 1:
                try:
                    urllib2.urlopen(post_req, timeout=2)
                except:
                    pass
    except AuthenticationException:
        failure_message = "Transfer did not complete successfully. \nUsername or password incorrect."
        if platform.system() == 'Darwin':
            cocoaTransferError(failure_message)
        elif platform.system() == 'Windows':
            QtChooserWindow.qt_transfer_failure(ex, failure_message)
        return
    except:
        if platform.system() == 'Darwin':
            cocoaTransferError()
        elif platform.system() == 'Windows':
            QtChooserWindow.qt_transfer_failure(ex)
        return
    if nobag == 0:
        os.remove(zipname)
        os.remove(metafilename)
    return remote_path

# Windows/Qt-specific code (can also work on Linux, but Gtk is nicer there).
if platform.system() == 'Windows':

    class QtChooserWindow(QtGui.QDialog):

        def __init__(self, parent=None):
            super(QtChooserWindow, self).__init__(parent)
            if parent is None:
                self.initUI()

        def initUI(self):
            choose_folder_button = QtGui.QPushButton("Choose a folder to transfer", self)
            choose_folder_button.clicked.connect(self.showDialog)
            choose_folder_button.resize(choose_folder_button.sizeHint())
            choose_folder_button.move(20, 30)
            quit_button = QtGui.QPushButton("Quit", self)
            quit_button.clicked.connect(QtCore.QCoreApplication.instance().quit)
            quit_button.resize(quit_button.sizeHint())
            quit_button.move(250, 30)
            self.resize(345, 80)
            self.center()
            self.setWindowTitle('SFU MoveIt')
            self.show()

        def center(self):
            qr = self.frameGeometry()
            cp = QtGui.QDesktopWidget().availableGeometry().center()
            qr.moveCenter(cp)
            self.move(qr.topLeft())

        def showDialog(self):
            fname = QtGui.QFileDialog.getExistingDirectory(self, 'SFU MoveIt - Choose a folder to transfer', '/home')
            parent_path = os.path.basename(os.path.normpath(str(fname)))
            bag_dir = make_bag(str(fname))
            if bag_dir:
                archivesUsername = self.qt_username(bag_dir)
                if archivesUsername == "":
                    sys.exit()
                if ziponly == 0:
                    archivesPassword = self.qt_password(bag_dir)
                else:
                    archivesPassword = ""
                if radar == 0:
                    transferno = self.qt_transfer(bag_dir)
                    if transferno == "":
                        sys.exit()
                    sessionno = self.qt_session(bag_dir)
                    if sessionno == "":
                        sys.exit()
                    confirmation = self.qt_review(bag_dir, archivesUsername, sessionno, transferno)
                    if ziponly == 0:
                        close_session = self.qt_close_session()
                    else:
                        close_session = 0
                else:
                    sessionno = 0
                    transferno = 0
                    close_session = 0
                payload = check_zip_and_send(bag_dir, str(sessionno), str(transferno), str(archivesUsername),
                                             str(archivesPassword), close_session, parent_path)
                if payload:
                    if payload == "bagged":
                        self.qt_transfer_success(bag_success_message)
                    else:
                        self.qt_transfer_success(sfu_success_message)

        def qt_username(self, bag_dir):
            archivesUsername, ok = QtGui.QInputDialog.getText(self, "Username", username_message)
            return archivesUsername

        def qt_password(self, bag_dir):
            archivesPassword, ok = QtGui.QInputDialog.getText(self, "Password", password_message, 2)
            return archivesPassword

        def qt_session(self, bag_dir):
            sessionno, ok = QtGui.QInputDialog.getText(self, "Session Number", session_message)
            return sessionno

        def qt_transfer(self, bag_dir):
            transferno, ok = QtGui.QInputDialog.getText(self, "Transfer Number", transfer_message)
            return transferno

        def qt_review(self, bag_dir, archivesUsername, sessionno, transferno):
            # Parameter order matches the call in showDialog (session before transfer),
            # so the confirmation reads "transfer-session" rather than the reverse.
            confirmation_string = confirmation_message_win + "\nUsername: " + archivesUsername + "\nTransfer: " + transferno + "-" + sessionno
            review_window = QtGui.QMessageBox.question(self, 'SFU MoveIt', confirmation_string,
                                                       QtGui.QMessageBox.Yes | QtGui.QMessageBox.No,
                                                       QtGui.QMessageBox.No)
            if review_window == QtGui.QMessageBox.Yes:
                return
            else:
                sys.exit()

        def qt_close_session(self):
            close_session_window = QtGui.QMessageBox.question(self, 'SFU MoveIt', close_session_message,
                                                              QtGui.QMessageBox.Yes | QtGui.QMessageBox.No,
                                                              QtGui.QMessageBox.No)
            if close_session_window == QtGui.QMessageBox.Yes:
                close_session = 1
            else:
                close_session = 0
            return close_session

        def qt_transfer_success(self, success_type):
            confirmation_window = QtChooserWindow(self)
            confirmation_string = success_type
            confirmation_message = QtGui.QLabel(confirmation_string, confirmation_window)
            confirmation_message.move(20, 30)
            confirmation_window.resize(500, 80)
            confirmation_window.center()
            confirmation_window.setWindowTitle('Success')
            confirmation_window.show()

        def qt_transfer_failure(self, failure_message=sfu_failure_message):
            confirmation_window = QtChooserWindow(self)
            confirmation_string = failure_message
            confirmation_message = QtGui.QLabel(confirmation_string, confirmation_window)
            confirmation_message.move(20, 30)
            confirmation_window.resize(500, 80)
            confirmation_window.center()
            confirmation_window.setWindowTitle('Error')
            confirmation_window.show()

        def qt_confirmation(self, bag_dir):
            confirmation_window = QtChooserWindow(self)
            confirmation_string = "The Bag for folder " + bag_dir + " has been created."
            confirmation_message = QtGui.QLabel(confirmation_string, confirmation_window)
            confirmation_message.move(20, 30)
            confirmation_window.resize(500, 80)
            confirmation_window.center()
            confirmation_window.setWindowTitle('Bag created')
            confirmation_window.show()

        def qt_error(self):
            error_window = QtChooserWindow(self)
            error_message = QtGui.QLabel("Something went wrong! Please open an issue report at http://github.com/axfelix/moveit/issues", error_window)
            error_message.move(20, 30)
            error_window.resize(360, 80)
            error_window.center()
            error_window.setWindowTitle('Sorry')
            error_window.show()

    app = QtGui.QApplication(sys.argv)
    ex = QtChooserWindow()
    sys.exit(app.exec_())
# OS X-specific code.
elif platform.system() == 'Darwin':
    # TODO: add progress bar code eventually.
    # Python 3 needs .decode() because Cocoa returns bytestrings.
    archivesUsername = cocoaUsername().decode()
    if ziponly == 0:
        archivesPassword = cocoaPassword().decode()
    else:
        archivesPassword = ""
    transferno = cocoaTransferNo().decode()
    sessionno = cocoaSessionNo().decode()
    confirmation_mac = confirmation_message_mac + "\nUsername: " + archivesUsername + "\nTransfer: " + transferno + "-" + sessionno
    confirmation = cocoaConfirmation(confirmation_mac)
    bag_dir = make_bag(sys.argv[1])
    parent_path = os.path.basename(os.path.normpath(sys.argv[1]))
    if ziponly == 0:
        close_session = cocoaCloseSession()
    else:
        close_session = 0
    script_output = check_zip_and_send(bag_dir, sessionno, transferno, archivesUsername, archivesPassword, close_session, parent_path)
    if script_output == "bagged":
        cocoaTransferSuccess(bag_success_message)
    else:
        cocoaTransferSuccess(sfu_success_message)

# --- Morocco model/scripts/cropland_processing.py (KTH-dESA/FAO, MIT) ---
import sys
sys.path.append("..")  # add the parent folder to the package directory
import os
import geopandas as gpd
import pandas as pd
import numpy as np
import shapely.wkt  # needed for shapely.wkt.loads below
from nexustool.gis_tools import download_data, create_time_data, get_area_share, get_zonal_stats
from nexustool.weap_tools import reproject_raster, sample_raster

## Downloading solar irradiation and water table depth data
url = 'https://biogeo.ucdavis.edu/data/worldclim/v2.1/base/wc2.1_30s_srad.zip'
file_path = os.path.join('data', 'gis', 'srad', 'wc2.1_30s_srad.zip')
download_data(url, file_path)

url = 'https://souss-massa-dev.s3.us-east-2.amazonaws.com/post_build/Africa_model_wtd_v2.nc'
file_path = os.path.join('data', 'gis', 'wtd', 'Africa_model_wtd_v2.nc')
download_data(url, file_path)

## Reading the input data
demand_path = str(snakemake.input.demand_points)
cropland_path = str(snakemake.input.cropland)

crop_df = pd.read_csv(cropland_path, encoding='utf-8')
geometry = crop_df['WKT'].map(shapely.wkt.loads)
cropland = gpd.GeoDataFrame(crop_df.drop(columns=['WKT']), crs="EPSG:26192", geometry=geometry)
provinces = gpd.read_file(os.path.join('data', 'gis', 'admin', 'provinces.gpkg'), encoding='utf-8')

output_file = str(snakemake.output)
output_folder = output_file.split(os.path.basename(output_file))[0]

## Convert coordinate reference system (crs)
MerchidSudMoroc = 26192
provinces.to_crs(epsg=MerchidSudMoroc, inplace=True)  # cropland is already in EPSG:26192

cropland = cropland.loc[cropland.area_m2 >= 100]  # keep only polygons of at least 100 m2

## Solar irradiation zonal statistics
# Loop through the 12 months of the year and get the mean solar irradiation
# of each month within each cropland polygon.
cropland.to_crs(epsg=4326, inplace=True)
for month in range(1, 13):
    cropland = get_zonal_stats(cropland,
                               os.path.join('data', 'gis', 'srad',
                                            f'wc2.1_30s_srad_{str(month).zfill(2)}.tif'),
                               ['mean'], all_touched=True).rename(columns={'mean': f'srad{month}'})

## Water table depth zonal statistics
cropland.crs = 4326
cropland = get_zonal_stats(cropland,
                           os.path.join('data', 'gis', 'wtd',
                                        'Africa_model_wtd_v2.nc'),
                           ['mean'], all_touched=True).rename(columns={'mean': 'wtd'})
cropland.crs = 4326
cropland.to_crs(epsg=MerchidSudMoroc, inplace=True)

## Creating time series data
df_cropland = create_time_data(cropland, 2019, 2050)

## Calculating the area share of each cropland area within each province
# Include the Inezgane-Aït Melloul irrigated area in the Taroudannt results,
# due to lack of data for the former.
cropland.loc[cropland['province'] == 'Inezgane-Aït Melloul', 'province'] = 'Taroudannt'
cropland['area_share'] = get_area_share(cropland, 'province', 'area_m2')
df_cropland = pd.merge(df_cropland, cropland[['Demand point', 'area_share']], on='Demand point')
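`get_area_share` comes from `nexustool.gis_tools`; as a rough sketch of the idea (a hypothetical stand-in, not the library's actual implementation), each polygon's share is its area divided by the total area of all polygons in the same province:

```python
from collections import defaultdict


def area_share(provinces, areas):
    # Hypothetical stand-in for nexustool.gis_tools.get_area_share:
    # each polygon's share of the total area within its province.
    totals = defaultdict(float)
    for prov, area in zip(provinces, areas):
        totals[prov] += area
    return [area / totals[prov] for prov, area in zip(provinces, areas)]


shares = area_share(['A', 'A', 'B'], [100.0, 300.0, 50.0])
# shares == [0.25, 0.75, 1.0]
```

These shares are later used to distribute province-level results across individual cropland polygons, so they sum to 1 within each province.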
os.makedirs(output_folder, exist_ok = True)
df_cropland.to_csv(output_file, index=False)

# --- pycost/rocch.py (tfawcett/pycost, BSD-3-Clause) ---
"""
Metrics to calculate and manipulate the ROC Convex Hull on a classification task given scores.
"""
# Author: Tom Fawcett <tom.fawcett@gmail.com>
from collections import namedtuple
from math import sqrt
from typing import List, Dict, Tuple, Union

# DESCRIPTION:
#
# This program computes the convex hull of a set of ROC points
# (technically, the upper left triangular convex hull, bounded
# by (0,0) and (1,1)). The ROC Convex Hull is used to find dominant
# (and locally best) classifiers in ROC space. For more information
# on the ROC convex hull and its uses, see the references below.
#
# FP and TP are the False Positive (X axis) and True Positive (Y axis)
# values for the point.
#
# REFERENCES:
#
# The first paper below is probably best for an introduction and
# general discussion of the ROC Convex Hull and its uses.
#
# 1) Provost, F. and Fawcett, T. "Analysis and visualization of
#    classifier performance: Comparison under imprecise class and cost
#    distributions". In Proceedings of the Third International
#    Conference on Knowledge Discovery and Data Mining (KDD-97),
#    pp. 43-48. AAAI Press.
#
# 2) Provost, F. and Fawcett, T. "Robust Classification Systems for
#    Imprecise Environments".
#
# 3) Provost, F., Fawcett, T., and Kohavi, R. "The Case
#    Against Accuracy Estimation for Comparing Induction Algorithms".
#    Available from:
#
# BUG REPORTS / SUGGESTIONS / QUESTIONS: Tom Fawcett <tom.fawcett@gmail.com>

"""
Typical use is something like this:

    rocch = ROCCH(keep_intermediate=False)
    for clf in classifiers:
        y_scores = clf.decision_function(y_test)
        rocch.fit(clfname, roc_curve(y_scores, y_true))
    ...
    plt.plot(rocch.hull)
    rocch.describe()
"""

Point = namedtuple("Point", ["x", "y", "clfname"])
Point.__new__.__defaults__ = ("",)  # make clfname optional

INFINITY: float = float("inf")


class ROCCH(object):
    """ROC Convex Hull."""

    _hull: List[Point]
    def __init__(self, keep_intermediate=False):
        """Initialize the object."""
        self.keep_intermediate = keep_intermediate
        self.classifiers: Dict[str, List[Tuple]] = {}
        self._hull = [Point(0, 0, "AllNeg"), Point(1, 1, "AllPos")]

    def fit(self, clfname: str, points):
        """Fit (add) a classifier's ROC points to the ROCCH.

        :param clfname: A classifier name or identifier. This is only used to record the
            identity of the classifier producing the points. It can be anything, such as a
            (classifier, threshold) pair.
            TODO: Let clfname be a string or a list; add some way to incorporate info per
            point so we can associate each point with a parameter.
        :param points: A sequence of ROC points, contained in a list or array. Each point
            should be an (FP, TP) pair. TODO: Make this more general.
        :return: None
        """
        points_instances = [Point(x, y, clfname) for (x, y) in points]
        points_instances.extend(self._hull)
        points_instances.sort(key=lambda pt: pt.x)
        hull = []
        # TODO: Make this more efficient by simply using pointers rather than append-pop.
        while points_instances:
            hull.append(points_instances.pop(0))
            # Now test the top three points on the hull.
            test_top = True
            while len(hull) >= 3 and test_top:
                turn_dir = turn(*hull[-3:])
                if turn_dir > 0:  # CCW turn: this introduced a concavity.
                    hull.pop(-2)
                elif turn_dir == 0:  # Colinear: should we keep it?
                    if not self.keep_intermediate:
                        # No, treat it as if it's under the hull.
                        hull.pop(-2)
                    else:  # Treat this as convex.
                        test_top = False
                else:  # CW turn: this is convex.
                    test_top = False
        self._hull = hull

    def _check_hull(self) -> None:
        """Check the hull points for convexity.

        This is a simple utility function for testing.

        Throws an AssertionError if a hull segment is concave or if the terminal AllNeg
        and AllPos points are not present.

        Colinear segments (turn == 0) are considered violations unless keep_intermediate
        is on.
        """
        hull = self._hull
        assert len(hull) >= 2, "Hull is damaged"
        assert hull[0].clfname == "AllNeg", "First hull point is not AllNeg"
        assert hull[-1].clfname == "AllPos", "Last hull point is not AllPos"
        for hull_idx in range(len(hull) - 2):
            segment = hull[hull_idx: hull_idx + 3]
            turn_val = turn(*segment)
            assert turn_val <= 0, f"Concavity in hull: {segment}"
            if not self.keep_intermediate:
                assert turn_val < 0, "Intermediate (colinear) point in hull"

    @property
    def hull(self) -> List[Tuple]:
        """Return the points constituting the convex hull of classifiers in ROC space.

        Returns a list of tuples (FP, TP, CLF) where each (FP, TP) is a point in ROC
        space and CLF is the classifier producing that performance point.
        """
        # Defined as a property in case postprocessing needs to be done.
        return self._hull
    def dominant_classifiers(self) -> List[Tuple]:
        """Return a list describing the hull in terms of the dominant classifiers.

        Start at point (1,1) and work counter-clockwise down the hull to (0,0).
        The iso-performance line slope starts at 0.0 and works up to infinity.

        :return: A list of (slope_min, slope_max, point) tuples, where each point is the
            hull vertex that dominates for iso-performance line slopes between slope_min
            and slope_max.
        :rtype: List[Tuple]
        """
        slope = 0.0
        last_point = None
        last_slope = None
        segment_right_boundary: Union[Point, None] = None
        dominant_list: List[Tuple] = []
        # TODO: Check for hull uninitialized.
        point: Point
        for point in self._hull:
            if last_point is not None:
                slope: float = calculate_slope(point, last_point)
            else:
                segment_right_boundary = point
            if last_slope is not None:
                if self.keep_intermediate or last_slope != slope:
                    dominant_list.append((last_slope, slope, segment_right_boundary))
                    last_slope = slope
                    segment_right_boundary = point
            else:  # last_slope is undefined
                last_slope = slope
            last_point = point
        if last_slope != INFINITY:
            slope = INFINITY
        # Output the final point.
        dominant_list.append((last_slope, slope, segment_right_boundary))
        return dominant_list
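The `(slope_min, slope_max)` bounds above are iso-performance line slopes. As a small illustration (the helper name here is hypothetical, not part of this module), the slope implied by a given class and cost ratio follows the standard formula from the Provost & Fawcett papers referenced at the top of the file:

```python
def iso_performance_slope(class_ratio, cost_ratio):
    # Slope of an iso-performance line in ROC space (Provost & Fawcett):
    #     m = (N / P) * (cost(FP) / cost(FN))
    # where class_ratio = P / N and cost_ratio = cost(FP) / cost(FN).
    return cost_ratio / class_ratio


# Balanced classes and symmetric costs give the familiar 45-degree line:
assert iso_performance_slope(1.0, 1.0) == 1.0
# One positive per four negatives, equal costs: a steeper line (slope 4),
# which selects conservative classifiers near (0, 0) on the hull.
assert iso_performance_slope(0.25, 1.0) == 4.0
```

A classifier on the hull is optimal exactly when this slope falls inside the `(slope_min, slope_max)` interval of its segment.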
def best_classifiers_for_conditions(self, class_ratio=1.0, cost_ratio=1.0):
"""
Given a set of operating conditions (class and cost ratios), return best classifiers.
Given a class ratio (P/N) and a cost ratio (cost(FP),cost(FN)), return a set of
classifiers that will perform optimally for those conditions. The class ratio is the
fraction of positives per negative. The cost ratio is the cost of a False Positive
divided by the cost of a False Negative.
The return value will be a list of either one or two classifiers. If the conditions
identify a single best classifier, the result will be simply:
[ (clf, 1.0) ]
indicating that clf should be chosen.
If the conditions are between the performance of two classifiers, the result will be:
[ (clf1, p1), (clf2, p2) ]
indicating that clf1's decisions should be sampled at a rate of p1 and clf2's at a rate
of p2, with p1 and p2 summing to 1.
        :param float class_ratio: The ratio of positives to negatives: P/N
        :param float cost_ratio: The ratio of the cost of a False Positive error to the
            cost of a False Negative error: cost(FP)/cost(FN)
        :return: A list of (classifier, sampling_rate) tuples as described above.
        :rtype: List[Tuple]
"""
        # Both ratios are positive reals and may exceed 1 (e.g. more positives
        # than negatives); the defaults of 1.0 must be accepted.
        assert class_ratio > 0, "Class ratio must be positive"
        assert cost_ratio > 0, "Cost ratio must be positive"
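The method body above stops at the input checks. One way the lookup could be completed, using the (slope_min, slope_max, clf) triples that `dominant_classifiers` produces, is sketched below. This is a hedged illustration, not the original implementation: `best_for_conditions` is a hypothetical standalone helper, and the 0.5/0.5 split at a boundary slope is one arbitrary choice among the equally optimal convex mixes.

```python
def best_for_conditions(dominant_segments, class_ratio, cost_ratio):
    """Sketch: pick the classifier(s) optimal for the given operating conditions.

    dominant_segments stands in for dominant_classifiers() output:
    (slope_min, slope_max, clf) triples ordered by increasing slope.
    """
    # Slope of the iso-performance line for these conditions:
    # m = cost(FP) * N / (cost(FN) * P) = cost_ratio / class_ratio
    m = cost_ratio / class_ratio
    for i, (lo, hi, clf) in enumerate(dominant_segments):
        if lo < m < hi:
            return [(clf, 1.0)]  # strictly inside one segment: single winner
        if m == hi and i + 1 < len(dominant_segments):
            # On a boundary slope two classifiers tie; any convex mix is optimal.
            return [(clf, 0.5), (dominant_segments[i + 1][2], 0.5)]
    return [(dominant_segments[-1][2], 1.0)]

print(best_for_conditions([(0.0, 1.0, "clfA"), (1.0, float("inf"), "clfB")],
                          1.0, 0.5))  # -> [('clfA', 1.0)]
```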
def calculate_slope(pt1: Point, pt2: Point) -> float:
"""
    Return the slope of the segment from pt1 to pt2, or INFINITY if the
    segment is vertical.

    :param Point pt1: First point.
    :param Point pt2: Second point.
    :return: The slope (pt2.y - pt1.y) / (pt2.x - pt1.x).
"""
dx = pt2.x - pt1.x
dy = pt2.y - pt1.y
if dx == 0:
return INFINITY
else:
return dy / dx
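For quick verification, the slope helper can be exercised standalone, including its vertical-segment guard. `Point` and `INFINITY` are assumed to be defined earlier in the module (as a namedtuple and `float("inf")` respectively); they are redefined here only so the snippet runs on its own.

```python
from collections import namedtuple

# Assumed to mirror the module's definitions.
Point = namedtuple("Point", ["x", "y"])
INFINITY = float("inf")

def calculate_slope(pt1, pt2):
    # Same arithmetic as the function above, with the dx == 0 guard.
    dx = pt2.x - pt1.x
    dy = pt2.y - pt1.y
    return INFINITY if dx == 0 else dy / dx

print(calculate_slope(Point(0, 0), Point(2, 1)))  # 0.5
print(calculate_slope(Point(1, 0), Point(1, 5)))  # inf
```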
def _check_hull(hull):
"""Check a list of hull points for convexity.
This is a simple utility function for testing.
Throws an AssertionError if a hull segment is concave.
Colinear segments (turn==0) are not considered violations.
:param hull: A list of Point instances describing an ROC convex hull.
:return: None
"""
for hull_idx in range( len( hull ) - 2 ):
segment = hull[hull_idx: hull_idx + 3]
assert turn( *segment ) <= 0, f"Concavity in hull: {segment}"
def ROC_order(pt1: Point, pt2: Point) -> bool:
"""Predicate for determining ROC_order for sorting.
Either pt1's x is ahead of pt2's x, or the x's are equal and pt1's y is ahead of pt2's y.
"""
return (pt1.x < pt2.x) or (pt1.x == pt2.x and pt1.y < pt2.y)
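Because `ROC_order` is a boolean predicate rather than a three-way comparator, using it with `sorted` takes a small adapter. A minimal sketch (the `cmp_roc` wrapper and the sample points are illustrative, and `Point` is redefined to keep the snippet standalone):

```python
from collections import namedtuple
from functools import cmp_to_key

Point = namedtuple("Point", ["x", "y"])  # assumed to mirror the module's Point

def ROC_order(pt1, pt2):
    return (pt1.x < pt2.x) or (pt1.x == pt2.x and pt1.y < pt2.y)

def cmp_roc(a, b):
    # Lift the boolean predicate into the -1/0/1 comparator sorted() expects.
    if ROC_order(a, b):
        return -1
    if ROC_order(b, a):
        return 1
    return 0

pts = [Point(0.5, 0.9), Point(0.2, 0.4), Point(0.2, 0.1)]
print(sorted(pts, key=cmp_to_key(cmp_roc)))  # x ascending, ties broken by y
```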
def compute_theta(p1: Point, p2: Point) -> float:
"""Compute theta, an ordering function on a point pair.
Theta has the same properties as the angle between the horizontal axis and
the line segment between the points, but is much faster to compute than
arctangent. Range is 0 to 360. Defined on P.353 of _Algorithms in C_.
"""
dx = p2.x - p1.x
ax = abs( dx )
dy = p2.y - p1.y
ay = abs( dy )
if dx == 0 and dy == 0:
t = 0
else:
t = dy / (ax + ay)
# Adjust for quadrants two through four
if dx < 0:
t = 2 - t
elif dy < 0:
t = 4 + t
return t * 90.0
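The claim in the docstring, that theta orders point pairs the same way the true angle does, can be checked directly against `math.atan2`. The function is re-implemented locally (same arithmetic as above) and `Point` is a local stand-in, so the snippet is self-contained.

```python
import math
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])  # local stand-in for the module's Point

def compute_theta(p1, p2):
    # Same arithmetic as the function above.
    dx = p2.x - p1.x
    dy = p2.y - p1.y
    ax, ay = abs(dx), abs(dy)
    t = 0 if dx == 0 and dy == 0 else dy / (ax + ay)
    if dx < 0:
        t = 2 - t
    elif dy < 0:
        t = 4 + t
    return t * 90.0

origin = Point(0, 0)
pts = [Point(1, 0), Point(1, 1), Point(0, 1), Point(-1, 1),
       Point(-1, -1), Point(0, -1)]
thetas = [compute_theta(origin, p) for p in pts]
angles = [math.degrees(math.atan2(p.y, p.x)) % 360 for p in pts]
# theta is monotone in the true angle, so both orderings agree
print(thetas)  # [0.0, 45.0, 90.0, 135.0, 225.0, 270.0]
```

Note that theta only matches the true angle at the axes and diagonals; in between it is a cheaper monotone surrogate, which is all a sort needs.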
def euclidean(p1: Point, p2: Point) -> float:
"""Compute Euclidean distance.
"""
return sqrt( (p1.x - p2.x)**2 + (p1.y - p2.y)**2 )
def turn(a: Point, b: Point, c: Point) -> float:
"""Determine the turn direction going from a to b to c.
    Going from a->b->c, is the turn clockwise, counterclockwise, or straight?
positive => CCW
negative => CW
zero => colinear
See: https://algs4.cs.princeton.edu/91primitives/
>>> a = Point(1,1)
>>> b = Point(2,2)
>>> turn(a, b, Point(3,2))
-1
>>> turn(a, b, Point(2,3))
1
>>> turn(a, b, Point(3,3))
0
>>> turn(a, b, Point(1.5, 1.5)) == 0
True
>>> turn(a, b, Point(1.5,1.7)) > 0
True
:param Point a:
:param Point b:
:param Point c:
:rtype: float
"""
return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y)
if __name__ == "__main__":
import doctest
doctest.testmod()
# End of rocch.py
# --- src/sentry/utils/strings.py (repo: rogerhu/sentry, license: BSD-3-Clause) ---
def truncatechars(value, arg):
"""
Truncates a string after a certain number of chars.
Argument: Number of chars to truncate after.
"""
try:
length = int(arg)
except ValueError: # Invalid literal for int().
return value # Fail silently.
if len(value) > length:
return value[:length] + '...'
return value
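The three branches of `truncatechars` (truncation, short input, non-integer length) can be demonstrated in a few lines. The function is copied above the calls so the snippet runs on its own; the sample strings are arbitrary.

```python
def truncatechars(value, arg):
    """Copied from above for a runnable demo."""
    try:
        length = int(arg)
    except ValueError:  # invalid literal for int(): fail silently
        return value
    if len(value) > length:
        return value[:length] + '...'
    return value

print(truncatechars("sentry error message", 6))  # 'sentry...'
print(truncatechars("short", 100))               # 'short'
print(truncatechars("short", "oops"))            # bad length -> value unchanged
```

Note the ellipsis is appended after the cut, so the result can be up to three characters longer than `arg`.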
# --- examples/launch_tor_with_simplehttpd.py (repo: kneufeld/txtorcon, license: MIT) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''Create a new tor node and add a simple http server to it, serving a given
directory over http. The server is single-threaded and very limited.
There are two arguments that can be passed via the commandline:
-p\tThe internet-facing port the hidden service should listen on
-d\tThe directory to serve via http
Example:
./launch_tor_with_simplehttpd.py -p 8080 -d /opt/files/
'''
import SimpleHTTPServer
import SocketServer
import functools
import getopt
import os
import sys
import tempfile
import thread
from twisted.internet import reactor
import txtorcon
def print_help():
print __doc__
def print_tor_updates(prog, tag, summary):
# Prints some status messages while booting tor
print 'Tor booting [%d%%]: %s' % (prog, summary)
def start_httpd(httpd):
# Create a new thread to serve requests
print 'Starting httpd...'
return thread.start_new_thread(httpd.serve_forever, ())
def stop_httpd(httpd):
# Kill the httpd
print 'Stopping httpd...'
httpd.shutdown()
def setup_complete(config, port, proto):
# Callback from twisted when tor has booted.
# We create a reference to this function via functools.partial that
# provides us with a reference to 'config' and 'port', twisted then adds
# the 'proto' argument
print '\nTor is now running. The hidden service is available at'
print '\n\thttp://%s:%i\n' % (config.HiddenServices[0].hostname, port)
# This is probably more secure than any other httpd...
print '### DO NOT RELY ON THIS SERVER TO TRANSFER FILES IN A SECURE WAY ###'
def setup_failed(arg):
# Callback from twisted if tor could not boot. Nothing to see here, move
# along.
print 'Failed to launch tor', arg
reactor.stop()
def main():
# Parse the commandline-options
try:
opts, args = getopt.getopt(sys.argv[1:], 'hd:p:')
except getopt.GetoptError as excp:
print str(excp)
print_help()
return 1
serve_directory = '.' # The default directory to serve files from
hs_public_port = 8011 # The default port the hidden service is available on
web_port = 4711 # The real server's local port
web_host = '127.0.0.1' # The real server is bound to localhost
for o, a in opts:
if o == '-d':
serve_directory = a
elif o == '-p':
hs_public_port = int(a)
elif o == '-h':
print_help()
return
else:
print 'Unknown option "%s"' % (o, )
return 1
# Sanitize path and set working directory there (for SimpleHTTPServer)
serve_directory = os.path.abspath(serve_directory)
if not os.path.exists(serve_directory):
        print 'Path "%s" does not exist, can\'t serve from there...' % \
            (serve_directory, )
return 1
os.chdir(serve_directory)
# Create a new SimpleHTTPServer and serve it from another thread.
# We create a callback to Twisted to shut it down when we exit.
print 'Serving "%s" on %s:%i' % (serve_directory, web_host, web_port)
httpd = SocketServer.TCPServer((web_host, web_port),
SimpleHTTPServer.SimpleHTTPRequestHandler)
start_httpd(httpd)
reactor.addSystemEventTrigger('before', 'shutdown', stop_httpd, httpd=httpd)
# Create a directory to hold our hidden service. Twisted will unlink it
# when we exit.
hs_temp = tempfile.mkdtemp(prefix='torhiddenservice')
reactor.addSystemEventTrigger('before', 'shutdown',
functools.partial(txtorcon.util.delete_file_or_tree, hs_temp))
# Add the hidden service to a blank configuration
config = txtorcon.TorConfig()
config.SOCKSPort = 0
config.ORPort = 9089
config.HiddenServices = [txtorcon.HiddenService(config, hs_temp,
['%i %s:%i' % (hs_public_port,
web_host,
web_port)])]
config.save()
# Now launch tor
# Notice that we use a partial function as a callback so we have a
# reference to the config object when tor is fully running.
tordeferred = txtorcon.launch_tor(config, reactor,
progress_updates=print_tor_updates)
tordeferred.addCallback(functools.partial(setup_complete, config,
hs_public_port))
tordeferred.addErrback(setup_failed)
reactor.run()
if __name__ == '__main__':
sys.exit(main())
# --- rever/__init__.py (repo: limecrayon/rever, license: MIT) ---
import functools
import time
__all__ = ('ReachedMaxRetries', 'rever')
class ReachedMaxRetries(Exception):
def __init__(self, func):
Exception.__init__(self, "Function {} raised exception due to max number of retries performed".format(func))
self.func = func
def rever(**rever_kwargs):
"""
rever_kwargs default values defined:
If backoff is True, then times and pause will not be initialized, but they will be calculated.
backoff: True
total_pause: 30
steps: 10
exception: BaseException
raises: True
prior: None
If backoff is False, then total_pause and steps will be initialized, but do not get used.
backoff: False
times: 1
pause: 0
exception: BaseException
raises: True
prior: None
"""
backoff = True
total_pause = 1
steps = 10
times = 1
pause = 0
exception = BaseException
raises = True
prior = None
if "backoff" not in rever_kwargs:
rever_kwargs["backoff"] = backoff
if "total_pause" not in rever_kwargs:
rever_kwargs["total_pause"] = total_pause
if "steps" not in rever_kwargs:
rever_kwargs["steps"] = steps
if "times" not in rever_kwargs:
if not rever_kwargs["backoff"]:
rever_kwargs["times"] = times
if "pause" not in rever_kwargs:
if not rever_kwargs["backoff"]:
rever_kwargs["pause"] = pause
if "exception" not in rever_kwargs:
rever_kwargs["exception"] = exception
if "raises" not in rever_kwargs:
rever_kwargs["raises"] = raises
if "prior" not in rever_kwargs:
rever_kwargs["prior"] = prior
initialized_kwargs = {key: rever_kwargs[key] for key in rever_kwargs}
def rever_decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
nonlocal rever_kwargs
try:
if args or kwargs:
r = func(*args, **kwargs)
rever_kwargs = {key: initialized_kwargs[key] for key in initialized_kwargs}
return r
else:
r = func()
rever_kwargs = {key: initialized_kwargs[key] for key in initialized_kwargs}
return r
except rever_kwargs["exception"]:
if rever_kwargs["backoff"]:
rever_kwargs["pause"] = \
.5 * (rever_kwargs["total_pause"] / 2 ** (rever_kwargs["steps"]))
if rever_kwargs["steps"] >= 0:
time.sleep(rever_kwargs["pause"])
rever_kwargs["steps"] -= 1
if rever_kwargs["prior"]:
rever_kwargs["prior"]()
return wrapper(*args, **kwargs)
else:
if rever_kwargs["times"] > 0:
time.sleep(rever_kwargs["pause"])
rever_kwargs["times"] -= 1
if rever_kwargs["prior"]:
rever_kwargs["prior"]()
return wrapper(*args, **kwargs)
if rever_kwargs["raises"] and (rever_kwargs["steps"] < 0 or rever_kwargs["times"] <= 0):
raise ReachedMaxRetries(func)
else:
return None
return wrapper
return rever_decorator
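The retry contract above can be seen in a stripped-down sketch of the non-backoff path. `simple_rever` is a simplified stand-in defined here (not the original decorator, which recurses through `wrapper` and tracks state in the shared kwargs dict), so the snippet runs on its own; the `flaky` function is an illustrative transient failure.

```python
import functools
import time

def simple_rever(times=1, pause=0, exception=BaseException):
    """Stripped-down sketch of rever's non-backoff retry path."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            remaining = times
            while True:
                try:
                    return func(*args, **kwargs)
                except exception:
                    if remaining <= 0:
                        raise  # retries exhausted
                    time.sleep(pause)
                    remaining -= 1
        return wrapper
    return decorator

calls = {"n": 0}

@simple_rever(times=3, exception=ValueError)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient failure")
    return "ok"

print(flaky())  # "ok" after two retried failures
```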
# --- gui(12102018).py (repo: hanhydro/T2H, license: MIT) ---
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'gui.ui'
#
# Created by: PyQt5 UI code generator 5.6
#
# WARNING! All changes made in this file will be lost!
import os
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import (QApplication, QDialog,
QProgressBar, QPushButton, QMessageBox)
import matplotlib.pyplot as plt
from matplotlib import style
import T2H, PLOT
import flopy
from matplotlib.backends.qt_compat import QtCore, QtWidgets, is_pyqt5
if is_pyqt5():
from matplotlib.backends.backend_qt5agg import (
FigureCanvas, NavigationToolbar2QT as NavigationToolbar)
else:
from matplotlib.backends.backend_qt4agg import (
FigureCanvas, NavigationToolbar2QT as NavigationToolbar)
from matplotlib.figure import Figure
#%%
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("T2H Graphical User Interface")
MainWindow.resize(1280, 800)
self.centralWidget = QtWidgets.QWidget(MainWindow)
self.centralWidget.setObjectName("centralWidget")
#%% QFrames
self.frame_1 = QtWidgets.QFrame(self.centralWidget)
self.frame_1.setGeometry(QtCore.QRect(810, 70, 461, 201))
self.frame_1.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.frame_1.setFrameShadow(QtWidgets.QFrame.Raised)
        self.frame_1.setObjectName("frame_1")
self.frame_2 = QtWidgets.QFrame(self.centralWidget)
self.frame_2.setGeometry(QtCore.QRect(810, 280, 461, 101))
self.frame_2.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.frame_2.setFrameShadow(QtWidgets.QFrame.Raised)
self.frame_2.setObjectName("frame_2")
self.frame_3 = QtWidgets.QFrame(self.centralWidget)
self.frame_3.setGeometry(QtCore.QRect(810, 390, 461, 31))
self.frame_3.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.frame_3.setFrameShadow(QtWidgets.QFrame.Raised)
self.frame_3.setObjectName("frame_3")
#%% QLabels
self.sedK = QtWidgets.QLabel(self.frame_2)
self.sedK.setGeometry(QtCore.QRect(30, 10, 141, 16))
self.sedK.setObjectName("sedK")
self.aqK = QtWidgets.QLabel(self.frame_2)
self.aqK.setGeometry(QtCore.QRect(30, 40, 141, 16))
self.aqK.setObjectName("aqK")
self.faultK = QtWidgets.QLabel(self.frame_2)
self.faultK.setGeometry(QtCore.QRect(30, 70, 141, 16))
self.faultK.setObjectName("faultK")
self.sedKN = QtWidgets.QLabel(self.centralWidget)
self.sedKN.setGeometry(QtCore.QRect(910, 500, 141, 16))
self.sedKN.setObjectName("sedKN")
self.sedKNlabel = QtWidgets.QLabel(self.centralWidget)
self.sedKNlabel.setGeometry(QtCore.QRect(1100, 500, 61, 16))
self.sedKNlabel.setObjectName("sedKNlabel")
self.aquiferKNlabel = QtWidgets.QLabel(self.centralWidget)
self.aquiferKNlabel.setGeometry(QtCore.QRect(1100, 520, 61, 16))
self.aquiferKNlabel.setObjectName("aquiferKNlabel")
self.aqKN = QtWidgets.QLabel(self.centralWidget)
self.aqKN.setGeometry(QtCore.QRect(910, 520, 81, 16))
self.aqKN.setObjectName("aqKN")
self.faultKN = QtWidgets.QLabel(self.centralWidget)
self.faultKN.setGeometry(QtCore.QRect(910, 540, 81, 16))
self.faultKN.setObjectName("faultKN")
self.faultKNlabel = QtWidgets.QLabel(self.centralWidget)
self.faultKNlabel.setGeometry(QtCore.QRect(1100, 540, 61, 16))
self.faultKNlabel.setObjectName("faultKNlabel")
self.label_21 = QtWidgets.QLabel(self.frame_3)
self.label_21.setGeometry(QtCore.QRect(10, 7, 141, 16))
self.label_21.setObjectName("label_21")
self.visoptionsLabel = QtWidgets.QLabel(self.centralWidget)
self.visoptionsLabel.setGeometry(QtCore.QRect(20, 540, 141, 16))
self.visoptionsLabel.setObjectName("visoptionsLabel")
self.fileLabel = QtWidgets.QLabel(self.centralWidget)
self.fileLabel.setGeometry(QtCore.QRect(810, 4, 60, 16))
self.fileLabel.setObjectName("fileLabel")
self.fileLabel_path = QtWidgets.QLabel(self.centralWidget)
self.fileLabel_path.setGeometry(QtCore.QRect(880, 4, 320, 16))
self.fileLabel_path.setObjectName("fileLabel_path")
self.label = QtWidgets.QLabel(self.centralWidget)
self.label.setGeometry(QtCore.QRect(814, 51, 241, 16))
self.label.setObjectName("label")
self.nz = QtWidgets.QLabel(self.centralWidget)
self.nz.setGeometry(QtCore.QRect(840, 104, 141, 16))
self.nz.setObjectName("nz")
self.targetperiod = QtWidgets.QLabel(self.centralWidget)
self.targetperiod.setGeometry(QtCore.QRect(840, 80, 151, 16))
self.targetperiod.setObjectName("targetperiod")
self.nzfixed = QtWidgets.QLabel(self.centralWidget)
self.nzfixed.setGeometry(QtCore.QRect(840, 128, 141, 16))
self.nzfixed.setObjectName("nzfixed")
self.constrecharge = QtWidgets.QLabel(self.centralWidget)
self.constrecharge.setGeometry(QtCore.QRect(840, 176, 151, 16))
self.constrecharge.setObjectName("constrecharge")
#
self.hiniratio = QtWidgets.QLabel(self.centralWidget)
self.hiniratio.setGeometry(QtCore.QRect(840, 242, 151, 16))
self.hiniratio.setObjectName("hiniratio")
self.datvar = QtWidgets.QLabel(self.centralWidget)
self.datvar.setGeometry(QtCore.QRect(840, 152, 161, 16))
self.datvar.setObjectName("datvar")
# Recharge input
self.constrecharge_2 = QtWidgets.QLabel(self.centralWidget)
self.constrecharge_2.setGeometry(QtCore.QRect(840, 200, 151, 16))
self.constrecharge_2.setObjectName("constrecharge_2")
# Image pane
self.image = QtWidgets.QLabel(self.centralWidget)
self.image.setGeometry(QtCore.QRect(10, 10, 780, 520))
self.image.setObjectName("image")
self.pixmap = QtGui.QPixmap("logo.png")
self.image.setPixmap(self.pixmap)
#%% QLineEdits
self.sedKlineEdit = QtWidgets.QLineEdit(self.frame_2)
self.sedKlineEdit.setGeometry(QtCore.QRect(260, 10, 113, 21))
self.sedKlineEdit.setObjectName("sedKlineEdit")
self.sedKlineEdit.setText("547.5")
#
self.aqKlineEdit = QtWidgets.QLineEdit(self.frame_2)
self.aqKlineEdit.setGeometry(QtCore.QRect(260, 40, 113, 21))
self.aqKlineEdit.setObjectName("aqKlineEdit")
self.aqKlineEdit.setText("36.5")
#
self.faultKlineEdit = QtWidgets.QLineEdit(self.frame_2)
self.faultKlineEdit.setGeometry(QtCore.QRect(260, 70, 113, 21))
self.faultKlineEdit.setObjectName("faultKlineEdit")
self.faultKlineEdit.setText("0.0365")
#
self.nzfline = QtWidgets.QLineEdit(self.centralWidget)
self.nzfline.setGeometry(QtCore.QRect(1070, 128, 113, 21))
self.nzfline.setObjectName("nzfline")
self.nzfline.setText("10")
#
self.nzline = QtWidgets.QLineEdit(self.centralWidget)
self.nzline.setGeometry(QtCore.QRect(1070, 104, 113, 21))
self.nzline.setObjectName("nzline")
self.nzline.setText("40")
#
self.datline = QtWidgets.QLineEdit(self.centralWidget)
self.datline.setGeometry(QtCore.QRect(1070, 152, 113, 21))
self.datline.setObjectName("datline")
self.datline.setText("-10000")
#
self.hiniratioLineEdit = QtWidgets.QLineEdit(self.centralWidget)
self.hiniratioLineEdit.setGeometry(QtCore.QRect(1070, 242, 113, 21))
self.hiniratioLineEdit.setObjectName("hiniratioLineEdit")
self.hiniratioLineEdit.setText("0.9")
#
self.datvarline = QtWidgets.QLineEdit(self.centralWidget)
self.datvarline.setGeometry(QtCore.QRect(1070, 176, 113, 21))
self.datvarline.setObjectName("datvarline")
self.datvarline.setText("-3000")
self.rchline = QtWidgets.QLineEdit(self.centralWidget)
self.rchline.setGeometry(QtCore.QRect(1070, 200, 113, 21))
self.rchline.setObjectName("rchline")
self.rchline.setText("0.05")
# Ma input lineedit
self.maline = QtWidgets.QLineEdit(self.centralWidget)
self.maline.setGeometry(QtCore.QRect(1070, 80, 113, 21))
self.maline.setObjectName("maline")
self.maline.setText("12.5")
#%% QPushButtons
self.load = QtWidgets.QPushButton(self.centralWidget)
self.load.setGeometry(QtCore.QRect(1100, -1, 71, 32))
self.load.setObjectName("loadButton")
self.load.clicked.connect(self.fileloader)
self.load1 = QtWidgets.QPushButton(self.centralWidget)
self.load1.setGeometry(QtCore.QRect(1170, -1, 101, 32))
self.load1.setObjectName("loadButton1")
self.load1.clicked.connect(self.fileloader)
self.applyButton = QtWidgets.QPushButton(self.frame_1)
self.applyButton.setGeometry(QtCore.QRect(380, 60, 81, 81))
self.applyButton.setObjectName("applyButton")
self.applyButton.clicked.connect(self.applyclicked)
self.fileDialog_3 = QtWidgets.QPushButton(self.frame_2)
self.fileDialog_3.setGeometry(QtCore.QRect(380, 20, 81, 71))
self.fileDialog_3.setObjectName("fileDialog_3")
self.fileDialog_3.clicked.connect(self.applyCalClicked)
# Model run button
self.ModelRunButton = QtWidgets.QPushButton(self.centralWidget)
self.ModelRunButton.setGeometry(QtCore.QRect(640, 620, 113, 32))
self.ModelRunButton.setObjectName("ModelRunButton")
self.ModelRunButton.clicked.connect(self.run)
self.QuitButton = QtWidgets.QPushButton(self.centralWidget)
self.QuitButton.setGeometry(QtCore.QRect(760, 620, 113, 32))
self.QuitButton.setObjectName("QuitButton")
self.QuitButton.clicked.connect(QCoreApplication.instance().quit)
self.VtkOutputButton = QtWidgets.QPushButton(self.centralWidget)
self.VtkOutputButton.setGeometry(QtCore.QRect(880, 620, 113, 32))
self.VtkOutputButton.setObjectName("VtkOutputButton")
# self.VtkOutputButton.clicked.connect(self.vtk)
self.PlotButton = QtWidgets.QPushButton(self.centralWidget)
self.PlotButton.setGeometry(QtCore.QRect(460, 560, 113, 32))
self.PlotButton.setObjectName("PlotButton")
self.PlotButton.clicked.connect(self.plot)
#%% QGraphicsViews
self.figure = plt.figure(figsize=(12,12))
self.canvas = FigureCanvas(self.figure)
#%% QComboBoxes
# File combo box
self.fileBox = QtWidgets.QComboBox(self.centralWidget)
self.fileBox.setGeometry(QtCore.QRect(808, 25, 461, 26))
self.fileBox.setObjectName("fileBox")
# Solver selection combo box
self.solverBox = QtWidgets.QComboBox(self.frame_3)
self.solverBox.setGeometry(QtCore.QRect(63, 2, 281, 26))
self.solverBox.setObjectName("solverBox")
self.solverBox.addItem("xMD")
self.solverBox.addItem("GMRES")
#
self.visComboBox = QtWidgets.QComboBox(self.centralWidget)
self.visComboBox.setGeometry(QtCore.QRect(10, 560, 441, 26))
self.visComboBox.setObjectName("visComboBox")
self.visComboBox.addItem("Cross Section")
self.visComboBox.addItem("Fault Plane")
self.visComboBox.addItem("Vertical Flow Barriers (VFB)")
self.visComboBox.addItem("Horizontal Flow Barriers (HFB)")
#%% QCheckBoxes
#
self.elevdependentChecker = QtWidgets.QCheckBox(self.centralWidget)
self.elevdependentChecker.setGeometry(QtCore.QRect(860, 220, 231, 20))
self.elevdependentChecker.setObjectName("elevdependentChecker")
#%% QProgressBars
self.progress = QProgressBar(self.centralWidget)
self.progress.setGeometry(10, 620, 600, 25)
self.progress.setMaximum(100)
#%% Mainwindows
MainWindow.setCentralWidget(self.centralWidget)
self.menuBar = QtWidgets.QMenuBar(MainWindow)
self.menuBar.setGeometry(QtCore.QRect(0, 0, 1024, 22))
self.menuBar.setObjectName("menuBar")
self.menuT2H_Main = QtWidgets.QMenu(self.menuBar)
self.menuT2H_Main.setObjectName("menuT2H_Main")
self.menuT2H_Checker = QtWidgets.QMenu(self.menuBar)
self.menuT2H_Checker.setObjectName("menuT2H_Checker")
self.menuT2H_Plot = QtWidgets.QMenu(self.menuBar)
self.menuT2H_Plot.setObjectName("menuT2H_Plot")
MainWindow.setMenuBar(self.menuBar)
self.mainToolBar = QtWidgets.QToolBar(MainWindow)
self.mainToolBar.setObjectName("mainToolBar")
MainWindow.addToolBar(QtCore.Qt.TopToolBarArea, self.mainToolBar)
self.statusBar = QtWidgets.QStatusBar(MainWindow)
self.statusBar.setObjectName("statusBar")
MainWindow.setStatusBar(self.statusBar)
self.menuBar.addAction(self.menuT2H_Main.menuAction())
self.menuBar.addAction(self.menuT2H_Checker.menuAction())
self.menuBar.addAction(self.menuT2H_Plot.menuAction())
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
#%% Functions
def applyclicked(self):
self.Ma = float(self.maline.text())
self.Ma = format(self.Ma, '.1f')
self.nz = int(self.nzline.text())
self.nz_fixed = int(self.nzfline.text())
self.dx = 1000
self.dy = 1000
self.inz = self.nz - self.nz_fixed
self.dat = int(self.datline.text())
self.dat_var = int(self.datvarline.text())
self.idat = self.dat - self.dat_var
self.rech = float(self.rchline.text())
self.perm_sed = float(self.sedKlineEdit.text())
self.hratio = float(self.hiniratioLineEdit.text())
self.Kconst = float(self.aqKlineEdit.text())
self.hydchr = self.Kconst/1000
self.target_row = 101
self.iskip = 4
self.ivtk = 1
self.h_tol = 1e-4
self.fileLabel_path.setText("/tisc_output/topo_" + self.Ma +"0Ma.txt")
self.ans = QMessageBox.question(self.centralWidget, "Confirmation",\
"Are these correct?\n" + "Period: " + self.Ma\
+ "Ma\n" + "Nz: " + str(self.nz) +"\n" + "Datum: "\
+ str(self.dat) + " m\n", QMessageBox.Yes, QMessageBox.No)
if self.ans == QMessageBox.Yes:
self.rchline.setEnabled(False)
self.maline.setEnabled(False)
self.nzline.setEnabled(False)
self.nzfline.setEnabled(False)
self.datline.setEnabled(False)
self.datvarline.setEnabled(False)
self.hiniratioLineEdit.setEnabled(False)
QMessageBox.about(self.centralWidget, "Confirmed", "Properties confirmed")
else:
QMessageBox.about(self.centralWidget, "Check values", "Check values again!")
def applyCalClicked(self):
self.perm_sed = self.sedKlineEdit.text()
self.Kconst = self.aqKlineEdit.text()
self.hydchr = self.faultKlineEdit.text()
self.sedKNlabel.setText(str(float(self.perm_sed)/float(self.rchline.text())))
self.aquiferKNlabel.setText(str(float(self.Kconst)/float(self.rchline.text())))
self.faultKNlabel.setText(str(float(self.hydchr)/float(self.rchline.text())))
self.ans = QMessageBox.question(self.centralWidget, "Confirmation",\
"Are these correct?\n" + "Period: " + self.Ma\
+ "Ma\n" + "Nz: " + str(self.nz) +"\n" + "Datum: "\
+ str(self.dat) + " m\n", QMessageBox.Yes, QMessageBox.No)
if self.ans == QMessageBox.Yes:
self.sedKlineEdit.setEnabled(False)
self.aqKlineEdit.setEnabled(False)
self.faultKlineEdit.setEnabled(False)
QMessageBox.about(self.centralWidget, "Confirmed", "Properties confirmed")
else:
QMessageBox.about(self.centralWidget, "Check values", "Check values again!")
#%%
def run(self):
self.Ma = float(self.maline.text())
self.Ma = format(self.Ma, '.1f')
self.nz = int(self.nzline.text())
self.nz_fixed = int(self.nzfline.text())
self.dx = 1000
self.dy = 1000
self.inz = self.nz - self.nz_fixed
self.dat = int(self.datline.text())
self.dat_var = int(self.datvarline.text())
self.idat = self.dat - self.dat_var
self.rech = float(self.rchline.text())
self.perm_sed = float(self.sedKlineEdit.text())
self.hratio = float(self.hiniratioLineEdit.text())
self.Kconst = float(self.aqKlineEdit.text())
self.hydchr = self.Kconst/1000
self.target_row = 101
self.iskip = 4
self.ivtk = 1
self.h_tol = 1e-4
self.model = T2H.main(self.Ma, self.nz, self.nz_fixed, self.inz, self.dx,\
self.dy, self.dat, self.dat_var, self.idat\
, self.rech, self.perm_sed, self.target_row,\
self.Kconst, self.hratio, self.hydchr,\
self.iskip, self.ivtk, self.h_tol)
self.mf = self.model.mf
self.mf.dis.check()
self.mf.write_input()
self.mf.run_model()
return self.mf
def plot(self):
try:
self.mf
except AttributeError:
QMessageBox.about(self.centralWidget, "Warning", "Please run a model first")
else:
            self.vcb = self.visComboBox.currentText()  # currentText(), not the bound itemData method, so the comparison below can match
print(self.vcb)
if self.vcb == "Cross Section":
figheadxsect, axheadxsect = plt.subplots(figsize=(40,5))
self.mfxsect = PLOT.fmfxsect(self.mf, self.model.mfdis, self.target_row, axheadxsect).mfxsect
self.a = PLOT.head(self.mf, self.model.fdirmodel).a
self.headc = PLOT.headc(self.mfxsect, self.a)
self.headcontour = self.headc.headcontour
self.gdplot = self.mfxsect.plot_grid(color='r', linewidths=0.2)
self.BCplot = self.mfxsect.plot_ibound(self.model.ibound, color_noflow = 'black',\
color_ch = 'blue', head = self.a)
self.canvas.draw()
print("plot")
def fileloader(self):
self.path = os.getcwd() + "/tisc_output/"
self.l = os.listdir(self.path)
self.bdtopo = [0]*len(self.l)
self.topo = [0]*len(self.l)
self.fault = [0]*len(self.l)
self.sedthick = [0]*len(self.l)
        # Map each TISC output prefix to its age list; slice off "<prefix>_" and
        # the trailing "Ma.txt" to recover the snapshot age.
        specs = [("bdtopo", self.bdtopo), ("topo", self.topo),
                 ("fault", self.fault), ("sedthick", self.sedthick)]
        for file in range(len(self.l)):
            name = self.l[file]
            for prefix, ages in specs:
                if name.startswith(prefix):
                    if os.stat(self.path + name).st_size > 5:  # skip near-empty files
                        ages[file] = float(name[len(prefix) + 1:].split("Ma.txt")[0])
                    break
self.a = list(filter((0).__ne__, self.topo))
self.a.sort()
self.b = list(filter((0).__ne__, self.bdtopo))
self.b.sort()
self.c = list(filter((0).__ne__, self.fault))
self.c.sort()
self.d = list(filter((0).__ne__, self.sedthick))
self.d.sort()
self.df = []
        for nfile in range(len(self.a)):
            age = self.a[nfile]
            # Flag availability of bdtopo / fault / sedthick data for each snapshot.
            data = [age, "y",
                    "y" if self.b.count(age) == 1 else "n",
                    "y" if self.c.count(age) == 1 else "n",
                    "y" if self.d.count(age) == 1 else "n"]
            self.df.append(data)
        for age in range(len(self.a)):
            if self.df[age][2] != "y":
                continue
            faults = "Faults" if self.df[age][3] == "y" else "No Faults"
            seds = "Sediments" if self.df[age][4] == "y" else "No Sediments"
            self.fileBox.addItem("Snapshot:" + str(self.df[age][0]) +
                                 "Ma | " + faults + " | " + seds)
#%%
    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "T2H Graphical User Interface"))
        self.applyButton.setText(_translate("MainWindow", "Apply"))
        self.sedK.setText(_translate("MainWindow", "Sediment K (m/yr)"))
        self.aqK.setText(_translate("MainWindow", "Aquifer K (m/yr)"))
        self.faultK.setText(_translate("MainWindow", "Fault zone K (m/yr)"))
        self.fileDialog_3.setText(_translate("MainWindow", "Apply"))
        self.sedKN.setText(_translate("MainWindow", "Sediment K / N:"))
        self.sedKNlabel.setText(_translate("MainWindow", "N/A"))
        self.aquiferKNlabel.setText(_translate("MainWindow", "N/A"))
        self.aqKN.setText(_translate("MainWindow", "Aquifer K / N:"))
        self.faultKN.setText(_translate("MainWindow", "Fault K / N:"))
        self.faultKNlabel.setText(_translate("MainWindow", "N/A"))
        self.label_21.setText(_translate("MainWindow", "Solver"))
        self.ModelRunButton.setText(_translate("MainWindow", "Execute"))
        self.load.setText(_translate("MainWindow", "Load"))
        self.load1.setText(_translate("MainWindow", "Set selected"))
        self.QuitButton.setText(_translate("MainWindow", "Abort"))
        self.VtkOutputButton.setText(_translate("MainWindow", "VTK output"))
        self.PlotButton.setText(_translate("MainWindow", "Plot"))
        self.visoptionsLabel.setText(_translate("MainWindow", "Visualization options"))
        self.fileLabel.setText(_translate("MainWindow", "File: "))
        self.fileLabel_path.setText(_translate("MainWindow", "path"))
        self.label.setText(_translate("MainWindow", "*dx = dy = 1,000 m fixed in this version"))
        self.nz.setText(_translate("MainWindow", "Number of layers (nz)"))
        self.targetperiod.setText(_translate("MainWindow", "Target period (Ma)"))
        self.nzfixed.setText(_translate("MainWindow", "Fixed layers (nz_fixed)"))
        self.constrecharge.setText(_translate("MainWindow", "Datum of variable dz (m)"))
        self.hiniratio.setText(_translate("MainWindow", "Initial head ratio to topo."))
        self.elevdependentChecker.setText(_translate("MainWindow", "Elevation-dependent recharge"))
        self.datvar.setText(_translate("MainWindow", "Model datum (m)"))
        self.constrecharge_2.setText(_translate("MainWindow", "Const. Recharge (m/yr)"))
        self.menuT2H_Main.setTitle(_translate("MainWindow", "T2H Main"))
        self.menuT2H_Checker.setTitle(_translate("MainWindow", "T2H Checker"))
        self.menuT2H_Plot.setTitle(_translate("MainWindow", "T2H Plot"))


if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
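The nested `count(...)` cascade above implements a simple cross-referencing rule: for each age, mark each companion data set "y" or "n" depending on whether a matching file exists. A minimal stand-alone sketch of that rule (function and variable names here are illustrative, not from the original module):

```python
def availability_flags(ages, basement, faults, sediments):
    """For each age, flag which companion data sets exist ('y') or not ('n')."""
    rows = []
    for age in ages:
        flags = ["y" if age in group else "n"
                 for group in (basement, faults, sediments)]
        rows.append([age, "y"] + flags)   # topography itself is always present
    return rows

rows = availability_flags([10.0, 20.0],
                          basement={10.0, 20.0},
                          faults={10.0},
                          sediments=set())
# rows[0] -> [10.0, 'y', 'y', 'y', 'n']
# rows[1] -> [20.0, 'y', 'y', 'n', 'n']
```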
from tests.unit.backend.corpora.common.entities.datasets import TestDataset
class TestDatasetRevision(TestDataset):
    def test__create_dataset_revision(self):
        dataset = self.generate_dataset_with_s3_resources(self.session, published=True)
        rev_dataset = dataset.create_revision("test_collection_id_revision").to_dict()
        dataset = dataset.to_dict()
        with self.subTest("artifacts are correctly created and point to correct s3 uri"):
            rev_artifacts = rev_dataset.pop("artifacts")
            original_artifacts = dataset.pop("artifacts")
            for i in range(0, len(rev_artifacts)):
                for key in rev_artifacts[i].keys():
                    self.compare_original_and_revision(
                        original_artifacts[i], rev_artifacts[i], key, ("dataset_id", "id")
                    )
        with self.subTest("deployment is correctly created and points to correct s3 uri"):
            rev_deployment = rev_dataset.pop("explorer_url")
            original_deployment = dataset.pop("explorer_url")
            self.assertIsNotNone(original_deployment)
            self.assertEqual(rev_deployment, f"http://bogus.url/d/{rev_dataset['id']}.cxg/")
        with self.subTest("Test processing status copied over"):
            rev_processing_status = rev_dataset.pop("processing_status")
            original_processing_status = dataset.pop("processing_status")
            for key in rev_processing_status.keys():
                self.compare_original_and_revision(
                    original_processing_status, rev_processing_status, key, ("dataset_id", "id")
                )
        with self.subTest("revision points at a different collection"):
            revision_collection = rev_dataset.pop("collection")
            dataset_1_collection = dataset.pop("collection")
            self.assertNotEqual(revision_collection, dataset_1_collection)
        with self.subTest("metadata of revised matches original"):
            for key in rev_dataset.keys():
                self.compare_original_and_revision(dataset, rev_dataset, key, ("original_id", "id", "collection_id"))

    def compare_original_and_revision(self, original, revision, key, unique_fields):
        if key in unique_fields:
            self.assertNotEqual(original[key], revision[key])
        else:
            self.assertEqual(original[key], revision[key])
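The `compare_original_and_revision` helper expresses a general pattern for copy-with-new-identity checks: identifier fields must differ between the two dicts, everything else must match. A framework-free sketch of the same idea (names and sample values are illustrative):

```python
def compare_dicts(original, revision, unique_fields):
    """Fields in unique_fields must differ between the copies; all others must match."""
    for key in original:
        if key in unique_fields:
            assert original[key] != revision[key], key
        else:
            assert original[key] == revision[key], key

original = {"id": "ds1", "name": "pbmc", "dataset_id": "d1"}
revision = {"id": "ds2", "name": "pbmc", "dataset_id": "d2"}
compare_dicts(original, revision, ("dataset_id", "id"))  # passes silently
```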
# -*- coding: utf-8 -*-
# Generated by Django 1.9.1 on 2016-01-27 17:24
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('kawacon2016', '0002_auto_20160127_1922'),
    ]

    operations = [
        migrations.AlterField(
            model_name='signupextra',
            name='needs_lodging',
            field=models.ManyToManyField(blank=True, help_text='V\xe4nk\xe4rin\xe4 saat tarvittaessa maksuttoman majoituksen lattiamajoituksessa. Merkitse t\xe4h\xe4n, min\xe4 \xf6in\xe4 tarvitset lattiamajoitusta.', to='kawacon2016.Night', verbose_name='Majoitustarve lattiamajoituksessa'),
        ),
    ]
#! /usr/bin/python2.7
# -*- coding: utf-8 -*-
"""
Created on 2017-4-06
@module: MyMAIL
@used: send mail
"""
import smtplib
import mimetypes
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage
from MyLOG import MyLog
from botasky.utils.MyFILE import project_abdir, recursiveSearchFile
logConfig = recursiveSearchFile(project_abdir, '*logConfig.ini')[0]
mylog = MyLog(logConfig, 'MyMAIL.py')
logger = mylog.outputLog()
__all__ = ['MyMail']
__author__ = 'zhihao'
mail_info = {'mail_host': 'smtp.163.com',
             'mail_user': '15895890858',
             'mail_pass': 'zhi@hao@111',
             'mail_postfix': '163.com'}
class MyMail():
    '''
    used : send mail
    '''
    def __init__(self, mail_info):
        '''
        used : init mail
        :param mail_info: smtp server config
        '''
        self.mail_info = mail_info
    def send_mail(self, to_list, mail_type, subject, content, attachment_list, img_list):
        '''
        used : send mail
        :param to_list: target mail addresses
        :param mail_type: plain or html
        :param subject: title
        :param content: main body
        :param attachment_list: attachment
        :param img_list: picture
        :return:
        '''
        my_adress = "0905zhihao" + "<" + self.mail_info['mail_user'] + "@" + self.mail_info['mail_postfix'] + ">"
        msg = MIMEMultipart()
        msg['Subject'] = subject
        msg['From'] = my_adress
        msg['To'] = ";".join(to_list)
        # main text
        if mail_type == 'plain' or mail_type == 'html':
            try:
                body_msg = MIMEText(content, _subtype=mail_type, _charset='gb2312')
                msg.attach(body_msg)
                exec_info = "[action]:init msg" \
                            "[status]:OK" \
                            "[Subject]:{Subject}" \
                            "[From]:{From}" \
                            "[To]:{To}".format(Subject=msg['Subject'], From=msg['From'], To=msg['To'])
                logger.info(exec_info)
            except Exception, e:
                print Exception, ":", e
                error_msg = "[action]:init msg" \
                            "[status]:FAIL" \
                            "[Errorcode]:{e}" \
                            "[Subject]:{Subject}" \
                            "[From]:{From}" \
                            "[To]:{To}".format(Subject=msg['Subject'], From=msg['From'], To=msg['To'], e=e)
                logger.error(error_msg)
        else:
            error_msg = "[action]:send mail_type" \
                        "[status]:FAIL" \
                        "[Errorcode]mail_type is not format" \
                        "[Subject]:{Subject}" \
                        "[From]:{From}" \
                        "[To]:{To}".format(Subject=msg['Subject'], From=msg['From'], To=msg['To'])
            print error_msg
            logger.info(error_msg)
        # attachment
        if attachment_list == '' or len(attachment_list) == 0:
            pass
        else:
            for attachment in attachment_list:
                try:
                    att = MIMEText(open(attachment, 'rb').read(), 'base64', 'gb2312')
                    att["Content-Type"] = 'application/octet-stream'
                    # display name
                    att["Content-Disposition"] = 'attachment; filename="'+attachment+'\"\''
                    msg.attach(att)
                    exec_info = "[action]:add attachment" \
                                "[status]:OK" \
                                "[attachment]:{attachment}" \
                                "[Subject]:{Subject}" \
                                "[From]:{From}" \
                                "[To]:{To}".format(attachment=attachment, Subject=msg['Subject'],
                                                   From=msg['From'], To=msg['To'])
                    logger.info(exec_info)
                except Exception, e:
                    print Exception, ":", e
                    error_msg = "[action]:add attachment" \
                                "[status]:FAIL" \
                                "[Errorcode]:{e}" \
                                "[attachment]={attachment}" \
                                "[Subject]:{Subject}" \
                                "[From]:{From}" \
                                "[To]:{To}".format(Subject=msg['Subject'], From=msg['From'],
                                                   attachment=attachment, To=msg['To'], e=e)
                    logger.error(error_msg)
        # img
        if img_list == '' or len(img_list) == 0:
            pass
        else:
            for image_adress in img_list:
                try:
                    image = MIMEImage(open(image_adress, 'rb').read())
                    image.add_header('Content-ID', '<image1>')
                    msg.attach(image)
                    exec_info = "[action]:add image" \
                                "[status]:OK" \
                                "[image]:{image}" \
                                "[Subject]:{Subject}" \
                                "[From]:{From}" \
                                "[To]:{To}".format(image=image_adress, Subject=msg['Subject'],
                                                   From=msg['From'], To=msg['To'])
                    logger.info(exec_info)
                except Exception, e:
                    print Exception, ":", e
                    error_msg = "[action]:add image" \
                                "[status]:FAIL" \
                                "[Errorcode]:{e}" \
                                "[image]:{image}" \
                                "[Subject]:{Subject}" \
                                "[From]:{From}" \
                                "[To]:{To}".format(Subject=msg['Subject'], From=msg['From'],
                                                   image=image_adress, To=msg['To'], e=e)
                    logger.error(error_msg)
        # send mail
        try:
            server = smtplib.SMTP()
            server.connect(self.mail_info['mail_host'])
            server.login(self.mail_info['mail_user'], self.mail_info['mail_pass'])
            # Pass the recipient list itself: smtplib treats a plain string as a
            # single address, so the ';'-joined 'To' header would lose recipients.
            server.sendmail(my_adress, to_list, msg.as_string())
            server.quit()
            exec_info = "[action]:send mail" \
                        "[status]:OK" \
                        "[Subject]:{Subject}" \
                        "[From]:{From}" \
                        "[To]:{To}".format(Subject=msg['Subject'], From=msg['From'], To=msg['To'])
            logger.info(exec_info)
        except Exception, e:
            print Exception, ":", e
            error_msg = "[action]:send mail" \
                        "[status]:FAIL" \
                        "[Errorcode]:{e}" \
                        "[Subject]:{Subject}" \
                        "[From]:{From}" \
                        "[To]:{To}".format(Subject=msg['Subject'], From=msg['From'], To=msg['To'], e=e)
            logger.error(error_msg)
if __name__ == '__main__':
    '''
    mail_info = {'mail_host': 'smtp.163.com',
                 'mail_user': '15002283621',
                 'mail_pass': 'zhihao1206',
                 'mail_postfix': '163.com'}
    #to_list = ['15002283621@163.com']
    to_list = ['1204207658@qq.com']
    subject = 'xxxxxxxxxxxxx'
    content = 'xxxxxxxxxxxxx'
    #attachment_list = ['F:\img\img.rar', 'F:\img\img2.rar']
    attachment_list = []
    #img_list = ['F:\img\\1025.jpg', 'F:\img\\1041.jpg']
    img_list = []
    mail = MyMail(mail_info)
    mail.send_mail(to_list, 'plain', subject, content, attachment_list, img_list)
    '''
    import MyMAIL
    help(MyMAIL)
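The same multipart-assembly pattern in Python 3 syntax — a minimal sketch that only builds the message, with placeholder addresses (no SMTP connection is opened):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_message(sender, to_list, subject, content, subtype="plain"):
    # Assemble a multipart message; attachments and inline images would be
    # msg.attach()ed the same way as in the class above.
    msg = MIMEMultipart()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = ", ".join(to_list)  # RFC 5322 separates addresses with commas
    msg.attach(MIMEText(content, _subtype=subtype, _charset="utf-8"))
    return msg

msg = build_message("me@example.com",
                    ["a@example.com", "b@example.com"],
                    "status", "all good")
```

To actually send it, open `smtplib.SMTP(host)` and call `server.sendmail(sender, to_list, msg.as_string())` — note that `sendmail` expects the recipient list, not the joined header string.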
#
# Copyright (C) 2019 Databricks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
String functions on Koalas Series
"""
from typing import TYPE_CHECKING
import numpy as np
from pyspark.sql.types import StringType, BinaryType, BooleanType
from databricks.koalas.base import _wrap_accessor_pandas
if TYPE_CHECKING:
    import databricks.koalas as ks
class StringMethods(object):
    """String methods for Koalas Series"""
    def __init__(self, series: 'ks.Series'):
        if not isinstance(series.spark_type, (StringType, BinaryType)):
            raise ValueError(
                "Cannot call StringMethods on type {}"
                .format(series.spark_type))
        self._data = series
        self.name = self._data.name

    # Methods
    def capitalize(self) -> 'ks.Series':
        """
        Convert strings in the Series/Index to be capitalized.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.capitalize(),
            StringType()
        ).alias(self.name)

    def lower(self) -> 'ks.Series':
        """
        Convert strings in the Series/Index to all lowercase.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.lower(),
            StringType()
        ).alias(self.name)

    def upper(self) -> 'ks.Series':
        """
        Convert strings in the Series/Index to all uppercase.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.upper(),
            StringType()
        ).alias(self.name)

    def swapcase(self) -> 'ks.Series':
        """
        Convert strings in the Series/Index to be swapcased.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.swapcase(),
            StringType()
        ).alias(self.name)

    def startswith(self, pattern, na=np.NaN) -> 'ks.Series':
        """
        Test if the start of each string element matches a pattern.

        Equivalent to :func:`str.startswith`.

        Parameters
        ----------
        pattern : str
            Character sequence. Regular expressions are not accepted.
        na : object, default NaN
            Object shown if element is not a string.

        Returns
        -------
        Series of bool
            Koalas Series of booleans indicating whether the given pattern
            matches the start of each string element.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.startswith(pattern, na),
            BooleanType()
        ).alias(self.name)
    def endswith(self, pattern, na=np.NaN) -> 'ks.Series':
        """
        Test if the end of each string element matches a pattern.

        Equivalent to :func:`str.endswith`.

        Parameters
        ----------
        pattern : str
            Character sequence. Regular expressions are not accepted.
        na : object, default NaN
            Object shown if element is not a string.

        Returns
        -------
        Series of bool
            Koalas Series of booleans indicating whether the given pattern
            matches the end of each string element.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.endswith(pattern, na),
            BooleanType()
        ).alias(self.name)

    def strip(self, to_strip=None) -> 'ks.Series':
        """
        Remove leading and trailing characters.

        Strip whitespaces (including newlines) or a set of specified
        characters from each string in the Series/Index from left and
        right sides. Equivalent to :func:`str.strip`.

        Parameters
        ----------
        to_strip : str
            Specifying the set of characters to be removed. All combinations
            of this set of characters will be stripped. If None then
            whitespaces are removed.

        Returns
        -------
        Series of str
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.strip(to_strip),
            StringType()
        ).alias(self.name)

    def lstrip(self, to_strip=None) -> 'ks.Series':
        """
        Remove leading characters.

        Strip whitespaces (including newlines) or a set of specified
        characters from each string in the Series/Index from left side.
        Equivalent to :func:`str.lstrip`.

        Parameters
        ----------
        to_strip : str
            Specifying the set of characters to be removed. All combinations
            of this set of characters will be stripped. If None then
            whitespaces are removed.

        Returns
        -------
        Series of str
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.lstrip(to_strip),
            StringType()
        ).alias(self.name)

    def rstrip(self, to_strip=None) -> 'ks.Series':
        """
        Remove trailing characters.

        Strip whitespaces (including newlines) or a set of specified
        characters from each string in the Series/Index from right side.
        Equivalent to :func:`str.rstrip`.

        Parameters
        ----------
        to_strip : str
            Specifying the set of characters to be removed. All combinations
            of this set of characters will be stripped. If None then
            whitespaces are removed.

        Returns
        -------
        Series of str
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.rstrip(to_strip),
            StringType()
        ).alias(self.name)

    def get(self, i) -> 'ks.Series':
        """
        Extract element from each string in the Series/Index at the
        specified position.

        Parameters
        ----------
        i : int
            Position of element to extract.

        Returns
        -------
        Series of objects
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.get(i),
            StringType()
        ).alias(self.name)
    def isalnum(self) -> 'ks.Series':
        """
        Check whether all characters in each string are alphanumeric.

        This is equivalent to running the Python string method
        :func:`str.isalnum` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.isalnum(),
            BooleanType()
        ).alias(self.name)

    def isalpha(self) -> 'ks.Series':
        """
        Check whether all characters in each string are alphabetic.

        This is equivalent to running the Python string method
        :func:`str.isalpha` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.isalpha(),
            BooleanType()
        ).alias(self.name)

    def isdigit(self) -> 'ks.Series':
        """
        Check whether all characters in each string are digits.

        This is equivalent to running the Python string method
        :func:`str.isdigit` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.isdigit(),
            BooleanType()
        ).alias(self.name)

    def isspace(self) -> 'ks.Series':
        """
        Check whether all characters in each string are whitespaces.

        This is equivalent to running the Python string method
        :func:`str.isspace` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.isspace(),
            BooleanType()
        ).alias(self.name)

    def islower(self) -> 'ks.Series':
        """
        Check whether all characters in each string are lowercase.

        This is equivalent to running the Python string method
        :func:`str.islower` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.islower(),
            BooleanType()
        ).alias(self.name)

    def isupper(self) -> 'ks.Series':
        """
        Check whether all characters in each string are uppercase.

        This is equivalent to running the Python string method
        :func:`str.isupper` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.isupper(),
            BooleanType()
        ).alias(self.name)

    def istitle(self) -> 'ks.Series':
        """
        Check whether all characters in each string are titlecase.

        This is equivalent to running the Python string method
        :func:`str.istitle` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.istitle(),
            BooleanType()
        ).alias(self.name)

    def isnumeric(self) -> 'ks.Series':
        """
        Check whether all characters in each string are numeric.

        This is equivalent to running the Python string method
        :func:`str.isnumeric` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.isnumeric(),
            BooleanType()
        ).alias(self.name)

    def isdecimal(self) -> 'ks.Series':
        """
        Check whether all characters in each string are decimals.

        This is equivalent to running the Python string method
        :func:`str.isdecimal` for each element of the Series/Index.
        If a string has zero characters, False is returned for that check.
        """
        return _wrap_accessor_pandas(
            self,
            lambda x: x.str.isdecimal(),
            BooleanType()
        ).alias(self.name)
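Each wrapper above delegates to the pandas `Series.str` method of the same name, which in turn mirrors Python's built-in `str` methods. A dependency-free sketch of the semantics these wrappers expose (sample values are illustrative):

```python
values = ["Spark", "  koalas  ", "42"]

capitalized = [v.capitalize() for v in values]      # like .str.capitalize()
stripped    = [v.strip() for v in values]           # like .str.strip()
numeric     = [v.isnumeric() for v in values]       # like .str.isnumeric()
starts      = [v.startswith("Sp") for v in values]  # like .str.startswith("Sp")
```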
import pytest
import six
from turbodbc import connect
from query_fixture import query_fixture
from helpers import for_one_database, open_cursor
@for_one_database
def test_many_batches_with_async_io(dsn, configuration):
    with open_cursor(configuration, use_async_io=True) as cursor:
        with query_fixture(cursor, configuration, 'INSERT INTEGER') as table_name:
            # insert 2^16 rows
            cursor.execute("INSERT INTO {} VALUES (1)".format(table_name))
            for _ in six.moves.range(16):
                cursor.execute("INSERT INTO {} SELECT * FROM {}".format(table_name,
                                                                        table_name))
            cursor.execute("SELECT * FROM {}".format(table_name))
            assert sum(1 for _ in cursor) == 2**16
58593da1cc559e0383548c77af9516f78e6dbe07 | 8,223 | py | Python | VIP_modules/widgets/ResultCanvas_QTAgg.py | Nikolaj-K/lab-control-GUI | 3c7811de57f110870cf4740743fd84b76d918ad3 | [
"MIT"
] | 17 | 2017-05-24T13:31:31.000Z | 2021-12-04T22:47:33.000Z | VIP_modules/widgets/ResultCanvas_QTAgg.py | Nikolaj-K/lab-control-GUI | 3c7811de57f110870cf4740743fd84b76d918ad3 | [
"MIT"
] | null | null | null | VIP_modules/widgets/ResultCanvas_QTAgg.py | Nikolaj-K/lab-control-GUI | 3c7811de57f110870cf4740743fd84b76d918ad3 | [
"MIT"
] | 6 | 2017-11-21T01:32:33.000Z | 2020-12-15T05:28:17.000Z | import random
import numpy as np
import operator
from scipy import optimize
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg
from matplotlib.figure import Figure as MatplotlibFigure
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm as color_map
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import interface.auxiliary_functions as auxi
import dictionaries.constants as cs
#################################################################################
class ResultsCanvas(FigureCanvasQTAgg):
    def __init__(self, canvas_ref, vip):
        self._Figure = MatplotlibFigure(figsize=cs.FIG_SIZE, dpi=cs.DPI)  # , tight_layout=True, frameon=True
        super(ResultsCanvas, self).__init__(self._Figure)
        self.update_figure(canvas_ref, vip)

    def _from_options(self, canvas_ref, vip):
        self.Axes.set_position(self._get_axes_position(vip))
        labels_x = self.Axes.xaxis.get_ticklabels()
        labels_y = self.Axes.yaxis.get_ticklabels()
        fontsize = vip.get('Options', 'R_axes_font_size')
        angle = vip.get('Options', 'R_x_plot_label_rotation')
        for label in labels_x + labels_y:
            label.set_fontsize(fontsize)
        if vip.get(canvas_ref, 'F_plot_function') == 'density':
            for label in labels_x:
                label.set_rotation(angle)

    def _get_axes_position(self, vip):
        session_keys = ['R_x_plot_position', 'R_y_plot_position', 'R_x_plot_size', 'R_y_plot_size']
        f = lambda k: float(vip.get('Options', k))
        return map(f, session_keys)
#################################################################################
class Canvas2dData(ResultsCanvas):
    def __init__(self, canvas_ref, vip):
        super(Canvas2dData, self).__init__(canvas_ref, vip)

    def update_figure(self, canvas_ref, vip):
        self._Figure.clear()
        #from numpy.random import rand
        #x, y, c, s = rand(4, 100)
        #def onpick3(event):
        #    ind = event.ind
        #    print 'onpick3 scatter:', ind, np.take(x_axis, ind), np.take(y_axis, ind)
        #self._Figure.canvas.mpl_connect('pick_event', onpick3)
        try:
            data_set = vip.get(canvas_ref, 'F_data_set')
            plot_data2D = vip.plot_data[data_set]['2d_data']
            ########## Axes
            self.Axes = self._Figure.add_axes(cs.AXES_POSITION_INIT)
            x_axis = plot_data2D['axis_1']
            y_axis = plot_data2D['axis_r']
            self.Axes.plot(x_axis, y_axis, auxi.colour(cs.PLOT_COLOR_RANGE))
            #self.Axes.set_xlim([x_axis[0], x_axis[-1]])
            self.Axes.set_xlim(sorted([x_axis[0], x_axis[-1]]))
            self._from_options(canvas_ref, vip)
            self.Axes.set_xlabel(plot_data2D['label_1'])
            self.Axes.set_ylabel(plot_data2D['label_r'])
            #self.Axes.hold(False)
            ########## Extrema
            #max_index, max_y = max(enumerate(y_axis), key=operator.itemgetter(1))
            #vip.maximal_x = x_axis[max_index]
            min_index, min_y = min(enumerate(y_axis), key=operator.itemgetter(1))
            vip.minimal_x = x_axis[min_index]
            print "* GLOBAL MINIMUM:\n{0}".format(vip.minimal_x)
            if canvas_ref in ['Plot_column_1']:
                ########## Savitzky Golay Filter
                ws = len(y_axis) / cs.SAVITZKY_GOLAY_FILTER_RANGE_DENOMINATOR
                ws = ws if (ws % 2 == 1) else (ws + 1)
                try:
                    y_axis_sg = auxi.savitzky_golay_filter(y_axis, window_size=ws, order=cs.SAVITZKY_GOLAY_FILTER_ORDER)
                    self.Axes.plot(x_axis, y_axis_sg, cs.FILTER_CURVE_STYLE, linewidth=cs.FILTER_LINEWIDTH)
                except TypeError as exception:
                    print "! (update_figure) couldn't compute 'savitzky_golay_filter':"
                    print exception
                ########## Fit
                try:
                    def lorenzian_fit(x, A, k, ke):
                        """Take min_x of this session and define a fit function"""
                        def h(ke_):
                            return (k / 2 - ke_)**2 + (x - vip.minimal_x)**2
                        r = A * h(ke) / h(0)
                        return auxi.to_dB(r)
                    parameters, covariance = optimize.curve_fit(lorenzian_fit, x_axis, y_axis_sg)
                    LINE = 40 * "." + "\n"
                    print LINE
                    print "LORENZIAN FIT AT FILTER CURVE MINIMUM:\n"
                    print "* PARAMETERS:\n\n  [A, kappa, kappa_e]\n= {0}\n".format(parameters)
                    # parameters is [A, kappa, kappa_e], so the ratio is [2] / [1]
                    print "* PARAMETERS:\n\n  kappa_e / kappa\n= {0}\n" .format(parameters[2] / parameters[1])
                    print "* COVARIANCE:\n\n  Matrix\n= {0}\n" .format(covariance)
                    print "* MINIMUM: \n\n  (x,y)\n= ({0}, {1})\n" .format(x_axis[min_index], y_axis[min_index])
                    print LINE
                    fit_function = lambda x: lorenzian_fit(x, *parameters)
                    y_axis_fit = map(fit_function, x_axis)
                    self.Axes.plot(x_axis, y_axis_fit, cs.FITTING_CURVE_STYLE, linewidth=cs.FITTING_LINEWIDTH, linestyle=cs.FITTING_LINESTYLE)
                except:
                    print "! (update_figure) couldn't fit to lorenzian_fit."
            else:
                pass
            try:
                self.draw()
            except ValueError:
                message = "! (update_figure, ValueError) at vip.draw."
                vip.GUI_feedback(message)
        except KeyError:
            message = "! (update_figure) The specified dataset might not exist."
            vip.GUI_feedback(message)
#################################################################################
class Canvas3dData(ResultsCanvas):
    def __init__(self, canvas_ref, vip):
        super(Canvas3dData, self).__init__(canvas_ref, vip)

    def update_figure(self, canvas_ref, vip):
        self._Figure.clear()
        try:
            data_set = vip.get(canvas_ref, 'F_data_set')
            plot_data3D = vip.plot_data[data_set]['3d_data']
            ########## Axes
            X, Y = np.meshgrid(plot_data3D['axis_1'], plot_data3D['axis_2'])
            Z = np.array(plot_data3D['axis_r'])
            if vip.get(canvas_ref, 'F_plot_function') == 'density':
                self.Axes = self._Figure.add_axes(cs.AXES_POSITION_INIT)
                self.Axes.pcolormesh(X, Y, Z, cmap=color_map.coolwarm)
            elif vip.get(canvas_ref, 'F_plot_function') == 'surface':
                self.Axes = Axes3D(self._Figure)
                surf = self.Axes.plot_surface(X, Y, Z, cmap=color_map.coolwarm, rstride=1, cstride=1, linewidth=0.15, antialiased=False)
                self.Axes.zaxis.set_major_locator(LinearLocator(10))
                self.Axes.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
                #self.Axes.set_zlim(-1.01, 1.01)
                position_color_bar = [0.015, 0.17, 0.015, 0.75]
                Axes_color_bar = self._Figure.add_axes(position_color_bar)
                self._Figure.colorbar(surf, cax=Axes_color_bar)
            self._from_options(canvas_ref, vip)
            #self.Axes.hold(False)
            self.Axes.set_xlabel(plot_data3D['label_1'])
            self.Axes.set_ylabel(plot_data3D['label_2'])
            ########## / Axes
            try:
                self.draw()
            except ValueError:
                message = "(update_figure, vip.draw, ValueError)"
                vip.GUI_feedback(message)
        except KeyError:
            message = "The specified dataset might not exist"
            vip.GUI_feedback(message)
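The fit model above is a Lorentzian dip expressed in decibels. A self-contained numeric sketch of the model, assuming `auxi.to_dB` is the usual `10 * log10` power-ratio conversion (the original module does not show its definition), with the session minimum passed in explicitly as `x0`:

```python
import math

def to_dB(r):
    # Assumption: power ratio converted to decibels.
    return 10 * math.log10(r)

def lorenzian_fit(x, A, k, ke, x0):
    # h(ke_) = (k/2 - ke_)^2 + (x - x0)^2, matching the nested h() above.
    h = lambda ke_: (k / 2 - ke_)**2 + (x - x0)**2
    return to_dB(A * h(ke) / h(0))

# At resonance (x == x0) with ke == 0 the model reduces to to_dB(A):
value = lorenzian_fit(0.0, 1.0, 2.0, 0.0, 0.0)
```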
#!/usr/bin/env python
# Given weighted graph, perform kruskal-based clustering
from common import *
from cluster_common import *
import argparse
import csv
import pickle as pickle
from collections import defaultdict
class unionfind:
mp = {}
blacklisted_edges = set()
# blacklisted_e_nodes = set()
# blacklist_edges_adj = defaultdict(set)
def get_id(self, a):
if a not in self.mp:
self.mp[a] = a
return a
if self.mp[a] == a:
return a
else:
self.mp[a] = self.get_id(self.mp[a])
return self.mp[a]
def mergeset(self, a, b):
self.mp[self.get_id(b)] = self.get_id(a)
def mergeall(self, a):
d = self.get_id(a[0])
for b in a[1:]:
if not self.check_for_blacklist(b, d):
self.mp[self.get_id(b)] = d
def disallow(self, v1, v2):
if v2 > v1:
v1, v2 = v2, v1
self.blacklisted_edges.add((v1, v2))
# self.blacklisted_e_nodes.add(v1)
# self.blacklisted_e_nodes.add(v2)
# self.blacklist_edges[v1].add(v2)
# self.blacklist_edges[v2].add(v1)
def check_for_blacklist(self, v1, v2):
v1, v2 = self.get_id(v1), self.get_id(v2)
if v2 > v1:
v1, v2 = v2, v1
for e1, e2 in self.blacklisted_edges:
c1, c2 = self.get_id(e1), self.get_id(e2)
if c2 > c1:
c1, c2 = c2, c1
if c1 == v1 and c2 == v2:
return True
return False
def trymerge(self, v1, v2):
if self.get_id(v1) != self.get_id(v2) and not self.check_for_blacklist(v1, v2):
self.mergeset(v1, v2)
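The `unionfind` class above compresses paths recursively in `get_id` and merges sets by repointing roots in `mergeset`. A compact, self-contained illustration of the same find/union behaviour with iterative path compression (hypothetical vertex ids, not tied to the edge-list data):

```python
class UnionFind:
    """Minimal union-find with path compression, mirroring unionfind.get_id/mergeset."""
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            # Path compression: point a at its grandparent while walking up.
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a

    def union(self, a, b):
        # Attach b's root under a's root, as mergeset does.
        self.parent[self.find(b)] = self.find(a)

uf = UnionFind()
uf.union(1, 2)
uf.union(2, 3)
print(uf.find(3) == uf.find(1))  # True: 1, 2, 3 share one root
print(uf.find(4) == uf.find(1))  # False: 4 was never merged
```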
def main():
parser = argparse.ArgumentParser()
parser.add_argument('edgelist')
parser.add_argument('outfile', nargs='?')
parser.add_argument('-t', '--interconnectivity', default=0.82, type=float)
parser.add_argument('-A', '--with-analysis', action='store_true')
parser.add_argument('-a', '--authorprefeat', default='generated/Author_prefeat.pickle')
parser.add_argument('-s', '--seedset', nargs='*', default=['data/goldstd_clusters.csv', 'data/seedset_clusters.csv'])
parser.add_argument('-S', '--seededges', nargs='*', default=['data/train.csv'])
parser.add_argument('-b', '--blacklist', nargs='*', default=['data/blacklist_edges.csv', 'data/train.csv', 'data/train_extra.csv'])
args = parser.parse_args()
if args.outfile == None:
args.outfile = args.edgelist.replace('.prob','') + '.clusters'
threshold_interconnectivity = args.interconnectivity
print_err("Loading graph")
	reader = csv.reader(enforce_min(skip_comments(open(args.edgelist, 'r')), threshold_interconnectivity))
edges = []
for i, line in enumerate(reader):
line[0:2] = list(map(int, line[0:2]))
line[2] = float(line[2])
edges.append((line[2], line[0], line[1]))
if (i+1) % 10000 == 0:
print_err(i+1, "edges done")
print_err("Sorting edges by weight")
edges = sorted(edges, reverse=True)
uf = unionfind()
if args.blacklist:
for filename in args.blacklist:
			with open(filename, 'r') as f:
reader = csv.reader(skip_comments(f))
for line in reader:
line[0:3] = list(map(int, line[0:3]))
if len(line) > 2:
if line[0] != 0:
continue
line = line[1:]
uf.disallow(line[0], line[1])
if args.seedset:
print_err("Loading seedset(s)")
for filename in args.seedset:
for cl in loadClusters(filename):
if len(cl) < 2:
continue
uf.mergeall(cl)
if args.seededges:
		for filename in args.seededges:
			with open(filename, 'r') as f:
reader = csv.reader(skip_comments(f))
for line in reader:
line[0:3] = list(map(int, line[0:3]))
if line[0] != 1:
continue
line = line[1:]
uf.trymerge(line[0], line[1])
print_err("Clustering")
for i, (w, v1, v2) in enumerate(edges):
uf.trymerge(v1, v2)
if (i+1) % 10000 == 0:
print_err(i+1, "edges done")
clusters = defaultdict(list)
for v in uf.mp:
clusters[uf.get_id(v)].append(v)
clusters = [v for v in list(clusters.values()) if len(v) > 1]
clusters = sorted(clusters, key=len, reverse=True)
print_err("Writing clusters")
	f_out = open(args.outfile, 'w')
if not args.with_analysis:
for cl in clusters:
f_out.write(','.join(map(str, sorted(cl))) + '\n')
if args.with_analysis:
print_err("Loading pickled author pre-features")
authors = pickle.load(open(args.authorprefeat, 'rb'))
import networkx as nx
		G_sim = nx.read_weighted_edgelist(skip_comments(open(args.edgelist, 'r')), nodetype=int, delimiter=',')
outputClusters(clusters, f_out, G_sim, authors)
if __name__ == "__main__":
main()
# ===== file: tests/test_error.py (repo: iotanbo/iotanbo_py_utils, license: MIT) =====
"""Test `iotanbo_py_utils.error.py`."""
from iotanbo_py_utils.error import Error
from iotanbo_py_utils.error import ErrorKind
def test_error_one_of_arithmetic_errors() -> None:
errs = (
Error(ErrorKind.ArithmeticError),
Error(ErrorKind.FloatingPointError),
Error(ErrorKind.OverflowError),
Error(ErrorKind.ZeroDivisionError),
)
for err in errs:
assert err.one_of_arithmetic_errors()
# == negative path ==
err = Error(ErrorKind.ValueError)
assert not err.one_of_arithmetic_errors()
def test_error_one_of_import_errors() -> None:
errs = (
Error(ErrorKind.ImportError),
Error(ErrorKind.ModuleNotFoundError),
)
for err in errs:
assert err.one_of_import_errors()
# == negative path ==
err = Error(ErrorKind.ValueError)
assert not err.one_of_import_errors()
def test_error_one_of_lookup_errors() -> None:
errs = (
Error(ErrorKind.LookupError),
Error(ErrorKind.IndexError),
Error(ErrorKind.KeyError),
)
for err in errs:
assert err.one_of_lookup_errors()
# == negative path ==
err = Error(ErrorKind.ValueError)
assert not err.one_of_lookup_errors()
def test_error_one_of_name_errors() -> None:
errs = (
Error(ErrorKind.NameError),
Error(ErrorKind.UnboundLocalError),
)
for err in errs:
assert err.one_of_name_errors()
# == negative path ==
err = Error(ErrorKind.ValueError)
assert not err.one_of_name_errors()
def test_error_one_of_os_errors() -> None:
errs = (
Error(ErrorKind.OSError),
Error(ErrorKind.BlockingIOError),
Error(ErrorKind.ChildProcessError),
Error(ErrorKind.ConnectionError),
Error(ErrorKind.BrokenPipeError),
Error(ErrorKind.ConnectionAbortedError),
Error(ErrorKind.ConnectionRefusedError),
Error(ErrorKind.ConnectionResetError),
Error(ErrorKind.FileExistsError),
Error(ErrorKind.FileNotFoundError),
Error(ErrorKind.InterruptedError),
Error(ErrorKind.IsADirectoryError),
Error(ErrorKind.NotADirectoryError),
Error(ErrorKind.PermissionError),
Error(ErrorKind.ProcessLookupError),
Error(ErrorKind.TimeoutError),
)
for err in errs:
assert err.one_of_os_errors()
# == negative path ==
err = Error(ErrorKind.ValueError)
assert not err.one_of_os_errors()
def test_error_one_of_runtime_errors() -> None:
errs = (
Error(ErrorKind.RuntimeError),
Error(ErrorKind.NotImplementedError),
Error(ErrorKind.RecursionError),
)
for err in errs:
assert err.one_of_runtime_errors()
# == negative path ==
err = Error(ErrorKind.ValueError)
assert not err.one_of_runtime_errors()
def test_error_one_of_syntax_errors() -> None:
errs = (
Error(ErrorKind.SyntaxError),
Error(ErrorKind.IndentationError),
Error(ErrorKind.TabError),
)
for err in errs:
assert err.one_of_syntax_errors()
# == negative path ==
err = Error(ErrorKind.ValueError)
assert not err.one_of_syntax_errors()
def test_error_one_of_value_errors() -> None:
errs = (
Error(ErrorKind.ValueError),
Error(ErrorKind.UnicodeError),
Error(ErrorKind.UnicodeDecodeError),
Error(ErrorKind.UnicodeEncodeError),
Error(ErrorKind.UnicodeTranslateError),
)
for err in errs:
assert err.one_of_value_errors()
# == negative path ==
err = Error(ErrorKind.SyntaxError)
assert not err.one_of_value_errors()
def test_error_one_of_warnings() -> None:
errs = (
Error(ErrorKind.Warning),
Error(ErrorKind.DeprecationWarning),
Error(ErrorKind.PendingDeprecationWarning),
Error(ErrorKind.RuntimeWarning),
Error(ErrorKind.SyntaxWarning),
Error(ErrorKind.UserWarning),
Error(ErrorKind.FutureWarning),
Error(ErrorKind.ImportWarning),
Error(ErrorKind.UnicodeWarning),
Error(ErrorKind.BytesWarning),
Error(ErrorKind.ResourceWarning),
)
for err in errs:
assert err.one_of_warnings()
# == negative path ==
err = Error(ErrorKind.ValueError)
assert not err.one_of_warnings()
class _CustomException(Exception):
...
def test_error_from_exception() -> None:
# error from exception, preserve kind
e = Error.from_exception(ValueError("test"))
assert e.kind == ErrorKind.ValueError
assert not e.cause
assert e.msg == "test"
# new kind replaces exception's kind
e = Error.from_exception(ValueError("test"), new_kind=ErrorKind.Warning)
assert e.kind == ErrorKind.Warning
assert e.cause == ErrorKind.ValueError
assert e.msg == "test"
# error from custom exception, preserve kind
try:
raise _CustomException()
except _CustomException as ex:
e = Error.from_exception(ex)
assert e.kind == "_CustomException"
assert not e.cause
assert e.msg == ""
# ===== file: amazon/model/admin.py (repo: Lakshmivijaykrishnan/mini-amazon, license: Unlicense) =====
from amazon.model import db
def __search_by_admin_name(username):
query={'username': username}
matching_user = db['users'].find(query)
if matching_user.count() > 0:
return matching_user.next()
else:
return None
# ===== file: src/openbiolink/graph_creation/metadata_db_file/edge/dbMetaEdgeSiderInd.py (repo: cthoyt/OpenBioLink, license: MIT) =====
from openbiolink.graph_creation.metadata_db_file.edge.dbMetadataEdge import DbMetadataEdge
from openbiolink.graph_creation.types.dbType import DbType
class DbMetaEdgeSiderInd(DbMetadataEdge):
NAME = 'Edge - Sider - Indications'
URL = "http://sideeffects.embl.de/media/download/meddra_all_indications.tsv.gz"
OFILE_NAME = "SIDER_dis_drug.tsv.gz"
COLS = ['stichID', 'umlsID', 'method', 'umlsName', 'medDRAumlsType',
'medDRAumlsID', 'medDRAumlsName']
FILTER_COLS = ['umlsID', 'stichID', 'method']
HEADER = 0
DB_TYPE = DbType.DB_EDGE_SIDER_IND
def __init__(self):
super().__init__(url= DbMetaEdgeSiderInd.URL,
ofile_name= DbMetaEdgeSiderInd.OFILE_NAME,
dbType= DbMetaEdgeSiderInd.DB_TYPE) | 43.833333 | 90 | 0.69455 | 84 | 789 | 6.22619 | 0.571429 | 0.051625 | 0.076482 | 0.107075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001577 | 0.196451 | 789 | 18 | 91 | 43.833333 | 0.823344 | 0 | 0 | 0 | 0 | 0 | 0.258228 | 0.026582 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.133333 | 0 | 0.733333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
586d9bd962737e276a73a87798d6fdc63e31cd16 | 503 | py | Python | algos/lcs.py | asaini/algo-py | e9d18ef82d14e6304430bbd8b065430e76aa7eb8 | [
"MIT"
] | 1 | 2015-10-01T21:17:10.000Z | 2015-10-01T21:17:10.000Z | algos/lcs.py | asaini/algo-py | e9d18ef82d14e6304430bbd8b065430e76aa7eb8 | [
"MIT"
] | null | null | null | algos/lcs.py | asaini/algo-py | e9d18ef82d14e6304430bbd8b065430e76aa7eb8 | [
"MIT"
] | null | null | null | def lcs(x, y):
"""
Longest Common Subsequence
"""
n = len(x) + 1
m = len(y) + 1
table = [ [0]*m for i in range(n) ]
for i in range(n):
for j in range(m):
# If either string is empty, then lcs = 0
if i == 0 or j == 0:
table[i][j] = 0
elif x[i - 1] == y[j - 1]:
table[i][j] = 1 + table[i-1][j-1]
else:
table[i][j] = max(table[i-1][j], table[i][j-1])
return table[len(x)][len(y)]
if __name__ == '__main__':
x = "AGGTAB"
y = "GXTXAYB"
    print(lcs(x, y))
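The table filled by `lcs` also supports recovering an actual subsequence, not just its length, by walking back from the bottom-right corner. A sketch of that backtracking step over the same DP recurrence:

```python
def lcs_string(x, y):
    """Build the same DP table, then backtrack to recover one longest common subsequence."""
    n, m = len(x), len(y)
    table = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    # Walk back from the corner, collecting matched characters in reverse.
    out, i, j = [], n, m
    while i and j:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

print(lcs_string("AGGTAB", "GXTXAYB"))  # GTAB
```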
# ===== file: hardware/testbenches/common/drivers/state/driver.py (repo: Intuity/nexus, license: Apache-2.0) =====
# Copyright 2021, Peter Birch, mailto:peter@lightlogic.co.uk
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cocotb_bus.drivers import Driver
from cocotb.triggers import RisingEdge
from ..driver_common import BaseDriver
class StateInitiator(BaseDriver):
""" Drivers signal state updates as an initiator """
async def _driver_send(self, transaction, sync=True, **kwargs):
""" Send queued transactions onto the interface.
Args:
transaction: Transaction to send
sync : Align to the rising clock edge before sending
**kwargs : Any other arguments
"""
# Synchronise to the rising edge
if sync: await RisingEdge(self.clock)
# Wait for reset to clear
while self.reset == 1: await RisingEdge(self.clock)
# Drive the request
self.intf.index <= transaction.index
self.intf.is_seq <= transaction.sequential
self.intf.value <= transaction.state
self.intf.update <= 1
await RisingEdge(self.clock)
self.intf.update <= 0
# ===== file: badboids/test/test_simulation_parameters.py (repo: RiannaK/Coursework2, license: MIT) =====
from numpy.testing import assert_array_almost_equal as array_assert
from badboids.boids import SimulationParameters
def test_simulation_parameters_init():
"""Tests Simulation Parameters constructor"""
# Arrange
formation_flying_distance = 800
formation_flying_strength = 0.10
alert_distance = 8
move_to_middle_strength = 0.2
delta_t = 1.5
# Act
sut = SimulationParameters(formation_flying_distance, formation_flying_strength, alert_distance,
move_to_middle_strength, delta_t)
# Assert
array_assert(sut.formation_flying_distance, formation_flying_distance)
array_assert(sut.formation_flying_strength, formation_flying_strength)
array_assert(sut.alert_distance, alert_distance)
array_assert(sut.move_to_middle_strength, move_to_middle_strength)
array_assert(sut.delta_t, delta_t)
def test_get_defaults():
"""Tests Simulation Parameters get defaults method"""
# Arrange
expected_formation_flying_distance = 10000
expected_formation_flying_strength = 0.125
expected_alert_distance = 100
expected_move_to_middle_strength = 0.01
expected_delta_t = 1.0
# Act
parameters = SimulationParameters.get_defaults()
# Assert
assert parameters.formation_flying_distance == expected_formation_flying_distance
assert parameters.formation_flying_strength == expected_formation_flying_strength
assert parameters.alert_distance == expected_alert_distance
assert parameters.move_to_middle_strength == expected_move_to_middle_strength
assert parameters.delta_t == expected_delta_t
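The tests above pin down a parameter object with five fields and a `get_defaults` factory. A hedged dataclass sketch of that shape (a stand-in, not the boids package's actual class) shows how little code such a container needs:

```python
from dataclasses import dataclass

@dataclass
class SimParams:
    """Hypothetical stand-in for SimulationParameters with a defaults factory."""
    formation_flying_distance: float
    formation_flying_strength: float
    alert_distance: float
    move_to_middle_strength: float
    delta_t: float

    @classmethod
    def get_defaults(cls):
        # Default values mirror the expectations asserted in the test above.
        return cls(10000, 0.125, 100, 0.01, 1.0)

p = SimParams.get_defaults()
print(p.delta_t)  # 1.0
```

A dataclass also gives `__eq__` and `__repr__` for free, which keeps assertions like these readable.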
# ===== file: django_town/oauth2/models.py (repo: uptown/django-town, license: MIT) =====
#-*- coding: utf-8 -*-
from django_town.core.settings import OAUTH2_SETTINGS
try:
if not OAUTH2_SETTINGS.ACCESS_TOKEN_SECRET_KEY:
raise ImportError
except KeyError:
# import traceback
# traceback.print_exc()
raise ImportError
from django.db import models
from django.conf import settings
from django.contrib import admin
from django_town.cache.model import CachingModel
from django_town.core.fields import JSONField
from django_town.utils import generate_random_from_vschar_set
class Service(models.Model):
name = models.CharField(max_length=200)
def __unicode__(self):
return self.name
# class ServiceSecretKey(CachingModel):
# cache_key_format = "_ut_o2ss:%(service__pk)d"
#
# service = models.ForeignKey(Service, unique=True)
# secret_key = models.CharField(max_length=OAUTH2_SETTINGS.SERVICE_SECRET_KEY_LENGTH,
# default=lambda: generate_random_from_vschar_set(
# OAUTH2_SETTINGS.SERVICE_SECRET_KEY_LENGTH))
def _generate_random_from_vschar_set_for_client_id():
return generate_random_from_vschar_set(OAUTH2_SETTINGS.CLIENT_ID_LENGTH)
def _generate_random_from_vschar_set_for_client_secret():
return generate_random_from_vschar_set(OAUTH2_SETTINGS.CLIENT_ID_LENGTH)
class Client(CachingModel):
IOS_CLIENT = 1
CLIENT_TYPE = (
(0, "Web"),
(1, "iOS"),
(2, "Android"),
(3, "Win"),
)
cache_key_format = "_ut_o2c:%(client_id)s"
name = models.CharField(max_length=200)
service = models.ForeignKey(Service)
client_id = models.CharField(max_length=OAUTH2_SETTINGS.CLIENT_ID_LENGTH, unique=True,
default=_generate_random_from_vschar_set_for_client_id)
client_secret = models.CharField(max_length=OAUTH2_SETTINGS.CLIENT_SECRET_LENGTH,
default=_generate_random_from_vschar_set_for_client_secret)
redirect_uris = JSONField(blank=True)
default_redirect_uri = models.URLField()
available_scope = JSONField(blank=True)
client_type = models.IntegerField(default=IOS_CLIENT, choices=CLIENT_TYPE)
client_min_version = models.CharField(max_length=20, default="")
client_cur_version = models.CharField(max_length=20, default="")
client_store_id = models.CharField(max_length=30, default="")
def __unicode__(self):
return self.name
def _generate_random_from_vschar_set_for_secret_key():
return generate_random_from_vschar_set(OAUTH2_SETTINGS.USER_SECRET_KEY_LENGTH)
class UserClientSecretKey(CachingModel):
cache_key_format = "_ut_o2u:%(user_id)d:%(client__pk)d"
user_id = models.IntegerField()
client = models.ForeignKey(Client)
secret_key = models.CharField(max_length=OAUTH2_SETTINGS.USER_SECRET_KEY_LENGTH,
default=_generate_random_from_vschar_set_for_secret_key)
unique_together = (("user_id", "client"),)
class Scope(models.Model):
name = models.CharField(max_length=30, unique=True)
class ClientAdmin(admin.ModelAdmin):
readonly_fields = ['client_id', 'client_secret']
admin.site.register(Client, ClientAdmin)
admin.site.register(Service, admin.ModelAdmin)
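The `client_id`/`client_secret` defaults above delegate to `generate_random_from_vschar_set`, which is not shown here. A minimal stand-in using the stdlib `secrets` module (assumption: the real helper draws from an alphanumeric subset of the OAuth "VSCHAR" printable-ASCII range):

```python
import secrets
import string

# Assumed character set: alphanumerics, a safe subset of RFC 6749's VSCHAR range.
VSCHAR_SUBSET = string.ascii_letters + string.digits

def generate_random_from_vschar_set(length):
    """Return a cryptographically random token of the given length."""
    return ''.join(secrets.choice(VSCHAR_SUBSET) for _ in range(length))

token = generate_random_from_vschar_set(32)
print(len(token))  # 32
```

Using `secrets` rather than `random` matters here: client secrets are credentials, so they need an unpredictable CSPRNG source.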
# ===== file: util_list_files.py (repo: jhu-alistair/image_utilities, license: Apache-2.0) =====
# List files in a directory. Useful for testing the path
from local_tools import *
from image_renamer import ImageRenamer
if confirm_config('path'):
img_path = get_config('path')
fl = ImageRenamer(img_path)
for ff in fl.image_files():
print(ff)
# ===== file: Scripts/autotest/bug/migrations/0003_auto_20180128_2144.py (repo: ludechu/DJevn, license: bzip2-1.0.6) =====
# Generated by Django 2.0 on 2018-01-28 21:44
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('bug', '0002_auto_20180110_1107'),
]
operations = [
migrations.AlterField(
model_name='bug',
name='buglevel',
field=models.CharField(choices=[('1', '1'), ('2', '2'), ('3', '3')], default='3', max_length=200, null=True, verbose_name='严重程度'),
),
migrations.AlterField(
model_name='bug',
name='bugstatus',
field=models.CharField(choices=[('激活', '激活'), ('已解决', '已解决'), ('已关闭', '已关闭')], default='激活', max_length=200, null=True, verbose_name='解决状态'),
),
]
# ===== file: tools/merge_messages.py (repo: cclauss/personfinder, license: Apache-2.0) =====
#!/usr/bin/env python
# Copyright 2010 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Merge translations from a set of .po or XMB files into a set of .po files.
Usage:
../tools/merge_messages <source-dir> <template-file>
../tools/merge_messages <source-dir> <template-file> <target-dir>
../tools/merge_messages <source-po-file> <template-file> <target-po-file>
<source-dir> should be a directory containing a subdirectories named with
locale codes (e.g. pt_BR). For each locale, this script looks for the first
.po or .xml file it finds anywhere under <source-dir>/<locale-code>/ and
adds all its messages and translations to the corresponding django.po file
in the target directory, at <target-dir>/<locale-code>/LC_MESSAGES/django.po.
<template-file> is the output file from running:
'find_missing_translations --format=po'
with the name that corresponds to the --format=xmb output.
Make sure to run this in a tree that corresponds to the version used for
generating the xmb file or the resulting merge will be wrong. See
validate_merge for directions on verifying the merge was correct.
If <target-dir> is unspecified, it defaults to the app/locale directory of
the current app. Alternatively, you can specify a single source file and
a single target file to update.
When merging messages from a source file into a target file:
- Empty messages and messages marked "fuzzy" in the source file are ignored.
- Translations in the source file will replace any existing translations
for the same messages in the target file.
- Other translations in the source file will be added to the target file.
- If the target file doesn't exist, it will be created.
- To minimize unnecessary changes from version to version, the target file
has no "#: filename:line" comments and the messages are sorted by msgid.
"""
import babel.messages
from babel.messages import pofile
import codecs
import os
import sys
import xml.sax
class XmbCatalogReader(xml.sax.handler.ContentHandler):
"""A SAX handler that populates a babel.messages.Catalog with messages
read from an XMB file."""
def __init__(self, template):
"""template should be a Catalog containing the untranslated messages
in the same order as the corresponding messages in the XMB file."""
self.tags = []
self.catalog = babel.messages.Catalog()
self.iter = iter(template)
assert self.iter.next().id == '' # skip the blank metadata message
def startElement(self, tag, attrs):
self.tags.append(tag)
if tag == 'msg':
self.string = ''
self.message = babel.messages.Message(self.iter.next().id)
if tag == 'ph':
self.string += '%(' + attrs['name'] + ')s'
self.message.flags.add('python-format')
def endElement(self, tag):
assert self.tags.pop() == tag
if tag == 'msg':
self.message.string = self.string
self.catalog[self.message.id] = self.message
def characters(self, content):
if self.tags[-1] == 'msg':
self.string += content
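`XmbCatalogReader` above rewrites each `<ph name="...">` placeholder as a Python `%(name)s` format marker while streaming through the XMB file. The same SAX pattern can be demonstrated standalone with only the stdlib (the XMB snippet below is a hypothetical example, not real translation data):

```python
import xml.sax

class MsgCollector(xml.sax.handler.ContentHandler):
    """Collect the text of each <msg> element, rewriting <ph name=...>
    placeholders as %(name)s -- the same trick XmbCatalogReader uses."""
    def __init__(self):
        super().__init__()
        self.tags = []
        self.messages = []
        self.current = ''

    def startElement(self, tag, attrs):
        self.tags.append(tag)
        if tag == 'msg':
            self.current = ''
        elif tag == 'ph':
            self.current += '%(' + attrs['name'] + ')s'

    def endElement(self, tag):
        assert self.tags.pop() == tag
        if tag == 'msg':
            self.messages.append(self.current)

    def characters(self, content):
        # Only text directly inside <msg> contributes to the message string.
        if self.tags and self.tags[-1] == 'msg':
            self.current += content

xmb = '<messagebundle><msg>Hello <ph name="USER"/>!</msg></messagebundle>'
handler = MsgCollector()
xml.sax.parseString(xmb.encode('utf-8'), handler)
print(handler.messages)  # ['Hello %(USER)s!']
```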
def log(text):
"""Prints out Unicode text."""
print text.encode('utf-8')
def log_change(old_message, new_message):
"""Describes an update to a message."""
if not old_message:
if new_message.id:
log('+ msgid "%s"' % str(new_message.id))
else:
print >>sys.stderr, 'no message id: %s' % new_message
log('+ msgstr "%s"' % str(new_message.string.encode('ascii', 'ignore')))
if new_message.flags:
log('+ #, %s' % ', '.join(sorted(new_message.flags)))
else:
if (new_message.string != old_message.string or
new_message.flags != old_message.flags):
log(' msgid "%s"' % old_message.id)
log('- msgstr "%s"' % old_message.string)
if old_message.flags:
log('- #, %s' % ', '.join(sorted(old_message.flags)))
log('+ msgstr "%s"' % new_message.string)
if new_message.flags:
log('+ #, %s' % ', '.join(sorted(new_message.flags)))
def create_file(filename):
"""Opens a file for writing, creating any necessary parent directories."""
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
return open(filename, 'w')
def merge(source, target_filename):
"""Merges the messages from the source Catalog into a .po file at
target_filename. Creates the target file if it doesn't exist."""
if os.path.exists(target_filename):
target = pofile.read_po(open(target_filename))
for message in source:
if message.id and message.string and not message.fuzzy:
log_change(message.id in target and target[message.id], message)
# This doesn't actually replace the message! It just updates
# the fields other than the string. See Catalog.__setitem__.
target[message.id] = message
# We have to mutate the message to update the string and flags.
target[message.id].string = message.string
target[message.id].flags = message.flags
else:
for message in source:
log_change(None, message)
target = source
target_file = create_file(target_filename)
pofile.write_po(target_file, target,
no_location=True, sort_output=True, ignore_obsolete=True)
target_file.close()
def merge_file(source_filename, target_filename, template_filename):
if source_filename.endswith('.po'):
merge(pofile.read_po(open(source_filename)), target_filename)
elif source_filename.endswith('.xml'):
handler = XmbCatalogReader(pofile.read_po(open(template_filename)))
xml.sax.parse(open(source_filename), handler)
merge(handler.catalog, target_filename)
if __name__ == '__main__':
args = sys.argv[1:]
if len(args) not in [1, 2, 3]:
print __doc__
sys.exit(1)
args = (args + [None, None])[:3]
source_path = args[0]
template_path = args[1]
target_path = args[2] or os.path.join(os.environ['APP_DIR'], 'locale')
# If a single file is specified, merge it.
if ((source_path.endswith('.po') or source_path.endswith('.xml')) and
target_path.endswith('.po')):
print target_path
merge_file(source_path, target_path, template_path)
sys.exit(0)
# Otherwise, we expect two directories.
if not os.path.isdir(source_path) or not os.path.isdir(target_path):
print __doc__
sys.exit(1)
# Find all the source files.
source_filenames = {} # {locale: po_filename}
def find_po_file(key, dir, filenames):
"""Looks for a .po file and records it in source_filenames."""
for filename in filenames:
if filename.endswith('.po') or filename.endswith('.xml'):
source_filenames[key] = os.path.join(dir, filename)
for locale in os.listdir(source_path):
os.path.walk(os.path.join(source_path, locale), find_po_file,
locale.replace('-', '_'))
# Merge them into the target files.
for locale in sorted(source_filenames.keys()):
target = os.path.join(target_path, locale, 'LC_MESSAGES', 'django.po')
print target
merge_file(source_filenames[locale], target, template_path)
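The `create_file` helper above checks for the parent directory before creating it. In modern Python the same "create parents, then open" step can be sketched with `exist_ok=True`, which also avoids a race between the `exists` check and `makedirs` (the temp path here is just an illustration):

```python
import os
import tempfile


def create_file(filename):
    # Ensure parent directories exist before opening for write.
    os.makedirs(os.path.dirname(filename) or ".", exist_ok=True)
    return open(filename, "w")


path = os.path.join(tempfile.mkdtemp(), "a", "b", "out.txt")  # hypothetical target
with create_file(path) as f:
    f.write("ok")
print(os.path.exists(path))  # True
```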
# ---- misc/derwin.py (repo: ssebs/nccsv, license: MIT) ----
# derwin.py - testing a window within a window
import curses


def main(stdscr):
    # Create container window from stdscr
    sh, sw = stdscr.getmaxyx()
    container_win = curses.newwin(sh - 1, sw - 1, 1, 1)

    # Create inner window from container win
    bh, bw = container_win.getmaxyx()
    box_win = container_win.derwin(bh - 2, bw - 2, 1, 1)

    # Add size of inner win
    box_win.addstr(1, 1, f"{bh}x{bw}")

    # Draw borders
    container_win.box()
    box_win.box()

    # Render and wait for char
    container_win.refresh()
    container_win.getch()


# main
if __name__ == "__main__":
    curses.wrapper(main)
# ---- portrait/webapp/migrations/0001_initial.py (repo: andela-sjames/Portrait, license: MIT) ----
# Generated by Django 2.2.1 on 2019-05-16 23:28
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='SocialProfile',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('provider', models.SmallIntegerField(choices=[('1', 'Facebook')])),
                ('social_id', models.CharField(max_length=255, unique=True)),
                ('photo', models.TextField(blank=True)),
                ('extra_data', models.TextField(blank=True)),
                ('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='social_profile', to=settings.AUTH_USER_MODEL)),
            ],
        ),
    ]
# ---- conu/backend/nspawn/constants.py (repo: lslebodn/conu, license: MIT) ----
# -*- coding: utf-8 -*-
#
# Copyright Contributors to the Conu project.
# SPDX-License-Identifier: MIT
#

# TODO: move this line to some generic constants, instead of same in
# docker and nspawn
CONU_ARTIFACT_TAG = 'CONU.'

CONU_IMAGES_STORE = "/opt/conu-nspawn-images/"

CONU_NSPAWN_BASEPACKAGES = [
    "dnf",
    "iproute",
    "dhcp-client",
    "initscripts",
    "passwd",
    "systemd",
    "rpm",
    "bash",
    "shadow-utils",
    "sssd-client",
    "util-linux",
    "libcrypt",
    "sssd-client",
    "coreutils",
    "glibc-all-langpacks",
    "vim-minimal"]

BOOTSTRAP_IMAGE_SIZE_IN_MB = 5000
BOOTSTRAP_FS_UTIL = "mkfs.ext4"
BOOTSTRAP_PACKAGER = [
    "dnf",
    "-y",
    "install",
    "--nogpgcheck",
    "--setopt=install_weak_deps=False",
    "--allowerasing"]

DEFAULT_RETRYTIMEOUT = 30
DEFAULT_SLEEP = 1
# ---- Exercicios Colecoes Python/exercicio 33 - secao 7 - p1.py (repo: cristinamais/exercicios_python, license: MIT) ----
"""
33 - Write a program that reads a 15-position vector and compacts it, i.e. removes the
positions holding the value zero. To do this, every element after a zero value must be
moved one position back in the vector.
"""

"""
vetor = []
count = 0
for x in range(1, 16):
    vetor.append(int(input(f'Digite o {x}/15: ')))

n = len(vetor)
for i in range(n):
    if vetor[i] != 0:
        vetor[count] = vetor[i]
        count += 1
while n > count:
    vetor[count] = 0
    count += 1
print(vetor)  # [5, 6, 9, 8, 10, 15, 33, 22, 66, 99, 10, 100, 0, 0, 0]
This version pushes the zeros to the back.
"""

from itertools import compress

vetor = []
for x in range(1, 16):
    vetor.append(int(input(f'Digite o {x}/15: ')))

# using list.count and itertools.compress
y = [0] * vetor.count(0)
y.extend(compress(vetor, vetor))
print(y)
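A deterministic check of the itertools.compress approach above, with a fixed sample vector in place of the 15 interactive reads. Note that this variant collects the zeros at the front, unlike the commented-out solution that pushes them to the back:

```python
from itertools import compress

# Fixed sample vector (hypothetical input) instead of input() calls.
vetor = [5, 0, 6, 9, 0, 8]

# compress(vetor, vetor) keeps only truthy (non-zero) elements;
# the zeros are re-added in front based on their count.
y = [0] * vetor.count(0)
y.extend(compress(vetor, vetor))
print(y)  # [0, 0, 5, 6, 9, 8]
```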
# ---- python/venv/lib/python2.7/site-packages/openstack/tests/unit/telemetry/v2/test_sample.py (repo: sjsucohort6/openstack, license: MIT) ----
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import testtools

from openstack.telemetry.v2 import sample

SAMPLE = {
    'id': None,
    'metadata': {'1': 'one'},
    'meter': '2',
    'project_id': '3',
    'recorded_at': '4',
    'resource_id': '5',
    'source': '6',
    'timestamp': '7',
    'type': '8',
    'unit': '9',
    'user_id': '10',
    'volume': '11.1',
}

OLD_SAMPLE = {
    'counter_name': '1',
    'counter_type': '2',
    'counter_unit': '3',
    'counter_volume': '4',
    'message_id': None,
    'project_id': '5',
    'recorded_at': '6',
    'resource_id': '7',
    'resource_metadata': '8',
    'source': '9',
    'timestamp': '10',
    'user_id': '11',
}


class TestSample(testtools.TestCase):

    def test_basic(self):
        sot = sample.Sample(SAMPLE)
        self.assertIsNone(sot.resource_key)
        self.assertIsNone(sot.resources_key)
        self.assertEqual('/meters/%(meter)s', sot.base_path)
        self.assertEqual('metering', sot.service.service_type)
        self.assertTrue(sot.allow_create)
        self.assertFalse(sot.allow_retrieve)
        self.assertFalse(sot.allow_update)
        self.assertFalse(sot.allow_delete)
        self.assertTrue(sot.allow_list)

    def test_make_new(self):
        sot = sample.Sample(SAMPLE)
        self.assertIsNone(sot.id)
        self.assertEqual(SAMPLE['metadata'], sot.metadata)
        self.assertEqual(SAMPLE['meter'], sot.meter)
        self.assertEqual(SAMPLE['project_id'], sot.project_id)
        self.assertEqual(SAMPLE['recorded_at'], sot.recorded_at)
        self.assertEqual(SAMPLE['resource_id'], sot.resource_id)
        self.assertIsNone(sot.sample_id)
        self.assertEqual(SAMPLE['source'], sot.source)
        self.assertEqual(SAMPLE['timestamp'], sot.generated_at)
        self.assertEqual(SAMPLE['type'], sot.type)
        self.assertEqual(SAMPLE['unit'], sot.unit)
        self.assertEqual(SAMPLE['user_id'], sot.user_id)
        self.assertEqual(SAMPLE['volume'], sot.volume)

    def test_make_old(self):
        sot = sample.Sample(OLD_SAMPLE)
        self.assertIsNone(sot.id)
        self.assertIsNone(sot.sample_id)
        self.assertEqual(OLD_SAMPLE['counter_name'], sot.meter)
        self.assertEqual(OLD_SAMPLE['counter_type'], sot.type)
        self.assertEqual(OLD_SAMPLE['counter_unit'], sot.unit)
        self.assertEqual(OLD_SAMPLE['counter_volume'], sot.volume)
        self.assertEqual(OLD_SAMPLE['project_id'], sot.project_id)
        self.assertEqual(OLD_SAMPLE['recorded_at'], sot.recorded_at)
        self.assertEqual(OLD_SAMPLE['resource_id'], sot.resource_id)
        self.assertEqual(OLD_SAMPLE['resource_metadata'], sot.metadata)
        self.assertEqual(OLD_SAMPLE['source'], sot.source)
        self.assertEqual(OLD_SAMPLE['timestamp'], sot.generated_at)
        self.assertEqual(OLD_SAMPLE['user_id'], sot.user_id)

    def test_list(self):
        sess = mock.Mock()
        resp = mock.Mock()
        resp.body = [SAMPLE, OLD_SAMPLE]
        sess.get = mock.Mock(return_value=resp)
        path_args = {'meter': 'name_of_meter'}

        found = sample.Sample.list(sess, path_args=path_args)
        self.assertEqual(2, len(found))
        first = found[0]
        self.assertIsNone(first.id)
        self.assertIsNone(first.sample_id)
        self.assertEqual(SAMPLE['metadata'], first.metadata)
        self.assertEqual(SAMPLE['meter'], first.meter)
        self.assertEqual(SAMPLE['project_id'], first.project_id)
        self.assertEqual(SAMPLE['recorded_at'], first.recorded_at)
        self.assertEqual(SAMPLE['resource_id'], first.resource_id)
        self.assertEqual(SAMPLE['source'], first.source)
        self.assertEqual(SAMPLE['timestamp'], first.generated_at)
        self.assertEqual(SAMPLE['type'], first.type)
        self.assertEqual(SAMPLE['unit'], first.unit)
        self.assertEqual(SAMPLE['user_id'], first.user_id)
        self.assertEqual(SAMPLE['volume'], first.volume)

    def test_create(self):
        sess = mock.Mock()
        resp = mock.Mock()
        resp.body = [SAMPLE]
        sess.post = mock.Mock(return_value=resp)
        data = {'id': None,
                'meter': 'temperature',
                'project_id': 'project',
                'resource_id': 'resource',
                'type': 'gauge',
                'unit': 'instance',
                'volume': '98.6'}

        new_sample = sample.Sample.new(**data)
        new_sample.create(sess)
        url = '/meters/temperature'
        sess.post.assert_called_with(url, service=new_sample.service,
                                     json=[data])
        self.assertIsNone(new_sample.id)
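The tests above follow a common pattern: stub the transport object with a Mock, drive the code under test, then assert on the recorded call. A minimal standalone illustration of that pattern (no openstack imports; the endpoint and payload are made up):

```python
import unittest.mock as mock

# Stub out a session whose post() returns a canned response.
sess = mock.Mock()
resp = mock.Mock()
resp.body = [{'meter': 'temperature'}]
sess.post = mock.Mock(return_value=resp)

# The code under test would call sess.post(...); we simulate that call here.
result = sess.post('/meters/temperature', json=[{'volume': '98.6'}])

# Verify both the recorded call and the canned response.
sess.post.assert_called_with('/meters/temperature', json=[{'volume': '98.6'}])
print(result.body[0]['meter'])  # temperature
```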
# ---- arachnado/utils/spiders.py (repo: wigginzz/arachnado, license: MIT) ----
from scrapy.utils.misc import walk_modules
from scrapy.utils.spider import iter_spider_classes


def get_spider_cls(url, spider_packages, default):
    """
    Return spider class based on provided url.

    :param url: if it looks like `spider://spidername` it tries to load spider
        named `spidername`, otherwise it returns default spider class
    :param spider_packages: a list of package names that will be searched for
        spider classes
    :param default: the class that is returned when `url` doesn't start with
        `spider://`
    """
    if url.startswith('spider://'):
        spider_name = url[len('spider://'):]
        return find_spider_cls(spider_name, spider_packages)
    return default


def find_spider_cls(spider_name, spider_packages):
    """
    Find spider class which name is equal to `spider_name` argument

    :param spider_name: spider name to look for
    :param spider_packages: a list of package names that will be searched for
        spider classes
    """
    for package_name in spider_packages:
        for module in walk_modules(package_name):
            for spider_cls in iter_spider_classes(module):
                if spider_cls.name == spider_name:
                    return spider_cls
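A dependency-free sketch of the `spider://` dispatch convention implemented above (the registry dict and spider names are hypothetical; the real `find_spider_cls` walks Scrapy spider packages instead of a dict):

```python
def resolve_spider(url, registry, default):
    # Mirrors get_spider_cls: a spider:// URL selects a named spider,
    # any other URL falls back to the default class.
    if url.startswith('spider://'):
        return registry.get(url[len('spider://'):])
    return default


registry = {'news': 'NewsSpider'}  # hypothetical name -> class mapping
print(resolve_spider('spider://news', registry, 'GenericSpider'))    # NewsSpider
print(resolve_spider('http://example.com', registry, 'GenericSpider'))  # GenericSpider
```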
# ---- deephub/utils/__main__.py (repo: deeplab-ai/deephub, license: Apache-2.0) ----
import click
import time

from deephub.common.io import resolve_glob_pattern
from deephub.models.feeders.tfrecords.meta import generate_fileinfo, get_fileinfo, TFRecordValidationError, \
    TFRecordInfoMissingError


@click.group()
def cli():
    """
    General purpose CLI utils.
    """
    pass


@cli.command()
@click.argument('pattern', type=str)
@click.option('--force', is_flag=True, default=False,
              help='It will forcefully regenerate meta data even for tfrecords that '
                   'have not changed.')
@click.option('--compression_type', type=str, default='', help="""Compression type of the tfrecord file. Options:
              '' for no compression
              'GZIP' for gzip compression""")
def generate_metadata(pattern, force, compression_type):
    """
    Generate metadata for tfrecord files.

    With this util you can generate metadata from tfrecords based on a matching
    glob pattern.

    Example: Generate metadata for training dataset

        deep utils generate-metadata 'dataset/train-*'
    """
    files = resolve_glob_pattern(pattern)
    click.echo(f"{len(files)} files matched with the pattern.")
    with click.progressbar(files) as files:
        for fpath in files:
            try:
                generate_fileinfo(fpath, compression_type=compression_type)
            except Exception as e:
                click.echo(f'Skipping file {fpath} because of: {e!s}')
    click.echo('Finished generating metadata')


@cli.command()
@click.argument('pattern', type=str)
def total_examples(pattern) -> int:
    """
    Get total examples for all the files matched with the given input file pattern.
    """
    files = resolve_glob_pattern(pattern)
    click.echo(f"{len(files)} files matched with the pattern.")
    total_rows = 0
    for file in files:
        try:
            total_rows += get_fileinfo(file).total_records
        except Exception:
            pass
    click.echo(f"Total number of examples: {total_rows}")


@cli.command()
@click.argument('pattern', type=str)
@click.option('--shallow-check/--deep-check', default=True,
              help='Flag in order to control shallow or deep md5 hash check. With shallow-check only the size of '
                   'each file will be validated, while with deep-check both size and md5 hash of each file will be '
                   'validated.')
def validate(pattern: str, shallow_check: bool):
    """
    Validate each one of the files matched using the input file pattern.
    """
    start = time.time()
    files = resolve_glob_pattern(pattern)
    click.echo(f"{len(files)} files matched with the pattern.")
    with click.progressbar(files) as files:
        for file in files:
            try:
                get_fileinfo(file, shallow_check)  # inside here happens the validation step too
            except TFRecordValidationError:
                raise
            except TFRecordInfoMissingError:
                raise
            except Exception as e:  # Probably not a valid tfrecords file
                click.echo(f'Probably not a valid tf_record file {e}')
    end = time.time()
    click.echo(f"Total execution time: {end - start}")
# ---- utils_mit_im.py (repo: putama/visualcomposition, license: MIT) ----
import numpy as np
import cPickle
import os
from scipy.io import loadmat
import time
import h5py
import json
import copy
import bz2


def unique_rows(a):
    b = np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * a.shape[1])))
    _, idx = np.unique(b, return_index=True)
    return a[idx], idx;


def setdiff2d(a1, a2):
    assert a1.dtype == a2.dtype;
    # only works with numpy >= 1.7
    versplit = [int(x) for x in np.__version__.split('.')];
    assert versplit[0] >= 1 and versplit[1] >= 7;
    a1_rows = a1.view([('', a1.dtype)] * a1.shape[1])
    a2_rows = a2.view([('', a2.dtype)] * a2.shape[1])
    return np.setdiff1d(a1_rows, a2_rows).view(a1.dtype).reshape(-1, a1.shape[1])


def argtopk(a, k):
    ind = np.argpartition(a, -k)[-k:]
    srtind = ind[np.argsort(a[ind])[::-1]];
    return srtind;


def get_dir_list(dirPath, extension=None):
    onlydirs = [os.path.join(dirPath, f) for f in os.listdir(dirPath) if os.path.isdir(os.path.join(dirPath, f))];
    if extension != None:
        onlydirs = [f for f in onlydirs if os.path.splitext(f)[1] == extension];
    onlydirs.sort();
    return onlydirs;


# extension with "." e.g. .jpg
def get_file_list(dirPath, extension=None):
    onlyfiles = [os.path.join(dirPath, f) for f in os.listdir(dirPath) if os.path.isfile(os.path.join(dirPath, f))];
    if extension != None:
        onlyfiles = [f for f in onlyfiles if os.path.splitext(f)[1] == extension];
    onlyfiles.sort();
    return onlyfiles;


def get_file_list_prefix(dirPath, prefix, extension=None):
    onlyfiles = [os.path.join(dirPath, f) for f in os.listdir(dirPath) if os.path.isfile(os.path.join(dirPath, f)) and f.startswith(prefix)];
    if extension != None:
        onlyfiles = [f for f in onlyfiles if os.path.splitext(f)[1] == extension];
    onlyfiles.sort();
    return onlyfiles;


def list_to_indexed_dict(lvar):
    dvar = {};
    for ind, item in enumerate(lvar):
        dvar[item] = ind;
    return dvar;


def tic_toc_print(interval, string):
    global tic_toc_print_time_old
    if 'tic_toc_print_time_old' not in globals():
        tic_toc_print_time_old = time.time()
        print string
    else:
        new_time = time.time()
        if new_time - tic_toc_print_time_old > interval:
            tic_toc_print_time_old = new_time;
            print string


def mkdir(output_dir):
    return mkdir_if_missing(output_dir);


def mkdir_if_missing(output_dir):
    """
    def mkdir_if_missing(output_dir)
    """
    if not os.path.exists(output_dir):
        try:
            os.makedirs(output_dir)
            return True;
        except:  # generally happens when many processes try to make this dir
            return False;


def recurse_get_mat_struct(v, curr_field=None):
    accum_dict = {};
    if type(v).__name__ != 'mat_struct':
        if type(v).__name__ == 'ndarray':
            # sometimes we have nested mat_structs ...
            numel = v.size;
            found_nested_structs = False;
            for x in range(numel):
                if type(v.item(x)).__name__ == 'mat_struct':
                    if found_nested_structs == False:
                        accum_dict[curr_field] = [];
                        found_nested_structs = True;
                    if found_nested_structs:
                        newdict = recurse_get_mat_struct(v.item(x), curr_field);
                        accum_dict[curr_field].append(newdict);
            if found_nested_structs == False:
                accum_dict[curr_field] = v;
        else:
            accum_dict[curr_field] = v;
    else:
        for field in v._fieldnames:
            local_dict = recurse_get_mat_struct(getattr(v, field), field);
            if field not in local_dict:
                accum_dict[field] = copy.deepcopy(local_dict);
            else:
                accum_dict[field] = copy.deepcopy(local_dict[field]);
        if curr_field not in accum_dict:
            ret_dict = {};
            ret_dict[curr_field] = copy.deepcopy(accum_dict);
            accum_dict = ret_dict;
    return accum_dict;


def mat_to_dict(mat_name):
    matfile = loadmat(mat_name, squeeze_me=True, struct_as_record=False);
    var_keys = matfile.keys();
    allVarDict = {};
    for v in var_keys:
        if v.startswith('__') == True:
            continue;
        dictData = {};
        for field in matfile[v]._fieldnames:
            localDict = recurse_get_mat_struct(getattr(matfile[v], field), field);
            if field not in localDict:
                dictData[field] = copy.deepcopy(localDict);
            else:
                dictData[field] = copy.deepcopy(localDict[field]);
        allVarDict[v] = dictData;
    return allVarDict;


def save_variables_h5(h5_file_name, var, info, overwrite=False):
    if info is None:
        return save_variables_h5_dict(h5_file_name, var, overwrite)
    if os.path.exists(h5_file_name) and overwrite == False:
        raise Exception('{:s} exists and over write is false.'.format(h5_file_name))
    # Construct the dictionary
    assert (type(var) == list);
    assert (type(info) == list);
    with h5py.File(h5_file_name, 'w') as f:
        for i in range(len(info)):
            d = f.create_dataset(info[i], data=var[i], chunks=True, compression="gzip", compression_opts=9);


def rec_get_keys(fh, src, keyList):
    if src != '' and type(fh[src]).__name__ == 'Dataset':
        keyList.append(src);
        return keyList;
    if src != '':
        moreSrcs = fh[src].keys();
    else:
        moreSrcs = fh.keys();
    for kk in moreSrcs:
        if src == '':
            keyList = rec_get_keys(fh, kk, keyList);
        else:
            keyList = rec_get_keys(fh, src + '/' + kk, keyList);
    return keyList;


def get_h5_keys(h5_file_name):
    if os.path.exists(h5_file_name):
        with h5py.File(h5_file_name, 'r') as f:
            keyList = rec_get_keys(f, '', []);
        return keyList;
    else:
        raise Exception('{:s} does not exists.'.format(h5_file_name))


def save_variables_h5_dict(h5_file_name, dictVar, overwrite=False):
    if os.path.exists(h5_file_name) and overwrite == False:
        raise Exception('{:s} exists and over write is false.'.format(h5_file_name))
    # Construct the dictionary
    assert (type(dictVar) == dict);
    with h5py.File(h5_file_name, 'w') as f:
        for key in dictVar:
            d = f.create_dataset(key, data=dictVar[key], chunks=True, compression="gzip", compression_opts=9);


def load_variablesh5(h5_file_name):
    if os.path.exists(h5_file_name):
        with h5py.File(h5_file_name, 'r') as f:
            d = {};
            h5keys = get_h5_keys(h5_file_name);
            for key in h5keys:
                d[key] = f[key].value;
            return d
    else:
        raise Exception('{:s} does not exists.'.format(h5_file_name))


def save_variables(pickle_file_name, var, info, overwrite=False):
    """
    def save_variables(pickle_file_name, var, info, overwrite = False)
    """
    fext = os.path.splitext(pickle_file_name)[1]
    if fext == '.h5':
        return save_variables_h5(pickle_file_name, var, info, overwrite);
    elif fext == '.pkl' or fext == '.pklz':
        if os.path.exists(pickle_file_name) and overwrite == False:
            raise Exception('{:s} exists and over write is false.'.format(pickle_file_name))
        if info is not None:
            # Construct the dictionary
            assert (type(var) == list);
            assert (type(info) == list);
            d = {}
            for i in xrange(len(var)):
                d[info[i]] = var[i]
        else:  # we have the dictionary in var
            d = var;
        if fext == '.pkl':
            with open(pickle_file_name, 'wb') as f:
                cPickle.dump(d, f, cPickle.HIGHEST_PROTOCOL)
        else:
            with bz2.BZ2File(pickle_file_name, 'w') as f:
                cPickle.dump(d, f, cPickle.HIGHEST_PROTOCOL)
    else:
        raise Exception('{:s}: extension unknown'.format(fext))


def load_variables(pickle_file_name):
    """
    d = load_variables(pickle_file_name)
    Output:
        d is a dictionary of variables stored in the pickle file.
    """
    fext = os.path.splitext(pickle_file_name)[1]
    if fext == '.h5':
        return load_variablesh5(pickle_file_name);
    elif fext == '.pkl' or fext == '.pklz':
        if os.path.exists(pickle_file_name):
            if fext == '.pkl':
                with open(pickle_file_name, 'rb') as f:
                    d = cPickle.load(f)
            else:
                with bz2.BZ2File(pickle_file_name, 'r') as f:
                    d = cPickle.load(f)
            return d
        else:
            raise Exception('{:s} does not exists.'.format(pickle_file_name))
    elif fext == '.json':
        with open(pickle_file_name, 'r') as fh:
            data = json.load(fh)
        return data
    else:
        raise Exception('{:s}: extension unknown'.format(fext))


# wrappers for load_variables and save_variables
def load(pickle_file_name):
    return load_variables(pickle_file_name);


def save(pickle_file_name, var, info, overwrite=False):
    return save_variables(pickle_file_name, var, info, overwrite);


def calc_pr_ovr_noref(counts, out):
    """
    [P, R, score, ap] = calc_pr_ovr(counts, out, K)
    Input  :
        counts : number of occurrences of this word in the ith image
        out    : score for this image
    Output :
        P, R   : precision and recall
        score  : score which corresponds to the particular precision and recall
        ap     : average precision
    """
    # binarize counts
    out = out.astype(np.float64)
    counts = np.array(counts > 0, dtype=np.float32);
    tog = np.hstack((counts[:, np.newaxis].astype(np.float64), out[:, np.newaxis].astype(np.float64)))
    ind = np.argsort(out)
    ind = ind[::-1]
    score = np.array([tog[i, 1] for i in ind])
    sortcounts = np.array([tog[i, 0] for i in ind])

    tp = sortcounts;
    fp = sortcounts.copy();
    for i in xrange(sortcounts.shape[0]):
        if sortcounts[i] >= 1:
            fp[i] = 0.;
        elif sortcounts[i] < 1:
            fp[i] = 1.;
    tp = np.cumsum(tp)
    fp = np.cumsum(fp)

    # P = np.cumsum(tp)/(np.cumsum(tp) + np.cumsum(fp));
    P = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)

    numinst = np.sum(counts);
    R = tp / numinst
    ap = voc_ap(R, P)
    return P, R, score, ap


def voc_ap(rec, prec):
    # correct AP calculation
    # first append sentinel values at the end
    mrec = np.concatenate(([0.], rec, [1.]))
    mpre = np.concatenate(([0.], prec, [0.]))

    # compute the precision envelope
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])

    # to calculate area under PR curve, look for points
    # where X axis (recall) changes value
    i = np.where(mrec[1:] != mrec[:-1])[0]

    # and sum (\Delta recall) * prec
    ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
    return ap
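A quick numeric sanity check of the `voc_ap` computation above, restated standalone in Python 3 (the function body mirrors the one above; the input recall/precision values are made up — a perfectly ranked result keeps precision at 1.0 for every recall level, so the AP is 1.0):

```python
import numpy as np


def voc_ap(rec, prec):
    # Append sentinel values, compute the precision envelope,
    # then integrate precision over changes in recall.
    mrec = np.concatenate(([0.], rec, [1.]))
    mpre = np.concatenate(([0.], prec, [0.]))
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
    i = np.where(mrec[1:] != mrec[:-1])[0]
    return np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])


# Perfect ranking: precision 1.0 at every recall level -> AP == 1.0
print(voc_ap(np.array([0.5, 1.0]), np.array([1.0, 1.0])))  # 1.0
```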
543ac48e108696b4125575c0e8b5fa9098b4ddb3 | 830 | py | Python | votes/migrations/0004_team.py | aiventimptner/horizon | 6e2436bfa81cad55fefd4c0bb67df3c36a9b6deb | [
"MIT"
] | null | null | null | votes/migrations/0004_team.py | aiventimptner/horizon | 6e2436bfa81cad55fefd4c0bb67df3c36a9b6deb | [
"MIT"
] | 1 | 2021-06-10T19:59:07.000Z | 2021-06-10T19:59:07.000Z | votes/migrations/0004_team.py | aiventimptner/horizon | 6e2436bfa81cad55fefd4c0bb67df3c36a9b6deb | [
"MIT"
] | null | null | null | # Generated by Django 3.1.4 on 2020-12-30 00:27
from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('votes', '0003_auto_20201229_1301'),
    ]

    operations = [
        migrations.CreateModel(
            name='Team',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=150)),
                ('slug', models.SlugField()),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('members', models.ManyToManyField(related_name='teams', to=settings.AUTH_USER_MODEL)),
            ],
        ),
    ]
# ---- shrike-examples/contoso/utils/arg_utils.py (repo: lynochka/azure-ml-problem-sets, license: MIT) ----
"""
Utility functions for argument parsing
"""
import argparse
def str2bool(val):
"""
Resolving boolean arguments if they are not given in the standard format
Arguments:
val (bool or string): boolean argument type
Returns:
bool: the desired value {True, False}
"""
    if isinstance(val, bool):
        return val
    if isinstance(val, str):
        if val.lower() in ("yes", "true", "t", "y", "1"):
            return True
        elif val.lower() in ("no", "false", "f", "n", "0"):
            return False
    raise argparse.ArgumentTypeError("Boolean value expected.")
def str2intlist(val):
    """Converts a comma-separated string of integers into a list of integers
    Args:
        val (str): comma-separated string of integers
    """
    return commastring2list(int)(val)
def commastring2list(output_type=str):
"""Returns a lambda function which converts a comma separated string into a list of a given type
Args:
output_type (function, optional): string type conversion function. Defaults to str.
Returns:
function: lambda function
"""
return lambda input_str: list(map(output_type, input_str.split(",")))
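Taken together, these helpers are meant to be used as `type=` converters for `argparse`. A minimal, self-contained sketch (the helpers are restated here, and `--verbose`/`--layers` are hypothetical argument names, so the snippet runs on its own):

```python
import argparse

def str2bool(val):
    # Same logic as the helper above, restated so this sketch is self-contained.
    if isinstance(val, bool):
        return val
    if isinstance(val, str):
        if val.lower() in ("yes", "true", "t", "y", "1"):
            return True
        elif val.lower() in ("no", "false", "f", "n", "0"):
            return False
    raise argparse.ArgumentTypeError("Boolean value expected.")

def str2intlist(val):
    # Equivalent to commastring2list(int) above.
    return [int(part) for part in val.split(",")]

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", type=str2bool, default=False)
parser.add_argument("--layers", type=str2intlist, default=[])
args = parser.parse_args(["--verbose", "yes", "--layers", "64,32,16"])
print(args.verbose, args.layers)  # True [64, 32, 16]
```

Because the converters raise `argparse.ArgumentTypeError`, malformed values produce a normal argparse usage error rather than a traceback.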

# File: A.py | Repo: JK-Incorporated/EYN-DOS | License: BSD-3-Clause
import os
from os import listdir
from os.path import isfile, join
dir_path = os.path.dirname(os.path.realpath(__file__))
filesys = [f for f in listdir(dir_path) if isfile(join(dir_path, f))]
def get_dir_size(path=dir_path):
total = 0
    with os.scandir(path) as it:
for entry in it:
if entry.is_file():
total += entry.stat().st_size
elif entry.is_dir():
total += get_dir_size(entry.path)
return total/1024
size=0
for path, dirs, files in os.walk(dir_path):
for f in files:
fp = os.path.join(path, f)
size += os.path.getsize(fp)
while True:
    command_lineA = input("A:\\> ")
    if command_lineA == "B:":
        print("")
        os.system("python3 B.py")
        print("")
    elif command_lineA == "C:":
        print("")
        os.system("python3 C.py")
        print("")
    elif command_lineA == "D:":
        print("")
        os.system("python3 D.py")
        print("")
    elif command_lineA == "E:":
        print("")
        os.system("python3 E.py")
        print("")
    elif command_lineA in ("dir", "listdir"):
        print("")
        print("ERROR EYN_A1")
        print("")
    elif command_lineA == "end":
        print("")
        exit()
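The command loop above repeats the same three-line pattern for every drive letter. A dispatch-table sketch (a hypothetical refactor, keeping the same commands and error code) shows how that collapses into data plus one lookup:

```python
# Hypothetical refactor: map each command to the action string the loop runs.
DRIVES = {"B:": "python3 B.py", "C:": "python3 C.py",
          "D:": "python3 D.py", "E:": "python3 E.py"}

def resolve(command):
    """Return the action for a command, or None for unrecognized input."""
    if command in DRIVES:
        return DRIVES[command]
    if command in ("dir", "listdir"):
        return "ERROR EYN_A1"
    if command == "end":
        return "exit"
    return None  # unrecognized input is silently ignored, as in the loop

print(resolve("C:"))   # python3 C.py
print(resolve("dir"))  # ERROR EYN_A1
```

Adding a new drive then means adding one dictionary entry instead of another four-line `if` block.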

# File: httpd/httpd.py | Repo: protocollabs/dmprd | License: MIT
import asyncio
import os
try:
from aiohttp import web
except ImportError:
web = None
class Httpd(object):
def __init__(self):
if not web:
print('httpd is specified in conf but aiohttp not available')
return
self.app = web.Application()
self._setup_routes()
self._run_app()
def _run_app(self):
loop = asyncio.get_event_loop()
handler = self.app.make_handler()
f = loop.create_server(handler, '0.0.0.0', 9000)
srv = loop.run_until_complete(f)
print('serving on', srv.sockets[0].getsockname())
async def handler_index(self, request):
data = '''
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta content="width=device-width" name="viewport">
<meta content="yes" name="apple-mobile-web-app-capable">
<meta content="IE=edge,chrome=1" http-equiv="X-UA-Compatible">
<title>DMPR</title>
<script type="text/javascript" src="static/js/vis.min.js"></script>
<script type="text/javascript" src="static/js/script.js"></script>
<link href="static/css/bootstrap-3.3.6.css" rel="stylesheet" />
<link href="static/css/style.css" rel="stylesheet" />
<link href="static/css/vis.min.css" rel="stylesheet" type="text/css" />
<link href="static/css/style.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div class="container-fluid">
<div class="row">
<div class="col-sm-3 col-lg-2">
<nav class="navbar navbar-default navbar-fixed-side">
<div class="container">
<div class="navbar-header">
<button class="navbar-toggle" data-target=".navbar-collapse" data-toggle="collapse">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="./">DMPR</a>
</div>
<div class="collapse navbar-collapse">
<ul class="nav navbar-nav">
<li class="active"><a href="#">Topology</a></li>
<li class=""><a href="#">Logging</a></li>
</ul>
</div>
</div>
</nav>
</div>
<div class="col-sm-9 col-lg-10 content">
<div id="mynetwork"></div>
</div>
</div>
</div>
<script src="static/js/bootstrap-3.3.6.js"></script>
</body>
</html>
'''
data = str.encode(data)
return web.Response(body=data, content_type='text/html')
def _setup_routes(self):
absdir = os.path.dirname(os.path.realpath(__file__))
app_path = os.path.join(absdir, 'www', 'static')
self.app.router.add_get('/', self.handler_index)
self.app.router.add_static('/static', app_path, show_index=True)
| 34.046512 | 100 | 0.565574 | 372 | 2,928 | 4.370968 | 0.395161 | 0.03444 | 0.03444 | 0.04182 | 0.172817 | 0.172817 | 0.149446 | 0.086101 | 0.03936 | 0.03936 | 0 | 0.010209 | 0.264003 | 2,928 | 85 | 101 | 34.447059 | 0.744316 | 0 | 0 | 0.12987 | 0 | 0.077922 | 0.673839 | 0.233607 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038961 | false | 0 | 0.051948 | 0 | 0.12987 | 0.025974 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
545376512fee3de8e6da4487e774ee09c7ad912d | 1,479 | py | Python | cnns/foolbox/foolbox_2_3_0/tests/test_model_zoo.py | anonymous-user-commits/perturb-net | 66fc7c4a1234fa34b92bcc85751f0a6e23d80a23 | [
"MIT"
] | 12 | 2021-07-27T07:18:24.000Z | 2022-03-09T13:52:20.000Z | cnns/foolbox/foolbox_2_3_0/tests/test_model_zoo.py | anonymous-user-commits/perturb-net | 66fc7c4a1234fa34b92bcc85751f0a6e23d80a23 | [
"MIT"
] | 2 | 2021-08-03T09:21:33.000Z | 2021-12-29T14:25:30.000Z | cnns/foolbox/foolbox_2_3_0/tests/test_model_zoo.py | anonymous-user-commits/perturb-net | 66fc7c4a1234fa34b92bcc85751f0a6e23d80a23 | [
"MIT"
] | 3 | 2021-11-18T14:46:40.000Z | 2022-01-03T15:47:23.000Z | from foolbox import zoo
import numpy as np
import foolbox
import sys
import pytest
from foolbox.zoo.model_loader import ModelLoader
from os.path import join, dirname
@pytest.fixture(autouse=True)
def unload_foolbox_model_module():
# reload foolbox_model from scratch for every run
# to ensure atomic tests without side effects
module_names = ["foolbox_model", "model"]
for module_name in module_names:
if module_name in sys.modules:
del sys.modules[module_name]
test_data = [
# private repo won't work on travis
# ('https://github.com/bethgelab/AnalysisBySynthesis.git', (1, 28, 28)),
# ('https://github.com/bethgelab/convex_adversarial.git', (1, 28, 28)),
# ('https://github.com/bethgelab/mnist_challenge.git', 784)
(join("file://", dirname(__file__), "data/model_repo"), (3, 224, 224))
]
@pytest.mark.parametrize("url, dim", test_data)
def test_loading_model(url, dim):
# download model
model = zoo.get_model(url)
# create a dummy image
x = np.zeros(dim, dtype=np.float32)
x[:] = np.random.randn(*x.shape)
# run the model
logits = model.forward_one(x)
probabilities = foolbox.utils.softmax(logits)
predicted_class = np.argmax(logits)
# sanity check
assert predicted_class >= 0
assert np.sum(probabilities) >= 0.9999
# TODO: delete fmodel
def test_non_default_module_throws_error():
with pytest.raises(RuntimeError):
ModelLoader.get(key="other")
| 27.90566 | 76 | 0.694388 | 204 | 1,479 | 4.882353 | 0.544118 | 0.036145 | 0.042169 | 0.069277 | 0.062249 | 0.062249 | 0.062249 | 0.062249 | 0 | 0 | 0 | 0.023256 | 0.185936 | 1,479 | 52 | 77 | 28.442308 | 0.803987 | 0.275186 | 0 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0.019231 | 0.068966 | 1 | 0.103448 | false | 0 | 0.241379 | 0 | 0.344828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5454b8f602a3ea5235a7102af61b547b5c4c3b31 | 1,128 | py | Python | client/nodes/common/docker_subsriber.py | CanboYe/BusEdge | 2e53e1d1d82559fc3e9f0029b2f0faf4e356b210 | [
"MIT",
"Apache-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | 2 | 2021-08-17T14:14:28.000Z | 2022-02-02T02:09:33.000Z | client/nodes/common/docker_subsriber.py | cmusatyalab/gabriel-BusEdge | 528a6ee337882c6e709375ecd7ec7e201083c825 | [
"MIT",
"Apache-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | client/nodes/common/docker_subsriber.py | cmusatyalab/gabriel-BusEdge | 528a6ee337882c6e709375ecd7ec7e201083c825 | [
"MIT",
"Apache-2.0",
"BSD-2-Clause",
"BSD-3-Clause"
] | 1 | 2021-09-01T16:18:29.000Z | 2021-09-01T16:18:29.000Z | # SPDX-FileCopyrightText: 2021 Carnegie Mellon University
#
# SPDX-License-Identifier: Apache-2.0
import cv2
import numpy as np
import rospy
from gabriel_protocol import gabriel_pb2
from std_msgs.msg import UInt8MultiArray
def run_node(client_filter, source_name):
rospy.init_node(source_name + "_subscriber_node")
rospy.loginfo("Initialized subscriber node for " + source_name)
sub = rospy.Subscriber(
source_name,
UInt8MultiArray,
callback,
callback_args=(client_filter,),
queue_size=1,
buff_size=2 ** 24,
)
# spin() simply keeps python from exiting until this node is stopped
rospy.spin()
def callback(data, args):
client_filter = args[0]
serialized_message = data.data
# client_filter.send_serialized(serialized_message)
    # TODO: this is inefficient because we deserialize the binary data;
    # need to either modify the gabriel library or change the way
    # we save the extra fields.
input_frame = gabriel_pb2.InputFrame()
input_frame.ParseFromString(serialized_message)
client_filter.send(input_frame)
| 29.684211 | 72 | 0.720745 | 144 | 1,128 | 5.458333 | 0.555556 | 0.076336 | 0.040712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017917 | 0.208333 | 1,128 | 37 | 73 | 30.486486 | 0.862262 | 0.329787 | 0 | 0 | 0 | 0 | 0.064257 | 0 | 0 | 0 | 0 | 0.027027 | 0 | 1 | 0.086957 | false | 0 | 0.217391 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
545c6c254ab620127f5ce9a6e6a0f63adc08b458 | 1,281 | py | Python | tinylinks/admin.py | lavindiuss/django-shorter | 50bc018e762b396cd9bc71991f6ea1329aaceddd | [
"MIT"
] | null | null | null | tinylinks/admin.py | lavindiuss/django-shorter | 50bc018e762b396cd9bc71991f6ea1329aaceddd | [
"MIT"
] | null | null | null | tinylinks/admin.py | lavindiuss/django-shorter | 50bc018e762b396cd9bc71991f6ea1329aaceddd | [
"MIT"
] | null | null | null | """Admin sites for the ``django-tinylinks`` app."""
from django.contrib import admin
from django.template.defaultfilters import truncatechars
from django.utils.translation import ugettext_lazy as _
from django.template.loader import render_to_string
from tinylinks.forms import TinylinkAdminForm
from tinylinks.models import Tinylink, TinylinkLog
class TinylinkAdmin(admin.ModelAdmin):
list_display = ('short_url', 'url_truncated', 'amount_of_views', 'user',
'last_checked', 'status', 'validation_error',)
search_fields = ['short_url', 'long_url']
form = TinylinkAdminForm
fieldsets = [
('Tinylink', {'fields': ['user', 'long_url', 'short_url', ]}),
]
def url_truncated(self, obj):
return truncatechars(obj.long_url, 60)
url_truncated.short_description = _('Long URL')
def status(self, obj):
if not obj.is_broken:
return _('OK')
return _('Link broken')
status.short_description = _('Status')
admin.site.register(Tinylink, TinylinkAdmin)
class TinylinkLogAdmin(admin.ModelAdmin):
list_display = ('tinylink', 'datetime', 'remote_ip', 'tracked')
readonly_fields = ('datetime',)
date_hierarchy = 'datetime'
admin.site.register(TinylinkLog, TinylinkLogAdmin)
| 28.466667 | 76 | 0.699454 | 142 | 1,281 | 6.098592 | 0.5 | 0.046189 | 0.04157 | 0.060046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001903 | 0.179547 | 1,281 | 44 | 77 | 29.113636 | 0.822074 | 0.035129 | 0 | 0 | 0 | 0 | 0.164228 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0.035714 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
546484ce8b5ed762d88a0033bf3308f52967f631 | 296 | py | Python | active-learning/seq_data.py | ansunsujoe/ml-research | 7ab529a5ec1d420385e64b9eebf87e0847b85afd | [
"MIT"
] | null | null | null | active-learning/seq_data.py | ansunsujoe/ml-research | 7ab529a5ec1d420385e64b9eebf87e0847b85afd | [
"MIT"
] | null | null | null | active-learning/seq_data.py | ansunsujoe/ml-research | 7ab529a5ec1d420385e64b9eebf87e0847b85afd | [
"MIT"
] | null | null | null | import random
from tqdm import tqdm
def random_seq():
return [str(random.randint(1, 9)) for x in range(random.randint(2, 15))]
if __name__ == "__main__":
with open("sequences-1-train.txt", "w") as f:
for i in tqdm(range(5000)):
            f.write(",".join(random_seq()) + "\n")

# File: observations/r/chest_sizes.py | Repo: hajime9652/observations | License: Apache-2.0
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import csv
import numpy as np
import os
import sys
from observations.util import maybe_download_and_extract
def chest_sizes(path):
"""Chest measurements of 5738 Scottish Militiamen
Quetelet's data on chest measurements of 5738 Scottish Militiamen.
Quetelet (1846) used this data as a demonstration of the normal
distribution of physical characteristics.
A data frame with 16 observations on the following 2 variables.
`chest`
Chest size (in inches)
`count`
Number of soldiers with this chest size
Velleman, P. F. and Hoaglin, D. C. (1981). *Applications, Basics, and
Computing of Exploratory Data Analysis*. Belmont. CA: Wadsworth.
Retrieved from Statlib:
`https://www.stat.cmu.edu/StatDat/Datafiles/MilitiamenChests.html`
Args:
path: str.
Path to directory which either stores file or otherwise file will
be downloaded and extracted there.
Filename is `chest_sizes.csv`.
Returns:
Tuple of np.ndarray `x_train` with 16 rows and 2 columns and
dictionary `metadata` of column headers (feature names).
"""
import pandas as pd
path = os.path.expanduser(path)
filename = 'chest_sizes.csv'
if not os.path.exists(os.path.join(path, filename)):
url = 'http://dustintran.com/data/r/HistData/ChestSizes.csv'
maybe_download_and_extract(path, url,
save_file_name='chest_sizes.csv',
resume=False)
data = pd.read_csv(os.path.join(path, filename), index_col=0,
parse_dates=True)
x_train = data.values
metadata = {'columns': data.columns}
return x_train, metadata

# File: examples/parser_example.py | Repo: pibico/beacontools | License: MIT
# -*- coding: utf-8 -*-
from beacontools import parse_packet
# Eddystone UID packet
uid_packet = b"\x02\x01\x06\x03\x03\xaa\xfe\x17\x16\xaa\xfe\x00\xe3\x12\x34\x56\x78\x90\x12" \
b"\x34\x67\x89\x01\x00\x00\x00\x00\x00\x01\x00\x00"
uid_frame = parse_packet(uid_packet)
print("Namespace: %s" % uid_frame.namespace)
print("Instance: %s" % uid_frame.instance)
print("TX Power: %s" % uid_frame.tx_power)
print("-----")
# Eddystone URL packet
url_packet = b"\x03\x03\xAA\xFE\x13\x16\xAA\xFE\x10\xF8\x03github\x00citruz"
url_frame = parse_packet(url_packet)
print("TX Power: %d" % url_frame.tx_power)
print("URL: %s" % url_frame.url)
print("-----")
# Eddystone TLM packet (unencrypted)
tlm_packet = b"\x02\x01\x06\x03\x03\xaa\xfe\x11\x16\xaa\xfe\x20\x00\x0b\x18\x13\x00\x00\x00" \
b"\x14\x67\x00\x00\x2a\xc4\xe4"
tlm_frame = parse_packet(tlm_packet)
print("Voltage: %d mV" % tlm_frame.voltage)
print("Temperature: %f °C" % tlm_frame.temperature)
print("Advertising count: %d" % tlm_frame.advertising_count)
print("Seconds since boot: %d" % tlm_frame.seconds_since_boot)
print("-----")
# Eddystone TLM packet (encrypted)
enc_tlm_packet = b"\x02\x01\x06\x03\x03\xaa\xfe\x11\x16\xaa\xfe\x20\x01\x41\x41\x41\x41\x41" \
b"\x41\x41\x41\x41\x41\x41\x41\xDE\xAD\xBE\xFF"
enc_tlm_frame = parse_packet(enc_tlm_packet)
print("Data: %s" % enc_tlm_frame.encrypted_data)
print("Salt: %d" % enc_tlm_frame.salt)
print("Mic: %d" % enc_tlm_frame.mic)
print("-----")
# iBeacon Advertisement
ibeacon_packet = b"\x02\x01\x06\x1a\xff\x4c\x00\x02\x15\x41\x41\x41\x41\x41\x41\x41\x41\x41" \
b"\x41\x41\x41\x41\x41\x41\x41\x00\x01\x00\x01\xf8"
adv = parse_packet(ibeacon_packet)
print("UUID: %s" % adv.uuid)
print("Major: %d" % adv.major)
print("Minor: %d" % adv.minor)
print("TX Power: %d" % adv.tx_power)
print("-----")
# Cypress iBeacon Sensor
cypress_packet = b"\x02\x01\x04\x1a\xff\x4c\x00\x02\x15\x00\x05\x00\x01\x00\x00\x10\x00\x80" \
b"\x00\x00\x80\x5f\x9b\x01\x31\x00\x02\x6c\x66\xc3"
sensor = parse_packet(cypress_packet)
print("UUID: %s" % sensor.uuid)
print("Major: %d" % sensor.major)
print("Temperature: %d °C" % sensor.cypress_temperature)
print("Humidity: %d %%" % sensor.cypress_humidity)
print("TX Power: %d" % sensor.tx_power)
print("-----")
# Estimote Telemetry Packet (Subframe A)
telemetry_a_packet = b"\x02\x01\x04\x03\x03\x9a\xfe\x17\x16\x9a\xfe\x22\x47\xa0\x38\xd5"\
b"\xeb\x03\x26\x40\x00\x00\x01\x41\x44\x47\xfa\xff\xff\xff\xff"
telemetry = parse_packet(telemetry_a_packet)
print("Identifier: %s" % telemetry.identifier)
print("Protocol Version: %d" % telemetry.protocol_version)
print("Acceleration (g): (%f, %f, %f)" % telemetry.acceleration)
print("Is moving: %s" % telemetry.is_moving)
# ... see packet_types/estimote.py for all available attributes and units
print("-----")
# Estimote Telemetry Packet (Subframe B)
telemetry_b_packet = b"\x02\x01\x04\x03\x03\x9a\xfe\x17\x16\x9a\xfe\x22\x47\xa0\x38\xd5"\
b"\xeb\x03\x26\x40\x01\xd8\x42\xed\x73\x49\x25\x66\xbc\x2e\x50"
telemetry_b = parse_packet(telemetry_b_packet)
print("Identifier: %s" % telemetry_b.identifier)
print("Protocol Version: %d" % telemetry_b.protocol_version)
print("Magnetic field: (%f, %f, %f)" % telemetry_b.magnetic_field)
print("Temperature: %f °C" % telemetry_b.temperature)
# ... see packet_types/estimote.py for all available attributes and units
# Estimote Nearable Advertisement
nearable_packet = b"\x02\x01\x04\x03\x03\x0f\x18\x17\xff\x5d" \
b"\x01\x01\x1e\xfe\x42\x7e\xb6\xf4\xbc\x2f" \
b"\x04\x01\x68\xa1\xaa\xfe\x05\xc1\x45\x25" \
b"\x53\xb5"
nearable_adv = parse_packet(nearable_packet)
print("Identifier: %s" % nearable_adv.identifier)
print("Hardware_version: %d" % nearable_adv.hardware_version)
print("Firmware_version: %d" % nearable_adv.firmware_version)
print("Temperature: %d" % nearable_adv.temperature)
print("Is moving: %i" % nearable_adv.is_moving)
print("-----")
# CJ Monitor packet
cj_monitor_packet = b"\x02\x01\x06\x05\x02\x1A\x18\x00\x18" \
b"\x09\xFF\x72\x04\xFE\x10\xD1\x0C\x33\x61" \
b"\x09\x09\x4D\x6F\x6E\x20\x35\x36\x34\x33"
cj_monitor = parse_packet(cj_monitor_packet)
print("Name: %s" % cj_monitor.name)
print("Temperature: %f °C" % cj_monitor.temperature)
print("Humidity: %d %%" % cj_monitor.humidity)
print("Light: %f" % cj_monitor.light)

# File: backend/apps/cmdb/migrations/0001_initial.py | Repo: renmcc/SA2 | License: MIT
# Generated by Django 2.2.12 on 2020-06-15 16:55
import datetime
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('project', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='server',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('hostname', models.CharField(blank=True, help_text='主机名', max_length=200, verbose_name='主机名')),
('public_ip', models.GenericIPAddressField(blank=True, help_text='外网IP', null=True, verbose_name='外网IP')),
('private_ip', models.GenericIPAddressField(help_text='内网IP', unique=True, verbose_name='内网IP')),
('os', models.CharField(blank=True, default=None, help_text='操作系统', max_length=100, verbose_name='操作系统')),
('cpu', models.CharField(blank=True, default=None, help_text='CPU信息', max_length=250, verbose_name='CPU信息')),
('memory', models.CharField(blank=True, default=None, help_text='内存信息', max_length=100, verbose_name='内存信息')),
('disk', models.CharField(blank=True, help_text='硬盘信息', max_length=300, null=True, verbose_name='硬盘信息')),
('status', models.BooleanField(default=True, help_text='是否启用', verbose_name='启用')),
('remark', models.TextField(blank=True, help_text='备注', null=True, verbose_name='备注')),
('add_time', models.DateTimeField(default=datetime.datetime.now, help_text='添加时间', verbose_name='添加时间')),
('update_time', models.DateTimeField(auto_now=True, help_text='更新时间', verbose_name='更新时间')),
('area', models.ForeignKey(blank=True, help_text='所属大区', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='server_area', to='project.ProjectArea', verbose_name='大区')),
('project', models.ForeignKey(blank=True, default=1, help_text='项目', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='server_project', to='project.Project', verbose_name='项目')),
                ('role', models.ManyToManyField(blank=True, help_text='功能', related_name='server_role', to='project.ProjectRole', verbose_name='功能')),
],
options={
'verbose_name': '服务器列表',
'verbose_name_plural': '服务器列表',
'ordering': ('id',),
},
),
]

# File: setup.py | Repo: codewars/python-unittest | License: MIT
from setuptools import setup
setup(
name="codewars_unittest",
version="0.1.0",
packages=["codewars_unittest"],
license="MIT",
description="unittest runner with Codewars output",
install_requires=[],
url="https://github.com/Codewars/python-unittest",
)

# File: demo/demo/models.py | Repo: dracarysX/django-rest-query | License: MIT
#! /usr/bin/env python
# -*-coding: utf-8 -*-
__author__ = 'dracarysX'
from django.db import models
class Publisher(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=100)
class Meta:
db_table = 'Publisher'
def __str__(self):
return 'Publisher: {}'.format(self.name)
class School(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=100)
class Meta:
db_table = 'School'
def __str__(self):
return 'School: {}'.format(self.name)
class Author(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=50)
age = models.IntegerField()
    school = models.ForeignKey(School, on_delete=models.CASCADE)
class Meta:
db_table = 'Author'
def __str__(self):
return 'Author: {}'.format(self.name)
class Book(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=50)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    publisher = models.ForeignKey(Publisher, on_delete=models.CASCADE)
class Meta:
db_table = 'Book'
def __str__(self):
return 'Book: {}'.format(self.name)

# File: database/chemtrack/contacts.py | Repo: mshobair/invitro_cheminformatics | License: MIT
import datetime
from database.database_schemas import Schemas
from sqlalchemy import Column, Integer, String, DateTime
from database.base import Base
class Contacts(Base):
"""Maps to contacts table in chemprop databases."""
__tablename__ = 'contacts'
__table_args__ = {'schema': Schemas.qsar_schema}
id = Column(Integer, primary_key=True, nullable=False)
first_name = Column(String)
last_name = Column(String)
vendor_id = Column(Integer)
email = Column(String)
title = Column(String)
phone1 = Column(String)
phone2 = Column(String)
fax = Column(String)
cell = Column(String)
other_details = Column(String)
department = Column(String)
contact_type_id = Column(Integer)
created_at = Column(DateTime, default=datetime.datetime.now, nullable=False)
    updated_at = Column(DateTime, default=datetime.datetime.now, nullable=False)

# File: otter/generate/autograder.py | Repo: drjbarker/otter-grader | License: BSD-3-Clause
"""
Gradescope autograder configuration generator for Otter Generate
"""
import os
import json
import shutil
# import subprocess
import zipfile
import tempfile
import pathlib
import pkg_resources
import yaml
from glob import glob
from subprocess import PIPE
from jinja2 import Template
from .token import APIClient
from .utils import zip_folder
from ..plugins import PluginCollection
from ..run.run_autograder.constants import DEFAULT_OPTIONS
TEMPLATE_DIR = pkg_resources.resource_filename(__name__, "templates")
MINICONDA_INSTALL_URL = "https://repo.anaconda.com/miniconda/Miniconda3-py38_4.9.2-Linux-x86_64.sh"
OTTER_ENV_NAME = "otter-env"
def main(tests_path, output_path, config, lang, requirements, overwrite_requirements, environment,
username, password, files, assignment=None, plugin_collection=None, **kwargs):
"""
Runs Otter Generate
Args:
tests_path (``str``): path to directory of test files for this assignment
output_path (``str``): directory in which to write output zip file
config (``str``): path to an Otter configuration JSON file
lang (``str``): the language of the assignment; one of ``["python", "r"]``
requirements (``str``): path to a Python or R requirements file for this assignment
overwrite_requirements (``bool``): whether to overwrite the default requirements instead of
adding to them
environment (``str``): path to a conda environment file for this assignment
username (``str``): a username for Gradescope for generating a token
password (``str``): a password for Gradescope for generating a token
files (``list[str]``): list of file paths to add to the zip file
assignment (``otter.assign.assignment.Assignment``, optional): the assignment configurations
if used with Otter Assign
**kwargs: ignored kwargs (a remnant of how the argument parser is built)
Raises:
``FileNotFoundError``: if the specified Otter configuration JSON file could not be found
``ValueError``: if the configurations specify a Gradescope course ID or assignment ID but not
both
"""
# read in otter_config.json
if config is None and os.path.isfile("otter_config.json"):
config = "otter_config.json"
if config is not None and not os.path.isfile(config):
raise FileNotFoundError(f"Could not find otter configuration file {config}")
if config:
with open(config) as f:
otter_config = json.load(f)
else:
otter_config = {}
if "course_id" in otter_config and "assignment_id" in otter_config:
client = APIClient()
if username is not None and password is not None:
client.log_in(username, password)
token = client.token
else:
token = client.get_token()
otter_config["token"] = token
elif "course_id" in otter_config or "assignment_id" in otter_config:
        raise ValueError("Otter config contains 'course_id' or 'assignment_id' but not both")
options = DEFAULT_OPTIONS.copy()
options.update(otter_config)
# update language
options["lang"] = lang.lower()
template_dir = os.path.join(TEMPLATE_DIR, options["lang"])
templates = {}
for fn in os.listdir(template_dir):
fp = os.path.join(template_dir, fn)
if os.path.isfile(fp): # prevents issue w/ finding __pycache__ in template dirs
with open(fp) as f:
templates[fn] = Template(f.read())
template_context = {
"autograder_dir": options['autograder_dir'],
"otter_env_name": OTTER_ENV_NAME,
"miniconda_install_url": MINICONDA_INSTALL_URL,
"ottr_branch": "stable",
}
if plugin_collection is None:
plugin_collection = PluginCollection(otter_config.get("plugins", []), None, {})
else:
plugin_collection.add_new_plugins(otter_config.get("plugins", []))
plugin_collection.run("during_generate", otter_config, assignment)
# create tmp directory to zip inside
with tempfile.TemporaryDirectory() as td:
# try:
# copy tests into tmp
test_dir = os.path.join(td, "tests")
os.mkdir(test_dir)
pattern = ("*.py", "*.[Rr]")[options["lang"] == "r"]
for file in glob(os.path.join(tests_path, pattern)):
shutil.copy(file, test_dir)
# open requirements if it exists
requirements = requirements
reqs_filename = f"requirements.{'R' if options['lang'] == 'r' else 'txt'}"
if requirements is None and os.path.isfile(reqs_filename):
requirements = reqs_filename
if requirements:
assert os.path.isfile(requirements), f"Requirements file {requirements} not found"
f = open(requirements)
else:
f = open(os.devnull)
template_context["other_requirements"] = f.read()
template_context["overwrite_requirements"] = overwrite_requirements
    # close the stream
f.close()
# open environment if it exists
# unlike requirements.txt, we will always overwrite, not append by default
environment = environment
env_filename = "environment.yml"
if environment is None and os.path.isfile(env_filename):
environment = env_filename
if environment:
assert os.path.isfile(environment), f"Environment file {environment} not found"
with open(environment) as f:
data = yaml.safe_load(f)
data['name'] = template_context["otter_env_name"]
template_context["other_environment"] = yaml.safe_dump(data, default_flow_style=False)
f.close()
else:
template_context["other_environment"] = None
rendered = {}
for fn, tmpl in templates.items():
rendered[fn] = tmpl.render(**template_context)
if os.path.isabs(output_path):
zip_path = os.path.join(output_path, "autograder.zip")
else:
zip_path = os.path.join(os.getcwd(), output_path, "autograder.zip")
if os.path.exists(zip_path):
os.remove(zip_path)
with zipfile.ZipFile(zip_path, mode="w") as zf:
for fn, contents in rendered.items():
zf.writestr(fn, contents)
test_dir = "tests"
pattern = ("*.py", "*.[Rr]")[options["lang"] == "r"]
for file in glob(os.path.join(tests_path, pattern)):
zf.write(file, arcname=os.path.join(test_dir, os.path.basename(file)))
zf.writestr("otter_config.json", json.dumps(otter_config, indent=2))
# copy files into tmp
if len(files) > 0:
for file in files:
full_fp = os.path.abspath(file)
assert os.getcwd() in full_fp, f"{file} is not in a subdirectory of the working directory"
if os.path.isfile(full_fp):
zf.write(file, arcname=os.path.join("files", file))
elif os.path.isdir(full_fp):
zip_folder(zf, full_fp, prefix="files")
else:
raise ValueError(f"Could not find file or directory '{full_fp}'")
if assignment is not None:
assignment._otter_config = otter_config
| 38.792746 | 110 | 0.631762 | 922 | 7,487 | 4.994577 | 0.238612 | 0.028665 | 0.019544 | 0.013029 | 0.136156 | 0.090988 | 0.052986 | 0.040825 | 0.026927 | 0.026927 | 0 | 0.002376 | 0.269133 | 7,487 | 192 | 111 | 38.994792 | 0.839181 | 0.22479 | 0 | 0.108333 | 1 | 0.008333 | 0.148226 | 0.007552 | 0 | 0 | 0 | 0 | 0.025 | 1 | 0.008333 | false | 0.025 | 0.125 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
548be68a4be4ce8e389208606dd772dad630cd84 | 4,947 | py | Python | kanka-manager/test.py | davidbradlycurtis/kanka-manager | f44f814c6d9433a40cb1edc558baac12f26b31ad | [
"MIT"
] | null | null | null | kanka-manager/test.py | davidbradlycurtis/kanka-manager | f44f814c6d9433a40cb1edc558baac12f26b31ad | [
"MIT"
] | null | null | null | kanka-manager/test.py | davidbradlycurtis/kanka-manager | f44f814c6d9433a40cb1edc558baac12f26b31ad | [
"MIT"
] | null | null | null | import requests
import yaml
import json
import os
import sys
import logging
from kankaclient.client import KankaClient
logging.basicConfig(format='%(asctime)s %(levelname)s: %(message)s')
LOGGER = logging.getLogger('KankaManagement')
class SpaceDumper(yaml.SafeDumper):
# HACK: insert blank lines between top-level objects
# inspired by https://stackoverflow.com/a/44284819/3786245
def write_line_break(self, data=None):
super().write_line_break(data)
if len(self.indents) == 1:
super().write_line_break('# ============================================================================================\n')
def write_data(file, data):
success = False
if os.path.isfile(file):
try:
with open(file, 'w') as output_yaml:
output_yaml.write(yaml.dump(data, Dumper=SpaceDumper, sort_keys=False))
success = True
except FileNotFoundError:
pass
#LOG ERROR
return success
def read_data(file):
data = None
if os.path.isfile(file):
try:
with open(file, 'r') as input_yaml:
data = yaml.safe_load(input_yaml.read())
except FileNotFoundError:
pass
#LOG ERROR
return data
def test_characters(client):
characters = client.characters.get_all()
vincent = client.characters.get('Vincent Von Hess')
vincent_by_id = client.characters.get(677748)
test_character = client.characters.create({"name": "test_character"})
test_character['name'] = 'test_character_updated'
test_character = client.characters.update({"name": "test_character_updated", "id": test_character.get("id")})
deleted = client.characters.delete(test_character.get('id'))
token = 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJhdWQiOiIxIiwianRpIjoiNjUxYzNkNDk1ZjVjZTUzMWQxMjc3MTk5Y2NlMzE1N2U4ZTFkMzZlOWRiYWZiOTY1ZGEyYmI5MTVkZjhkZDFkNTNkZGZlNDhmZTFmZWMzYjMiLCJpYXQiOjE2NDY0NTU3MDguMDA2Mjc4LCJuYmYiOjE2NDY0NTU3MDguMDA2MjgzLCJleHAiOjE2Nzc5OTE3MDcuOTk1NDY5LCJzdWIiOiIzMzM2MiIsInNjb3BlcyI6W119.BsK_qRFoPIlDnNG7DemtD_cVfN98LS-i3f9QUhfm_J7mS7_ltzuJ3typrPL_4lyqbnkrjjx0r5oICRqvgs902AmIDzt-bCGxsyesMWGQcQXFfoahGyJlYfRe4QSNsjlj3cLsM22dn0limMtnKB0I-7XcrbmNU15UJAN0MYJDOZ2pfCmjpn-5GnhgJQNwZrCZc33afUZSVvN_FAYT54GMPExMY0z1J1Zo49uUfs6FQhSG_SNrQ8zbPArCaGgH9hwMIEEhk0dn8-Kv-7SjJu1y4utWs3i9F08-WmIZ9YjDerJsrySc_N6TCgFn2GIeEnb_c-S3RpG4K3PMCTSrOGIKvy_S5zLYZOn6lNXaJ2RTaOhpZvHQHX_OeccoRJ5H9_K5ma1DXBPWaXgujCdaAi5S860ZRqsa8OUSQvHEsq03TNaOKupImBSKLGN6r3Qc57iBTfk6VrOIAO3cFG5Qej7t0gKQdpkDDPAK8dnLvC9QxrfKQCJcfwOrXz7dmUNb-XAKydU2brpqRzJyP3EScShrwPpYgXvE1BJNxtejpPhpE8GCM5TS6-qmHymHILYG0SsoM5HMrA70vFGu3DAJVkRzRavGEBsh_0mFzKR64zNT4hFFEzLyLha5c0FnkgKIFjUfZyrmskRW0t0DifJF5ZGX95PRezeNQHpRZ4yM5G3YseQ'
campaign = 'Journey to Morrivir'
kanka_client = KankaClient(token=token, campaign=campaign, verbose=True)
test_characters(kanka_client)
print()
# camp_id = 107538
# base_url = 'https://kanka.io/api/1.0/campaigns'
# char_url = '%s/%s/characters' % (base_url, camp_id)
# header = {'Authorization': token, 'Content-type': 'application/json'}
# result = requests.get(url=char_url, headers=header)
# if result.reason == 'OK':
# _characters = json.loads(result.text)['data']
# characters = list()
# for char in _characters:
# character = {
# "id" : char.get('id', None),
# "name" : char.get('name', None),
# "entry" : char.get('entry', None),
# "entry_parsed" : char.get('entry_parsed', None),
# "image" : char.get('image', None),
# "image_full" : char.get('image_full', None),
# "image_thumb" : char.get('image_thumb', None),
# "is_private" : char.get('is_private', None),
# "tags" : char.get('tags', []),
# "title" : char.get('title', None),
# "age" : char.get('age', None),
# "pronouns" : char.get('pronouns', None),
# "type" : char.get('type', None),
# "family_id" : char.get('family_id', None),
# "location_id" : char.get('location_id', None),
# "races" : char.get('races', []),
# "is_dead" : char.get('is_dead', None),
# "image_url" : char.get('image_url', None),
# "personality_name" : char.get('personality_name', []),
# "personality_entry" : char.get('personality_entry', []),
# "appearance_name" : char.get('appearance_name', []),
# "appearance_entry" : char.get('appearance_entry', []),
# "is_personality_visible" : char.get('is_personality_visible', None),
# }
# # Prep character for dump
# for field in character.copy():
# if character[field] == None or character[field] == []:
# del character[field]
# del character['id']
# characters.append(character)
# file = 'C:\\Users\\quazn\\Documents\\dev\\kanka-manager\\morrivir\\characters.yaml'
# code = write_data(file, characters)
# file_characters = read_data(file)
#print(file_characters) | 46.233645 | 1,002 | 0.686275 | 489 | 4,947 | 6.770961 | 0.372188 | 0.048626 | 0.014497 | 0.011477 | 0.044699 | 0.044699 | 0.019934 | 0.019934 | 0.019934 | 0 | 0 | 0.040623 | 0.168991 | 4,947 | 107 | 1,003 | 46.233645 | 0.764777 | 0.41884 | 0 | 0.173913 | 0 | 0.021739 | 0.443737 | 0.397381 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0.043478 | 0.152174 | 0 | 0.304348 | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5490a142b6dfe4a57805f7133f0d2ea9a4a1539c | 2,829 | py | Python | neutron_lib/db/sqlalchemytypes.py | rolaya/neutron-lib | 41a2226dfb93a0e6138de260f5126fa7c954178c | [
"Apache-2.0"
] | null | null | null | neutron_lib/db/sqlalchemytypes.py | rolaya/neutron-lib | 41a2226dfb93a0e6138de260f5126fa7c954178c | [
"Apache-2.0"
] | null | null | null | neutron_lib/db/sqlalchemytypes.py | rolaya/neutron-lib | 41a2226dfb93a0e6138de260f5126fa7c954178c | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Custom SQLAlchemy types."""
import netaddr
from sqlalchemy import types
from neutron_lib._i18n import _
class IPAddress(types.TypeDecorator):
impl = types.String(64)
def process_result_value(self, value, dialect):
return netaddr.IPAddress(value)
def process_bind_param(self, value, dialect):
if not isinstance(value, netaddr.IPAddress):
raise AttributeError(_("Received type '%(type)s' and value "
"'%(value)s'. Expecting netaddr.IPAddress "
"type.") % {'type': type(value),
'value': value})
return str(value)
class CIDR(types.TypeDecorator):
impl = types.String(64)
def process_result_value(self, value, dialect):
return netaddr.IPNetwork(value)
def process_bind_param(self, value, dialect):
if not isinstance(value, netaddr.IPNetwork):
raise AttributeError(_("Received type '%(type)s' and value "
"'%(value)s'. Expecting netaddr.IPNetwork "
"type.") % {'type': type(value),
'value': value})
return str(value)
class MACAddress(types.TypeDecorator):
impl = types.String(64)
def process_result_value(self, value, dialect):
return netaddr.EUI(value)
def process_bind_param(self, value, dialect):
if not isinstance(value, netaddr.EUI):
raise AttributeError(_("Received type '%(type)s' and value "
"'%(value)s'. Expecting netaddr.EUI "
"type.") % {'type': type(value),
'value': value})
return str(value)
class TruncatedDateTime(types.TypeDecorator):
"""Truncates microseconds.
Use this for datetime fields so we don't have to worry about DB-specific
behavior when it comes to rounding/truncating microseconds off of
timestamps.
"""
impl = types.DateTime
def process_bind_param(self, value, dialect):
return value.replace(microsecond=0) if value else value
process_result_value = process_bind_param
| 33.678571 | 78 | 0.607282 | 319 | 2,829 | 5.310345 | 0.376176 | 0.042503 | 0.066116 | 0.051948 | 0.488194 | 0.488194 | 0.488194 | 0.467532 | 0.467532 | 0.467532 | 0 | 0.006576 | 0.301166 | 2,829 | 83 | 79 | 34.084337 | 0.850278 | 0.265465 | 0 | 0.536585 | 0 | 0 | 0.129412 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170732 | false | 0 | 0.073171 | 0.097561 | 0.634146 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
549626fa07a7cc95e2aa2428a235bbc1adf539d5 | 2,102 | py | Python | solutions/051_n_queens.py | abawchen/leetcode | 41d3b172a7694a46a860fbcb0565a3acccd000f2 | [
"MIT"
] | null | null | null | solutions/051_n_queens.py | abawchen/leetcode | 41d3b172a7694a46a860fbcb0565a3acccd000f2 | [
"MIT"
] | null | null | null | solutions/051_n_queens.py | abawchen/leetcode | 41d3b172a7694a46a860fbcb0565a3acccd000f2 | [
"MIT"
] | null | null | null | class Solution:
# @return a list of lists of string
def solveNQueens(self, n):
board = [[1 for i in xrange(n)] for i in xrange(n)]
rs = range(n)
self.queens = []
self.directions = [[(-i, i), (i, i)] for i in xrange(1, n)]
self.recursive(board, n, 0, rs)
return self.queens
def recursive(self, wb, n, c, rs):
for r in rs:
if wb[r][c] == 1:
wb, marks = self.mark(wb, n, (r, c))
if c == n-1:
self.queens.append(map(lambda q: ''.join(map(lambda x: 'Q' if x == 0 else '.', q)), wb))
else:
nrs = rs[:]
nrs.remove(r)
self.recursive(wb, n, c+1, nrs)
wb = self.unmark(wb, marks)
def mark(self, board, n, (x, y)):
marks = []
for (a, b) in [(x, c) for c in range(y, n)]:
if board[a][b] != -1:
board[a][b] = -1
marks.append((a, b))
for d in self.directions[:len(self.directions)-y]:
for (a, b) in map(lambda s: (x+s[0], y+s[1]), d):
if a >= 0 and a < n and b >= 0 and b < n and board[a][b] != -1:
board[a][b] = -1
marks.append((a, b))
board[x][y] = 0
return board, marks
def unmark(self, board, marks):
for (x, y) in marks:
board[x][y] = 1
return board
import time
start_time = time.time()
s = Solution()
print s.solveNQueens(1)
print s.solveNQueens(2)
print s.solveNQueens(3)
print (4, s.solveNQueens(4))
print (5, len(s.solveNQueens(5)))
print (6, len(s.solveNQueens(6)))
print (7, len(s.solveNQueens(7)))
print (8, len(s.solveNQueens(8)))
print (9, len(s.solveNQueens(9)))
print (10, len(s.solveNQueens(10)))
print (11, len(s.solveNQueens(11)))
print("--- %s seconds ---" % (time.time() - start_time))
# s.solveNQueens(4)
# qs = s.solveNQueens(5)
# for q in qs:
# print "-------------------"
# for r in q:
# print r
# print "-------------------"
| 28.794521 | 108 | 0.471456 | 311 | 2,102 | 3.180064 | 0.199357 | 0.17088 | 0.113246 | 0.032356 | 0.084934 | 0.058645 | 0.058645 | 0.058645 | 0.058645 | 0.058645 | 0 | 0.030413 | 0.343007 | 2,102 | 72 | 109 | 29.194444 | 0.685735 | 0.08706 | 0 | 0.078431 | 0 | 0 | 0.010471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.019608 | null | null | 0.235294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
549905ffeca6d09d599080cd848b9e365ea51dd3 | 763 | py | Python | oriskami/test/resources/test_router_data.py | oriskami/oriskami-python | 2b0d81f713a9149977907183c67eec136d49ee8c | [
"MIT"
] | 4 | 2017-05-28T19:37:31.000Z | 2017-06-13T11:34:26.000Z | oriskami/test/resources/test_router_data.py | ubivar/ubivar-python | 2b0d81f713a9149977907183c67eec136d49ee8c | [
"MIT"
] | null | null | null | oriskami/test/resources/test_router_data.py | ubivar/ubivar-python | 2b0d81f713a9149977907183c67eec136d49ee8c | [
"MIT"
] | null | null | null | import os
import oriskami
import warnings
from oriskami.test.helper import (OriskamiTestCase)
class OriskamiAPIResourcesTests(OriskamiTestCase):
def test_router_data_update(self):
response = oriskami.RouterData.update("0", is_active="true")
self.assertTrue(hasattr(response.data, "__iter__"))
self.assertEqual(response.data[0].is_active, "true")
response = oriskami.RouterData.update("0", is_active="false")
self.assertEqual(response.data[0].is_active, "false")
def test_router_data_list(self):
response = oriskami.RouterData.list()
self.assertTrue(hasattr(response.data, "__iter__"))
self.assertTrue(len(response.data), 1)
self.assertTrue(hasattr(response.data[0], "is_active"))
| 38.15 | 69 | 0.714286 | 89 | 763 | 5.910112 | 0.325843 | 0.136882 | 0.085551 | 0.165399 | 0.520913 | 0.441065 | 0.441065 | 0 | 0 | 0 | 0 | 0.00936 | 0.159895 | 763 | 19 | 70 | 40.157895 | 0.811232 | 0 | 0 | 0.125 | 0 | 0 | 0.058978 | 0 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
549fd848dd75d3c337cc6b1655249d58340ef912 | 2,744 | py | Python | plotting/trackTurnOn.py | will-fawcett/trackerSW | fc097b97539d0b40a15e1d6e112f4048cb4122b4 | [
"MIT"
] | null | null | null | plotting/trackTurnOn.py | will-fawcett/trackerSW | fc097b97539d0b40a15e1d6e112f4048cb4122b4 | [
"MIT"
] | null | null | null | plotting/trackTurnOn.py | will-fawcett/trackerSW | fc097b97539d0b40a15e1d6e112f4048cb4122b4 | [
"MIT"
] | null | null | null |
from utils import prepareLegend
from colours import colours
from ROOT import *
gROOT.SetBatch(1)
gStyle.SetPadLeftMargin(0.15) # increase space for left margin
gStyle.SetPadBottomMargin(0.15) # increase space for left margin
gStyle.SetGridStyle(3)
gStyle.SetGridColor(kGray)
gStyle.SetPadTickX(1) # add tics on top x
gStyle.SetPadTickY(1) # add tics on right y
OUTPUT_DIR = 'plots/'
REBIN = 2
def main():
ifile = TFile.Open('/Users/Will/Documents/fcc/trackerSW/delphes/output_ttbar_mu1000.root')
colourDef = Colours()
truthTrackPt = ifile.Get('truthTrack100')
truthTrackPt.Rebin(REBIN)
#truthTrackPt = TH1D('tracks', '', 100, 0, 100)
'''
for bin in range(truthTrackPt_1000.GetNbinsX()):
if bin > 100: continue
truthTrackPt.SetBinContent(bin, truthTrackPt_1000.GetBinContent(bin))
truthTrackPt_1000.GetXaxis().SetRangeUser(0,200)
truthTrackPt_1000.Draw()
truthTrackPt.SetLineColor(kGreen)
truthTrackPt.Draw('same')
can.SaveAs('test.pdf')
'''
can = TCanvas('can', 'can', 500, 500)
line = TF1('line', '1', 0, 100)
line.SetLineColor(kGray)
tGraphs = {}
leg = prepareLegend('bottomRight', [0.7, 0.15, 0.9, 0.35])
for i in range(0, 6):
ptCut = (i+1)*5
hName = 'truthTrackPt{0}'.format(ptCut)
print hName
ptAfterCut = ifile.Get(hName)
ptAfterCut.SetLineColor(kRed)
ptAfterCut.Rebin(REBIN)
can.SetLogy()
truthTrackPt.Draw()
ptAfterCut.Draw('same')
can.SaveAs(OUTPUT_DIR+'tracksPt{0}.pdf'.format(ptCut))
# to make turn on to TGraphAsymmErrors(numerator, denominator)
ratio = TGraphAsymmErrors(ptAfterCut, truthTrackPt)
can.SetLogy(0)
ratio.Draw('AP')
line.Draw('same')
xaxis = ratio.GetXaxis()
xaxis.SetRangeUser(0, ptCut*3)
xaxis.SetTitle('Truth track p_{T} [GeV]')
yaxis = ratio.GetYaxis()
yaxis.SetTitle('Efficiency')
can.SaveAs(OUTPUT_DIR+'turnOnPt{0}.pdf'.format(ptCut))
tGraphs[ptCut] = ratio
# now draw series of TGraphs
ptCuts = [5, 10, 15, 20]
colours = [colourDef.blue, colourDef.red, colourDef.orange, colourDef.purple]
for i, cut in enumerate(ptCuts):
gr = tGraphs[cut]
gr.SetLineColor(colours[i])
gr.SetMarkerColor(colours[i])
leg.AddEntry(gr, 'p_{T} > '+str(cut)+' GeV')
if i==0:
gr.Draw('APl')
gr.SetMinimum(0)
gr.GetXaxis().SetRangeUser(0, 45)
line.Draw('same')
gr.Draw('Psame')
else:
gr.Draw('Plsame')
leg.Draw()
can.SaveAs(OUTPUT_DIR+'trackTurnOn.pdf')
if __name__ == "__main__":
main()
| 27.717172 | 94 | 0.623178 | 330 | 2,744 | 5.121212 | 0.427273 | 0.021302 | 0.026627 | 0.031953 | 0.04142 | 0.04142 | 0.04142 | 0.04142 | 0 | 0 | 0 | 0.044647 | 0.240889 | 2,744 | 98 | 95 | 28 | 0.766683 | 0.084913 | 0 | 0.031746 | 0 | 0 | 0.116171 | 0.031599 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.047619 | null | null | 0.015873 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
54a07034e31ea393994499d210b41085f8ae28cb | 2,362 | py | Python | src/Process/Process.py | mauriciocarvalho01/pln_api | 06743f1ae9e084ad15f1c91b32eb3719344f4a4b | [
"MIT"
] | 1 | 2021-12-14T19:10:44.000Z | 2021-12-14T19:10:44.000Z | src/Process/Process.py | mauriciocarvalho01/pln_api | 06743f1ae9e084ad15f1c91b32eb3719344f4a4b | [
"MIT"
] | null | null | null | src/Process/Process.py | mauriciocarvalho01/pln_api | 06743f1ae9e084ad15f1c91b32eb3719344f4a4b | [
"MIT"
] | null | null | null | import spacy
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from string import punctuation
from tqdm import tqdm
from rank_bm25 import BM25Okapi
import time
from collections import defaultdict
from heapq import nlargest
import nltk
nltk.download('punkt')
nltk.download('stopwords')
from operator import itemgetter
from .ProcessFiles import ProcessFiles
from src.Entity.ChatResponse import ChatResponse
from src.Entity.Files import Files
from .Thread import Thread
from .Resume import Resume
from .Tools import Tools
class Process:
def initProcess(database, process):
action = process['action']
print(action)
text = process['request_query']
file = process['file']
user_id = process['user_id']
print(user_id)
hash = Tools.encodeBase64(text)
file = Files.getFiles(database, file, user_id)
if len(file) == 0:
return {"status": "erro", "message": "Não achei nenhum arquivo cadastrado"}
process['type'] = file[0]['type']
process['hash'] = hash
chat_response = []
if action == 'query':
chat_response = ChatResponse.updateChatResponse(database, process)
if len(chat_response) > 0:
# print("chat_response")
# print(chat_response)
response = chat_response[0]
return response
else:
if action == "query":
db = database
Thread(db, process).start()
response = {"status": "learning", "message": "Ainda não sei a resposta, estou aprendendo...Pergunte - me novamente em instantes"}
return response
elif action == "resume":
resume = Resume.resumeFile(process)
# if text:
# resume = json.dumps(resume, indent = 4)
# insert = database.execute('INSERT INTO explain.chat_response (hash, text, response) VALUES (%s,%s, %s)', (hash, text, resume))
# if(insert):
# return resume
# else:
# return "Erro ao inserir texto"
return resume
else:
return "Não reconheço essa ação"
| 34.735294 | 148 | 0.600762 | 252 | 2,362 | 5.571429 | 0.392857 | 0.059829 | 0.022792 | 0.031339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006753 | 0.31033 | 2,362 | 67 | 149 | 35.253731 | 0.855126 | 0.133362 | 0 | 0.076923 | 0 | 0 | 0.122299 | 0.010314 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019231 | false | 0 | 0.365385 | 0 | 0.5 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
54a4ba9c11d3248dceffbbc60702b2f7f2e73b4a | 3,950 | py | Python | launchpad2github.py | mleinart/launchpad2github | faade979a1f209dc1d25aa82a32f6342dbfe35b3 | [
"MIT"
] | 2 | 2016-10-07T08:55:40.000Z | 2017-08-30T16:43:57.000Z | launchpad2github.py | mleinart/launchpad2github | faade979a1f209dc1d25aa82a32f6342dbfe35b3 | [
"MIT"
] | null | null | null | launchpad2github.py | mleinart/launchpad2github | faade979a1f209dc1d25aa82a32f6342dbfe35b3 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import os
import sys
import time
from getpass import getpass
from optparse import OptionParser
from termcolor import colored
from launchpadlib.launchpad import Launchpad
from github3 import login as github_login
from github3 import GitHubError
ACTIVE_STATUSES = [
"New",
"Confirmed",
"Triaged",
"In Progress"
]
IMPORTED_FIELDS = [
"owner",
"web_link",
"date_created",
"date_last_updated",
"tags",
]
def main(args):
usage = """%s: <lp project> <gh project>\n""" % (sys.argv[0],)
parser = OptionParser(usage=usage)
options, args = parser.parse_args(args=args)
if len(args) != 2:
parser.print_usage()
return 1
lp_project_name = args[0]
gh_project_name = args[1]
try:
gh_owner, gh_repo = gh_project_name.split('/')
except ValueError:
print "Unable to parse target Github repo: '%s'" % gh_project_name
print "Repo should be specified as <owner>/<repo>"
print "Authenticating with Launchpad"
launchpad = Launchpad.login_with(os.path.basename(sys.argv[0]), 'production')
print "Authenticating with Github"
github_user = raw_input("Github username: ")
github_pass = getpass("Github password: ")
try:
github = github_login(github_user, github_pass)
github.user()
except GitHubError:
raise SystemExit("Invalid Github login or problem contacting server")
# Validate launchpad project
try:
lp_project = launchpad.projects[lp_project_name]
except KeyError:
raise SystemExit("Unable to find project named '%s' on Launchpad" % lp_project_name)
# Validate github project
if github.repository(gh_owner, gh_repo) is None:
raise SystemExit("Unable to find Github project %s/%s" % (gh_owner, gh_repo))
# Begin migration
open_tasks = lp_project.searchTasks(status=ACTIVE_STATUSES)
for bug_task in open_tasks:
for field in IMPORTED_FIELDS:
print colored(field + ':', 'cyan') + colored(bug_task.bug.__getattr__(field), 'yellow')
print colored(bug_task.bug.description, 'yellow')
print
if confirm_or_exit(colored("Import?", 'cyan')):
title = bug_task.bug.title
description = format_description(bug_task.bug)
issue = github.create_issue(owner=gh_owner, repository=gh_repo, title=title, body=description)
for i, message in enumerate(bug_task.bug.messages):
if i == 0: continue # repeat of description
time.sleep(0.5)
comment = format_comment(message)
issue.create_comment(body=comment)
issue.add_labels('launchpad_import')
print colored("Created issue %d: %s" % (issue.number, issue.html_url), 'yellow')
if confirm_or_exit(colored("Close and update original?", 'cyan')):
bug_task.bug.newMessage(content="Migrated to Github: %s" % issue.html_url)
bug_task.status = "Won't Fix"
bug_task.bug.lp_save()
bug_task.lp_save()
def format_description(bug):
output = """#### Imported from %(web_link)s
|||
|----|----|
|Reported by|%(owner)s|
|Date Created|%(date_created)s|
""" % {
'web_link': bug.web_link,
'owner': format_user(bug.owner),
'date_created': bug.date_created.strftime("%b %d, %Y")
}
if bug.tags:
output += "|Tags|%s|" % bug.tags
output += bug.description.replace("Original description:\n", "")
return output
def format_user(user):
return "[%s](%s)" % (user.name, user.web_link)
def format_comment(message):
output = "#### Comment by %s on %s:\n" % \
(format_user(message.owner), message.date_created.strftime("%b %d, %Y"))
output += message.content
return output
def confirm_or_exit(prompt):
options = ['y','n','q']
option_prompt = '/'.join(options)
choice = None
while choice not in options:
choice = raw_input("%s (%s): " % (prompt, option_prompt)).lower()
if choice == 'y':
return True
if choice == 'n':
return False
if choice == 'q':
raise SystemExit(0)
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))
| 27.816901 | 100 | 0.679241 | 534 | 3,950 | 4.844569 | 0.303371 | 0.027058 | 0.027058 | 0.015075 | 0.05489 | 0.017008 | 0 | 0 | 0 | 0 | 0 | 0.00403 | 0.183291 | 3,950 | 141 | 101 | 28.014184 | 0.797892 | 0.027595 | 0 | 0.045455 | 0 | 0 | 0.205422 | 0.006517 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.027273 | 0.127273 | null | null | 0.081818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
54a9266c033c65ceff0e6381eb549dcffd4ece05 | 890 | py | Python | firmware/temphumid/timeset.py | schizobovine/unicorder | 3165922c2662b1bd2c5ab1691c89e2af5ee185e7 | [
"CC-BY-4.0"
] | null | null | null | firmware/temphumid/timeset.py | schizobovine/unicorder | 3165922c2662b1bd2c5ab1691c89e2af5ee185e7 | [
"CC-BY-4.0"
] | null | null | null | firmware/temphumid/timeset.py | schizobovine/unicorder | 3165922c2662b1bd2c5ab1691c89e2af5ee185e7 | [
"CC-BY-4.0"
] | null | null | null | #!/usr/bin/env python
from datetime import datetime
import serial
import sys
import time
SERIAL_BAUD = 9600
SERIAL_PORT = '/dev/ttyUSB0'
TIME_FORMAT = "T%s"
# Reset device to activate time setting routine
DO_RST = True
# Open serial dong
print 'opening serial port %s...' % SERIAL_PORT
uart = serial.Serial(
port=SERIAL_PORT,
baudrate=SERIAL_BAUD,
dsrdtr=DO_RST,
)
# Frobulate the DTR pin to reset the target
if DO_RST:
print 'twiddling DTR to reset'
uart.setRTS(False)
uart.setDTR(False)
uart.flush()
time.sleep(0.2)
uart.flushInput()
uart.setRTS(True)
uart.setDTR(True)
time.sleep(1)
print 'reset done'
# Send start command to begin cycle
time.sleep(1)
for i in xrange(0, 30):
time.sleep(0.1)
now = datetime.now().strftime(TIME_FORMAT)
uart.write(now + "\r\n")
uart.flush()
uart.close()
print 'done!'
sys.exit(0)
| 18.93617 | 47 | 0.683146 | 138 | 890 | 4.333333 | 0.492754 | 0.083612 | 0.033445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021038 | 0.198876 | 890 | 46 | 48 | 19.347826 | 0.817672 | 0.178652 | 0 | 0.117647 | 0 | 0 | 0.11157 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.117647 | null | null | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
54a9d8c8660ee37792168966ac376aefeed7599f | 3,248 | py | Python | V1_backup/macro_ssh.py | YuanYuLin/iopcrestapi_client | 5c1683d1b5b44bd8bb641933d9526cee97075d31 | [
"MIT"
] | null | null | null | V1_backup/macro_ssh.py | YuanYuLin/iopcrestapi_client | 5c1683d1b5b44bd8bb641933d9526cee97075d31 | [
"MIT"
] | null | null | null | V1_backup/macro_ssh.py | YuanYuLin/iopcrestapi_client | 5c1683d1b5b44bd8bb641933d9526cee97075d31 | [
"MIT"
] | null | null | null | #!/usr/bin/python2.7
import sys
import time
import pprint
import libiopc_rest as rst
def gen_ssh_key(hostname, out_format):
payload = '{'
payload += '"ops":"gen_ssh_key"'
payload += '}'
return rst.http_post_ops_by_pyaload(hostname, payload)
def get_status_until_key_generated(hostname, out_format):
ssh_status_id = 2
while True :
rsp = rst.http_get_status(hostname, ssh_status_id)
if int(rsp.status_code) == 200 :
obj = rsp.json()
if (obj['status'] & 0x01) == 0x01:
rst.response_output(out_format, rsp)
return
time.sleep(2)
def set_env(hostname, out_format):
payload = '{'
payload += '"ops":"setenv",'
payload += '"env":"SSH_AUTH_NAME=mehlow"'
payload += '}'
return rst.http_post_ops_by_pyaload(hostname, payload)
def set_authname(hostname, out_format):
payload = '{'
payload += '"ops":"set_authname",'
payload += '"name":"helloworld"'
payload += '}'
rst.response_output(out_format, rst.http_post_ops_by_pyaload(hostname, payload))
def set_authsalt(hostname, out_format):
payload = '{'
payload += '"ops":"set_authsalt",'
payload += '"salt":"$6$01234$56789"'
payload += '}'
rst.response_output(out_format, rst.http_post_ops_by_pyaload(hostname, payload))
def set_authhash(hostname, out_format):
payload = '{'
payload += '"ops":"set_authhash",'
payload += '"hash":"$6$01234$40kDc/J3OMiWCRafMKQjAU5M6wAgEnKlhpsqFn8t.koNyBcRSguYQwLkIS90F2uHIc7hBPp.HSgCNgl8F955X/"'
payload += '}'
rst.response_output(out_format, rst.http_post_ops_by_pyaload(hostname, payload))
def start_ssh(hostname, out_format):
#
# curl -d '{"ops":"start_ssh"}' -H "Content-Type: application/json; charset=utf-8" -A 'iopc-app' -X POST http://<IP_ADDRESS>/api/v1/ops
#
payload = '{'
payload += '"ops":"start_ssh"'
payload += '}'
return rst.http_post_ops_by_pyaload(hostname, payload)
def stop_ssh(hostname, out_format):
payload = '{'
payload += '"ops":"stop_ssh"'
payload += '}'
return rst.http_post_ops_by_pyaload(hostname, payload)
def gen_start_ssh(hostname, out_format):
gen_ssh_key(hostname, out_format)
get_status_until_key_generated(hostname, out_format)
start_ssh(hostname, out_format)
action_list=[
{"NAME":"set_env", "FUNCTION":set_env},
{"NAME":"gen_ssh_key", "FUNCTION":gen_ssh_key},
{"NAME":"start_ssh", "FUNCTION":start_ssh},
{"NAME":"stop_ssh", "FUNCTION":stop_ssh},
]
def request_list(hostname, out_format, action):
for act in action_list:
if action == act["NAME"] and act["FUNCTION"]:
status_code, json_objs = act["FUNCTION"](hostname, out_format)
if status_code == 200:
pprint.pprint(json_objs)
else:
print "sub request error: %s" % json_objs
else:
print ""
def help_usage():
rst.out("rest_cli.py <hostname> <action>")
rst.out("action:")
for act in action_list:
rst.out(" %s," % act["NAME"])
sys.exit(1)
if __name__ == '__main__':
if len(sys.argv) < 3:
help_usage()
hostname=sys.argv[1]
action=sys.argv[2]
request_list(hostname, 'json', action)
| 29.798165 | 139 | 0.638855 | 417 | 3,248 | 4.688249 | 0.256595 | 0.082864 | 0.121739 | 0.050128 | 0.478772 | 0.42711 | 0.332481 | 0.275703 | 0.231714 | 0.231714 | 0 | 0.020663 | 0.210283 | 3,248 | 108 | 140 | 30.074074 | 0.74152 | 0.047106 | 0 | 0.294118 | 0 | 0 | 0.164725 | 0.07055 | 0 | 0 | 0.002589 | 0 | 0 | 0 | null | null | 0 | 0.047059 | null | null | 0.047059 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
54b9924021536e75d5d98199ebdf2f58b7c84e9c | 15,384 | py | Python | bindings/python/cntk/utils/__init__.py | MSXC/CNTK | d223d48b411bc994acd465ed333c9f6bed64dd7f | [
"RSA-MD"
] | null | null | null | bindings/python/cntk/utils/__init__.py | MSXC/CNTK | d223d48b411bc994acd465ed333c9f6bed64dd7f | [
"RSA-MD"
] | null | null | null | bindings/python/cntk/utils/__init__.py | MSXC/CNTK | d223d48b411bc994acd465ed333c9f6bed64dd7f | [
"RSA-MD"
] | null | null | null | # Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE.md file in the project root
# for full license information.
# ==============================================================================
import sys
import numbers
import collections
import copy
import numpy as np
from numbers import Number
from scipy import sparse
from .. import cntk_py
from ..device import use_default_device, cpu
from ..axis import Axis
from cntk.internal import typemap
# To __remove__
from cntk.logging import *
# End to remove
_VARIABLE_OR_FUNCTION = (cntk_py.Variable, cntk_py.Function)
# To __remove__
def one_hot(batch, num_classes, dtype=None, device=None):
import cntk
return cntk.Value.one_hot(batch, num_classes, dtype, device)
# End to remove
def get_data_type(*args):
"""
Calculates the highest precision numpy data type of the provided parameters.
If the parameter is a Function instance, it calculates it based on its
inputs. Placeholders are ignored in the type determination.
Args:
args (number, list, NumPy array, :class:`~cntk.ops.variables.Variable`, or :class:`~cntk.ops.functions.Function`): input
Returns:
np.float32, np.float64, or None
"""
from ..ops.variables import Variable
cntk_dtypes = set()
numpy_dtypes = set()
if len(args) == 1 and isinstance(args[0], _VARIABLE_OR_FUNCTION):
args = [args[0]]
for arg in args:
if isinstance(arg, Variable) and arg.is_placeholder == True:
continue
if isinstance(arg,
(cntk_py.Variable, cntk_py.Value, cntk_py.NDArrayView)):
if cntk_py.DataType_Double == arg.get_data_type():
cntk_dtypes.add(np.float64)
elif cntk_py.DataType_Float == arg.get_data_type():
cntk_dtypes.add(np.float32)
elif isinstance(arg, np.ndarray):
if arg.dtype not in (np.float32, np.float64):
raise ValueError(
'NumPy type "%s" is not supported' % arg.dtype)
numpy_dtypes.add(arg.dtype.type)
elif isinstance(arg, _VARIABLE_OR_FUNCTION):
var_outputs = arg.outputs
if len(var_outputs) > 1:
raise ValueError(
'expected single output, but got %i' % len(var_outputs))
var_type = var_outputs[0].get_data_type()
if cntk_py.DataType_Double == var_type:
cntk_dtypes.add(np.float64)
else:
cntk_dtypes.add(np.float32)
else:
# We don't know anything so we convert everything to float32. If it
# works, we know the type.
# TODO figure out a better/faster way.
np.asarray(arg, dtype=np.float32)
numpy_dtypes.add(np.float32)
if cntk_dtypes:
if np.float64 in cntk_dtypes:
return np.float64
elif np.float32 in cntk_dtypes:
return np.float32
else:
if np.float64 in numpy_dtypes:
return np.float64
elif np.float32 in numpy_dtypes:
return np.float32
def _is_dense(batch):
if isinstance(batch, np.ndarray):
return True
elif sparse.issparse(batch):
return False
is_dense = True
b = batch
while isinstance(b, list):
b = b[0]
if sparse.issparse(b):
return False
return True
def _ones_like(batch, precision):
'''
Returns a new batch, which has the same format as ``batch`` but all values
set to 1.
Args:
batch (list of NumPy arrays): a list of sequences, which are NumPy arrays
'''
from cntk.internal import sanitize_precision
return [np.ones_like(sample, dtype=sanitize_precision(precision)) for sample in batch]
def get_train_loss(trainer):
'''
Fetch the train loss from the last minibatch and copy it to the CPU in case it is on the GPU.
Args:
trainer (:class:`~cntk.train.trainer.Trainer`): the trainer used.
Returns:
the loss value
'''
# we copy the value so swig does not destroy it when we leave the scope
return copy.copy(trainer.previous_minibatch_loss_average)
def get_train_eval_criterion(trainer):
'''
Fetch the train evaluation criterion (e.g., classification error) from the last minibatch and copy it to the CPU in case it is on the GPU.
Args:
trainer (:class:`Trainer`): the trainer used.
Returns:
the criterion value
'''
# we copy the value so swig does not destroy it when we leave the scope
return copy.copy(trainer.previous_minibatch_evaluation_average)
# Obsolete: All usages should be replaced with the variable_value_to_seq
# procedure below
def value_to_seq(value):
'''
Convert a Value to a sequence of NumPy arrays that have their masked
entries removed.
Args:
value (:class:`~cntk.core.Value`): Value as it is returned by Swig
Returns:
a list of NumPy arrays
'''
np_data = np.asarray(value)
mask = value.mask()
if mask:
mask = np.asarray(mask)
np_data = [seq[mask[idx] != cntk_py.MaskKind_Invalid]
for idx, seq in enumerate(np_data)]
return np_data
def variable_value_to_seq(value, variable):
'''
Convert a Value to a sequence of NumPy arrays that have their masked
entries removed.
Args:
value (:class:`~cntk.core.Value`): Value as it is returned by Swig
Returns:
a list of NumPy arrays
'''
mask = value.mask()
if mask:
value_sequences = value.unpack_variable_value(variable, True, cpu())
return [np.asarray(seq) for seq in value_sequences[0]]
else:
return np.asarray(value)
def eval(op, arguments=None, precision=None, device=None, backward_pass=False, expected_backward=None):
'''
It evaluates ``op`` on the data provided by the reader. This is useful
mainly to explore the operators and for convenient unit testing.
Args:
op (:class:`Function`): operation to evaluate
arguments: maps variables to their input data. The
interpretation depends on the input type:
* `dict`: keys are input variable or names, and values are the input data.
* any other type: if node has a unique input, ``arguments`` is mapped to this input.
For nodes with more than one input, only `dict` is allowed.
In both cases, every sample in the data will be interpreted
as a new sequence. To mark samples as continuations of the
previous sequence, specify ``arguments`` as `tuple`: the
first element will be used as ``arguments``, and the second one will
be used as a list of bools, denoting whether a sequence is a new
one (`True`) or a continuation of the previous one (`False`).
Data should be either NumPy arrays or a
:class:`~cntk.io.MinibatchData` instance.
seq_starts (list of bools or None): if None, every sequence is
treated as a new sequence. Otherwise, it is interpreted as a list of
Booleans that tell whether a sequence is a new sequence (`True`) or a
continuation of the sequence in the same slot of the previous
minibatch (`False`)
precision (str or None): precision being 'float32', 'float64', or
None, in which case it will be determined by inspecting the operator
(costly)
device (:class:`~cntk.device.DeviceDescriptor`, default None): device
this value should be put on
backward_pass (`bool`, optional): whether a backward pass is performed
expected_backward (`dict` or None): keys are variables for which to
compute a backward output. By default (None) all entries from
'arguments' are used
Returns:
mapping of output variables to their values.
'''
if backward_pass:
state, forward_output = op.forward(arguments, op.outputs, op.outputs,
device=device)
if expected_backward is None:
expected_backward = arguments
root_gradients = {v: _ones_like(o, precision) for v, o in
forward_output.items()}
backward_output = op.backward(state, root_gradients, expected_backward)
return forward_output, backward_output
else:
state, forward_output = op.forward(
arguments, op.outputs, None, device=device)
return forward_output, None
class Record(dict):
'''
Easy construction of a record (=immutable singleton class) from keyword arguments.
e.g. r = Record(x = 13, y = 42) ; x = r.x
Args:
kwargs: keyword arguments to turn into the record members
Returns:
A singleton class instance that has all passed kw args as immutable class members.
'''
def __init__(self, **args_dict):
super(Record, self).__init__(args_dict)
self.__dict__.update(args_dict)
def __getattr__(self, key):
if key not in self:
raise AttributeError("record has no attribute '{}'".format(key))
return self[key]
def __setattr__(self, key, value):
raise AttributeError('record is immutable')
def updated_with(self, **kwargs):
'''
Create a new Record from an existing one with members modified or added.
e.g. r = Record(x = 13) ; print(r.x) ; r2 = r.updated_with(x = 42) ; print(r2.x)
Args:
kwargs: keyword arguments to turn into the record members
Returns:
A singleton class instance that has all passed kw args as immutable class members.
'''
d = dict(**self) # make it mutable
d.update(kwargs) # merge the new items
return Record(**d) # lock it up again
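A trimmed, runnable sketch of the same pattern (a `dict` subclass whose keys double as read-only attributes); `MiniRecord` is an illustrative name, not part of this module:

```python
class MiniRecord(dict):
    def __init__(self, **kwargs):
        super(MiniRecord, self).__init__(kwargs)
        # Writing __dict__ directly bypasses the __setattr__ guard on purpose.
        self.__dict__.update(kwargs)

    def __setattr__(self, key, value):
        raise AttributeError('record is immutable')

    def updated_with(self, **kwargs):
        # "Mutation" produces a fresh record instead of changing this one.
        d = dict(**self)
        d.update(kwargs)
        return MiniRecord(**d)

r = MiniRecord(x=13, y=42)
r2 = r.updated_with(x=99)
print(r.x, r2.x)
```

The immutability comes solely from overriding `__setattr__`; dict-style item access still works because the keys are stored in the underlying `dict` as well.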
def get_python_function_arguments(f):
'''
Helper to get the parameter names and annotations of a Python function.
'''
# Note that we only return non-optional arguments (we assume that any optional args are not specified).
# This allows to, e.g., accept max(a, b, *more, name='') as a binary function
import sys
if sys.version_info.major >= 3:
from inspect import getfullargspec
else:
def getfullargspec(f):
from inspect import getargspec
annotations = getattr(f, '__annotations__', {})
#f.__annotations__ = None # needed when faking it under Python 3 for debugging purposes
a = getargspec(f)
#f.__annotations__ = annotations
return Record(args=a.args, varargs=a.varargs, varkw=a.keywords, defaults=a.defaults, kwonlyargs=[], kwonlydefaults=None, annotations=annotations)
param_specs = getfullargspec(f)
annotations = param_specs.annotations
arg_names = param_specs.args
defaults = param_specs.defaults # "if this tuple has n elements, they correspond to the last n elements listed in args"
if defaults:
arg_names = arg_names[:-len(defaults)] # we allow Function(functions with default arguments), but those args will always have default values since CNTK Functions do not support this
return (arg_names, annotations)
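For reference, a self-contained sketch of what this helper computes on Python 3, using a hypothetical annotated function (not part of this module):

```python
from inspect import getfullargspec

# Hypothetical function: two annotated required params, one with a default.
def f(x: int, y: str, z=42):
    return x

spec = getfullargspec(f)
# Mirror the helper: drop the trailing parameters that carry default values.
arg_names = spec.args[:-len(spec.defaults)] if spec.defaults else spec.args
print(arg_names, spec.annotations)
```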
def map_function_arguments(params, params_dict, *args, **kwargs):
'''
Helper to determine the argument map for use with various call operations.
Returns a dictionary from parameters to whatever arguments are passed.
Accepted are both positional and keyword arguments.
This mimics Python's argument interpretation, except that keyword arguments are not optional.
This does not require the arguments to be Variables or Functions. It is also called by train_minibatch() and @Signature.
'''
# start with positional arguments
arg_map = dict(zip(params, args))
# now look up keyword arguments
if len(kwargs) != 0:
for name, arg in kwargs.items(): # keyword args are matched by name
if name not in params_dict:
raise TypeError("got an unexpected keyword argument '%s'" % name)
param = params_dict[name]
if param in arg_map:
raise SyntaxError("got multiple values for argument '%s'" % name)
arg_map[param] = arg # add kw argument to dict
assert len(arg_map) == len(params)
return arg_map
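The mapping rules above can be exercised with a small standalone replica (simplified, with illustrative names; the real function also asserts full coverage of the parameters):

```python
def map_args(params, params_dict, *args, **kwargs):
    # Positional arguments zip against parameters in declaration order...
    arg_map = dict(zip(params, args))
    # ...then keyword arguments are resolved by name, rejecting unknown
    # names and duplicates, mirroring Python's own call semantics.
    for name, arg in kwargs.items():
        if name not in params_dict:
            raise TypeError("got an unexpected keyword argument '%s'" % name)
        param = params_dict[name]
        if param in arg_map:
            raise SyntaxError("got multiple values for argument '%s'" % name)
        arg_map[param] = arg
    return arg_map

params = ['a', 'b', 'c']
mapping = map_args(params, {p: p for p in params}, 1, c=3, b=2)
print(mapping)
```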
def Signature(*args, **kwargs):
'''
``@Signature`` is a decorator to implement the function-argument annotations in Python-2.7,
as needed by the ``@Function`` decorator.
This is only needed when you have not yet migrated to Python 3.x.
Note: Although this is aimed at enabling ``@Function`` syntax with type annotations
in Python 2.7, ``@Signature`` is independent of CNTK and can be used for any argument annotation.
Args:
*args: types of arguments of the function that this decorator is applied to, in the same order.
**kwargs: types of arguments with optional names, e.g. `x=Tensor[42]`. Use this second form for
longer argument lists.
Example::
# Python 3:
@Function
def f(x: Tensor[42]):
return sigmoid(x)
# Python 2.7:
@Function
@Signature(Tensor[42])
def f(x):
return sigmoid(x)
# note that this:
@Function
@Signature(x=int)
def sqr(x):
return x*x
# is identical to:
def sqr(x):
return x*x
sqr.__annotations__ = {'x': int}
'''
# this function returns another function which is the actual decorator applied to the def:
def add_annotations(f):
# prepare the signature
param_names, annotations = get_python_function_arguments(f)
if annotations:
raise ValueError('@Signature cannot be applied to functions that already have annotations')
annotations = {}
if len(args) + len(kwargs) != len(param_names):
raise TypeError("{} annotations provided for function to be decorated, but function has {} parameters".format(len(args) + len(kwargs), len(param_names)))
# implant annotations into f
params_dict = { name: name for name in param_names }
f.__annotations__ = map_function_arguments(param_names, params_dict, *args, **kwargs)
return f # and return the updated function
return add_annotations
def start_profiler(dir='profiler', sync_gpu=True, reserve_mem=cntk_py.default_profiler_buffer_size):
'''
Start profiler to prepare performance statistics gathering. Note that
the profiler is not enabled after start
(`example
<https://github.com/Microsoft/CNTK/wiki/Performance-Profiler#for-python>`_).
Args:
dir: directory for profiler output
sync_gpu: whether profiler syncs CPU with GPU when timing
reserve_mem: size in byte for profiler memory reserved
'''
cntk_py.start_profiler(dir, sync_gpu, reserve_mem)
def stop_profiler():
'''
Stop profiler from gathering performance statistics and flush them to file
'''
cntk_py.stop_profiler()
def enable_profiler():
'''
Enable profiler to gather data. Note that in training_session, profiler would be enabled automatically after the first check point
'''
cntk_py.enable_profiler()
def disable_profiler():
'''
Disable profiler from gathering data.
'''
cntk_py.disable_profiler()
| 35.528868 | 189 | 0.651586 | 2,062 | 15,384 | 4.752182 | 0.225509 | 0.009185 | 0.005613 | 0.006123 | 0.179202 | 0.148893 | 0.118992 | 0.113073 | 0.090621 | 0.090621 | 0 | 0.006626 | 0.264236 | 15,384 | 432 | 190 | 35.611111 | 0.858998 | 0.097504 | 0 | 0.152941 | 0 | 0 | 0.050979 | 0 | 0 | 0 | 0 | 0.002315 | 0.005882 | 0 | null | null | 0.011765 | 0.105882 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
54bc320185cf4b126b5fbdb33a31e831a7364c2c | 1,209 | py | Python | objectModel/Python/tests/cdm/cdm_collection/cdm_collection_helper_functions.py | aaron-emde/CDM | 9472e9c7694821ac4a9bbe608557d2e65aabc73e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | objectModel/Python/tests/cdm/cdm_collection/cdm_collection_helper_functions.py | aaron-emde/CDM | 9472e9c7694821ac4a9bbe608557d2e65aabc73e | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-05-11T23:57:12.000Z | 2021-08-04T05:03:05.000Z | objectModel/Python/tests/cdm/cdm_collection/cdm_collection_helper_functions.py | aaron-emde/CDM | 9472e9c7694821ac4a9bbe608557d2e65aabc73e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | from cdm.objectmodel import CdmCorpusDefinition, CdmManifestDefinition
from cdm.storage import LocalAdapter
from cdm.enums import CdmObjectType
def generate_manifest(local_root_path: str) -> 'CdmManifestDefinition':
"""
Creates a manifest used for the tests.
"""
cdmCorpus = CdmCorpusDefinition()
cdmCorpus.storage.default_namespace = 'local'
adapter = LocalAdapter(root=local_root_path)
cdmCorpus.storage.mount('local', adapter)
# add cdm namespace
cdmCorpus.storage.mount('cdm', adapter)
manifest = CdmManifestDefinition(cdmCorpus.ctx, 'manifest')
manifest.folder_path = '/'
manifest.namespace = 'local'
return manifest
def create_document_for_entity(cdm_corpus: 'CdmCorpusDefinition', entity: 'CdmEntityDefinition', nameSpace: str = 'local'):
"""
For an entity, it creates a document that will contain the entity.
"""
cdm_folder_def = cdm_corpus.storage.fetch_root_folder(nameSpace)
entity_doc = cdm_corpus.ctx.corpus.make_object(CdmObjectType.DOCUMENT_DEF, '{}.cdm.json'.format(entity.entity_name), False)
cdm_folder_def.documents.append(entity_doc)
entity_doc.definitions.append(entity)
return entity_doc
| 35.558824 | 127 | 0.746071 | 138 | 1,209 | 6.347826 | 0.384058 | 0.041096 | 0.02968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157155 | 1,209 | 33 | 128 | 36.636364 | 0.859666 | 0.102564 | 0 | 0 | 1 | 0 | 0.097514 | 0.020076 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.157895 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
54c1abcc8ecb4f60275606b22bbb22422b5b3be6 | 1,021 | py | Python | dashboard/frontend/callbacks.py | AndreWohnsland/CocktailBerry | 60b2dfc3a4a6f3ef9ab2d946a97d14829e575a9d | [
"MIT"
] | 1 | 2022-03-06T23:50:34.000Z | 2022-03-06T23:50:34.000Z | dashboard/frontend/callbacks.py | AndreWohnsland/CocktailBerry | 60b2dfc3a4a6f3ef9ab2d946a97d14829e575a9d | [
"MIT"
] | 4 | 2022-03-03T11:16:17.000Z | 2022-03-20T15:53:37.000Z | dashboard/frontend/callbacks.py | AndreWohnsland/CocktailBerry | 60b2dfc3a4a6f3ef9ab2d946a97d14829e575a9d | [
"MIT"
] | null | null | null | import dash
from dash.dependencies import Input, Output # type: ignore
import datetime
from treemap import generate_treemap, get_plot_data
from app import app
from store import store
@app.callback(Output('treemap', 'figure'),
Output('timeclock', "children"),
Input('interval-component', 'n_intervals'),
Input('url', 'pathname'))
def update_plot(n, pathname):
routes = {
"/n_today": 1,
"/vol_today": 2,
"/n_all": 3,
"/vol_all": 4,
}
graphtype = routes.get(pathname, 1)
store.current_graph_type = graphtype
df = get_plot_data(store.current_graph_type)
now_time = datetime.datetime.now().strftime('%H:%M')
trigger_id = dash.callback_context.triggered[0]["prop_id"]
triggered_by_time = trigger_id == "interval-component.n_intervals"
if df.equals(store.last_data) and triggered_by_time:
return [dash.no_update, now_time]
store.last_data = df
fig = generate_treemap(df)
return [fig, now_time]
| 31.90625 | 70 | 0.663075 | 134 | 1,021 | 4.820896 | 0.440299 | 0.032508 | 0.034056 | 0.083591 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007481 | 0.214496 | 1,021 | 31 | 71 | 32.935484 | 0.798005 | 0.011753 | 0 | 0 | 1 | 0 | 0.142999 | 0.029791 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.214286 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
54c3ac280575bb0ee6051627754ebf1784317751 | 4,095 | py | Python | tms/useraccount/views.py | csagar131/TicketManagementSystem | d2c6b340dcb1d7607257d88dc5b931a0624a774b | [
"Apache-2.0"
] | null | null | null | tms/useraccount/views.py | csagar131/TicketManagementSystem | d2c6b340dcb1d7607257d88dc5b931a0624a774b | [
"Apache-2.0"
] | 4 | 2021-06-04T23:51:17.000Z | 2022-02-10T10:41:21.000Z | tms/useraccount/views.py | csagar131/TicketManagementSystem | d2c6b340dcb1d7607257d88dc5b931a0624a774b | [
"Apache-2.0"
] | 1 | 2020-06-04T11:44:42.000Z | 2020-06-04T11:44:42.000Z | from django.shortcuts import render
from rest_framework.viewsets import ModelViewSet
from useraccount.serializer import UserSerializer,AgentUserSerializer
from rest_framework.views import APIView
from useraccount.models import User
from django.http.response import JsonResponse
from django.template.loader import render_to_string
from django.core.mail import send_mail
from rest_framework.authtoken.models import Token
from rest_framework.authentication import TokenAuthentication
from ticket.models import Organization
import random
import array
def username_generator(email):
email = email.split('@')[0]
return email
def password_generator():
passwd = ''
temp_pass_list = []
MAX_LEN = 12
DIGITS = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
LOCASE_CHARACTERS = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h',
'i', 'j', 'k', 'm', 'n', 'o', 'p', 'q',
'r', 's', 't', 'u', 'v', 'w', 'x', 'y',
'z']
UPCASE_CHARACTERS = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',
'I', 'J', 'K', 'M', 'N', 'O', 'p', 'Q',
'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y',
'Z']
SYMBOLS = ['@', '#', '$', '%', '=', ':', '?', '.', '/', '|', '~', '>',
'*', '(', ')', '<']
# combines all the character arrays above to form one array
COMBINED_LIST = DIGITS + UPCASE_CHARACTERS + LOCASE_CHARACTERS + SYMBOLS
# randomly select at least one character from each character set above
rand_digit = random.choice(DIGITS)
rand_upper = random.choice(UPCASE_CHARACTERS)
rand_lower = random.choice(LOCASE_CHARACTERS)
rand_symbol = random.choice(SYMBOLS)
temp_pass = rand_digit + rand_upper + rand_lower + rand_symbol
for x in range(MAX_LEN - 4):
temp_pass = temp_pass + random.choice(COMBINED_LIST)
temp_pass_list = array.array('u', temp_pass)
random.shuffle(temp_pass_list)
for x in temp_pass_list:
passwd +=x
return passwd
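The same pick-one-from-each-class-then-shuffle idea, sketched as a compact Python 3 function (`generate_password` is an illustrative name, not part of this module):

```python
import random
import string

def generate_password(length=12):
    """Guarantee one char from each class, pad from the pool, shuffle."""
    classes = [string.digits, string.ascii_lowercase,
               string.ascii_uppercase, '@#$%=:?./|~>*()<']
    chars = [random.choice(c) for c in classes]
    combined = ''.join(classes)
    chars += [random.choice(combined) for _ in range(length - len(chars))]
    random.shuffle(chars)
    return ''.join(chars)

pw = generate_password()
print(pw)
```

For security-sensitive credentials, `secrets.choice` from the standard library is preferable to `random.choice`, since `random` is not cryptographically secure.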
class UserModelViewset(ModelViewSet):
serializer_class = UserSerializer
authentication_classes = [TokenAuthentication]
queryset = User.objects.all()
def create(self,request,*args,**kwargs):
ser_data = self.get_serializer(data = request.data)
if ser_data.is_valid():
org=Organization.objects.create(name = request.data.get('org_name'))
user = User.objects.create_user(request.data.get('username'), request.data.get('email'),
request.data.get('password'),is_admin = True,organization = org)
usr = request.data['username']
msg_html = render_to_string('email_template.html',{'usr':usr})
send_mail('Subject here','Here is the message.','chouhansagar131@gmail.com',
[request.data['email'],'chouhansagar131@gmail.com'],html_message=msg_html,
fail_silently=False,
)
token = str(Token.objects.create(user=user))
return JsonResponse({'token':token,'user':ser_data.data})
else:
return JsonResponse(ser_data.errors)
class AgentUserViewSet(ModelViewSet):
serializer_class = AgentUserSerializer
queryset = User.objects.filter(is_admin = False)
def create(self,request,*args,**kwargs):
ser_data = self.get_serializer(data = request.data)
if ser_data.is_valid():
email = request.data.get('email')
username = username_generator(email)
password = '12345678'
org = Organization.objects.get(name = request.data.get('org_name'))
user = User.objects.create_user(username=username,password= password,email = email,organization = org)
usr_ser = UserSerializer(user)
token = str(Token.objects.create(user=user))
return JsonResponse({'token':token,'username':username,'password':password})
else:
return JsonResponse(ser_data.errors)
| 36.238938 | 114 | 0.60464 | 473 | 4,095 | 5.088795 | 0.323467 | 0.0457 | 0.034898 | 0.010802 | 0.221853 | 0.221853 | 0.192771 | 0.192771 | 0.192771 | 0.192771 | 0 | 0.012076 | 0.25177 | 4,095 | 112 | 115 | 36.5625 | 0.773499 | 0.031013 | 0 | 0.15 | 0 | 0 | 0.077702 | 0.012655 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0.175 | 0.1625 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
54c4dc3efeaaf5e89758e47b3cc255b10a88682a | 1,160 | py | Python | setup.py | ionata/django-unique-uploadto | da66ed30d6abd86566d9b141e3c48b10340740a2 | [
"BSD-3-Clause"
] | null | null | null | setup.py | ionata/django-unique-uploadto | da66ed30d6abd86566d9b141e3c48b10340740a2 | [
"BSD-3-Clause"
] | 1 | 2017-11-21T22:11:24.000Z | 2017-11-22T00:38:17.000Z | setup.py | ionata/django-unique-uploadto | da66ed30d6abd86566d9b141e3c48b10340740a2 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
from __future__ import absolute_import, print_function, unicode_literals
from setuptools import setup, find_packages
from unique_uploadto import __version__
with open('README.rst', 'r') as f:
readme = f.read()
setup(
name='django-unique-uploadto',
version=__version__,
description='Use a unique filename for django uploads',
long_description=readme,
author='Ionata Digital',
author_email='webmaster@ionata.com.au',
url='https://github.com/ionata/django-unique-uploadto',
license='BSD',
packages=find_packages(),
install_requires=[
'django>=1.8.0',
],
package_data={},
include_package_data=True,
classifiers=[
'Environment :: Web Environment',
'Intended Audience :: Developers',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Framework :: Django',
],
)
| 27.619048 | 72 | 0.64569 | 125 | 1,160 | 5.8 | 0.6 | 0.157241 | 0.206897 | 0.107586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012209 | 0.223276 | 1,160 | 41 | 73 | 28.292683 | 0.792453 | 0.017241 | 0 | 0.060606 | 0 | 0 | 0.438104 | 0.039508 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.030303 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
54c84616a029f134346dc45645dd043f6f816a04 | 793 | py | Python | scripts/python/helper/decoration.py | sulthonzh/zaruba | ec9262f43da17d86330da2c593b7da451aabd60f | [
"Apache-2.0"
] | null | null | null | scripts/python/helper/decoration.py | sulthonzh/zaruba | ec9262f43da17d86330da2c593b7da451aabd60f | [
"Apache-2.0"
] | null | null | null | scripts/python/helper/decoration.py | sulthonzh/zaruba | ec9262f43da17d86330da2c593b7da451aabd60f | [
"Apache-2.0"
] | null | null | null | import random
normal="\033[0m"
bold="\033[1m"
faint="\033[2m"
italic="\033[3m"
underline="\033[4m"
blinkSlow="\033[5m"
blinkRapid="\033[6m"
inverse="\033[7m"
conceal="\033[8m"
crossedOut="\033[9m"
black="\033[30m"
red="\033[31m"
green="\033[32m"
yellow="\033[33m"
blue="\033[34m"
magenta="\033[35m"
cyan="\033[36m"
white="\033[37m"
bgBlack="\033[40m"
bgRed="\033[41m"
bgGreen="\033[42m"
bgYellow="\033[43m"
bgBlue="\033[44m"
bgMagenta="\033[45m"
bgCyan="\033[46m"
bgWhite="\033[47m"
noStyle="\033[0m"
noUnderline="\033[24m"
noInverse="\033[27m"
noColor="\033[39m"
def generate_icon() -> str:
icon_list = ['🥜', '🍄', '🌰', '🍞', '🥐', '🥖', '🥞', '🧀', '🍖', '🍗', '🥓', '🍔', '🍟', '🍕', '🌭', '🌮', '🌯', '🥙', '🍲', '🥗', '🍿']
index = random.randrange(0, len(icon_list))
return icon_list[index]
| 20.333333 | 121 | 0.583859 | 130 | 793 | 3.692308 | 0.715385 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196353 | 0.100883 | 793 | 38 | 122 | 20.868421 | 0.447405 | 0 | 0 | 0 | 0 | 0 | 0.315259 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.028571 | 0 | 0.085714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49a498a0dfc278640dff975e47a36448f00bf3bc | 2,918 | py | Python | data_structures/tree/avl_tree.py | hongta/practice-python | 52d5278ea5402ea77054bfa5c4bfdbdf81c9c963 | [
"MIT"
] | null | null | null | data_structures/tree/avl_tree.py | hongta/practice-python | 52d5278ea5402ea77054bfa5c4bfdbdf81c9c963 | [
"MIT"
] | null | null | null | data_structures/tree/avl_tree.py | hongta/practice-python | 52d5278ea5402ea77054bfa5c4bfdbdf81c9c963 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from tree_node import AVLTreeNode
from binary_search_tree import BinarySearchTree
class AVLTree(BinarySearchTree):
def __init__(self):
super(AVLTree, self).__init__()
def insert(self, k, payload=None):
# tree is empty construct the tree
if not self._root:
self._root= AVLTreeNode(k,payload)
else:
n = AVLTreeNode(k, payload)
self._insert(self._root, n)
def _insert(self, tree_node, new_node):
if new_node.key == tree_node.key:
tree_node.payload = new_node.payload
return tree_node
if new_node.key < tree_node.key:
if not tree_node.left:
tree_node.set_children(left=new_node)
else:
self._insert(tree_node.left, new_node)
else:
if not tree_node.right:
tree_node.set_children(right=new_node)
else:
self._insert(tree_node.right, new_node)
return self._avl_insert_fixup(tree_node)
def _avl_insert_fixup(self, node):
# 2. update height of the ancestor node
self._update_height(node)
# 3. check whether the node became unbalanced
balance = self.get_balance(node)
if balance == 2:
if self.get_balance(node.right) < 0:
node.right = self._right_rotate(node.right)
return self._left_rotate(node)
if balance == -2:
if self.get_balance(node.left) > 0:
node.left = self._left_rotate(node.left)
return self._right_rotate(node)
return node
def _update_height(self, node):
node.height = max(self.height(node.left), self.height(node.right)) + 1
def height(self, n):
if not n:
return 0
else:
return n.height
def get_balance(self, node):
if not node:
return 0
        return self.height(node.right) - self.height(node.left)
def _right_rotate(self, node):
k1 = node.left
self._replace_with(node, k1)
node.set_children(left=k1.right)
k1.set_children(right=node)
self._update_height(node)
self._update_height(k1)
return k1
def _left_rotate(self, node):
k2 = node.right
self._replace_with(node, k2)
node.set_children(right=k2.left)
k2.set_children(left=node)
self._update_height(node)
self._update_height(k2)
return k2
if __name__ == '__main__':
t = AVLTree()
t.insert(10)
t.insert(15)
t.insert(20)
t.insert(25)
t.insert(30)
    p = t.search(20)
    print(p, p.left, p.right, p.height, p.parent)
    p = t.search(15)
    print(p, p.left, p.right, p.height, p.parent)
    p = t.search(25)
    print(p, p.left, p.right, p.height, p.parent)
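The rotation cases in `_avl_insert_fixup` hinge on the balance convention `height(right) - height(left)`. A minimal standalone sketch (independent of the classes above, which depend on `tree_node` and `binary_search_tree`) showing why a right-leaning chain triggers the `balance == 2` branch:

```python
class Node:
    # Bare node for illustration only; not the AVLTreeNode used above.
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(n):
    return 0 if n is None else 1 + max(height(n.left), height(n.right))

def balance(n):
    return 0 if n is None else height(n.right) - height(n.left)

# Inserting 10, 15, 20 without rebalancing produces a right chain;
# its root balance of +2 is the case resolved by a left rotation above.
root = Node(10, right=Node(15, right=Node(20)))
```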
| 26.770642 | 78 | 0.59013 | 396 | 2,918 | 4.116162 | 0.186869 | 0.063804 | 0.042945 | 0.06135 | 0.252147 | 0.210429 | 0.184663 | 0.14908 | 0.066871 | 0.066871 | 0 | 0.018793 | 0.30706 | 2,918 | 108 | 79 | 27.018519 | 0.787339 | 0.053804 | 0 | 0.166667 | 0 | 0 | 0.002903 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.025641 | null | null | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49aa6dbb7d625a529dc7cc00fc711016b4a758db | 3,614 | py | Python | scripts/collect.py | oveis/DeepVideoFaceSwap | e507f94d4f5d74c36e41c386c6fb14bb745a4885 | [
"MIT"
] | 5 | 2019-05-17T11:54:04.000Z | 2020-10-06T18:45:17.000Z | scripts/collect.py | oveis/DeepVideoFaceSwap | e507f94d4f5d74c36e41c386c6fb14bb745a4885 | [
"MIT"
] | null | null | null | scripts/collect.py | oveis/DeepVideoFaceSwap | e507f94d4f5d74c36e41c386c6fb14bb745a4885 | [
"MIT"
] | 5 | 2019-06-05T00:20:24.000Z | 2019-09-15T15:40:23.000Z | #!/usr/bin/env python3
""" The script to collect training data """
import logging
import os
import cv2 as cv
import numpy as np
from google_images_download import google_images_download as gid
from lib.utils import get_folder
from os.path import exists, isfile, join
logger = logging.getLogger(__name__) # pylint: disable=invalid-name
FRONT_FACE_CASCADE = cv.CascadeClassifier('scripts/haarcascades/haarcascade_frontalface_default.xml')
PROFILE_FACE_CASCADE = cv.CascadeClassifier('scripts/haarcascades/haarcascade_profileface.xml')
# TODO: Need a function to put images in S3 bucket.
# TODO: Retrieve face images from a given video file.
class Collect():
""" Data collect process. """
def __init__(self, arguments):
        logger.debug("Initializing %s (args: %s)", self.__class__.__name__, arguments)
self.args = arguments
self.output_dir = get_folder(self.args.output_dir)
self.limit = self.args.limit
self.keywords = self.args.keywords
self.driver_path = self.args.driver_path
self.extract_face = False
self.face_img_shape = (64, 64)
logger.debug("Initialized %s", self.__class__.__name__)
def process(self):
images_dir = join(self.output_dir, 'images')
# Images are downloaded in 'images_dir/<keywords>'.
self._download_images_from_google(images_dir)
# Extract faces from images.
if self.extract_face:
faces_dir = join(self.output_dir, 'faces')
self._detect_and_save_faces(join(images_dir, self.keywords), join(faces_dir, self.keywords))
# Examples: https://google-images-download.readthedocs.io/en/latest/examples.html
# Argument: https://google-images-download.readthedocs.io/en/latest/arguments.html
def _download_images_from_google(self, output_dir):
self._check_dir_path(output_dir)
params = {
'keywords': self.keywords,
"limit": self.limit,
'output_directory': output_dir
}
if self.limit >= 100:
params['chromedriver'] = self.driver_path
downloader = gid.googleimagesdownload()
downloader.download(params)
def _save_faces(self, img, faces, output_dir, file_id):
self._check_dir_path(output_dir)
        for i, (x, y, w, h) in enumerate(faces):
face_img = img[y:y+h, x:x+w]
output_file_path = join(output_dir, '{}_{}.jpeg'.format(file_id, i))
print(output_file_path)
face_img = cv.resize(face_img, self.face_img_shape)
cv.imwrite(output_file_path, face_img)
def _detect_and_save_faces(self, images_dir, faces_dir):
self._check_dir_path(images_dir)
self._check_dir_path(faces_dir)
file_names = [f for f in os.listdir(images_dir) if isfile(join(images_dir, f))]
for file_name in file_names:
file_id = file_name.split('.')[0]
img = cv.imread(join(images_dir, file_name))
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
frontal_faces = FRONT_FACE_CASCADE.detectMultiScale(gray, 1.3, 5)
self._save_faces(img, frontal_faces, join(faces_dir, 'frontal'), file_id)
profile_faces = PROFILE_FACE_CASCADE.detectMultiScale(gray, 1.3, 5)
self._save_faces(img, profile_faces, join(faces_dir, 'profile'), file_id)
def _check_dir_path(self, dir_path):
if not exists(dir_path):
os.makedirs(dir_path) | 36.505051 | 104 | 0.649972 | 467 | 3,614 | 4.72591 | 0.286938 | 0.040779 | 0.027186 | 0.028999 | 0.219755 | 0.164024 | 0.141368 | 0.086996 | 0.04531 | 0.04531 | 0 | 0.00663 | 0.248755 | 3,614 | 99 | 105 | 36.505051 | 0.806262 | 0.123409 | 0 | 0.032258 | 0 | 0 | 0.070181 | 0.033026 | 0 | 0 | 0 | 0.010101 | 0 | 1 | 0.096774 | false | 0 | 0.112903 | 0 | 0.225806 | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49aaf3536a9b3013f2535a7951571b5299a8099f | 604 | py | Python | heisen/core/__init__.py | HeisenCore/heisen | 0cd4d27822960553a8e83a72c7dfeefa76e65c06 | [
"MIT"
] | 5 | 2016-08-30T07:51:08.000Z | 2021-09-13T11:30:05.000Z | heisen/core/__init__.py | HeisenCore/heisen | 0cd4d27822960553a8e83a72c7dfeefa76e65c06 | [
"MIT"
] | 15 | 2016-09-15T19:21:24.000Z | 2016-10-22T16:22:15.000Z | heisen/core/__init__.py | HeisenCore/heisen | 0cd4d27822960553a8e83a72c7dfeefa76e65c06 | [
"MIT"
] | null | null | null | from heisen.config import settings
from jsonrpclib.request import ConnectionPool
def get_rpc_connection():
    if settings.CREDENTIALS:
        username, password = settings.CREDENTIALS[0]
    else:
        username = password = None
    servers = {'self': []}
    for instance_number in range(settings.INSTANCE_COUNT):
        servers['self'].append((
            'localhost', settings.RPC_PORT + instance_number, username, password
        ))
servers.update(getattr(settings, 'RPC_SERVERS', {}))
return ConnectionPool(servers, 'heisen', settings.APP_NAME)
rpc_call = get_rpc_connection()
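The loop in `get_rpc_connection` builds one `('localhost', port, username, password)` tuple per instance. A hedged standalone sketch of that port fan-out (the function name here is illustrative, not part of the heisen API):

```python
def build_local_servers(base_port, count, username=None, password=None):
    # One (host, port, user, password) tuple per instance, with ports
    # counted up from base_port, mirroring the range(INSTANCE_COUNT)
    # loop in get_rpc_connection above.
    return [('localhost', base_port + i, username, password)
            for i in range(count)]
```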
| 27.454545 | 80 | 0.692053 | 65 | 604 | 6.261538 | 0.553846 | 0.117936 | 0.078624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002075 | 0.201987 | 604 | 21 | 81 | 28.761905 | 0.842324 | 0 | 0 | 0 | 0 | 0 | 0.056291 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.2 | 0.133333 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
49ac5028ee971f3e584f2c491889fc4e4b16901b | 3,023 | py | Python | stub/nginx-status-stub.py | geld-tech/nginx-monitor-dashboard | 3fcd3bd184a0348095c4f4ec91a46ab98ee0ca80 | [
"Apache-2.0"
] | 1 | 2018-07-30T14:01:36.000Z | 2018-07-30T14:01:36.000Z | stub/nginx-status-stub.py | geld-tech/nginx-monitor-dashboard | 3fcd3bd184a0348095c4f4ec91a46ab98ee0ca80 | [
"Apache-2.0"
] | null | null | null | stub/nginx-status-stub.py | geld-tech/nginx-monitor-dashboard | 3fcd3bd184a0348095c4f4ec91a46ab98ee0ca80 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""
NGINX Status Stub
Returns sample resources usage
"""
import logging
import logging.handlers
import random
from optparse import OptionParser
from flask import Flask
app = Flask(__name__)
app.debug = True
# Initialisation
logging.basicConfig(format='[%(asctime)-15s] [%(threadName)s] %(levelname)s %(message)s', level=logging.INFO)
logger = logging.getLogger('root')
@app.route("/")
@app.route("/nginx_status", strict_slashes=False)
def nginx_status():
response = '''Active connections: {active}
server accepts handled requests
1650 1650 9255
Reading: {reading} Writing: {writing} Waiting: {waiting}'''.format(active = random.randint(1, 3),
reading = random.randint(0, 3),
writing = random.randint(1, 3),
waiting = random.randint(1, 5))
return response, 200
@app.route("/v")
@app.route("/version", strict_slashes=False)
def version():
response = 'nginx version: nginx/1.10.3 (Ubuntu)'
return response, 200
@app.route("/version_full", strict_slashes=False)
@app.route("/version/full", strict_slashes=False)
def full_version():
response = '''nginx version: nginx/1.10.3 (Ubuntu)
built with OpenSSL 1.0.2g 1 Mar 2016
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads'''
return response, 200
if __name__ == "__main__":
# Parse options
opts_parser = OptionParser()
opts_parser.add_option('--port', type="int", dest='port', help='IP Port to listen to.', default=8000)
opts_parser.add_option('--debug', action='store_true', dest='debug', help='Print verbose output.', default=False)
options, args = opts_parser.parse_args()
if options.debug:
logger.setLevel(logging.DEBUG)
logger.debug('Enabled DEBUG logging level.')
logger.info('Options parsed')
app.run(host='0.0.0.0', port=options.port)
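The stub emits text in the standard `stub_status` format. A sketch of how a consumer might parse those counters back out (this parser is an illustration, not part of the stub itself):

```python
import re

SAMPLE = """Active connections: 2
server accepts handled requests
 1650 1650 9255
Reading: 1 Writing: 2 Waiting: 3"""

def parse_stub_status(text):
    # Extract the gauge values a monitor typically scrapes.
    active = int(re.search(r'Active connections:\s*(\d+)', text).group(1))
    reading, writing, waiting = (int(v) for v in re.search(
        r'Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting:\s*(\d+)',
        text).groups())
    return {'active': active, 'reading': reading,
            'writing': writing, 'waiting': waiting}
```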
| 50.383333 | 1,124 | 0.700629 | 431 | 3,023 | 4.765661 | 0.401392 | 0.073028 | 0.081792 | 0.03408 | 0.161149 | 0.076923 | 0.076923 | 0.040896 | 0.040896 | 0 | 0 | 0.022745 | 0.141912 | 3,023 | 59 | 1,125 | 51.237288 | 0.769083 | 0.032418 | 0 | 0.069767 | 0 | 0.023256 | 0.562436 | 0.304094 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.116279 | 0 | 0.255814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49ad0529acc7b30e818083fbddf61cedb7ec9149 | 1,616 | py | Python | test_question4.py | fmakawa/Practice | 7f6eaa1dde4e46088ca5dcee76de1bb56a363238 | [
"MIT"
] | null | null | null | test_question4.py | fmakawa/Practice | 7f6eaa1dde4e46088ca5dcee76de1bb56a363238 | [
"MIT"
] | null | null | null | test_question4.py | fmakawa/Practice | 7f6eaa1dde4e46088ca5dcee76de1bb56a363238 | [
"MIT"
] | null | null | null | """
Question 4
Level 1
Question:
Write a program which accepts a sequence of comma-separated numbers from console and generate a list and a tuple which contains every number.
Suppose the following input is supplied to the program:
34,67,55,33,12,98
Then, the output should be:
['34', '67', '55', '33', '12', '98']
('34', '67', '55', '33', '12', '98')
Hints:
In case of input data being supplied to the question, it should be assumed to be a console input.
tuple() method can convert list to tuple
"""
import unittest
from unittest.mock import patch
from question4 import listicle, tuplicle, listpicle
class TestDict(unittest.TestCase):
@patch('builtins.input', lambda *args: '34,67,55,33,12,98')
def test_list(self):
        d = listicle()
self.assertEqual(d, ['34', '67', '55', '33', '12', '98'], "Supposed to equal ['34', '67', '55', '33', '12', '98']")
@patch('builtins.input', lambda *args: '34,67,55,33,12,98')
def test_tuple(self):
d = tuplicle()
self.assertEqual(d, ('34', '67', '55', '33', '12', '98'),"Supposed to equal ('34', '67', '55', '33', '12', '98')")
@patch('builtins.input', lambda *args: '34,67,55,33,12,98')
def test_listpicle(self):
d = listpicle()
print(d)
self.assertEqual(d[0], ['34', '67', '55', '33', '12', '98'],"Supposed to equal ['34', '67', '55', '33', '12', '98']")
self.assertEqual(d[1], ('34', '67', '55', '33', '12', '98'),"Supposed to equal ('34', '67', '55', '33', '12', '98')")
suite = unittest.TestLoader().loadTestsFromTestCase(TestDict)
unittest.TextTestRunner(verbosity=2).run(suite)
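The functions under test live in `question4.py`, which is not shown here. A hypothetical sketch of what they might look like (the optional `raw` parameter is an assumption added so the sketch is testable without mocking `input`):

```python
def listicle(raw=None):
    # Split one comma-separated line into a list of strings.
    raw = input() if raw is None else raw
    return raw.split(',')

def tuplicle(raw=None):
    # Same values, as a tuple.
    return tuple(listicle(raw))

def listpicle(raw=None):
    # Both forms at once, matching the d[0]/d[1] checks in the tests above.
    values = listicle(raw)
    return [values, tuple(values)]
```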
| 36.727273 | 141 | 0.61448 | 243 | 1,616 | 4.074074 | 0.329218 | 0.056566 | 0.084848 | 0.113131 | 0.368687 | 0.368687 | 0.332323 | 0.332323 | 0.332323 | 0.332323 | 0 | 0.131222 | 0.179455 | 1,616 | 43 | 142 | 37.581395 | 0.615385 | 0.305693 | 0 | 0.15 | 1 | 0.2 | 0.320467 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.15 | false | 0 | 0.15 | 0 | 0.35 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49ad2866726183e18afb70540beb33954b2be143 | 543 | py | Python | app/tasks/uwu/uwu.py | tahosa/discord-util-bot | 2f261c5ae06da8a62e72502b53341720437860f5 | [
"MIT"
] | null | null | null | app/tasks/uwu/uwu.py | tahosa/discord-util-bot | 2f261c5ae06da8a62e72502b53341720437860f5 | [
"MIT"
] | null | null | null | app/tasks/uwu/uwu.py | tahosa/discord-util-bot | 2f261c5ae06da8a62e72502b53341720437860f5 | [
"MIT"
] | 1 | 2022-02-09T04:16:54.000Z | 2022-02-09T04:16:54.000Z | import logging
import discord
import discord.ext.commands as commands
_LOG = logging.getLogger('discord-util').getChild("uwu")
class Uwu(commands.Cog):
@commands.Cog.listener()
async def on_message(self, message: discord.Message):
if message.content.lower().startswith('hello bot') or message.content.lower().startswith('hewwo bot'):
await message.channel.send('Hewwo uwu')
return
if message.content.lower().startswith('good bot'):
await message.add_reaction("\N{FLUSHED FACE}")
| 31.941176 | 110 | 0.685083 | 67 | 543 | 5.507463 | 0.552239 | 0.113821 | 0.154472 | 0.235772 | 0.168022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18232 | 543 | 16 | 111 | 33.9375 | 0.831081 | 0 | 0 | 0 | 0 | 0 | 0.121547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49ae4cab0439ba556dfe9b168c615e0466cf0551 | 2,195 | py | Python | test.py | mltnhm/sr-turtle | d839eeb50e4ba70cfc2a4070c9f6fda2f0b19ca2 | [
"MIT"
] | 1 | 2020-04-16T18:06:13.000Z | 2020-04-16T18:06:13.000Z | test.py | mltnhm/sr-turtle | d839eeb50e4ba70cfc2a4070c9f6fda2f0b19ca2 | [
"MIT"
] | 3 | 2019-05-11T20:39:31.000Z | 2019-11-13T10:51:59.000Z | test.py | mltnhm/sr-turtle | d839eeb50e4ba70cfc2a4070c9f6fda2f0b19ca2 | [
"MIT"
] | 1 | 2019-11-12T08:02:52.000Z | 2019-11-12T08:02:52.000Z | from __future__ import print_function
import time
from sr.robot import *
SEARCHING = "SEARCHING"
DRIVING = "DRIVING"
R = Robot()
def drive(speed, seconds):
R.motors[0].m0.power = speed
R.motors[0].m1.power = speed
time.sleep(seconds)
R.motors[0].m0.power = 0
R.motors[0].m1.power = 0
def turn(speed, seconds):
R.motors[0].m0.power = speed
R.motors[0].m1.power = -speed
time.sleep(seconds)
R.motors[0].m0.power = 0
R.motors[0].m1.power = 0
state = SEARCHING
def get_gold_tokens():
gold_tokens = []
for token in R.see():
if token.info.marker_type is MARKER_TOKEN_GOLD:
gold_tokens.append(token)
# Sort list with the closest token first
gold_tokens.sort(key=lambda m: m.dist)
return gold_tokens
while True:
if state == SEARCHING:
print("Searching for gold tokens...")
tokens = get_gold_tokens()
print(tokens)
if len(tokens) > 0:
m = tokens[0]
            # tokens is sorted by distance, so this picks the closest one.
print("Token sighted. {0} is {1}m away, bearing {2} degrees." \
.format(m.info.offset, m.dist, m.rot_y))
state = DRIVING
else:
print("Can't see anything.")
turn(25, 0.3)
time.sleep(0.2)
elif state == DRIVING:
print("Aligning...")
tokens = get_gold_tokens()
if len(tokens) == 0:
state = SEARCHING
else:
m = tokens[0]
if m.dist < 0.4:
print("Found it!")
if R.grab():
print("Gotcha!")
turn(50, 0.5)
drive(50, 1)
R.release()
drive(-50, 0.5)
else:
print("Aww, I'm not close enough.")
exit()
elif -15 <= m.rot_y <= 15:
print("Ah, that'll do.")
drive(50, 0.5)
elif m.rot_y < -15:
print("Left a bit...")
turn(-12.5, 0.5)
elif m.rot_y > 15:
print("Right a bit...")
turn(12.5, 0.5)
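The DRIVING branch above reduces to a distance-and-bearing decision. A small pure-function sketch of that steering rule (extracted for illustration; not part of the sr.robot API):

```python
def steer(rot_y, dist, grab_range=0.4, threshold=15):
    # Mirrors the DRIVING-state checks: grab when close enough, drive
    # forward when roughly aligned, otherwise turn toward the token.
    if dist < grab_range:
        return 'grab'
    if -threshold <= rot_y <= threshold:
        return 'forward'
    return 'left' if rot_y < -threshold else 'right'
```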
| 25.229885 | 75 | 0.491116 | 288 | 2,195 | 3.666667 | 0.333333 | 0.05303 | 0.060606 | 0.056818 | 0.291667 | 0.246212 | 0.246212 | 0.223485 | 0.189394 | 0.189394 | 0 | 0.050074 | 0.381321 | 2,195 | 86 | 76 | 25.523256 | 0.727541 | 0.040091 | 0 | 0.253731 | 0 | 0 | 0.100285 | 0 | 0 | 0 | 0 | 0.011628 | 0 | 1 | 0.044776 | false | 0 | 0.044776 | 0 | 0.104478 | 0.179104 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49b63c647e63040901947f17755b744a1b67eb27 | 298 | py | Python | 17_Greedy/Step05/gamjapark.py | StudyForCoding/BEAKJOON | 84e1c5e463255e919ccf6b6a782978c205420dbf | [
"MIT"
] | null | null | null | 17_Greedy/Step05/gamjapark.py | StudyForCoding/BEAKJOON | 84e1c5e463255e919ccf6b6a782978c205420dbf | [
"MIT"
] | 3 | 2020-11-04T05:38:53.000Z | 2021-03-02T02:15:19.000Z | 17_Greedy/Step05/gamjapark.py | StudyForCoding/BEAKJOON | 84e1c5e463255e919ccf6b6a782978c205420dbf | [
"MIT"
] | null | null | null | import sys
N = int(sys.stdin.readline())
dis = list(map(int, sys.stdin.readline().split()))
coin = list(map(int, sys.stdin.readline().split()))
use_coin = coin[0]
tot = dis[0] * use_coin
for i in range(1, N - 1):
if coin[i] < use_coin:
use_coin = coin[i]
tot += dis[i] * use_coin
print(tot) | 19.866667 | 51 | 0.64094 | 55 | 298 | 3.381818 | 0.381818 | 0.188172 | 0.177419 | 0.306452 | 0.333333 | 0.333333 | 0.333333 | 0 | 0 | 0 | 0 | 0.016 | 0.161074 | 298 | 15 | 52 | 19.866667 | 0.728 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49bc98db6539f3a16066fd5753ae5ccc2e439eb8 | 1,107 | py | Python | tests/test_dht.py | fakegit/stilio | cf198b8ccadc7dcadc462ce83b801af00ef4e2f2 | [
"Apache-2.0"
] | 71 | 2019-10-09T17:18:12.000Z | 2022-02-26T12:15:53.000Z | tests/test_dht.py | zinsking/stilio | eade3c1993e185bef53fa25b4e12fe8be330251c | [
"Apache-2.0"
] | 3 | 2019-10-16T17:52:48.000Z | 2021-12-01T16:50:18.000Z | tests/test_dht.py | zinsking/stilio | eade3c1993e185bef53fa25b4e12fe8be330251c | [
"Apache-2.0"
] | 11 | 2020-01-21T09:09:14.000Z | 2022-03-27T12:05:36.000Z | from stilio.crawler.dht.node import Node
class TestNode:
def setup_method(self):
self.node = Node.create_random("192.168.1.1", 8000)
def test_create_random(self) -> None:
assert self.node.address == "192.168.1.1"
assert self.node.port == 8000
def test_generate_random_id(self) -> None:
assert len(Node.generate_random_id()) == 20
def test_hex_id(self) -> None:
assert self.node.hex_id == self.node.nid.hex()
def test_eq(self) -> None:
random_id = Node.generate_random_id()
assert Node(random_id, "192.168.1.1", 8000) == Node(
random_id, "192.168.1.1", 8000
)
assert Node(random_id, "192.168.1.2", 8000) != Node(
random_id, "192.168.1.1", 8000
)
assert Node(random_id, "192.168.1.1", 8000) != Node(
random_id, "192.168.1.1", 8001
)
assert Node(random_id, "192.168.1.1", 8000) != Node(
Node.generate_random_id(), "192.168.1.1", 8001
)
def test_repr(self) -> None:
assert repr(self.node) == self.node.nid.hex()
| 31.628571 | 60 | 0.581752 | 165 | 1,107 | 3.745455 | 0.2 | 0.15534 | 0.113269 | 0.116505 | 0.438511 | 0.347896 | 0.347896 | 0.309061 | 0.309061 | 0.309061 | 0 | 0.150062 | 0.265583 | 1,107 | 34 | 61 | 32.558824 | 0.610086 | 0 | 0 | 0.148148 | 0 | 0 | 0.099368 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.222222 | false | 0 | 0.037037 | 0 | 0.296296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49bec7c54696e35577e6576d879d884656bd76e8 | 1,937 | py | Python | wordonhd/ApiException.py | Mechazawa/WordOn-HD-Bot | d5a9dedd3d548ad1a9b33f49646e532bf511dd3e | [
"BSD-2-Clause"
] | null | null | null | wordonhd/ApiException.py | Mechazawa/WordOn-HD-Bot | d5a9dedd3d548ad1a9b33f49646e532bf511dd3e | [
"BSD-2-Clause"
] | null | null | null | wordonhd/ApiException.py | Mechazawa/WordOn-HD-Bot | d5a9dedd3d548ad1a9b33f49646e532bf511dd3e | [
"BSD-2-Clause"
] | null | null | null | from enum import Enum
from requests import Response
from urllib.parse import unquote
import json
class ApiErrorCode(Enum):
PHP_INVALID = 0
PHP_MISSING_PARAMS = 1
PHP_AUTH_FAILED = 2
PHP_NAME_INVALID = 4
PHP_USERNAME_INVALID = 5
PHP_USER_ALREADY_EXISTS = 6
PHP_PASSWORD_INVALID = 7
PHP_USER_NOT_FOUND = 8
PHP_WORD_INVALID = 9
PHP_USER_UNAUTH = 10
PHP_NAME_EXISTS = 11
PHP_ALREADY_HAS_ITEM = 12
PHP_NOT_ENOUGH_COINS = 13
PHP_MAX_NAMECHANGES = 14
PHP_USER_MAX_GAMES = 15
PHP_OTHER_USER_MAX_GAMES = 16
PHP_FB_ALREADY_EXISTS = 17
PHP_GAME_INVITE_ALREADY_SENT = 18
PHP_GET_LOCK_FAIL = 19
PHP_NOT_ENOUGH_STARS = 20
PHP_PAYMENT_APPROVAL = 21
PHP_MAX_HS = 22
PHP_USER_TYPE_INVALID = 23
PHP_MISSING_ITEM = 24
PHP_IS_FB_USER = 25
PHP_PROMOCODE_INVALID = 32
PHP_PROMOCODE_ONLY_NEW_PLAYERS = 33
PHP_PROMOCODE_ALREADY_REDEEMED = 34
PHP_DEFINITION_UNSUPPORTED = 48
PHP_DEFINITION_UNAVAILABLE = 49
PHP_DEFINITION_PARSE_ERROR = 50
POLL_INVALID_GAME = 237
POLL_INVALID_AUTH = 238
POLL_INVALID_REQUEST = 239
ALERT_MAX_GAMES = 1
ALERT_SNEAK_PEEK = 2
NULL_ERROR = 251
PARSE_ERROR = 252
SECURITY_ERROR = 253
IO_ERROR = 254
TIME_OUT_ERROR = 255
class ApiException(Exception):
def __init__(self, code):
message = ''
if isinstance(code, dict):
code = int(code['error'])
if isinstance(code, Response):
body = code.request.body
body = dict(list((x.split('=')[0], unquote(x.split('=')[1]))
for x in body.split('&')))
message = body
code = int(code.json()['error'])
name = ApiErrorCode(code).name
message = "{name}, {code}\n{extra}".format(name=name, code=code, extra=message)
message = message.strip()
super(ApiException, self).__init__(message) | 28.485294 | 87 | 0.661848 | 265 | 1,937 | 4.449057 | 0.483019 | 0.029686 | 0.020356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056604 | 0.261229 | 1,937 | 68 | 88 | 28.485294 | 0.767296 | 0 | 0 | 0 | 0 | 0 | 0.018576 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016393 | false | 0.016393 | 0.065574 | 0 | 0.786885 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
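`ApiException.__init__` resolves a numeric code to a name via `ApiErrorCode(code).name`. A minimal sketch of that Enum-by-value lookup, using a tiny stand-in enum rather than the full table above:

```python
from enum import Enum

class Err(Enum):
    # Two entries copied from ApiErrorCode, for illustration.
    PHP_AUTH_FAILED = 2
    PHP_USER_NOT_FOUND = 8

# Calling the Enum class with a value looks the member up by value,
# which is exactly what ApiErrorCode(code).name does above.
name = Err(8).name
```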
49c27444ea8191b6871d22350e36ce9770315509 | 752 | py | Python | qurry/libraries/standard_library/constructs/gaussian.py | LSaldyt/curry | 9004a396ec2e351aa143a10a53156649a6747343 | [
"MIT"
] | 11 | 2018-07-28T17:08:23.000Z | 2019-02-08T03:04:03.000Z | qurry/libraries/standard_library/constructs/gaussian.py | LSaldyt/Qurry | 9004a396ec2e351aa143a10a53156649a6747343 | [
"MIT"
] | 33 | 2019-07-09T09:46:44.000Z | 2019-09-23T23:44:37.000Z | qurry/libraries/standard_library/constructs/gaussian.py | LSaldyt/Qurry | 9004a396ec2e351aa143a10a53156649a6747343 | [
"MIT"
] | 4 | 2019-05-28T01:27:49.000Z | 2019-12-26T18:01:51.000Z | from math import erf, sqrt
from functools import partial
from ..library.multinomial import multinomial, to_multinomial
def gaussian_cdf(x, mu, sigma):
    y = (1.0 + erf((x - mu) / (sigma * sqrt(2.0)))) / 2.0
assert y >= 0 and y <= 1.0, 'y is not a valid probability: y={}'.format(y)
return y
def gaussian_cdfp(mu, sigma):
return partial(gaussian_cdf, mu=mu, sigma=sigma)
def gaussian(mu, sigma, block, kernel=None):
'''
Construct to create a discrete approximation of the gaussian distribution using mu and sigma
(gaussian 0 1 blocka)
'''
return multinomial(*multinomial(-3, 3, 64, gaussian_cdfp(float(mu), float(sigma))), offset=block, definitions=kernel.definitions)
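`gaussian` discretizes the CDF over [-3, 3] into 64 buckets via the `multinomial` helper, which is not shown here. A hedged sketch of the underlying idea, bucket probabilities as CDF differences:

```python
from math import erf, sqrt

def std_normal_cdf(x):
    return (1.0 + erf(x / sqrt(2.0))) / 2.0

def bucket_probs(lo, hi, n, cdf):
    # The mass of each of n equal-width buckets is the CDF difference
    # across its edges; this is what a discrete gaussian approximation
    # over [lo, hi] boils down to.
    edges = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return [cdf(edges[i + 1]) - cdf(edges[i]) for i in range(n)]

probs = bucket_probs(-3.0, 3.0, 64, std_normal_cdf)
```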
| 34.181818 | 133 | 0.668883 | 117 | 752 | 4.25641 | 0.401709 | 0.070281 | 0.018072 | 0.024096 | 0.060241 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034426 | 0.18883 | 752 | 21 | 134 | 35.809524 | 0.781967 | 0.151596 | 0 | 0 | 0 | 0 | 0.055105 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.25 | false | 0 | 0.25 | 0.083333 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49c45451bcf8f4588b0bba3456a64c9403ea4bc6 | 1,071 | py | Python | kickbase_api/models/league_user_stats.py | jhelgert/kickbase-api-python | 6e8b12c69cf36a4ce5c3ac37f9328cde5946a3e2 | [
"MIT"
] | 7 | 2020-08-17T07:20:30.000Z | 2022-02-03T19:21:53.000Z | kickbase_api/models/league_user_stats.py | jhelgert/kickbase-api-python | 6e8b12c69cf36a4ce5c3ac37f9328cde5946a3e2 | [
"MIT"
] | 4 | 2020-11-01T10:39:11.000Z | 2021-07-30T12:20:52.000Z | kickbase_api/models/league_user_stats.py | jhelgert/kickbase-api-python | 6e8b12c69cf36a4ce5c3ac37f9328cde5946a3e2 | [
"MIT"
] | 4 | 2020-11-01T09:12:39.000Z | 2021-08-23T13:25:00.000Z | from datetime import datetime
from kickbase_api.models._transforms import parse_date, parse_key_value_array_to_dict
from kickbase_api.models.base_model import BaseModel
from kickbase_api.models.league_user_season_stats import LeagueUserSeasonStats
class LeagueUserStats(BaseModel):
name: str = None
profile_image_path: str = None
cover_image_path: str = None
flags: int = None
placement: int = None
points: int = None
team_value: float = None
seasons: [LeagueUserSeasonStats] = None
    team_values: {datetime: float} = None
def __init__(self, d: dict = {}):
self._json_transform = {
"teamValues": parse_key_value_array_to_dict(lambda o: parse_date(o["d"]), lambda o: o["v"]),
"seasons": lambda v: [LeagueUserSeasonStats(_d) for _d in v]
}
self._json_mapping = {
"profileUrl": "profile_image_path",
"coverUrl": "cover_image_path",
"teamValue": "team_value",
"teamValues": "team_values"
}
super().__init__(d)
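`parse_key_value_array_to_dict` is imported from the models package and not shown. Judging from how it is wired up in `_json_transform`, it likely maps a list of `{"d": date, "v": value}` objects into a dict; a hypothetical sketch:

```python
def parse_key_value_array_to_dict(key_fn, value_fn):
    # Assumed behavior: return a transform that turns
    # [{'d': ..., 'v': ...}, ...] into {key_fn(item): value_fn(item)},
    # matching its use with parse_date(o["d"]) and o["v"] above.
    def transform(items):
        return {key_fn(o): value_fn(o) for o in items}
    return transform
```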
| 32.454545 | 104 | 0.655462 | 126 | 1,071 | 5.206349 | 0.444444 | 0.054878 | 0.068598 | 0.096037 | 0.073171 | 0.073171 | 0 | 0 | 0 | 0 | 0 | 0 | 0.248366 | 1,071 | 32 | 105 | 33.46875 | 0.814907 | 0 | 0 | 0 | 0 | 0 | 0.103641 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.153846 | 0 | 0.576923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
49c93ee339debd703889e1a8187ecdfd356689ca | 1,999 | py | Python | settings.py | ArneBinder/Pytorch-LRP | c17902138f1d560f1f5d38f401ac856e071a5800 | [
"BSD-3-Clause"
] | 117 | 2019-03-19T08:47:03.000Z | 2022-03-31T04:14:51.000Z | settings.py | ArneBinder/Pytorch-LRP | c17902138f1d560f1f5d38f401ac856e071a5800 | [
"BSD-3-Clause"
] | 10 | 2019-09-15T14:59:43.000Z | 2022-03-15T14:18:02.000Z | settings.py | ArneBinder/Pytorch-LRP | c17902138f1d560f1f5d38f401ac856e071a5800 | [
"BSD-3-Clause"
] | 49 | 2019-03-19T08:47:03.000Z | 2021-11-30T01:02:04.000Z | """
Settings for re-running the experiments from the paper "Layer-wise
relevance propagation for explaining deep neural network decisions
in MRI-based Alzheimer’s disease classification".
Please note that you need to download the ADNI data from
http://adni.loni.usc.edu/ and preprocess it using
https://github.com/ANTsX/ANTs/blob/master/Scripts/antsRegistrationSyNQuick.sh
Please prepare the data, such that you will get three HDF5 files,
consisting of a training, a validation and a holdout (test) set.
Each HDF5 file is required to have 2 datasets, namely X and y,
containing the data matrix and label vector accordingly. We have
included the "Data Split ADNI.ipynb" file as a guideline for data splitting.
Please note that it is highly dependent on the format of your data storage
and needs to be individualized as such.
Furthermore you will need SPM12 https://www.fil.ion.ucl.ac.uk/spm/software/spm12/
in order to access the Neuromorphometrics atlas.
Arguments:
model_path: Path to the trained pytorch model parameters
data_path: Path where the outputs will be stored and retrieved
ADNI_DIR: Path to the root of your downloaded ADNI data
train_h5: Path to the training set HDF5 file
val_h5: Path to the validation set HDF5 file
holdout_h5: Path to the holdout set HDF5 file
binary_brain_mask: Path to the mask used for masking the images,
included in the repository.
nmm_mask_path: Path to the Neuromorphometrics mask. Needs to be
acquired from SPM12. Typically located under
~/spm12/tpm/labels_Neuromorphometrics.nii
nmm_mask_path_scaled: Path to the rescaled Neuromorphometrics mask.
"""
settings = {
"model_path": INSERT,
"data_path": INSERT,
"ADNI_DIR": INSERT,
"train_h5": INSERT,
"val_h5": INSERT,
"holdout_h5": INSERT,
"binary_brain_mask": "binary_brain_mask.nii.gz",
"nmm_mask_path": "~/spm12/tpm/labels_Neuromorphometrics.nii",
"nmm_mask_path_scaled": "nmm_mask_rescaled.nii"
}
| 40.795918 | 81 | 0.758379 | 308 | 1,999 | 4.818182 | 0.474026 | 0.032345 | 0.048518 | 0.022237 | 0.070081 | 0.070081 | 0.070081 | 0.070081 | 0.070081 | 0 | 0 | 0.013366 | 0.176588 | 1,999 | 48 | 82 | 41.645833 | 0.888214 | 0.826913 | 0 | 0 | 0 | 0 | 0.556548 | 0.255952 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49ca8eabe12b4dbe3823135f9cccd4003e5ec8f9 | 274 | py | Python | compiler_test.py | zpcore/ACOW | 9d9186eb28af3e5e1242621457f36d5a7910366a | [
"MIT"
] | null | null | null | compiler_test.py | zpcore/ACOW | 9d9186eb28af3e5e1242621457f36d5a7910366a | [
"MIT"
] | null | null | null | compiler_test.py | zpcore/ACOW | 9d9186eb28af3e5e1242621457f36d5a7910366a | [
"MIT"
] | null | null | null | '''
# Test the compiler
'''
from ACOW import *
data = '''
a1 U[1,2] !a0&G[1,3]a3
'''
print('MTL Formula:', data)
# Test lex
print('\nLex Test:')
lexer.input(data)
for tok in lexer:
print(tok)
# Test parser
print('\nParser Test:')
result = parser.parse(data)
print(result) | 13.7 | 27 | 0.645985 | 45 | 274 | 3.933333 | 0.644444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030172 | 0.153285 | 274 | 20 | 28 | 13.7 | 0.732759 | 0.149635 | 0 | 0 | 0 | 0 | 0.271111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0.416667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
49cc558662f5dd7e7fb056fd6f79d57effb78d66 | 315 | py | Python | insta/admin.py | Stephenremmi/insta-clone | 88af361dca160f7840842ebebc306a02f97920ca | [
"MIT"
] | null | null | null | insta/admin.py | Stephenremmi/insta-clone | 88af361dca160f7840842ebebc306a02f97920ca | [
"MIT"
] | 3 | 2021-03-30T13:54:34.000Z | 2021-09-08T02:17:46.000Z | insta/admin.py | Stephenremmi/insta-clone | 88af361dca160f7840842ebebc306a02f97920ca | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Post, Comment, UserProfile
class ProfileAdmin(admin.ModelAdmin):
filter_horizontal =("followers", "following",)
# Register your models here.
admin.site.register(Post)
admin.site.register(Comment)
admin.site.register(UserProfile, admin_class=ProfileAdmin)
| 26.25 | 58 | 0.796825 | 38 | 315 | 6.552632 | 0.526316 | 0.108434 | 0.204819 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098413 | 315 | 11 | 59 | 28.636364 | 0.876761 | 0.08254 | 0 | 0 | 0 | 0 | 0.062937 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49cd3c2fd0bbd8a92289c21bd54ca7e440919719 | 25,042 | py | Python | travelling/migrations/0001_initial.py | HerbyDE/jagdreisencheck-webapp | 9af5deda2423b787da88a0c893f3c474d8e4f73f | [
"BSD-3-Clause"
] | null | null | null | travelling/migrations/0001_initial.py | HerbyDE/jagdreisencheck-webapp | 9af5deda2423b787da88a0c893f3c474d8e4f73f | [
"BSD-3-Clause"
] | null | null | null | travelling/migrations/0001_initial.py | HerbyDE/jagdreisencheck-webapp | 9af5deda2423b787da88a0c893f3c474d8e4f73f | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.16 on 2018-11-27 14:43
from __future__ import unicode_literals
import ckeditor.fields
from django.conf import settings
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import django.db.models.deletion
import django_countries.fields
import jagdreisencheck.custom_fields
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('accounts', '0006_auto_20181121_2205'),
('cms', '0020_old_tree_cleanup'),
]
operations = [
migrations.CreateModel(
name='AccommodationPrice',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('acc_type', models.CharField(choices=[('S', 'Self Organized'), ('C', 'Camping Site'), ('B', 'Bungalow/Simple Accommodation'), ('BB', 'Bed & Breakfast'), ('H', 'Hotel')], max_length=2, null=True, verbose_name='Accommodation')),
('price_hunter', models.FloatField(null=True, verbose_name='Price per hunter')),
('price_non_hunter', models.FloatField(null=True, verbose_name='Price per accompanying person')),
('calc_base', models.CharField(choices=[('DAY', 'Per day')], max_length=3, null=True, verbose_name='Calculation base')),
],
),
migrations.CreateModel(
name='Game',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=150, unique=True, verbose_name='Name')),
('pub_date', models.DateTimeField(auto_now_add=True, verbose_name='Date of Creation')),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Creator')),
],
options={
'verbose_name': 'Game',
'verbose_name_plural': 'Games',
},
),
migrations.CreateModel(
name='GamePrice',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('type', models.CharField(blank=True, max_length=75, null=True, verbose_name='Gender/Type')),
('calc_base', models.CharField(choices=[('CIC', 'CIC points'), ('PCS', 'Pieces'), ('KGS', 'Per kg'), ('AGE', 'Age class')], max_length=3, null=True, verbose_name='Calculation base')),
('base_range', models.CharField(max_length=20, null=True, verbose_name='Range')),
('trophy_costs', models.FloatField(null=True, verbose_name='Trophy costs')),
('wounded_costs', models.FloatField(null=True, verbose_name='Wounded but not found - costs')),
('private_notes', models.TextField(blank=True, null=True, verbose_name='Private notes')),
('public_notes', models.TextField(blank=True, null=True, verbose_name='Public notes')),
('game', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='travelling.Game', verbose_name='Game')),
],
),
migrations.CreateModel(
name='PriceList',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('last_modified', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Last modified')),
],
),
migrations.CreateModel(
name='Rating',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language', models.CharField(max_length=6, null=True, verbose_name='Language')),
('date_created', models.DateTimeField(auto_now_add=True, null=True, verbose_name='Creation Date')),
('last_modified', models.DateTimeField(blank=True, null=True, verbose_name='Last Modified')),
('agree_to_rules_of_contribution', models.BooleanField(default=False, verbose_name='Agree to Rules of Contribution')),
('name', models.CharField(max_length=90, null=True, verbose_name='Title')),
('description', models.TextField(blank=True, max_length=3000, null=True, verbose_name='Detailed Trip Description')),
('nps_indication', models.PositiveIntegerField(choices=[(1, 'No recommendation'), (2, 'Rather no recommendation'), (3, 'Indifferent'), (4, 'Recommendation'), (5, 'Definite recommendation')], default=3, null=True, verbose_name='Would you recommend the trip?')),
('trophies', django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True, verbose_name='Trophies')),
('meal_option', models.CharField(choices=[('N', 'No Meals Included'), ('B', 'Breakfast Included'), ('H', 'Breakfast & Dinner Included'), ('A', 'All Inclusive')], max_length=2, null=True, verbose_name='Catering Option')),
('meal_quality', models.IntegerField(choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Catering Quality')),
                ('accommodation_type', models.CharField(choices=[('S', 'Self Organized'), ('C', 'Camping Site'), ('B', 'Bungalow/Simple Accommodation'), ('BB', 'Bed & Breakfast'), ('H', 'Hotel')], default='S', max_length=2, null=True, verbose_name='Accommodation Type')),
('accommodation_rating', models.IntegerField(choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Accommodation Rating')),
('support_with_issues', models.IntegerField(choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Operator Support with Issues')),
('price_utility', models.IntegerField(choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Price/Utility')),
('use_of_dogs', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Did you make use of dogs?')),
('dog_purpose', models.CharField(blank=True, choices=[('NO', 'No Dogs were needed'), ('NH', 'Chasing Dogs'), ('DR', 'Joint Hunt'), ('PI', 'Deerstalking Support')], max_length=3, null=True, verbose_name='What did you use the dogs for?')),
('dog_quality', models.IntegerField(blank=True, choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Quality of dogs')),
('game_density', models.IntegerField(choices=[(1, 'Too sparse'), (3, 'Rather too sparse'), (5, 'Optimal density'), (3, 'Rather too dense'), (1, 'Too dense')], null=True, verbose_name='How dense was the wildlife?')),
('game_age_dist', models.IntegerField(choices=[(1, 'Too young'), (3, 'Rather too young'), (5, 'Optimal'), (3, 'Rather too old'), (1, 'Too old'), (0, 'Unknown')], null=True, verbose_name="How was the wildlife's age distributed?")),
('game_gender_dist', models.IntegerField(choices=[(1, 'Predominantly female game'), (3, 'Slight overweight of female game'), (5, 'Good gender distribution'), (3, 'Slight overweight of male game'), (1, 'Predominantly male game'), (0, 'Unknown')], null=True, verbose_name="How did you experience the wildlife's gender distribution?")),
('hunt_in_wilderness', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Did you hunt in the wilderness?')),
('check_strike_pos', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Was the strike position of your rifle checked?')),
('check_hunt_license', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Was your hunting license validated?')),
('professional_hunter_quality', models.IntegerField(choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Quality of the professional hunter')),
('customer_support', models.IntegerField(choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Customer Support')),
('hunting_introduction', models.IntegerField(choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Introduction to local hunting conditions')),
('staff_languages', jagdreisencheck.custom_fields.ListField(null=True, verbose_name='Languages spoken at the hunting site')),
('communication_quality', models.IntegerField(choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Communication between staff and yourself')),
('alternative_program', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Did you make use of alternative program')),
('quality_alternative_program', models.IntegerField(blank=True, choices=[(1, 'Bad'), (2, 'Rather Bad'), (3, 'Neutral'), (4, 'Rather Good'), (5, 'Good')], null=True, verbose_name='Quality of the alternative program')),
('economic_rating', models.DecimalField(decimal_places=4, max_digits=5, null=True, verbose_name='Economic Rating')),
('ecologic_rating', models.DecimalField(decimal_places=4, max_digits=5, null=True, verbose_name='Ecologic Rating')),
('social_rating', models.DecimalField(decimal_places=4, max_digits=5, null=True, verbose_name='Socio-Cultural Rating')),
('overall_rating', models.DecimalField(decimal_places=4, max_digits=5, null=True, verbose_name='Total Rating')),
],
),
migrations.CreateModel(
name='TravelInquiry',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=150, null=True, verbose_name='Name')),
('email', models.EmailField(max_length=254, null=True, verbose_name='E-Mail')),
('kind_of_inquiry', models.CharField(choices=[('S', 'Solo Travel'), ('HG', 'Group Travel (Only Hunters)'), ('MG', 'Group Travel (Hunters/Non-Hunters)'), ('OT', 'Other')], max_length=2, null=True, verbose_name='Kind of Inquiry')),
('inquiry', ckeditor.fields.RichTextField(null=True, verbose_name='Travel Inquiry')),
('consent_to_be_contacted', models.BooleanField(default=False, verbose_name='Consent to be contacted')),
('date', models.DateTimeField(auto_now_add=True, verbose_name='Date of Inquiry')),
('status', models.BooleanField(default=True, verbose_name='Status')),
],
),
migrations.CreateModel(
name='Trip',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('consent_to_travel_rules', models.BooleanField(default=False, verbose_name='Consent to Publishing Rules')),
('name', models.CharField(blank=True, max_length=150, null=True, verbose_name='Name')),
('country', django_countries.fields.CountryField(max_length=2, verbose_name='Country')),
('region', models.CharField(max_length=300, verbose_name='Region / Territory')),
('available_accommodation_types', jagdreisencheck.custom_fields.ListField(blank=True, null=True, verbose_name='Available Accommodations')),
('private_parking', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Private Parking')),
('airport_transfer', models.NullBooleanField(choices=[(False, 'No'), (True, 'Yes'), (None, 'Unknown')], default=None, verbose_name='Airport Transfer')),
('available_hunting_types', jagdreisencheck.custom_fields.ListField(verbose_name='Hunting Types')),
('rifle_rentals', models.NullBooleanField(choices=[(False, 'No'), (True, 'Yes'), (None, 'Unknown')], default=None, verbose_name='Rifle Rentals')),
('hunting_start_time', models.IntegerField(choices=[(1, 'January'), (2, 'February'), (3, 'March'), (4, 'April'), (5, 'May'), (6, 'June'), (7, 'July'), (8, 'August'), (9, 'September'), (10, 'October'), (11, 'November'), (12, 'December')], default=5, verbose_name='Start of Season')),
('hunting_end_time', models.IntegerField(choices=[(1, 'January'), (2, 'February'), (3, 'March'), (4, 'April'), (5, 'May'), (6, 'June'), (7, 'July'), (8, 'August'), (9, 'September'), (10, 'October'), (11, 'November'), (12, 'December')], default=10, verbose_name='End of Season')),
('family_offers', models.NullBooleanField(choices=[(False, 'No'), (True, 'Yes'), (None, 'Unknown')], default=None, verbose_name='Family Offers')),
('alternative_activities', models.NullBooleanField(choices=[(False, 'No'), (True, 'Yes'), (None, 'Unknown')], default=None, verbose_name='Alternative Offers')),
('available_meal_options', jagdreisencheck.custom_fields.ListField(blank=True, null=True, verbose_name='Catering Options')),
('staff_languages', jagdreisencheck.custom_fields.ListField(verbose_name='Staff Languages')),
('interpreter_at_site', models.NullBooleanField(choices=[(False, 'No'), (True, 'Yes'), (None, 'Unknown')], default=None, verbose_name='Interpreting Service')),
('wireless_coverage', models.NullBooleanField(choices=[(False, 'No'), (True, 'Yes'), (None, 'Unknown')], default=None, verbose_name='Wireless Coverage')),
('broadband_internet', models.NullBooleanField(choices=[(False, 'No'), (True, 'Yes'), (None, 'Unknown')], default=None, verbose_name='Broadband Internet')),
('vendor_link', models.URLField(blank=True, null=True, verbose_name='Vendor Link')),
('description', ckeditor.fields.RichTextField(blank=True, max_length=8000, null=True, verbose_name='Trip Description')),
('featured', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Featured')),
('featured_start_date', models.DateTimeField(auto_now=True, null=True, verbose_name='Featuring Start')),
('featured_end_date', models.DateTimeField(blank=True, null=True, verbose_name='Featuring End')),
('sponsored', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Sponsored')),
('sponsored_start_date', models.DateTimeField(auto_now=True, null=True, verbose_name='Sponsoring Start')),
('sponsored_end_date', models.DateTimeField(blank=True, null=True, verbose_name='Sponsoring End')),
('reviewed', models.BooleanField(choices=[(False, 'No'), (True, 'Yes')], default=False, verbose_name='Reviewed')),
('overall_rating', models.DecimalField(decimal_places=4, max_digits=6, null=True, verbose_name='Overall Rating')),
('rating_economic', models.DecimalField(decimal_places=4, max_digits=6, null=True, verbose_name='Economic Rating')),
('rating_ecologic', models.DecimalField(decimal_places=4, max_digits=6, null=True, verbose_name='Ecologic Rating')),
('rating_sociocultural', models.DecimalField(decimal_places=4, max_digits=6, null=True, verbose_name='Socio-Cultural Rating')),
('slogan', models.CharField(blank=True, max_length=75, null=True, verbose_name='Slogan')),
('pub_date', models.DateTimeField(auto_now=True, verbose_name='Publication Date')),
('last_modified', models.DateTimeField(auto_now=True, null=True, verbose_name='Last Modified')),
('views', models.IntegerField(default=0, verbose_name='Views')),
('tech_name', models.CharField(max_length=30, null=True, verbose_name='Technical Name')),
('slug', models.SlugField(null=True, verbose_name='Absolute URL')),
('headline_image', models.ImageField(blank=True, null=True, upload_to='trips/headline_images/', verbose_name='Title Image')),
('company', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='accounts.CompanyName', verbose_name='Company')),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='creator', to=settings.AUTH_USER_MODEL, verbose_name='Creator')),
('game', models.ManyToManyField(to='travelling.Game', verbose_name='Game')),
('reviewed_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='reviewer', to=settings.AUTH_USER_MODEL, verbose_name='Reviewed By')),
],
options={
'verbose_name': 'Trip',
'verbose_name_plural': 'Trips',
},
),
migrations.CreateModel(
name='TripBestOfModel',
fields=[
('cmsplugin_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, related_name='travelling_tripbestofmodel', serialize=False, to='cms.CMSPlugin')),
('name', models.CharField(max_length=75, verbose_name='Name')),
('num_objects', models.IntegerField(default=10, verbose_name='Number of Entries')),
('set_featured', models.BooleanField(default=False, verbose_name='Show Featured Only')),
('set_sponsored', models.BooleanField(default=False, verbose_name='Show Sponsored Only')),
('template', models.CharField(choices=[('travelling/components/trip-thumbnail.html', 'Standard Template')], default='travelling/components/trip-thumbnail.html', max_length=300, verbose_name='Template')),
],
options={
'abstract': False,
},
bases=('cms.cmsplugin',),
),
migrations.CreateModel(
name='TripCarouselConfig',
fields=[
('cmsplugin_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, related_name='travelling_tripcarouselconfig', serialize=False, to='cms.CMSPlugin')),
('name', models.CharField(max_length=75, verbose_name='Name')),
('application', models.CharField(max_length=75, verbose_name='Application')),
('model', models.CharField(max_length=75, verbose_name='Database Model')),
('num_objects', models.IntegerField(default=10, verbose_name='Number of Entries')),
('set_featured', models.BooleanField(default=False, verbose_name='Show Featured Only')),
('set_sponsored', models.BooleanField(default=False, verbose_name='Show Sponsored Only')),
('selection_criteria', models.CharField(blank=True, max_length=450, null=True, verbose_name='Selection Criteria')),
('template', models.CharField(choices=[('travelling/components/trip-thumbnail.html', 'Default Template')], max_length=300, verbose_name='Template')),
],
options={
'abstract': False,
},
bases=('cms.cmsplugin',),
),
migrations.CreateModel(
name='TripCatalogueModel',
fields=[
('cmsplugin_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, related_name='travelling_tripcataloguemodel', serialize=False, to='cms.CMSPlugin')),
('name', models.CharField(max_length=75, verbose_name='Name')),
],
options={
'abstract': False,
},
bases=('cms.cmsplugin',),
),
migrations.CreateModel(
name='Trophy',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('weight', models.DecimalField(blank=True, decimal_places=4, max_digits=8, null=True, verbose_name='Weight (kg)')),
('length', models.DecimalField(blank=True, decimal_places=4, max_digits=8, null=True, verbose_name='Length (cm)')),
('cic_pt', models.DecimalField(blank=True, decimal_places=4, max_digits=8, null=True, verbose_name='CIC Points')),
('game', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='travelling.Game', verbose_name='Game')),
('rating', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='trophy_rating', to='travelling.Rating', verbose_name='Associated Rating')),
                ('trip', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='travelling.Trip', verbose_name='Associated Trip')),
],
),
migrations.CreateModel(
name='AccommodationPriceList',
fields=[
('pricelist_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='travelling.PriceList')),
('name', models.CharField(max_length=75, verbose_name='Price list name')),
],
bases=('travelling.pricelist',),
),
migrations.CreateModel(
name='GamePriceList',
fields=[
('pricelist_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='travelling.PriceList')),
('name', models.CharField(max_length=75, verbose_name='Price list name')),
],
bases=('travelling.pricelist',),
),
migrations.AddField(
model_name='travelinquiry',
name='trip',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='travelling.Trip', verbose_name='Trip'),
),
migrations.AddField(
model_name='travelinquiry',
name='user',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='accounts.IndividualProfile', verbose_name='User'),
),
migrations.AddField(
model_name='rating',
name='trip',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='travelling.Trip', verbose_name='Associated Trip'),
),
migrations.AddField(
model_name='rating',
name='user',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, verbose_name='Author'),
),
migrations.AddField(
model_name='pricelist',
name='last_modified_by',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL, verbose_name='Author'),
),
migrations.AddField(
model_name='pricelist',
name='trip',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='travelling.Trip', verbose_name='Associated Trip'),
),
migrations.AddField(
model_name='pricelist',
name='user',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='inital_creator', to=settings.AUTH_USER_MODEL, verbose_name='Author'),
),
migrations.AddField(
model_name='gameprice',
name='price_list',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='travelling.PriceList', verbose_name='Associated Price List'),
),
migrations.AddField(
model_name='accommodationprice',
name='price_list',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='travelling.PriceList', verbose_name='Associated Price List'),
),
migrations.AlterUniqueTogether(
name='trip',
unique_together=set([('company', 'country', 'region')]),
),
migrations.AlterUniqueTogether(
name='rating',
unique_together=set([('user', 'trip')]),
),
]
| 83.473333 | 349 | 0.633376 | 2,760 | 25,042 | 5.585145 | 0.149275 | 0.10133 | 0.068115 | 0.080117 | 0.686474 | 0.651508 | 0.607979 | 0.581447 | 0.556601 | 0.521116 | 0 | 0.01182 | 0.20266 | 25,042 | 299 | 350 | 83.752508 | 0.760204 | 0.002755 | 0 | 0.439863 | 1 | 0 | 0.251382 | 0.023588 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027491 | 0 | 0.041237 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49db3a1ecbfa19102c7269a3533f50d40a8b3fab | 4,838 | py | Python | config/access/model_base.py | torrua/loglan_converter | e040825354bd07dda4f44d8dd84c79dc1db405c9 | [
"MIT"
] | null | null | null | config/access/model_base.py | torrua/loglan_converter | e040825354bd07dda4f44d8dd84c79dc1db405c9 | [
"MIT"
] | null | null | null | config/access/model_base.py | torrua/loglan_converter | e040825354bd07dda4f44d8dd84c79dc1db405c9 | [
"MIT"
] | null | null | null | from sqlalchemy import Column, String, Integer, Text, Boolean, DateTime
from config.access import Base
from sqlalchemy.ext.declarative import declared_attr
from datetime import datetime
class BaseFunctions:
"""
Base class for common methods
"""
__tablename__ = None
@declared_attr
def import_file_name(cls):
return f"{cls.__tablename__}.txt"
@declared_attr
def export_file_name(cls):
return f"AC_{datetime.now().strftime('%y%m%d%H%M')}_{cls.__tablename__}.txt"
def __init__(self, *initial_data, **kwargs):
"""Constructor"""
for dictionary in initial_data:
for key in dictionary:
setattr(self, key, dictionary[key])
for key in kwargs:
setattr(self, key, kwargs[key])
@classmethod
def export_file_path(cls, export_directory):
return export_directory + cls.export_file_name
def export(self):
pass
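The generic constructor above flattens any mix of positional dictionaries and keyword arguments into instance attributes. A minimal stand-alone sketch of that behaviour (the class name `ConstructorDemo` and the sample values are illustrative, not part of this module):

```python
class ConstructorDemo:
    """Re-implements the BaseFunctions constructor for demonstration."""

    def __init__(self, *initial_data, **kwargs):
        # Positional dictionaries first ...
        for dictionary in initial_data:
            for key in dictionary:
                setattr(self, key, dictionary[key])
        # ... then keyword arguments, which may override them.
        for key in kwargs:
            setattr(self, key, kwargs[key])


author = ConstructorDemo({"abbreviation": "JCB"}, full_name="James Cooke Brown")
print(author.abbreviation, author.full_name)  # JCB James Cooke Brown
```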
class AccessAuthor(Base, BaseFunctions):
"""
Author model
"""
__tablename__ = "Author"
sort_name = "Author"
id = Column(Integer, primary_key=True)
abbreviation = Column(String(64), unique=True, nullable=False)
full_name = Column(String(64))
notes = Column(String(128))
class AccessDefinition(Base, BaseFunctions):
__tablename__ = 'WordDefinition'
sort_name = "Definition"
word_id = Column("WID", Integer)
position = Column("I", Integer, nullable=False)
usage = Column("Usage", String(64))
grammar = Column("Grammar", String(8))
body = Column("Definition", Text, nullable=False)
main = Column("Main", String(8))
case_tags = Column("Tags", String(16))
id = Column("id", Integer, primary_key=True)
class AccessEvent(Base, BaseFunctions):
"""
Event model
"""
__tablename__ = "LexEvent"
sort_name = "Event"
id = Column("EVT", Integer, primary_key=True)
name = Column("Event", String(64), nullable=False)
date = Column("When", String(32), nullable=False)
definition = Column("WhyWhat", Text, nullable=False)
annotation = Column("DictionaryAnnotation", String(16))
suffix = Column("FilenameSuffix", String(16))
class AccessSetting(Base, BaseFunctions):
"""
Setting model
"""
__tablename__ = "Settings"
sort_name = "Settings"
date = Column("DateModified", DateTime, primary_key=True)
db_version = Column("DBVersion", Integer, nullable=False)
last_word_id = Column("LastWID", Integer, nullable=False)
db_release = Column("DBRelease", String(16), nullable=False)
class AccessSyllable(Base, BaseFunctions):
"""
Syllable model
"""
__tablename__ = "Syllable"
sort_name = "Syllable"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column("characters", String(8), primary_key=True)
type = Column(String(32), nullable=False)
allowed = Column(Boolean)
class AccessType(Base, BaseFunctions):
"""
Type model
"""
__tablename__ = "Type"
sort_name = "Type"
id = Column(Integer, primary_key=True)
type = Column(String(16), nullable=False)
type_x = Column(String(16), nullable=False)
group = Column(String(16), nullable=False)
parentable = Column(Boolean, nullable=False)
description = Column(String(255), nullable=True)
class AccessWord(Base, BaseFunctions):
"""
Word model
"""
__tablename__ = "Words"
sort_name = "Word"
word_id = Column("WID", Integer, nullable=False, primary_key=True)
type = Column("Type", String(16), nullable=False)
type_x = Column("XType", String(16), nullable=False)
affixes = Column("Affixes", String(16))
match = Column("Match", String(8))
authors = Column("Source", String(64))
year = Column("Year", String(128))
rank = Column("Rank", String(128))
origin = Column("Origin", String(128))
origin_x = Column("OriginX", String(64))
used_in = Column("UsedIn", Text)
TID_old = Column("TID", Integer) # references
class AccessWordSpell(Base, BaseFunctions):
"""WordSpell model"""
__tablename__ = "WordSpell"
sort_name = "WordSpell"
word_id = Column("WID", Integer, nullable=False)
word = Column("Word", String(64), nullable=False)
sort_a = Column("SortA", String(64), nullable=False)
sort_b = Column("SortB", String(64), nullable=False)
event_start_id = Column("SEVT", Integer, nullable=False)
event_end_id = Column("EEVT", Integer, nullable=False)
origin_x = Column("OriginX", String(64))
id = Column(Integer, primary_key=True)
'''
class AccessXWord(Base, BaseFunctions):
"""XWord model"""
__tablename__ = "XWord"
sort_name = "XWord"
XSortA = Column(String)
XSortB = Column(String)
WID = Column(String, primary_key=True)
I = Column(String)
XWord = Column(String)
'''
| 28.627219 | 84 | 0.65895 | 557 | 4,838 | 5.527828 | 0.263914 | 0.097109 | 0.045469 | 0.040922 | 0.180903 | 0.112374 | 0.043521 | 0 | 0 | 0 | 0 | 0.016398 | 0.20587 | 4,838 | 168 | 85 | 28.797619 | 0.785008 | 0.029971 | 0 | 0.070707 | 0 | 0.010101 | 0.100557 | 0.020669 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050505 | false | 0.010101 | 0.050505 | 0.030303 | 0.919192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
49de3b66d3ba8d7b390aa4f38533368a7826b8e9 | 689 | py | Python | WebEmpresarial/social/models.py | MarcosKlender/Web_Empresarial | 79b481488a74415e88898cff029233f339dc1e97 | [
"BSD-3-Clause"
] | null | null | null | WebEmpresarial/social/models.py | MarcosKlender/Web_Empresarial | 79b481488a74415e88898cff029233f339dc1e97 | [
"BSD-3-Clause"
] | null | null | null | WebEmpresarial/social/models.py | MarcosKlender/Web_Empresarial | 79b481488a74415e88898cff029233f339dc1e97 | [
"BSD-3-Clause"
] | null | null | null | from django.db import models
# Create your models here.
class Link(models.Model):
    key = models.SlugField(max_length=100, unique=True, verbose_name='Nombre Clave')
    name = models.CharField(max_length=200, verbose_name='Red Social')
    url = models.URLField(max_length=200, null=True, blank=True, verbose_name='Enlace')
    created = models.DateTimeField(auto_now_add=True, verbose_name='Fecha de Creación')
    updated = models.DateTimeField(auto_now=True, verbose_name='Fecha de Edición')
class Meta:
verbose_name = 'enlace'
verbose_name_plural = 'enlaces'
ordering = ['name']
def __str__(self):
return self.name | 40.529412 | 95 | 0.685051 | 88 | 689 | 5.159091 | 0.556818 | 0.169604 | 0.132159 | 0.114537 | 0.096916 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016484 | 0.207547 | 689 | 17 | 96 | 40.529412 | 0.815018 | 0.034833 | 0 | 0 | 0 | 0 | 0.11747 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0.076923 | 0.769231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
49e46e9b59b725cb283f9125430ec7a34bd75825 | 9,521 | py | Python | 3_0_pgo_icp/solution/pose_graph_optimization/assignment_I_2/pgo_2D.py | karanchawla/ai_for_robotics | 03bb66bae99bac3acd79bc1ec6d3b9c0eeabcdf8 | [
"BSD-3-Clause"
] | 65 | 2017-03-03T07:30:28.000Z | 2021-08-19T01:12:47.000Z | 3_0_pgo_icp/solution/pose_graph_optimization/assignment_I_2/pgo_2D.py | karanchawla/ai_for_robotics | 03bb66bae99bac3acd79bc1ec6d3b9c0eeabcdf8 | [
"BSD-3-Clause"
] | 4 | 2017-03-02T13:51:40.000Z | 2017-11-01T16:49:22.000Z | 3_0_pgo_icp/solution/pose_graph_optimization/assignment_I_2/pgo_2D.py | ethz-asl/ai_for_robotics | 03bb66bae99bac3acd79bc1ec6d3b9c0eeabcdf8 | [
"BSD-3-Clause"
] | 43 | 2017-03-02T11:31:21.000Z | 2020-10-30T07:10:59.000Z | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Sun Apr 2 10:00 2017
@author: Timo Hinzmann (hitimo@ethz.ch)
"""
import math
from math import floor, ceil
import numpy as np
import matplotlib.pyplot as plt
from scipy.sparse import linalg as sla
from scipy import array, linalg, dot
from enum import Enum
import copy
import pylab
# References:
# [1] Grisetti, Kuemmerle, Stachniss et al. "A Tutorial on Graph-Based SLAM"
# Pose-graph optimization closely following Algorithm 1, 2D from [1].
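As a quick sanity check of the Jacobian blocks used below, the analytic derivative of the 2D rotation matrix (needed for equation (32) of [1]) can be compared against a central finite difference. This helper is an added sketch, not part of the original file:

```python
import numpy as np


def rotation_derivative_check(theta=0.4, h=1.0e-6):
    """Return True if the analytic dR/dtheta matches a finite difference."""
    def rot(a):
        # 2D rotation matrix, cf. equation (31).
        return np.array([[np.cos(a), -np.sin(a)],
                         [np.sin(a), np.cos(a)]])
    # Analytic derivative wrt. theta, cf. equation (32).
    dR = np.array([[-np.sin(theta), -np.cos(theta)],
                   [np.cos(theta), -np.sin(theta)]])
    # Central finite difference, O(h^2) accurate.
    dR_fd = (rot(theta + h) - rot(theta - h)) / (2.0 * h)
    return np.allclose(dR, dR_fd, atol=1.0e-8)
```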
class PoseGraphOptimization2D():
def __init__(self, vertices, constraints):
self.vertices = vertices
self.constraints = constraints
# State x := [x,y,theta].
self.x = self.vertices[:, 1:]
self.index_x = 0
self.index_y = 1
self.index_theta = 2
# Dimensions.
self.num_nodes = self.vertices.shape[0]
self.dimensions = 3
self.num_states = self.dimensions * self.num_nodes
# Residual of the constraint [dim.: 3x1]
def e_ij(self, R_ij, R_i, t_i, t_j, t_ij, theta_i, theta_j, theta_ij):
# Equation (30).
e_ij = np.zeros([3, 1])
# 2x1 block
e_ij[0:2, 0] = np.dot(R_ij.T, np.dot(R_i.T,(t_j - t_i)) - t_ij).reshape(2)
e_ij[2, 0] = theta_j - theta_i - theta_ij
return e_ij
# 2D rotation matrix [dim.: 2x2]
def R_i(self, theta_i):
# Equation (31).
R_i = np.zeros([2, 2])
R_i[0, 0] = np.cos(theta_i)
R_i[0, 1] = -np.sin(theta_i)
R_i[1, 0] = np.sin(theta_i)
R_i[1, 1] = np.cos(theta_i)
return R_i
# Derivate of 2D rotation matrix wrt. theta [dim.: 2x2]
def dR_i(self, theta_i):
# Required for equation (32).
dR_i = np.zeros([2, 2])
dR_i[0, 0] = -np.sin(theta_i)
dR_i[0, 1] = -np.cos(theta_i)
dR_i[1, 0] = np.cos(theta_i)
dR_i[1, 1] = -np.sin(theta_i)
return dR_i
# Derivative of error function wrt. x_i [dim.: 3x3]
def A_ij(self, R_ij, R_i, dR_i, t_j, t_i):
# Equation (32).
# The dimension of A_ij is [num_states x num_states]
A_ij = np.zeros([3, 3])
# 2x2 block
A_ij[0:2, 0:2] = -np.dot(R_ij.T, R_i.T)
# 2x1 block
A_ij[0:2, 2] = np.dot(np.dot(R_ij.T, dR_i.T), (t_j-t_i)).reshape(2)
A_ij[2, 2] = -1.0
return A_ij
# Derivative of error function wrt. x_j [dim.: 3x3]
def B_ij(self, R_ij, R_i):
# Equation (33).
        # Only the non-zero [3 x 3] block of the full B_ij is built here.
B_ij = np.zeros([3, 3])
# 2x2 block
B_ij[0:2, 0:2] = np.dot(R_ij.T, R_i.T)
B_ij[2, 2] = 1.0
return B_ij
# Normalize angles to [-pi,pi).
def normalizeAngles(self, theta):
# Iterate through the nodes and normalize the angles.
for i in range(0, self.num_nodes):
            while theta[i] < -math.pi:
                theta[i] += 2 * math.pi
            while theta[i] >= math.pi:
                theta[i] -= 2 * math.pi
return theta
def optimizePoseGraph(self):
# Maximum number of optimization iterations to avoid getting stuck
# in infinite while loop.
max_number_optimization_iterations = 1000
optimization_iteration_counter = 0
optimization_error = np.inf
tolerance = 1.0e-11
t_i = np.zeros([2, 1])
t_j = np.zeros([2, 1])
t_ij = np.zeros([2, 1])
Omega_ij = np.zeros([3, 3])
# Make sure we achieve the desired accuracy.
while optimization_error > tolerance:
# num_states = 3 * num_nodes (since x,y,theta)
H = np.zeros([self.num_states, self.num_states])
b = np.zeros([self.num_states, 1])
# Iterate over all constraints.
for constraint in self.constraints:
# Node i.
i = int(constraint[0])
# Node j.
j = int(constraint[1])
                # Relative translation from node i to node j.
t_ij[self.index_x] = constraint[2]
t_ij[self.index_y] = constraint[3]
# Relative rotation from node i to node j.
theta_ij = constraint[4]
# *Global* position of node i (initial guess).
t_i[self.index_x] = self.x[i, self.index_x]
t_i[self.index_y] = self.x[i, self.index_y]
# *Global* position of node j (initial guess).
t_j[self.index_x] = self.x[j, self.index_x]
t_j[self.index_y] = self.x[j, self.index_y]
# *Global* orientation of node i (initial guess).
theta_i = self.x[i, self.index_theta]
# *Global* orientation of node j (initial guess).
theta_j = self.x[j, self.index_theta]
# Information matrix Omega.
# First row.
Omega_ij[0, 0] = constraint[5]
Omega_ij[0, 1] = constraint[6]
Omega_ij[0, 2] = constraint[7]
# Second row.
Omega_ij[1, 0] = constraint[6]
Omega_ij[1, 1] = constraint[8]
Omega_ij[1, 2] = constraint[9]
# Third row.
Omega_ij[2, 0] = constraint[7]
Omega_ij[2, 1] = constraint[9]
Omega_ij[2, 2] = constraint[10]
# Compute R_ij, the *local* rotation matrix from node i to node j.
R_ij = self.R_i(theta_ij)
# Compute R_i, the *global* orientation of node i.
R_i = self.R_i(theta_i)
# Compute R_j, the *global* orientation of node j.
R_j = self.R_i(theta_j)
                # Compute dR_i, the derivative of R_i wrt. theta_i.
dR_i = self.dR_i(theta_i)
# Compute the derivative of the error function wrt. x_i.
A_ij = self.A_ij(R_ij, R_i, dR_i, t_j, t_i)
                # Compute the derivative of the error function wrt. x_j.
B_ij = self.B_ij(R_ij, R_i)
# Compute the residual of the constraint connecting node i and node j
e_ij = self.e_ij(R_ij, R_i, t_i, t_j, t_ij, theta_i, theta_j, theta_ij)
# Make sure to get the indices right...
# i=0: b[0:3]; i=1: b[3:6]; ...
                # j=1: b[3:6]; j=2: b[6:9]; ...
i_r = 3*i
i_c = 3*i+3
j_r = 3*j
j_c = 3*j+3
# Compute the coefficient vector.
# b_i
b[i_r:i_c] += np.dot(A_ij.T, np.dot(Omega_ij, e_ij)).reshape(3, 1)
# b_j
b[j_r:j_c] += np.dot(B_ij.T, np.dot(Omega_ij, e_ij)).reshape(3, 1)
# Compute the contribution of this constraint to the linear system.
# H_ii
H[i_r:i_c,i_r:i_c] += np.dot(A_ij.T, np.dot(Omega_ij, A_ij))
# H_ij
H[i_r:i_c,j_r:j_c] += np.dot(A_ij.T, np.dot(Omega_ij, B_ij))
# H_ji
H[j_r:j_c,i_r:i_c] += np.dot(B_ij.T, np.dot(Omega_ij, A_ij))
# H_jj
H[j_r:j_c,j_r:j_c] += np.dot(B_ij.T, np.dot(Omega_ij, B_ij))
# Keep the first node fixed.
H[0:3, 0:3] += np.eye(3, 3)
# Solve the linear system.
delta_x = sla.spsolve(H, -b)
delta_x = delta_x.reshape(self.num_nodes, self.dimensions)
# Equation (34): Update the states by applying the increments.
self.x += delta_x
# Save the current optimization error.
optimization_error = np.linalg.norm(delta_x, 2)
# Maximum number of optimization iterations to avoid getting stuck
# in infinite while loop.
optimization_iteration_counter += 1
if optimization_iteration_counter > max_number_optimization_iterations:
print "WARNING! Reached max. number of iterations before converging to desired tolerance!"
break
print "Optimization iter.: ", optimization_iteration_counter, " optimization error: ", optimization_error
# The angles are normalized to [-pi,pi) *after* applying the increments.
self.x[:, self.index_theta] = self.normalizeAngles(self.x[:, self.index_theta])
return self.x
def main():
# Relative path to data from exercise sheet.
base = "../../../pose_graph_optimization/assignment_I_2/"
# Load the input data.
vertices = np.genfromtxt(open(base + "vertices.dat"))
edges = np.genfromtxt(open(base + "edges.dat"))
lc = np.genfromtxt(open(base + "loop_closures.dat"))
# Edges and loop-closures are constraints that can be handled the same
# way in the pose graph optimization backend as remarked in the exercise sheet.
all_constraints = []
all_constraints = np.append(edges, lc, axis = 0)
# Plot the initial values.
pylab.plot(vertices[:, 1], vertices[:, 2], 'b')
plt.pause(1)
# Perform the 2D pose graph optimization according to [1], Algorithm 1, 2D
pgo = PoseGraphOptimization2D(vertices, all_constraints)
x_opt = pgo.optimizePoseGraph()
    # Save the optimized states transposed: each row holds one dimension,
    # i.e. [x_0 ... x_N; y_0 ... y_N; th_0 ... th_N]
np.savetxt('results_2D.txt', np.transpose(x_opt))
# Plot the optimized values.
pylab.plot(x_opt[:,0], x_opt[:,1], 'g')
plt.pause(5)
if __name__ == "__main__":
main()
| 36.619231 | 119 | 0.553513 | 1,445 | 9,521 | 3.459516 | 0.177855 | 0.010802 | 0.005401 | 0.011202 | 0.291058 | 0.174635 | 0.132827 | 0.10242 | 0.10122 | 0.088018 | 0 | 0.033093 | 0.330322 | 9,521 | 259 | 120 | 36.760618 | 0.750941 | 0.278752 | 0 | 0 | 0 | 0 | 0.035007 | 0.00715 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.014815 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49e8bc016b4a92e63bbff49dadf2d0f5ff48a5c0 | 7,673 | py | Python | mobi_parse_data.py | josting/CS538_Project | b503de4f8e632166f715bb28b621d21770e3142e | [
"MIT"
] | null | null | null | mobi_parse_data.py | josting/CS538_Project | b503de4f8e632166f715bb28b621d21770e3142e | [
"MIT"
] | null | null | null | mobi_parse_data.py | josting/CS538_Project | b503de4f8e632166f715bb28b621d21770e3142e | [
"MIT"
] | null | null | null | import os
import datetime as dt
import random
import networkx
# import matplotlib as mpl
import matplotlib.pyplot as plt
from const import *
activity = {}
with open(os.path.join(DATA_DIR, "mobiclique", "activity.csv")) as activity_fd:
for line in activity_fd.readlines():
line = line.strip()
if "#" in line:
line = line[:line.index("#")]
if not line:
continue
user_id, start_ts, end_ts = line.split(';')
if user_id not in activity:
activity[user_id] = []
activity[user_id].append( (int(start_ts), int(end_ts)) )
def is_awake(user_id, ts, activity):
for start_ts, end_ts in activity.get(user_id, []):
if ts >= start_ts and ts <= end_ts:
return True
return False
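# For example (hypothetical values): with activity = {'u1': [(100, 200)]},
# is_awake('u1', 150, activity) returns True, while
# is_awake('u1', 250, activity) returns False.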
transmission = {}
with open(os.path.join(DATA_DIR, "mobiclique", "transmission.csv")) as transmission_fd:
for line in transmission_fd.readlines():
line = line.strip()
if "#" in line:
line = line[:line.index("#")]
if not line:
continue
msg_type, msg_id, bytes, src_user_id, dst_user_id, ts, status = line.split(';')
#if status != '0':
# continue
if src_user_id not in transmission:
transmission[src_user_id] = {}
if dst_user_id not in transmission[src_user_id]:
transmission[src_user_id][dst_user_id] = []
ts = int(ts)
transmission[src_user_id][dst_user_id].append(ts)
reception = {}
with open(os.path.join(DATA_DIR, "mobiclique", "reception.csv")) as reception_fd:
for line in reception_fd.readlines():
line = line.strip()
if "#" in line:
line = line[:line.index("#")]
if not line:
continue
msg_type, msg_id, src_user_id, dst_user_id, ts = line.split(';')
if src_user_id not in reception:
reception[src_user_id] = {}
if dst_user_id not in reception[src_user_id]:
reception[src_user_id][dst_user_id] = []
ts = int(ts)
reception[src_user_id][dst_user_id].append(ts)
drift_dict = {}
for src_user_id in sorted(reception):
for dst_user_id in sorted(reception[src_user_id]):
for rcp_ts in reception[src_user_id][dst_user_id]:
if src_user_id not in transmission:
continue
transmissions = transmission[src_user_id].get(dst_user_id, None)
if transmissions is None:
continue
if (src_user_id, dst_user_id) not in drift_dict:
drift_dict[(src_user_id, dst_user_id)] = []
diff = [abs(rcp_ts - trn_ts) for trn_ts in transmissions]
idx = diff.index(min(diff))
trn_ts = transmission[src_user_id][dst_user_id][idx]
drift = trn_ts - rcp_ts
drift_dict[(src_user_id, dst_user_id)].append((trn_ts, drift))
for (src_user_id, dst_user_id) in sorted(drift_dict):
print src_user_id, dst_user_id, drift_dict[(src_user_id, dst_user_id)]
break
proximity = {}
with open(os.path.join(DATA_DIR, "mobiclique", "proximity.csv")) as proximity_fd:
for line in proximity_fd.readlines():
line = line.strip()
if "#" in line:
line = line[:line.index("#")]
if not line:
continue
ts, user_id, seen_user_id, major_code, minor_code = line.split(';')
ts = int(ts)
if ts not in proximity:
proximity[ts] = []
proximity[ts].append((user_id, seen_user_id))
def visit(node, edges, unvisited):
if node not in unvisited:
return []
unvisited.remove(node)
my_network = [node]
for (node1, node2) in edges:
if node == node1 and node2 in unvisited:
my_network.extend(visit(node2, edges, unvisited))
elif node == node2 and node1 in unvisited:
my_network.extend(visit(node1, edges, unvisited))
return my_network
def get_networks(nodes, edges):
networks = []
unvisited = list(nodes)
    while unvisited:
        node = unvisited[0]
        networks.append(visit(node, edges, unvisited))
    return map(sorted, map(set, networks))
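# A quick sketch of get_networks() on toy data (hypothetical values):
#   nodes = ['a', 'b', 'c', 'd'];  edges = [('a', 'b'), ('c', 'd')]
#   get_networks(nodes, edges)  ->  [['a', 'b'], ['c', 'd']]
# i.e. each connected component of the proximity graph comes back as a
# sorted list of its node ids.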
MAX_RNG = 75
timestamps = sorted(proximity)
#write traces to user.dat files
if 0:
user_fds = {}
for ts in timestamps:
for (user_id, seen_id) in proximity[ts]:
if user_id not in user_fds:
fd = open(r"mobiclique\%s.dat" % user_id, 'w')
last_ts = -1
user_fds[user_id] = [fd, last_ts]
else:
[fd, last_ts] = user_fds[user_id]
if last_ts != ts:
if last_ts > 0:
fd.write('\n')
fd.write("{} {} {}".format(ts, user_id, seen_id))
else:
fd.write(",{}".format(seen_id))
user_fds[user_id][1] = ts
for (fd, last_ts) in user_fds.values():
fd.close()
# Graph using networkx
if 1:
idx = random.sample(xrange(len(timestamps)), 25)
idx.sort()
sample_timestamps = map(timestamps.__getitem__, idx)
sample_dts = map(lambda ts: START_DT + dt.timedelta(seconds=ts),sample_timestamps)
for ts in sample_timestamps:
other_timestamps = filter(lambda x: abs(x-ts) < MAX_RNG, timestamps)
edges = sorted(set(reduce(list.__add__, [proximity[x] for x in other_timestamps])))
G = networkx.Graph(edges)
networkx.draw(G)
fig_fname = os.path.join(r"C:\Users\Jon\Google Drive\Grad_School\CS 538\project\scripts\figures", "%s.png" % ts)
plt.savefig(fig_fname)
plt.close()
networks = []
n_networks = []
max_size = []
idx = random.sample(xrange(len(timestamps)), 1500)
idx.sort()
sample_timestamps = map(timestamps.__getitem__, idx)
sample_dts = map(lambda ts: START_DT + dt.timedelta(seconds=ts),sample_timestamps)
for ts in sample_timestamps:
other_timestamps = filter(lambda x: abs(x-ts) < MAX_RNG, timestamps)
edges = sorted(set(reduce(list.__add__, [proximity[x] for x in other_timestamps])))
nodes = sorted(set(reduce(list.__add__, map(list, edges))))
new_networks = get_networks(nodes, edges)
networks.append(new_networks)
n_networks.append(len(new_networks))
max_size.append(max(map(len,new_networks)))
fd = open("output2.csv", 'w')
for vals in zip(sample_dts, n_networks, max_size):
fd.write(','.join(map(str,(vals))))
fd.write('\n')
fd.close()
# Get networks
if 0:
networks = []
n_networks = []
max_size = []
idx = random.sample(xrange(len(timestamps)), 1500)
idx.sort()
sample_timestamps = map(timestamps.__getitem__, idx)
sample_dts = map(lambda ts: START_DT + dt.timedelta(seconds=ts),sample_timestamps)
for ts in sample_timestamps:
other_timestamps = filter(lambda x: abs(x-ts) < MAX_RNG, timestamps)
edges = sorted(set(reduce(list.__add__, [proximity[x] for x in other_timestamps])))
nodes = sorted(set(reduce(list.__add__, map(list, edges))))
new_networks = get_networks(nodes, edges)
networks.append(new_networks)
n_networks.append(len(new_networks))
max_size.append(max(map(len,new_networks)))
fd = open("output2.csv", 'w')
for vals in zip(sample_dts, n_networks, max_size):
fd.write(','.join(map(str,(vals))))
fd.write('\n')
fd.close()
| 37.247573 | 121 | 0.59377 | 1,024 | 7,673 | 4.206055 | 0.150391 | 0.082192 | 0.050151 | 0.039006 | 0.59856 | 0.550267 | 0.511725 | 0.474808 | 0.401207 | 0.377989 | 0 | 0.006032 | 0.28698 | 7,673 | 205 | 122 | 37.429268 | 0.78121 | 0.015248 | 0 | 0.446927 | 0 | 0 | 0.032816 | 0.003676 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.03352 | null | null | 0.005587 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49edf4b8c87add119d94e632341ab23299a577d3 | 1,726 | py | Python | boardgames/main/migrations/0001_initial.py | diophung/django-sample | 4916f4aa70506f6f40b736f68a0bbe398ea1ea8e | [
"Apache-2.0"
] | null | null | null | boardgames/main/migrations/0001_initial.py | diophung/django-sample | 4916f4aa70506f6f40b736f68a0bbe398ea1ea8e | [
"Apache-2.0"
] | null | null | null | boardgames/main/migrations/0001_initial.py | diophung/django-sample | 4916f4aa70506f6f40b736f68a0bbe398ea1ea8e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.4 on 2017-08-16 07:49
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Game',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('start_time', models.DateTimeField(auto_now_add=True)),
('last_active', models.DateTimeField(auto_now=True)),
('first_player', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='game_first_player', to=settings.AUTH_USER_MODEL)),
('next_to_move', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='games_to_move', to=settings.AUTH_USER_MODEL)),
('second_player', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='game_second_player', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Move',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('x', models.IntegerField()),
('y', models.IntegerField()),
('comment', models.CharField(max_length=300)),
('game', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='main.Game')),
],
),
]
| 42.097561 | 162 | 0.636153 | 193 | 1,726 | 5.466321 | 0.398964 | 0.045498 | 0.066351 | 0.104265 | 0.477725 | 0.455924 | 0.400948 | 0.400948 | 0.400948 | 0.350711 | 0 | 0.015094 | 0.232329 | 1,726 | 40 | 163 | 43.15 | 0.781132 | 0.039397 | 0 | 0.3125 | 1 | 0 | 0.087009 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49f45e903b240c04c0489fac65ede708075df463 | 1,458 | py | Python | apps/approval/api/serializers.py | emilps/onlineweb4 | 6f4aca2a4522698366ecdc6ab63c807ce5df2a96 | [
"MIT"
] | null | null | null | apps/approval/api/serializers.py | emilps/onlineweb4 | 6f4aca2a4522698366ecdc6ab63c807ce5df2a96 | [
"MIT"
] | null | null | null | apps/approval/api/serializers.py | emilps/onlineweb4 | 6f4aca2a4522698366ecdc6ab63c807ce5df2a96 | [
"MIT"
] | null | null | null | from django.core.exceptions import ValidationError as DjangoValidationError
from rest_framework import serializers
from apps.approval.models import CommitteeApplication, CommitteePriority
from apps.authentication.serializers import UserSerializer
class CommitteeSerializer(serializers.ModelSerializer):
group_name = serializers.SerializerMethodField(source='group')
class Meta:
model = CommitteePriority
fields = ('group', 'group_name', 'priority')
def get_group_name(self, instance):
return instance.group.name
class CommitteeApplicationSerializer(serializers.ModelSerializer):
committees = CommitteeSerializer(many=True, source='committeepriority_set')
applicant = UserSerializer(read_only=True)
class Meta:
model = CommitteeApplication
fields = ('name', 'email', 'applicant', 'application_text', 'prioritized', 'committees')
def create(self, validated_data):
committees = validated_data.pop('committeepriority_set')
application = CommitteeApplication(**validated_data)
try:
application.clean()
except DjangoValidationError as django_error:
raise serializers.ValidationError(django_error.message)
application.save()
for committee in committees:
CommitteePriority.objects.create(committee_application=application, **committee)
return CommitteeApplication.objects.get(pk=application.pk)
| 36.45 | 96 | 0.742798 | 132 | 1,458 | 8.090909 | 0.469697 | 0.033708 | 0.026217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176955 | 1,458 | 39 | 97 | 37.384615 | 0.89 | 0 | 0 | 0.071429 | 0 | 0 | 0.085734 | 0.028807 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0.035714 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
49f6f1f5b6e7113a385ba89e9bd8fb4c985968b5 | 421 | py | Python | examples/board_toolkit_simpletest.py | Neradoc/Adafruit_Board_Toolkit | c1602192f015924ce4ffd4e90dcd44769e565780 | [
"MIT",
"BSD-3-Clause",
"MIT-0",
"Unlicense"
] | 10 | 2021-03-16T18:05:53.000Z | 2022-03-20T20:40:38.000Z | examples/board_toolkit_simpletest.py | Neradoc/Adafruit_Board_Toolkit | c1602192f015924ce4ffd4e90dcd44769e565780 | [
"MIT",
"BSD-3-Clause",
"MIT-0",
"Unlicense"
] | 8 | 2021-03-17T18:32:54.000Z | 2021-12-31T19:58:01.000Z | examples/board_toolkit_simpletest.py | Neradoc/Adafruit_Board_Toolkit | c1602192f015924ce4ffd4e90dcd44769e565780 | [
"MIT",
"BSD-3-Clause",
"MIT-0",
"Unlicense"
] | 4 | 2021-04-21T13:48:18.000Z | 2022-03-13T15:07:01.000Z | # SPDX-FileCopyrightText: Copyright (c) 2021 Dan Halbert for Adafruit Industries
#
# SPDX-License-Identifier: Unlicense
import adafruit_board_toolkit.circuitpython_serial
comports = adafruit_board_toolkit.circuitpython_serial.repl_comports()
if not comports:
raise Exception("No CircuitPython boards found")
# Print the device paths or names that connect to a REPL.
print([comport.device for comport in comports])
| 32.384615 | 80 | 0.812352 | 55 | 421 | 6.090909 | 0.709091 | 0.077612 | 0.119403 | 0.197015 | 0.232836 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010811 | 0.12114 | 421 | 12 | 81 | 35.083333 | 0.894595 | 0.401425 | 0 | 0 | 0 | 0 | 0.117409 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49f8927dba9de24eccfdfa6bd46fde3e6e325f82 | 221 | py | Python | pipeline.py | sanidhya-singh/dagster-pipelines | 671c4869dca14f96902981e2e8c84df1319ca89e | [
"MIT"
] | null | null | null | pipeline.py | sanidhya-singh/dagster-pipelines | 671c4869dca14f96902981e2e8c84df1319ca89e | [
"MIT"
] | null | null | null | pipeline.py | sanidhya-singh/dagster-pipelines | 671c4869dca14f96902981e2e8c84df1319ca89e | [
"MIT"
] | null | null | null | from dagster import job, op
@op
def get_name():
return "dagster"
@op
def hello(name: str):
print(f"Hello, {name}!")
@job(description="Hello world Dagster pipeline")
def hello_dagster():
hello(get_name()) | 13.8125 | 48 | 0.669683 | 32 | 221 | 4.53125 | 0.5 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180995 | 221 | 16 | 49 | 13.8125 | 0.801105 | 0 | 0 | 0.2 | 0 | 0 | 0.220721 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0.1 | 0.5 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49fd04fd3ec6534f06e8ff42c0869a4f70bf3dd5 | 1,484 | py | Python | meiduo_mall/apps/meiduo_admin/views/order.py | zzZaida/meiduo_backend | c4f94ea7f9c47a08d3e37fb0ac2c1ec1dcf2c18b | [
"MIT"
] | null | null | null | meiduo_mall/apps/meiduo_admin/views/order.py | zzZaida/meiduo_backend | c4f94ea7f9c47a08d3e37fb0ac2c1ec1dcf2c18b | [
"MIT"
] | null | null | null | meiduo_mall/apps/meiduo_admin/views/order.py | zzZaida/meiduo_backend | c4f94ea7f9c47a08d3e37fb0ac2c1ec1dcf2c18b | [
"MIT"
] | null | null | null | from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.viewsets import ModelViewSet
from apps.meiduo_admin.serializers.order import OrderInfoSerializer
from apps.meiduo_admin.utils import PageNum
from apps.orders.models import OrderInfo
class OrderModelViewSet(ModelViewSet):
queryset = OrderInfo.objects.all()
serializer_class = OrderInfoSerializer
pagination_class = PageNum
def destroy(self, request, *args, **kwargs):
        return Response({'msg': 'Monster, taste a blow from my staff! How dare you delete my data!'})
@action(methods=['PUT'],detail=True)
def status(self,request,pk):
        # 1. Look up the order.
try:
order=OrderInfo.objects.get(order_id=pk)
except OrderInfo.DoesNotExist:
from rest_framework import status
return Response(status=status.HTTP_400_BAD_REQUEST)
# order=self.get_object()
        # 2. Update the order status.
order.status=request.data.get('status')
order.save()
        # 3. Return the response.
return Response({
'order_id':pk,
'status':order.status
})
"""
GET
{
"order_id": "20190909155657000000003",
"create_time": "2019-09-09T15:56:57.524510+08:00",
"update_time": "2019-09-09T15:57:02.595491+08:00",
"total_count": 1,
"total_amount": "11.00",
"freight": "10.00",
"pay_method": 2,
"status": 1,
"user": 3,
"address": 4,
"goods":[{},{},{},{}]
}
""" | 26.035088 | 67 | 0.623989 | 168 | 1,484 | 5.392857 | 0.517857 | 0.03532 | 0.075055 | 0.041943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080429 | 0.245957 | 1,484 | 57 | 68 | 26.035088 | 0.729223 | 0.030323 | 0 | 0 | 0 | 0 | 0.041627 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.28 | 0.04 | 0.64 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
49fd9dcc627b703550931ebd10aa32549f023644 | 29,587 | py | Python | QA/pycopia/remote/windows_server.py | kdart/pycopia3 | 8a7c820f096245411eabbb72345e4f30a35988b6 | [
"Apache-2.0"
] | 3 | 2018-11-26T15:00:20.000Z | 2022-01-28T23:17:58.000Z | QA/pycopia/remote/windows_server.py | kdart/pycopia3 | 8a7c820f096245411eabbb72345e4f30a35988b6 | [
"Apache-2.0"
] | null | null | null | QA/pycopia/remote/windows_server.py | kdart/pycopia3 | 8a7c820f096245411eabbb72345e4f30a35988b6 | [
"Apache-2.0"
] | 1 | 2018-11-26T15:00:21.000Z | 2018-11-26T15:00:21.000Z | #!/usr/bin/python3.4
# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Implements a Windows version of a client responder. This should run with the
native Python for Windows.
Install on a Windows server:
Place the following lines in c:\autoexec.bat::
PATH=%PATH%;C:\Python26;C:\Python26\Scripts
Now run (all on one line)::
C:\Python26>python.exe %PYTHONLIB%\site-packages\pycopia\remote\WindowsServer.py
--username DOMAIN\Administrator --password xxxxxxxx install
OR, for system process that can interact with console::
C:\Python26>python.exe %PYTHONLIB%\site-packages\pycopia\remote\WindowsServer.py
--interactive install
Note: if you get an error about an account not existing, you may need
to supply the username like this:
.\Administrator
If a username was supplied to run as, go to the Service Manger from the
Windows control panel, and perform the following.
  - Select "Remote Agent Server" from the list. Right-click and select "properties".
- Select the "Log On" tab.
- Click the "This account:" radio button.
- Enter DOMAIN\Administrator in the account box (or something else appropriate).
- Enter the proper password (twice).
- Click "Apply". You should confirm a message saying user is
enabled to log in as a service.
- Click "General" tab.
- You may now start the service.
You may also need to disable the Windows firewall for this to function
properly. This service is a massive security hole, so only run it on
a throw-away test machine on an isolated network.
"""
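# Example client-side use (a sketch: "Agent" is a placeholder name; the
# actual registered object name depends on how this service publishes
# itself to the Pyro name server):
#   import Pyro.core
#   agent = Pyro.core.getProxyForURI("PYROLOC://testhost:7867/Agent")
#   handle = agent.fopen("C:\\tmp\\data.txt", "w")
#   agent.fwrite(handle, "hello")
#   agent.fclose(handle)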
import os, sys, shutil, errno
import threading
# Pycopia imports
from pycopia.aid import IF
from pycopia.anypath import cygwin2nt, nt2cygwin
from pycopia import shparser
# returnable objects
from pycopia.remote.WindowsObjects import ExitStatus
# Windows stuff
import msvcrt
import win32api
import win32file
import win32net
import win32process
import win32event
# constants
import pywintypes
import win32con
import win32netcon
# some constants that the API forgot...
USE_WILDCARD = -1
USE_DISKDEV = 0
USE_SPOOLDEV = 1
USE_CHARDEV = 2
USE_IPC = 3
def setConfig():
Pyro.config.PYRO_STORAGE = "C:\\tmp\\"
Pyro.config.PYRO_LOGFILE = "C:\\tmp\\agent_svc.log"
Pyro.config.PYRO_TRACELEVEL=3
Pyro.config.PYRO_USER_LOGFILE = "C:\\tmp\\agent_user.log"
Pyro.config.PYRO_USER_TRACELEVEL = 3
Pyro.config.PYRO_PORT = 7867 # don't conflict with cygwin Pyro
import Pyro
import Pyro.util
setConfig()
Log=Pyro.util.Log
import Pyro.core
import Pyro.naming
from Pyro.ext.BasicNTService import BasicNTService, getRegistryParameters
_EXIT = False
UserLog = Pyro.util.UserLogger()
# msg, warn, or error methods
class WindowsFile(file):
"""A file object with some extra methods that match those in UserFile
(which has Posix extensions)."""
def locking(self, mode, nbytes):
return msvcrt.locking(self.fileno(), mode, nbytes)
def __repr__(self):
return "WindowsFile(%r, %r)" % (self.name, self.mode)
def lock_exclusive(self, length, start=0, whence=0, nb=0):
"""Locking method compatible with Posix files."""
if nb:
mode = msvcrt.LK_NBLCK
else:
mode = msvcrt.LK_LOCK
orig = self.tell()
self.seek(start, whence)
try:
msvcrt.locking(self.fileno(), mode, length)
finally:
self.seek(orig)
lock = lock_exclusive
def unlock(self, length, start=0, whence=0):
"""Posix compatible unlock."""
orig = self.tell()
self.seek(start, whence)
try:
msvcrt.locking(self.fileno(), msvcrt.LK_UNLCK, length)
finally:
self.seek(orig)
def get_osfhandle(self):
return msvcrt.get_osfhandle(self.fileno())
split_command_line = shparser.get_command_splitter()
# quick hack ... Windows sucks. No signal handling or anything useful, so it has to be faked.
class WindowsProcess(object):
def __init__(self, cmdline, logfile=None, env=None, callback=None, merge=True, pwent=None, async=False):
self.deadchild = False
self.exitstatus = None
self.cmdline = cmdline
self._callback = callback
self._buf = ""
self._log = logfile
if merge:
self.child_stdin, self.child_stdout = os.popen2(cmdline, "t", -1)
self.child_stderr = None
else:
self.child_stdin, self.child_stdout, self.child_stderr = os.popen3(cmdline, "t", -1)
self.childpid, self.handle = self._scan_for_self()
# since the Python popenX functions do not provide the PID, it must be
# scanned for in this ugly manner. 8-(
def _scan_for_self(self):
win32api.Sleep(2000) # sleep to give time for process to be seen in system table.
basename = self.cmdline.split()[0]
pids = win32process.EnumProcesses()
if not pids:
UserLog.warn("WindowsProcess", "no pids", pids)
for pid in pids:
try:
handle = win32api.OpenProcess(
win32con.PROCESS_QUERY_INFORMATION | win32con.PROCESS_VM_READ,
pywintypes.FALSE, pid)
except pywintypes.error as err:
UserLog.warn("WindowsProcess", str(err))
continue
try:
modlist = win32process.EnumProcessModules(handle)
except pywintypes.error as err:
UserLog.warn("WindowsProcess",str(err))
continue
for mod in modlist:
mname = win32process.GetModuleFileNameEx(handle, mod)
if mname.find(basename) >= 0:
return int(pid), handle
raise WindowsError("could not find process for %r" % (basename,))
def write(self, data):
return self.child_stdin.write(data)
def kill(self):
handle = win32api.OpenProcess(
win32con.PROCESS_VM_READ | win32con.PROCESS_TERMINATE, pywintypes.FALSE, self.childpid)
win32process.TerminateProcess(handle, 3)
def read(self, amt=1048576):
bs = len(self._buf)
while bs < amt:
c = self._read(4096)
if not c:
break
self._buf += c
bs = len(self._buf)
data = self._buf[:amt]
self._buf = self._buf[amt:]
return data
def readerr(self, amt=-1):
if self.child_stderr:
return self.child_stderr.read(amt)
def _read(self, amt):
data = self.child_stdout.read(amt)
if self._log:
self._log.write(data)
return data
def close(self):
if win32process.GetExitCodeProcess(self.handle) == win32con.STILL_ACTIVE:
self.kill()
self.child_stdin.close()
self.child_stdin = None
if self.child_stderr:
self.child_stdin.close()
self.child_stdin = None
es = ExitStatus(self.cmdline, self.child_stdout.close())
if self.exitstatus is None:
self.exitstatus = es
self.child_stdout = None
self.dead()
return self.exitstatus
def poll(self):
es = win32process.GetExitCodeProcess(self.handle)
if es == win32con.STILL_ACTIVE:
return None
else:
self.exitstatus = ExitStatus(self.cmdline, es)
self.dead()
return self.exitstatus
    # Called when the process is determined to be dead.
def dead(self):
if not self.deadchild:
self.deadchild = True
if self._callback:
self._callback(self)
# check if still running
def alive(self):
es = win32process.GetExitCodeProcess(self.handle)
if es == win32con.STILL_ACTIVE:
return True
else:
return False
# wait until finished
def wait(self):
# let python read until EOF for a wait
try:
self._buf += self.child_stdout.read()
self.close()
        except Exception:  # closed file?
pass
return self.exitstatus
def status(self):
return self.exitstatus
def isdead(self):
return self.deadchild
# considered true if child alive, false if child dead
def __bool__(self):
return not self.deadchild
# A server that performs filer client operations. This mostly delegates to the
# os module. But some special methods are provided for common functions.
class Win32Agent(Pyro.core.SynchronizedObjBase):
    def __init__(self):
        Pyro.core.SynchronizedObjBase.__init__(self)
        self._files = {}
        self._procs = {}
        self._dirstack = []

    def platform(self):
        return sys.platform

    def whatami(self):
        """Return agent implementation (class name)."""
        return self.__class__.__name__

    # Since file objects are not pickle-able, a handle is returned. Use the
    # handle for subsequent file operations on f* methods.
    def fopen(self, fname, mode="r", bufsize=-1):
        "Opens a file object and returns a handle to it."
        fname = cygwin2nt(fname)
        fo = WindowsFile(fname, mode, bufsize)
        UserLog.msg("fopen", fname)
        handle = fo.fileno()
        self._files[handle] = fo
        return handle

    def CreateFile(self, fname, mode="r", bufsize=-1):
        "Open a file the same way a File Directory migration engine would."
        fname = cygwin2nt(fname)
        UserLog.msg("CreateFile", fname)
        if mode == "r":
            wmode = win32file.GENERIC_READ
        elif mode == "w":
            wmode = win32file.GENERIC_WRITE
        elif mode in ('r+', 'w+', 'a+'):
            wmode = win32file.GENERIC_READ | win32file.GENERIC_WRITE
        else:
            raise ValueError("invalid file mode")
        h = win32file.CreateFile(
            fname,                    # LPCTSTR lpFileName
            wmode,                    # DWORD dwDesiredAccess
            win32file.FILE_SHARE_DELETE | win32file.FILE_SHARE_READ | win32file.FILE_SHARE_WRITE,  # DWORD dwShareMode
            None,                     # LPSECURITY_ATTRIBUTES lpSecurityAttributes
            win32file.OPEN_EXISTING,  # DWORD dwCreationDisposition
            win32file.FILE_ATTRIBUTE_NORMAL,  # DWORD dwFlagsAndAttributes
            0,                        # HANDLE hTemplateFile
        )
        self._files[int(h)] = h
        return int(h)
    def fclose(self, handle):
        "Closes a file object given the handle."
        fo = self._files.get(handle, None)
        if fo:
            if type(fo) is WindowsFile:
                fo.close()
                del self._files[handle]
            else:
                fo.Close()  # PyHANDLE from CreateFile

    def fread(self, handle, amt=-1):
        "Reads from the file object given the handle and amount to read."
        fo = self._files.get(handle, None)
        if fo:
            if type(fo) is WindowsFile:
                return fo.read(amt)
            else:
                return win32file.ReadFile(fo, amt, None)

    def fwrite(self, handle, data):
        "Writes to a file object given the handle."
        fo = self._files.get(handle, None)
        if fo:
            if type(fo) is WindowsFile:
                return fo.write(data)
            else:
                return win32file.WriteFile(fo, data, None)

    def fsync(self, handle):
        "fsync the file object."
        fo = self._files.get(handle, None)
        if fo:
            fo.flush()
            return os.fsync(fo.fileno())

    def fseek(self, handle, pos, how=0):
        "Seek in the file object."
        fo = self._files.get(handle, None)
        if fo:
            if type(fo) is WindowsFile:
                return fo.seek(pos, how)
            else:
                win32file.SetFilePointer(fo, pos, how)

    def ftell(self, handle):
        "Tell where the seek pointer is in the file object."
        fo = self._files.get(handle, None)
        if fo:
            if type(fo) is WindowsFile:
                return fo.tell()

    def fflush(self, handle):
        """Flush the file object buffer."""
        fo = self._files.get(handle, None)
        if fo:
            return fo.flush()

    def fileno(self, handle):
        "Return the file object's file descriptor."
        fo = self._files.get(handle, None)
        if fo:
            return fo.fileno()

    def get_handle_info(self, handle):
        fo = self._files.get(handle, None)
        if fo:
            return repr(fo)  # XXX
        else:
            return None

    def flock(self, handle, length=0, start=0, whence=0, nonblocking=False):
        """Lock the file with the given range."""
        fo = self._files.get(handle, None)
        if fo:
            return fo.lock_exclusive(length, start, whence, nonblocking)

    def funlock(self, handle, length, start=0, whence=0):
        fo = self._files.get(handle, None)
        if fo:
            fo.unlock(length, start, whence)

    def flist(self):
        return list(self._files.keys())
    def unlink(self, path):
        "Unlink (delete) the given file."
        path = cygwin2nt(path)
        return os.unlink(path)

    def rename(self, src, dst):
        "Rename file from src to dst."
        src = cygwin2nt(src)
        dst = cygwin2nt(dst)
        return os.rename(src, dst)

    # directory methods
    def mkdir(self, path, mode=0o777):
        "Make a directory."
        path = cygwin2nt(path)
        return os.mkdir(path, mode)

    def makedirs(self, path, mode=0o777):
        "Make a full path."
        path = cygwin2nt(path)
        return os.makedirs(path, mode)

    def chdir(self, path):
        path = cygwin2nt(path)
        return os.chdir(path)

    def rmdir(self, path):
        "Delete a directory."
        path = cygwin2nt(path)
        return os.rmdir(path)

    def getcwd(self):
        return os.getcwd()

    def getcwdu(self):
        return os.getcwd()

    def pushd(self, path=None):
        self._dirstack.append(os.getcwd())
        if path:
            path = cygwin2nt(path)
            os.chdir(path)

    def popd(self):
        try:
            path = self._dirstack.pop()
        except IndexError:
            return None
        else:
            os.chdir(path)
            return path

    def listdir(self, path):
        path = cygwin2nt(path)
        return os.listdir(path)
    ls = listdir

    def listfiles(self, path):
        path = cygwin2nt(path)
        isfile = os.path.isfile
        pjoin = os.path.join
        rv = []
        for fname in os.listdir(path):
            if isfile(pjoin(path, fname)):
                rv.append(nt2cygwin(fname))
        return rv

    def chmod(self, path, mode):
        path = cygwin2nt(path)
        return os.chmod(path, mode)

    def chown(self, path, uid, gid):
        path = cygwin2nt(path)
        return os.chown(path, uid, gid)

    def stat(self, path):
        path = cygwin2nt(path)
        return os.stat(path)

    def statvfs(self, path):
        path = cygwin2nt(path)
        return os.statvfs(path)
    # fd ops return the file descriptor as handle (of course)
    def open(self, fname, flags, mode=0o777):
        fd = os.open(fname, flags, mode)
        return fd

    def close(self, fd):
        return os.close(fd)

    def write(self, fd, data):
        return os.write(fd, data)

    def read(self, fd, n):
        return os.read(fd, n)
    # end fd ops

    # shutil interface
    def copyfile(self, src, dst):
        return shutil.copyfile(src, dst)

    def copymode(self, src, dst):
        return shutil.copymode(src, dst)

    def copystat(self, src, dst):
        return shutil.copystat(src, dst)

    def copy(self, src, dst):
        return shutil.copy(src, dst)

    def copy2(self, src, dst):
        return shutil.copy2(src, dst)

    def copytree(self, src, dst, symlinks=False):
        return shutil.copytree(src, dst, symlinks)

    def move(self, src, dst):
        return win32file.MoveFile(str(src), str(dst))

    def rmtree(self, path):
        path = cygwin2nt(path)
        for fname in os.listdir(path):
            file_or_dir = os.path.join(path, fname)
            if os.path.isdir(file_or_dir) and not os.path.islink(file_or_dir):
                self.rmtree(file_or_dir)  # it's a directory: recursive call
            else:
                try:
                    os.remove(file_or_dir)  # it's a file, delete it
                except:
                    # probably failed because it is not a normal file
                    win32api.SetFileAttributes(file_or_dir, win32file.FILE_ATTRIBUTE_NORMAL)
                    os.remove(file_or_dir)
        os.rmdir(path)  # delete the (now empty) directory itself

    # os.path delegates
    def exists(self, path):
        path = cygwin2nt(path)
        return os.path.exists(path)

    def isabs(self, path):
        path = cygwin2nt(path)
        return os.path.isabs(path)

    def isdir(self, path):
        path = cygwin2nt(path)
        return os.path.isdir(path)

    def isfile(self, path):
        path = cygwin2nt(path)
        return os.path.isfile(path)

    def islink(self, path):
        path = cygwin2nt(path)
        return os.path.islink(path)

    def ismount(self, path):
        path = cygwin2nt(path)
        return os.path.ismount(path)
    # process control; these calls are synchronous (they block)
    def system(self, cmd):
        UserLog.msg("system", cmd)
        return os.system(cmd)  # remember, stdout is on the server

    def run(self, cmd, user=None):
        if user is None:
            return self.pipe(cmd)
        else:
            return self.run_as(cmd, user.name, user.passwd)

    def run_async(self, cmd, user=None):
        UserLog.msg("run_async", cmd, str(user))
        proc = WindowsProcess(cmd, pwent=user)
        self._procs[proc.childpid] = proc
        return proc.childpid

    def _get_process(self, pid):
        return self._procs.get(pid, None)

    def read_process(self, pid, N=-1):
        proc = self._get_process(pid)
        if proc:
            return proc.read(N)
        else:
            return ''

    def write_process(self, pid, data):
        proc = self._get_process(pid)
        if proc:
            return proc.write(data)

    def poll(self, pid):
        """Poll for async process. Returns exitstatus if done."""
        try:
            proc = self._procs[pid]
        except KeyError:
            return -errno.ENOENT
        if proc.poll() is None:
            return -errno.EAGAIN
        else:
            del self._procs[pid]
            return proc.exitstatus

    def waitpid(self, pid):
        while True:
            rv = self.poll(pid)
            if rv == -errno.ENOENT:
                return None
            if rv == -errno.EAGAIN:
                proc = self._procs[pid]
                es = proc.wait()
                del self._procs[pid]
                return es
            else:  # already exited; poll() already removed it from the table
                return rv

    def kill(self, pid):
        """Kills a process that was started by run_async."""
        try:
            proc = self._procs.pop(pid)
        except KeyError:
            return -errno.ENOENT
        else:
            proc.kill()
            sts = proc.wait()
            return sts

    def killall(self):
        rv = []
        for pid in list(self._procs):  # copy keys; kill() mutates the table
            rv.append(self.kill(pid))
        return rv

    def plist(self):
        return list(self._procs.keys())

    def spawn(self, cmd, user=None, async_=True):
        # The "async" parameter is kept (renamed, since "async" is now a
        # reserved word in Python) for compatibility with the PosixServer.
        if user:
            cmd = ("runas /user:%s " % user) + cmd
        UserLog.msg("spawn", cmd)
        L = split_command_line(cmd)
        pid = os.spawnv(os.P_DETACH, L[0], L)
        return pid

    def pipe(self, cmd):
        UserLog.msg("pipe", cmd)
        proc = os.popen(cmd, 'r')
        text = proc.read()
        sts = proc.close()
        if sts is None:
            sts = 0
        return ExitStatus(cmd, sts), text
    def python(self, snippet):
        try:
            code = compile(str(snippet) + '\n', '<WindowsServer>', 'eval')
            rv = eval(code, globals(), vars(self))
        except:
            t, v, tb = sys.exc_info()
            return '*** %s (%s)' % (t, v)
        else:
            return rv

    def pyexec(self, snippet):
        try:
            code = compile(str(snippet) + '\n', '<WindowsServer>', 'exec')
            exec(code, globals(), vars(self))
        except:
            t, v, tb = sys.exc_info()
            return '*** %s (%s)' % (t, v)
        else:
            return

    # method that exists just to check if everything is working
    def alive(self):
        return True

    def suicide(self):
        "Kill myself. The server manager will resurrect me. How nice."
        global _EXIT
        _EXIT = True

    def clean(self):
        self.chdir("C:\\tmp")
        for f in self.flist():
            try:
                self.fclose(f)
            except:
                pass
        for pid in self.plist():
            try:
                self.kill(pid)
            except:
                pass

    def NetUseAdd(self, drive, share, username=None, domainname=None, password=None):
        """Calls Windows API to map a drive. Note that this does not automatically use DFS."""
        ui2 = {}
        ui2['local'] = "%s:" % drive[0].upper()
        ui2['remote'] = str(share)  # \\servername\sharename
        ui2['asg_type'] = USE_DISKDEV
        if username:
            ui2['username'] = str(username)
        if domainname:
            ui2['domainname'] = str(domainname)
        if password:
            ui2['password'] = str(password)
        return win32net.NetUseAdd(None, 2, ui2)

    def NetUseDelete(self, drive, forcelevel=0):
        """Remove a mapped drive."""
        # forcelevel: 0 = USE_NOFORCE, 1 = USE_FORCE, 2 = USE_LOTS_OF_FORCE
        ui2 = win32net.NetUseGetInfo(None, "%s:" % drive[0].upper(), 2)
        return win32net.NetUseDel(None, ui2['remote'], max(0, min(forcelevel, 3)))

    def net_use(self, drive, share, user=None, domainname=None, password=None):
        """Map a drive on a Windows client using the *net* command."""
        cmd = "net use %s: %s %s" % (drive[0].upper(), share, IF(password, password, ""))
        if user:
            cmd += " /USER:%s%s" % (IF(domainname, "%s\\" % domainname, ""), user)
        return self.pipe(cmd)

    def net_use_delete(self, drive):
        """Unmap a drive on a Windows client using the *net* command."""
        cmd = "net use %s: /delete /y" % (drive[0].upper(),)
        return self.pipe(cmd)
    def md5sums(self, path):
        """Reads the md5sums.txt file in path and returns the number of files
        checked good, the number bad (failures), and a list of the failures."""
        from pycopia import md5lib
        failures = []
        counter = Counter()
        md5lib.check_md5sums(path, failures.append, counter)
        return counter.good, counter.bad, failures

    def _get_home(self):
        try:  # F&*#!&@ windows
            HOME = os.environ['USERPROFILE']
        except KeyError:
            try:
                HOME = os.path.join(os.environ["HOMEDRIVE"], os.environ["HOMEPATH"])
            except KeyError:
                HOME = "C:\\"
        return HOME

    def get_tarball(self, url):
        self.pushd(self._get_home())
        # wget will check if the file is current and will not download if not needed
        exitstatus, out = self.pipe('wget -q "%s"' % (url,))
        self.popd()
        return exitstatus

    def run_script(self, script):
        """Runs a script from a shell."""
        name = os.path.join("c:\\", "tmp", "clnt%d.bat" % (os.getpid(),))
        sfile = open(name, "w")
        sfile.write(str(script))
        sfile.write("\n")  # just in case the string has no newline at the end
        sfile.close()
        try:
            sts, out = self.pipe(name)
        finally:
            os.unlink(name)
        return ExitStatus("cmd.exe", sts), out

    # for PosixServer duck typing
    def mount(self, host, export, mountpoint):
        """Map a drive on a client. Same as mount on NFS. The mountpoint should
        be a drive letter (without the colon)."""
        return self.net_use(mountpoint, r"\\%s\%s" % (host, export))

    def umount(self, mountpoint):
        """Unmap a drive on a client."""
        return self.net_use_delete(mountpoint)

    def run_as(self, cmd, user, password):
        cmd = 'runas /user:%s %s' % (user, cmd)
        return self.pipe(cmd)

    def get_short_pathname(self, path):
        """Get the short file name of path."""
        path = cygwin2nt(path)
        return win32api.GetShortPathName(path)

    def win32(self, funcname, *args, **kwargs):
        """Generic interface to win32. Calls a win32api function by name."""
        f = getattr(win32api, funcname)
        return f(*args, **kwargs)

    def hostname(self):
        """Returns the client host's name."""
        return win32api.GetComputerName()

    # Windows file API interface
    def CopyFile(self, src, dst):
        src = cygwin2nt(src)
        dst = cygwin2nt(dst)
        return win32file.CopyFile(src, dst, 1)

    def GetFileAttributes(self, name):
        name = cygwin2nt(name)
        return win32file.GetFileAttributes(name)

    def GetFileAttributeFlags(self):
        return {
            "ARCHIVE": win32file.FILE_ATTRIBUTE_ARCHIVE,
            "COMPRESSED": win32file.FILE_ATTRIBUTE_COMPRESSED,
            "DIRECTORY": win32file.FILE_ATTRIBUTE_DIRECTORY,
            "HIDDEN": win32file.FILE_ATTRIBUTE_HIDDEN,
            "NORMAL": win32file.FILE_ATTRIBUTE_NORMAL,
            "OFFLINE": win32file.FILE_ATTRIBUTE_OFFLINE,
            "READONLY": win32file.FILE_ATTRIBUTE_READONLY,
            "SYSTEM": win32file.FILE_ATTRIBUTE_SYSTEM,
            "TEMPORARY": win32file.FILE_ATTRIBUTE_TEMPORARY,
        }

    def SetFileAttributes(self, name, flags):
        name = cygwin2nt(name)
        return win32file.SetFileAttributes(name, flags)

    def add_share(self, pathname):
        """Create a new share on this server. A directory is also created."""
        drive, sharename = os.path.split(pathname)
        if not os.path.isdir(pathname):
            os.mkdir(pathname)
        shinfo = {}  # SHARE_INFO_2 struct
        shinfo['netname'] = sharename
        shinfo['type'] = win32netcon.STYPE_DISKTREE
        shinfo['remark'] = 'Testing share %s' % (sharename,)
        shinfo['permissions'] = 0
        shinfo['max_uses'] = -1
        shinfo['current_uses'] = 0
        shinfo['path'] = pathname
        shinfo['passwd'] = ''
        win32net.NetShareAdd(None, 2, shinfo)
        return sharename

    def del_share(self, pathname):
        """Remove a share. Returns True if successful, False otherwise."""
        drive, sharename = os.path.split(pathname)
        try:
            win32net.NetShareDel(None, sharename, 0)
        except:
            ex, val, tb = sys.exc_info()
            UserLog.warn("del_share", str(ex), str(val))
            return False
        else:
            return True
# md5sums callback for counting files
class Counter(object):
    def __init__(self):
        self.good = 0
        self.bad = 0

    def __call__(self, name, disp):
        if disp:
            self.good += 1
        else:
            self.bad += 1


######## main program #####

class AgentThread(threading.Thread):
    """Agent runs in this thread."""
    def __init__(self, stopcallback):
        threading.Thread.__init__(self)
        Log.msg("Win32Agent", "initializing")
        self._stopcallback = stopcallback

    def run(self):
        try:
            run_server()
        except Exception as x:
            Log.error("NS daemon", "COULD NOT START!!!", x)
            raise SystemExit
        self._stopcallback()


def run_server():
    os.chdir(r"C:\tmp")
    Pyro.core.initServer(banner=0, storageCheck=0)
    ns = Pyro.naming.NameServerLocator().getNS()
    daemon = Pyro.core.Daemon()
    daemon.useNameServer(ns)
    uri = daemon.connectPersistent(
        Win32Agent(),
        "Agents.%s" % (win32api.GetComputerName().lower(),))
    daemon.requestLoop(_checkexit)
    daemon.shutdown()


def _checkexit():
    global _EXIT
    return not _EXIT


class RemoteAgentService(BasicNTService):
    _svc_name_ = 'RemoteAgentService'
    _svc_display_name_ = "Remote Agent Server"
    _svc_description_ = 'Provides Windows remote control agent.'

    def __init__(self, args):
        super(RemoteAgentService, self).__init__(args)
        if not os.path.isdir(Pyro.config.PYRO_STORAGE):
            os.mkdir(Pyro.config.PYRO_STORAGE)
        self._thread = AgentThread(self.SvcStop)

    def _doRun(self):
        self._thread.start()

    def _doStop(self):
        self._thread.join()
        self._thread = None


if __name__ == '__main__':
    RemoteAgentService.HandleCommandLine()
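The `pushd`/`popd` pair in the Win32Agent class above keeps a simple directory stack on top of `os.chdir`. A minimal standalone sketch of the same pattern (module-level functions with illustrative names, not part of the original agent):

```python
import os

_dirstack = []  # saved working directories, most recent last


def pushd(path=None):
    """Remember the current directory, then optionally chdir to path."""
    _dirstack.append(os.getcwd())
    if path:
        os.chdir(path)


def popd():
    """Return to the most recently pushed directory, or None if the stack is empty."""
    try:
        path = _dirstack.pop()
    except IndexError:
        return None
    os.chdir(path)
    return path
```

The agent version behaves the same way, except that it also translates Cygwin-style paths before changing directory.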
b701550eed98d3100b7b0a2a4ed10c335a6dc06a | 2,587 | py | Python | src/models/transformer_encoder.py | tsumita/implicit_emotion | dae2d5a8162a2665b8e76812716068650feae710 | ["MIT"] | 6 | 2018-09-03T00:55:35.000Z | 2020-01-09T11:53:31.000Z | src/models/transformer_encoder.py | tsumita/implicit_emotion | dae2d5a8162a2665b8e76812716068650feae710 | ["MIT"] | null | null | null | src/models/transformer_encoder.py | tsumita/implicit_emotion | dae2d5a8162a2665b8e76812716068650feae710 | ["MIT"] | 2 | 2019-06-23T11:32:27.000Z | 2019-07-04T22:15:33.000Z
import copy
import torch.nn as nn

from .transformer import (Encoder,
                          EncoderLayer,
                          MultiHeadedAttention,
                          PositionwiseFeedforward,
                          PositionalEncoding)


class TransformerEncoder(nn.Module):
    """Transformer Encoder"""

    def __init__(self, embedding_dim, hidden_sizes, num_layers=6, num_heads=8,
                 dropout=0.1, batch_first=True, use_cuda=True):
        """Take a batch of representations and add context transformer-style

        Parameters
        ----------
        embedding_dim : TODO
        hidden_sizes : TODO
        num_layers : TODO, optional
        num_heads : TODO, optional
        dropout : TODO, optional
        batch_first : TODO, optional
        use_cuda : TODO, optional
        """
        if not batch_first:
            raise NotImplementedError
        super(TransformerEncoder, self).__init__()
        self.embedding_dim = embedding_dim
        self.hidden_sizes = hidden_sizes
        self.num_layers = num_layers
        self.num_heads = num_heads
        self.dropout = dropout
        self.use_cuda = use_cuda
        self.out_dim = embedding_dim

        # FIXME: I don't know how deepcopies will work within a pytorch module
        # <2018-06-25 12:06:59, Jorge Balazs>
        c = copy.deepcopy
        attn = MultiHeadedAttention(self.num_heads, self.embedding_dim)
        ff = PositionwiseFeedforward(self.embedding_dim, self.hidden_sizes,
                                     self.dropout)
        position = PositionalEncoding(self.embedding_dim, self.dropout)

        self.encoder = Encoder(
            EncoderLayer(embedding_dim, c(attn), c(ff), dropout), self.num_layers
        )
        self.positional_embedding = c(position)

        for p in self.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def forward(self, emb_batch, masks=None, lengths=None):
        """Add context to a batch of vectors

        Parameters
        ----------
        emb_batch : torch.FloatTensor, dim(batch_size, seq_len, hidden_dim)
        masks : torch.FloatTensor, dim(batch_size, seq_len)
        lengths : kept for compatibility with other layers

        Returns
        -------
        A torch.FloatTensor of dim(batch_size, seq_len, hidden_dim) containing
        context-enriched vectors
        """
        # for compatibility with the Annotated Transformer implementation
        masks = masks.unsqueeze(1)
        return self.encoder(self.positional_embedding(emb_batch), masks)
b706818aa45f72b58b9687e3a435833411cd0110 | 5,325 | py | Python | launchMinecraft.py | Timurinyo/tchrHlprStudent | 598f0e1321b11555d327393ab78723e1e286703e | ["MIT"] | null | null | null | launchMinecraft.py | Timurinyo/tchrHlprStudent | 598f0e1321b11555d327393ab78723e1e286703e | ["MIT"] | null | null | null | launchMinecraft.py | Timurinyo/tchrHlprStudent | 598f0e1321b11555d327393ab78723e1e286703e | ["MIT"] | null | null | null
#coding:utf-8
__author__ = 'CoderZh and Tymur'
import sys
from time import sleep
# Important for multithreading
sys.coinit_flags = 0 # pythoncom.COINIT_MULTITHREADED
import win32com
import win32com.client
import win32gui
import win32con
import pythoncom
#import keyboard
from pathlib import Path
import os
import re
import subprocess
import psutil
def dump(obj):
for attr in dir(obj):
print("obj.%s = %r" % (attr, getattr(obj, attr)))
def getIEServer(hwnd, ieServer):
if win32gui.GetClassName(hwnd) == 'Internet Explorer_Server':
ieServer.append(hwnd)
#def connectToIEServer():
def changeLanguage(lang):
#lang should be uk_UA or en_US
userprofile_folder = os.environ['userprofile']
data_folder = Path(f"{userprofile_folder}/AppData/Local/Packages/Microsoft.MinecraftEducationEdition_8wekyb3d8bbwe/LocalState/games/com.mojang/minecraftpe/")
file_to_open = data_folder / "options.txt"
s = open(file_to_open).read()
repl_result = re.subn(r'game_language:.*', f'game_language:{lang}', s)
f = open(file_to_open, 'w')
f.write(repl_result[0])
f.close()
print("language changed")
def launchMinecraft():
subprocess.call('explorer.exe shell:appsFolder\Microsoft.MinecraftEducationEdition_8wekyb3d8bbwe!Microsoft.MinecraftEducationEdition')
def getCredentials():
cred_path = os.path.join(os.path.dirname(sys.executable), 'credentials.txt')
with open(cred_path) as f:
lines = f.readlines()
login = lines[0]
password = lines[1]
print("credentials received")
return login, password
def wait_password_page_to_load(doc, login_element):
    # Wait until the password input page is loaded.
    # (The document object is passed in explicitly; the original referenced a
    # module-level `doc` that was never defined at this scope.)
    while login_element.className != "moveOffScreen":
        for el in doc.all:
            try:
                if el.name == "loginfmt" and el.className == "moveOffScreen":
                    login_element = el
            except:
                print("passwd screen isn't loaded yet")
                continue
        sleep(0.1)


def loginIE(login, password):
    pythoncom.CoInitializeEx(0)  # do not use this for multithreading
    # Connect to the Internet Explorer server instance
    mainHwnd = win32gui.FindWindow('ADALWebBrowserHost', '')
    if mainHwnd:
        ieServers = []
        win32gui.EnumChildWindows(mainHwnd, getIEServer, ieServers)
        if len(ieServers) > 0:
            ieServer = ieServers[0]
            msg = win32gui.RegisterWindowMessage('WM_HTML_GETOBJECT')
            ret, result = win32gui.SendMessageTimeout(ieServer, msg, 0, 0, win32con.SMTO_ABORTIFHUNG, 20000)
            ob = pythoncom.ObjectFromLresult(result, pythoncom.IID_IDispatch, 0)
            doc = win32com.client.dynamic.Dispatch(ob)
            print("connected to IE server")
            try:
                win32gui.SetForegroundWindow(mainHwnd)
            except:
                print("couldn't SetForegroundWindow 1")
                return False
            # Make sure that we've got all elements loaded
            page_type = ""
            login_not_ready = True
            submit_not_ready = True
            password_not_ready = True
            while login_not_ready or submit_not_ready or password_not_ready:
                # Get elements from the document
                try:
                    for el in doc.all:
                        # try is needed because not all elements have both name and type fields
                        try:
                            if el.name == "loginfmt":
                                login_element = el
                                login_not_ready = False
                                print("received login element")
                            if el.type == "submit":
                                submit_element = el
                                submit_not_ready = False
                                print("received btn element")
                            if el.name == "passwd":
                                password_element = el
                                password_not_ready = False
                        except:
                            print("element has no name attribute")
                            continue
                except:
                    print("doc isn't loaded yet")
                    return False
                sleep(0.1)
            # Figure out which page is loaded
            if password_element.className == "moveOffScreen":
                page_type = "login_page"
            elif login_element.className == "moveOffScreen":
                page_type = "password_page"
            if page_type == "login_page":
                # Paste login
                login_element.focus()
                login_element.value = login
                submit_element.style.backgroundColor = "#000000"
                submit_element.focus()
                submit_element.blur()
                submit_element.click()
                wait_password_page_to_load(doc, login_element)
            elif page_type == "password_page":
                # Paste password
                password_element.focus()
                password_element.value = password
                submit_element.style.backgroundColor = "#000000"
                submit_element.focus()
                submit_element.blur()
                submit_element.click()
                print("ok")
                return True
            else:
                print("page_type unspecified")
    else:
        print("No IE server found")
        return False
def launchMine(lessonType):
    if lessonType == "PS":
        changeLanguage("uk_UA")
    elif lessonType == "PR":
        changeLanguage("en_US")
    else:
        print("Unavailable lesson type specified. Should be PS or PR")
    login, password = getCredentials()
    launchMinecraft()
    login_successfull = False
    times_launched = 0
    while not login_successfull:
        try:
            login_successfull = loginIE(login, password)
            sleep(0.5)
            times_launched += 1
            if times_launched > 1200:
                return False
        except:
            print("something went completely wrong...")
    return True


def closeMine():
    os.system("TASKKILL /F /IM Minecraft.Windows.exe")
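The `changeLanguage` function above rewrites a single `key:value` line in Minecraft's `options.txt` with `re.subn`. A small self-contained sketch of that rewrite, separated from the file I/O so it can be exercised on plain strings (the helper name `set_option` is illustrative, not from the original script):

```python
import re


def set_option(options_text, key, value):
    """Replace 'key:<anything>' with 'key:value' in an options-file blob.

    Mirrors the re.subn call in changeLanguage: '.' does not cross newlines,
    so only the remainder of the matching line is replaced. Returns the new
    text and the number of substitutions made.
    """
    pattern = r'%s:.*' % re.escape(key)
    new_text, count = re.subn(pattern, '%s:%s' % (key, value), options_text)
    return new_text, count
```

Keeping the substitution pure makes the language-switch logic testable without touching the real Minecraft Education Edition options file.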
b713985ca32368cb00dff148dea34d4486a5b5ad | 1,293 | py | Python | trello/searchs.py | fif911/trello3_little_bit_updated | baf0275c5a89b3bcf9c1544897cbe25fafbc53d0 | ["BSD-2-Clause"] | 16 | 2016-01-19T17:02:24.000Z | 2020-02-20T19:23:32.000Z | trello/searchs.py | fif911/trello3_little_bit_updated | baf0275c5a89b3bcf9c1544897cbe25fafbc53d0 | ["BSD-2-Clause"] | 3 | 2016-02-10T14:17:58.000Z | 2016-07-26T01:31:54.000Z | trello/searchs.py | fif911/trello3_little_bit_updated | baf0275c5a89b3bcf9c1544897cbe25fafbc53d0 | ["BSD-2-Clause"] | 7 | 2016-02-09T23:47:00.000Z | 2021-06-05T17:03:22.000Z
import requests
class Searchs(object):
__module__ = 'trello'
def __init__(self, apikey, token=None):
self._apikey = apikey
self._token = token
def get(self, query, idOrganizations, idBoards=None, idCards=None, modelTypes=None, board_fields=None, boards_limit=None, card_fields=None, cards_limit=None, card_board=None, card_list=None, card_members=None, organization_fields=None, organizations_limit=None, member_fields=None, members_limit=None, action_fields=None, actions_limit=None, actions_since=None, partial=None):
resp = requests.get("https://trello.com/1/search" % (), params=dict(key=self._apikey, token=self._token, query=query, idOrganizations=idOrganizations, idBoards=idBoards, idCards=idCards, modelTypes=modelTypes, board_fields=board_fields, boards_limit=boards_limit, card_fields=card_fields, cards_limit=cards_limit, card_board=card_board, card_list=card_list, card_members=card_members, organization_fields=organization_fields, organizations_limit=organizations_limit, member_fields=member_fields, members_limit=members_limit, action_fields=action_fields, actions_limit=actions_limit, actions_since=actions_since, partial=partial), data=None)
resp.raise_for_status()
return resp.json()
b7187d387790af8d5795d75e9899699ce907f9df | 6,366 | py | Python | chrome/test/chromedriver/run_buildbot_steps.py | devasia1000/chromium | 919a8a666862fb866a6bb7aa7f3ae8c0442b4828 | ["BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause"] | 2 | 2019-02-03T05:19:48.000Z | 2021-11-15T15:07:21.000Z | chrome/test/chromedriver/run_buildbot_steps.py | devasia1000/chromium | 919a8a666862fb866a6bb7aa7f3ae8c0442b4828 | ["BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause"] | null | null | null | chrome/test/chromedriver/run_buildbot_steps.py | devasia1000/chromium | 919a8a666862fb866a6bb7aa7f3ae8c0442b4828 | ["BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause"] | null | null | null
# Copyright (c) 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Runs all the buildbot steps for ChromeDriver except for update/compile."""
import optparse
import os
import platform
import shutil
import subprocess
import sys
import tempfile
import time
import urllib2
import zipfile
_THIS_DIR = os.path.abspath(os.path.dirname(__file__))
sys.path.insert(0, os.path.join(_THIS_DIR, os.pardir, 'pylib'))
from common import chrome_paths
from common import util
import archive
GS_BUCKET = 'gs://chromedriver-prebuilts'
GS_ZIP_PREFIX = 'chromedriver2_prebuilts'
SLAVE_SCRIPT_DIR = os.path.join(_THIS_DIR, os.pardir, os.pardir, os.pardir,
os.pardir, os.pardir, os.pardir, os.pardir,
'scripts', 'slave')
UPLOAD_SCRIPT = os.path.join(SLAVE_SCRIPT_DIR, 'skia', 'upload_to_bucket.py')
DOWNLOAD_SCRIPT = os.path.join(SLAVE_SCRIPT_DIR, 'gsutil_download.py')
def Archive(revision):
  print '@@@BUILD_STEP archive@@@'
  prebuilts = ['libchromedriver2.so', 'chromedriver2_server',
               'chromedriver2_unittests', 'chromedriver2_tests']
  build_dir = chrome_paths.GetBuildDir(prebuilts[0:1])
  zip_name = '%s_r%s.zip' % (GS_ZIP_PREFIX, revision)
  temp_dir = util.MakeTempDir()
  zip_path = os.path.join(temp_dir, zip_name)
  print 'Zipping prebuilts %s' % zip_path
  f = zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED)
  for prebuilt in prebuilts:
    f.write(os.path.join(build_dir, prebuilt), prebuilt)
  f.close()

  cmd = [
      sys.executable,
      UPLOAD_SCRIPT,
      '--source_filepath=%s' % zip_path,
      '--dest_gsbase=%s' % GS_BUCKET
  ]
  if util.RunCommand(cmd):
    print '@@@STEP_FAILURE@@@'


def Download():
  print '@@@BUILD_STEP Download chromedriver prebuilts@@@'
  temp_dir = util.MakeTempDir()
  zip_path = os.path.join(temp_dir, 'chromedriver2_prebuilts.zip')
  cmd = [
      sys.executable,
      DOWNLOAD_SCRIPT,
      '--url=%s' % GS_BUCKET,
      '--partial-name=%s' % GS_ZIP_PREFIX,
      '--dst=%s' % zip_path
  ]
  if util.RunCommand(cmd):
    print '@@@STEP_FAILURE@@@'
  build_dir = chrome_paths.GetBuildDir(['host_forwarder'])
  print 'Unzipping prebuilts %s to %s' % (zip_path, build_dir)
  f = zipfile.ZipFile(zip_path, 'r')
  f.extractall(build_dir)
  f.close()
  # Workaround for Python bug: http://bugs.python.org/issue15795
  os.chmod(os.path.join(build_dir, 'chromedriver2_server'), 0700)
def MaybeRelease(revision):
  # Version is embedded as: const char kChromeDriverVersion[] = "0.1";
  with open(os.path.join(_THIS_DIR, 'chrome', 'version.cc'), 'r') as f:
    version_line = filter(lambda x: 'kChromeDriverVersion' in x, f.readlines())
  version = version_line[0].split('"')[1]
  bitness = '32'
  if util.IsLinux() and platform.architecture()[0] == '64bit':
    bitness = '64'
  zip_name = 'chromedriver2_%s%s_%s.zip' % (
      util.GetPlatformName(), bitness, version)

  site = 'https://code.google.com/p/chromedriver/downloads/list'
  s = urllib2.urlopen(site)
  downloads = s.read()
  s.close()

  if zip_name in downloads:
    return 0

  print '@@@BUILD_STEP releasing %s@@@' % zip_name
  if util.IsWindows():
    server_orig_name = 'chromedriver2_server.exe'
    server_name = 'chromedriver.exe'
  else:
    server_orig_name = 'chromedriver2_server'
    server_name = 'chromedriver'
  server = os.path.join(chrome_paths.GetBuildDir([server_orig_name]),
                        server_orig_name)

  print 'Zipping ChromeDriver server', server
  temp_dir = util.MakeTempDir()
  zip_path = os.path.join(temp_dir, zip_name)
  f = zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED)
  f.write(server, server_name)
  if util.IsLinux() or util.IsMac():
    adb_commands = os.path.join(_THIS_DIR, 'chrome', 'adb_commands.py')
    f.write(adb_commands, 'adb_commands.py')
  f.close()

  cmd = [
      sys.executable,
      os.path.join(_THIS_DIR, 'third_party', 'googlecode',
                   'googlecode_upload.py'),
      '--summary', 'version of ChromeDriver2 r%s' % revision,
      '--project', 'chromedriver',
      '--user', 'chromedriver.bot@gmail.com',
      '--label', 'Release',
      zip_path
  ]
  with open(os.devnull, 'wb') as no_output:
    if subprocess.Popen(cmd, stdout=no_output, stderr=no_output).wait():
      print '@@@STEP_FAILURE@@@'
def KillChromes():
  chrome_map = {
      'win': 'chrome.exe',
      'mac': 'Chromium',
      'linux': 'chrome',
  }
  if util.IsWindows():
    cmd = ['taskkill', '/F', '/IM']
  else:
    cmd = ['killall', '-9']
  cmd.append(chrome_map[util.GetPlatformName()])
  util.RunCommand(cmd)


def CleanTmpDir():
  tmp_dir = tempfile.gettempdir()
  print 'cleaning temp directory:', tmp_dir
  for file_name in os.listdir(tmp_dir):
    if os.path.isdir(os.path.join(tmp_dir, file_name)):
      print 'deleting sub-directory', file_name
      shutil.rmtree(os.path.join(tmp_dir, file_name), True)


def WaitForLatestSnapshot(revision):
  print '@@@BUILD_STEP wait_for_snapshot@@@'
  while True:
    snapshot_revision = archive.GetLatestRevision(archive.Site.SNAPSHOT)
    if snapshot_revision >= revision:
      break
    print 'Waiting for snapshot >= %s, found %s' % (revision, snapshot_revision)
    time.sleep(60)
  print 'Got snapshot revision', snapshot_revision
def main():
  parser = optparse.OptionParser()
  parser.add_option(
      '', '--android-package',
      help='Application package name, if running tests on Android.')
  parser.add_option(
      '-r', '--revision', type='string', default=None,
      help='Chromium revision')
  options, _ = parser.parse_args()

  if not options.android_package:
    KillChromes()
  CleanTmpDir()

  if options.android_package:
    Download()
  else:
    if not options.revision:
      parser.error('Must supply a --revision')
    if util.IsLinux() and platform.architecture()[0] == '64bit':
      Archive(options.revision)
    WaitForLatestSnapshot(options.revision)

  cmd = [
      sys.executable,
      os.path.join(_THIS_DIR, 'run_all_tests.py'),
  ]
  if options.android_package:
    cmd.append('--android-package=' + options.android_package)

  passed = (util.RunCommand(cmd) == 0)

  if not options.android_package and passed:
    MaybeRelease(options.revision)


if __name__ == '__main__':
  main()