hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
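Several of the per-file statistics in the schema above (size, line lengths, character fractions, whitespace fraction) can be approximated directly from a file's `content` string. The sketch below is a plausible reconstruction for illustration only; the dataset's authoritative signal definitions are not given here, so every formula should be treated as an assumption (e.g. newline and encoding handling may differ).

```python
def approx_quality_signals(content: str) -> dict:
    """Approximate a few schema columns from raw file content.

    NOTE: these formulas are guesses for illustration; the dataset's
    exact definitions may differ.
    """
    lines = content.splitlines()
    n_chars = len(content)
    return {
        "size": len(content.encode("utf-8")),                      # size int64
        "avg_line_length": sum(map(len, lines)) / max(len(lines), 1),
        "max_line_length": max(map(len, lines), default=0),
        "alphanum_fraction": sum(c.isalnum() for c in content) / max(n_chars, 1),
        "qsc_code_num_lines_quality_signal": float(len(lines)),
        "qsc_code_frac_chars_whitespace_quality_signal":
            sum(c.isspace() for c in content) / max(n_chars, 1),
    }
```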
92b2d78d5ff09cb30ba6d2d2e9b4946a5468a6cd | 167 | py | Python | backend/server/mlearn/urls.py | jarifrahman/mlearn | 63d4da28bba1e05b077ba0bf27aa4793caedaf10 | [
"MIT"
] | null | null | null | backend/server/mlearn/urls.py | jarifrahman/mlearn | 63d4da28bba1e05b077ba0bf27aa4793caedaf10 | [
"MIT"
] | 5 | 2021-03-19T12:01:28.000Z | 2021-06-10T20:28:47.000Z | backend/server/mlearn/urls.py | jarifrahman/mlearn | 63d4da28bba1e05b077ba0bf27aa4793caedaf10 | [
"MIT"
] | null | null | null | from django.conf.urls import include
from . import views
from django.urls import path
urlpatterns = [
path('test',views.test_predict_view, name = 'test'),
] | 16.7 | 53 | 0.712575 | 23 | 167 | 5.086957 | 0.565217 | 0.17094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179641 | 167 | 10 | 54 | 16.7 | 0.854015 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
92b494d87d4d53c3c36d3794eb027e7fad3fd2ed | 6,877 | py | Python | core/tests/unittests/test_feature_selection.py | zhiqiangdon/autogluon | 71ee7ef0f05d8f0aad112d8c1719174aa33194d9 | [
"Apache-2.0"
] | 4,462 | 2019-12-09T17:41:07.000Z | 2022-03-31T22:00:41.000Z | core/tests/unittests/test_feature_selection.py | zhiqiangdon/autogluon | 71ee7ef0f05d8f0aad112d8c1719174aa33194d9 | [
"Apache-2.0"
] | 1,408 | 2019-12-09T17:48:59.000Z | 2022-03-31T20:24:12.000Z | core/tests/unittests/test_feature_selection.py | zhiqiangdon/autogluon | 71ee7ef0f05d8f0aad112d8c1719174aa33194d9 | [
"Apache-2.0"
] | 623 | 2019-12-10T02:04:18.000Z | 2022-03-20T17:11:01.000Z | from autogluon.core.utils.feature_selection import *
from autogluon.core.utils.utils import unevaluated_fi_df_template
import numpy as np
from numpy.core.fromnumeric import sort
import pandas as pd
import pytest
def evaluated_fi_df_template(features, importance=None, n=None):
rng = np.random.default_rng(0)
importance_df = pd.DataFrame({'name': features})
importance_df['importance'] = rng.standard_normal(len(features)) if importance is None else importance
importance_df['stddev'] = rng.standard_normal(len(features))
importance_df['p_value'] = None
importance_df['n'] = 5 if n is None else n
importance_df.set_index('name', inplace=True)
importance_df.index.name = None
return importance_df
@pytest.fixture
def sample_features():
return ['a', 'b', 'c', 'd', 'e']
@pytest.fixture
def sample_importance_df_1(sample_features):
return evaluated_fi_df_template(sample_features, importance=[0.2, 0.2, None, 1., None], n=[10, 5, 0, 5, 0])
@pytest.fixture
def sample_importance_df_2(sample_features):
return evaluated_fi_df_template(sample_features, importance=[-0.1, -0.1, 0.1, None, None], n=[5, 10, 10, 0, 0])
def test_add_noise_column_df():
# test noise columns are appended to input dataframe and feature_metadata
X = pd.DataFrame({'a': [1, 2]})
args = {'rng': np.random.default_rng(0), 'count': 2}
X_noised, noise_columns = add_noise_column(X, **args)
expected_features = X.columns.tolist() + noise_columns
assert expected_features == X_noised.columns.tolist()
def test_merge_importance_dfs_base(sample_features):
# test the scenario when previous feature importance df is none
prev_df, curr_df = None, unevaluated_fi_df_template(sample_features)
assert merge_importance_dfs(prev_df, curr_df, using_prev_fit_fi=set()) is curr_df
def test_merge_importance_dfs_same_model(sample_features, sample_importance_df_1, sample_importance_df_2):
# test the scenario where previous feature importance df exists and its importance estimates come from the same fitted model
prev_df, curr_df = sample_importance_df_1, sample_importance_df_2
result_df = merge_importance_dfs(prev_df, curr_df, using_prev_fit_fi=set())
assert [score if score == score else None for score in result_df['importance'].tolist()] == [0., 0.1, 0.1, 1., None]
assert result_df['n'].tolist() == [15, 15, 10, 5, 0]
def test_merge_importance_dfs_different_model(sample_features, sample_importance_df_1, sample_importance_df_2):
# test the scenario where previous feature importance df exists and its importance estimates come from a different fitted model
prev_df, curr_df = sample_importance_df_1, sample_importance_df_2
using_prev_fit_fi = set(sample_features)
result_df = merge_importance_dfs(prev_df, curr_df, using_prev_fit_fi=using_prev_fit_fi).sort_index()
assert len(using_prev_fit_fi) == 2
assert [score if score == score else None for score in result_df['importance'].tolist()] == [-0.1, -0.1, 0.1, 1., None]
assert result_df['n'].tolist() == [5, 10, 10, 5, 0]
def test_merge_importance_dfs_all(sample_features, sample_importance_df_1, sample_importance_df_2):
# test the scenario where previous feature importance df exists and its importance estimates come from both same and different fitted models
prev_df, curr_df = sample_importance_df_1, sample_importance_df_2
using_prev_fit_fi = set([sample_features[0]])
result_df = merge_importance_dfs(prev_df, curr_df, using_prev_fit_fi=using_prev_fit_fi).sort_index()
assert [score if score == score else None for score in result_df['importance'].tolist()] == [-0.1, 0., 0.1, 1., None]
assert result_df['n'].tolist() == [5, 15, 10, 5, 0]
assert using_prev_fit_fi == set()
def test_sort_features_by_priority_base(sample_features):
# test the ordering of feature importance computation when no prior feature importance computation was done
sorted_features = sort_features_by_priority(features=sample_features, prev_importance_df=None, using_prev_fit_fi=set())
assert sorted_features == sample_features
def test_sort_features_by_priority_same_model(sample_features):
# test the ordering of feature importance computation when prior feature importance computation from the same fitted model was done
prev_importance_df = evaluated_fi_df_template(sample_features)
sorted_features = sort_features_by_priority(features=sample_features, prev_importance_df=prev_importance_df, using_prev_fit_fi=set())
assert sorted_features == prev_importance_df.sort_values('importance').index.tolist()
def test_sort_features_by_priority_different_model(sample_features):
# test the ordering of feature importance computation when prior feature importance computation from a different fitted model was done
prev_importance_df = evaluated_fi_df_template(sample_features)
using_prev_fit_fi = sample_features[-2:]
sorted_features = sort_features_by_priority(features=sample_features, prev_importance_df=prev_importance_df, using_prev_fit_fi=using_prev_fit_fi)
sorted_prev_fit_features = prev_importance_df[prev_importance_df.index.isin(using_prev_fit_fi)].sort_values('importance').index.tolist()
sorted_curr_fit_features = prev_importance_df[~prev_importance_df.index.isin(using_prev_fit_fi)].sort_values('importance').index.tolist()
expected_features = sorted_prev_fit_features + sorted_curr_fit_features
assert sorted_features == expected_features
def test_sort_features_by_priority_all(sample_features):
    # test the ordering of feature importance computation when feature importance computation comes from a mix of current and previous fit models,
    # and some features are unevaluated
length = len(sample_features)
using_prev_fit_fi = set(sample_features[:length//3])
evaluated_rows, unevaluated_rows = evaluated_fi_df_template(sample_features[:length//2]), unevaluated_fi_df_template(sample_features[length//2:])
prev_importance_df = pd.concat([evaluated_rows, unevaluated_rows])
sorted_features = sort_features_by_priority(features=sample_features, prev_importance_df=prev_importance_df, using_prev_fit_fi=using_prev_fit_fi)
unevaluated_features = unevaluated_rows.index.tolist()
sorted_prev_fit_features = evaluated_rows[(~evaluated_rows.index.isin(sample_features[length//2:]))
& (evaluated_rows.index.isin(using_prev_fit_fi))].sort_values('importance').index.tolist()
sorted_curr_fit_features = evaluated_rows[(~evaluated_rows.index.isin(sample_features[length//2:]))
& (~evaluated_rows.index.isin(using_prev_fit_fi))].sort_values('importance').index.tolist()
expected_features = unevaluated_features + sorted_prev_fit_features + sorted_curr_fit_features
assert sorted_features == expected_features
| 58.279661 | 149 | 0.774611 | 1,015 | 6,877 | 4.88867 | 0.121182 | 0.099154 | 0.053204 | 0.062072 | 0.738412 | 0.694478 | 0.64208 | 0.60923 | 0.585449 | 0.585449 | 0 | 0.015636 | 0.135088 | 6,877 | 117 | 150 | 58.777778 | 0.818594 | 0.154282 | 0 | 0.17284 | 0 | 0 | 0.022222 | 0 | 0 | 0 | 0 | 0 | 0.17284 | 1 | 0.160494 | false | 0 | 0.555556 | 0.037037 | 0.765432 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
92b88c9cf80b050e0265852984800902421fc86c | 4,352 | py | Python | Week 3/While Loops/Infinite loops and how to break them/topic.py | chanchanchong/Crash-Course-On-Python | ce98bdd5c5355a582c76b864dae1c5eecdf6ea44 | [
"MIT"
] | null | null | null | Week 3/While Loops/Infinite loops and how to break them/topic.py | chanchanchong/Crash-Course-On-Python | ce98bdd5c5355a582c76b864dae1c5eecdf6ea44 | [
"MIT"
] | null | null | null | Week 3/While Loops/Infinite loops and how to break them/topic.py | chanchanchong/Crash-Course-On-Python | ce98bdd5c5355a582c76b864dae1c5eecdf6ea44 | [
"MIT"
] | null | null | null | # You may remember by now that while loops use the condition to check when
# to exit. The body of the while loop needs to make sure that the condition being
# checked will change. If it doesn't change, the loop may never finish and we get
# what's called an infinite loop, a loop that keeps executing and never stops.
# Check out this example. It uses the modulo operator that we saw a while back.
# This cycle will finish for positive and negative values of x. But what would
# happen if x was zero? The remainder of 0 divided by 2 is 0, so the condition
# would be true. The result of dividing 0 by 2 would also be zero, so the value of
# x wouldn't change. This loop would go on for ever, and so we'd get an infinite
# loop. If our code was called with x having the value of zero, the computer
# would just waste resources doing a division that would never lead to the loop
# stopping. The program would be stuck in an infinite loop, circling in the background
# endlessly, and we don't want that. All that looping might make your computer
# dizzy. To avoid this, we need to think about what needs to be different than zero.
# So we could nest this while loop inside an if statement just like this. With this
# approach, the while loop is executed only when x is not zero. Alternatively, we
# could add the condition directly to the loop using a logical operator like in this
# example. This makes sure we only enter the body of the loop for values of x
# that are both different than zero and even. Talking about infinite loops reminds
# me of one of the first times I used while loops myself. I wrote a script that
# emailed me as a way of verifying that the code worked, and while some
# condition was true, I forgot to exit the loop. Turns out those e-mails get sent
# faster than once per second. As you can imagine, I got about 500 e-mails
# before I realized what was going on. Infinitely grateful for that little lesson.
# When you're done laughing at my story, remember, when you're writing loops,
# it's a good idea to take a moment to consider the different values a variable
# can take. This helps you make sure your loop won't get stuck. If you see that
# your program is running forever without finishing, have a second look at your
# loops to check there's no infinite loop hiding somewhere in the code. While
# you need to watch out for infinite loops, they are not always a bad thing.
# Sometimes you actually want your program to execute continuously until
# some external condition is met. If you've used the ping utility on Linux or
# macOS system, or ping -t on a Windows system, you've seen an infinite loop in
# action. This tool will keep sending packets and printing the results to the
# terminal unless you send it the interrupt signal, usually pressing Ctrl+C. If you
# were looking at the program source code you'll see that it uses an infinite loop
# to do hits with a block of code with instructions to keep sending the packets
# forever. One thing to call out is it should always be possible to break the loop
# by sending a certain signal. In the ping example, that signal is the user pressing
# Ctrl+C. In other cases, it could be that the user pressed the button on a
# graphical application, or that another program sent a specific signal, or even
# that a time limit was reached. In your code, you could have an infinite loop that
# looks something like this. In Python, we use the break keyword which you can
# see here to signal that the current loop should stop running. We can use it not
# only to stop infinite loops but also to stop a loop early if the code has already
# achieved what's needed. So quick refresh. How do you avoid the most
# common pitfalls when writing while loops? First, remember to initialize your
# variables, and second, check that your loops won't run forever. Wow, all this
# talk of loops is making me feel a little loopy. I'm going to have to go and lie
# down while you do the next practice quiz. Best of luck, and meet me over in
# the next video when you're done.
# The following code causes an infinite loop. Can you figure out what's missing
# and how to fix it?
def print_range(start, end):
    # Loop through the numbers from start to end
n = start
while n <= end:
print(n)
n += 1
print_range(1, 5) # Should print 1 2 3 4 5 (each number on its own line)
| 68 | 84 | 0.76011 | 810 | 4,352 | 4.081481 | 0.391358 | 0.032668 | 0.029643 | 0.00726 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004598 | 0.200368 | 4,352 | 63 | 85 | 69.079365 | 0.945402 | 0.946691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.166667 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
2b7e226561c5f225ac4f722c57e5e1ca316b693d | 1,509 | py | Python | src/chatproject/loginmodule/migrations/addloginentries.py | BinitaBharati/pychat | 10fa8876016b594852ca954b6c41b869851f6afd | [
"Apache-1.1"
] | null | null | null | src/chatproject/loginmodule/migrations/addloginentries.py | BinitaBharati/pychat | 10fa8876016b594852ca954b6c41b869851f6afd | [
"Apache-1.1"
] | null | null | null | src/chatproject/loginmodule/migrations/addloginentries.py | BinitaBharati/pychat | 10fa8876016b594852ca954b6c41b869851f6afd | [
"Apache-1.1"
] | null | null | null | # Generated by Django 2.1.5 on 2019-02-05 05:51
#This template has been copied from the output migration file generated by "python3 src/chatproject/manage.py makemigrations loginmodule --empty" command.
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
    # 0002_login_avatar is the previous migration file name. Check the loginmodule/migrations directory contents
('loginmodule', '0002_login_avatar'),
]
def insertData(apps, schema_editor):
#First clear all entries if any.
Login = apps.get_model('loginmodule', 'Login')
Login.objects.all().delete()
user = Login(name = "admin", login_id = "admin", password = "password", email = "admin@pychat.com", type = "Admin", avatar="admin.jpg")
user.save()
user = Login(name = "Nisha Singh", login_id = "nisha", password = "password", email = "nisha@pychat.com", type = "User", avatar="nishasingh.jpg")
user.save()
user = Login(name = "Mark Mackron", login_id = "mark", password = "password", email = "mark@pychat.com", type = "User", avatar="markmackron.jpg")
user.save()
user = Login(name = "Rose White", login_id = "rose", password = "password", email = "rose@pychat.com", type = "User", avatar="rosewhite.jpg")
user.save()
user = Login(name = "Yug Bhatia", login_id = "yug", password = "password", email = "yug@pychat.com", type = "User", avatar="yug.jpg")
user.save()
operations = [
migrations.RunPython(insertData),
]
| 48.677419 | 154 | 0.666667 | 190 | 1,509 | 5.236842 | 0.431579 | 0.045226 | 0.065327 | 0.060302 | 0.188945 | 0.096482 | 0 | 0 | 0 | 0 | 0 | 0.019528 | 0.185553 | 1,509 | 30 | 155 | 50.3 | 0.790073 | 0.218025 | 0 | 0.238095 | 1 | 0 | 0.262128 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.238095 | 0.047619 | 0 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
2b85107d40ff4bcf29478e4d8b33919c89bfb5dc | 57 | py | Python | mqtt_io/__init__.py | se7enmilgram/mqtt-io | 10b6eba48e494b2377456e58b7efffaf4fcf30f1 | [
"MIT"
] | null | null | null | mqtt_io/__init__.py | se7enmilgram/mqtt-io | 10b6eba48e494b2377456e58b7efffaf4fcf30f1 | [
"MIT"
] | null | null | null | mqtt_io/__init__.py | se7enmilgram/mqtt-io | 10b6eba48e494b2377456e58b7efffaf4fcf30f1 | [
"MIT"
] | null | null | null | """
Top level of MQTT IO package.
"""
VERSION = "2.1.4"
| 9.5 | 29 | 0.578947 | 10 | 57 | 3.3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.210526 | 57 | 5 | 30 | 11.4 | 0.666667 | 0.508772 | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2b93e2dd60ad0bcd5afb60a260b94c8cd6343bef | 550 | py | Python | File for Lab 9/lab9ex1.py | lifeplay2019/ECOR1051-project | fbd0ed3651adc2cfc4dca06d155a03fb9fcfb3f3 | [
"MIT"
] | null | null | null | File for Lab 9/lab9ex1.py | lifeplay2019/ECOR1051-project | fbd0ed3651adc2cfc4dca06d155a03fb9fcfb3f3 | [
"MIT"
] | null | null | null | File for Lab 9/lab9ex1.py | lifeplay2019/ECOR1051-project | fbd0ed3651adc2cfc4dca06d155a03fb9fcfb3f3 | [
"MIT"
] | null | null | null | def first_last6(list1: list) -> bool:
"""Return True if either/both first and last element are assigned a value of 6
>>> first_last6(test_list1)
False
>>> first_last6(test_list2)
True
>>> first_last6(test_list3)
True
"""
    if list1[0] == 6 or list1[-1] == 6:
return True
else:
return False
test_list1 = [0, 1, 2, 3, 4, 5]
test_list2 = [6, 1, 2, 3, 4, 9]
test_list3 = [6, 1, 2, 3, 4, 6]
print(first_last6(test_list1))
print(first_last6(test_list2))
print(first_last6(test_list3)) | 25 | 83 | 0.603636 | 87 | 550 | 3.632184 | 0.390805 | 0.221519 | 0.265823 | 0.037975 | 0.031646 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101485 | 0.265455 | 550 | 22 | 84 | 25 | 0.680693 | 0.318182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.272727 | 0.272727 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2b9d50296cf348e9bc9f1d5d1a04e464619b6a74 | 3,101 | py | Python | sdc-os-chef/sdc-backend/chef-repo/cookbooks/sdc-catalog-be/files/default/consumers.py | onapdemo/sdc | 3f1fee2ca76332b48e6f36662b32f2b5096c25e7 | [
"Apache-2.0"
] | null | null | null | sdc-os-chef/sdc-backend/chef-repo/cookbooks/sdc-catalog-be/files/default/consumers.py | onapdemo/sdc | 3f1fee2ca76332b48e6f36662b32f2b5096c25e7 | [
"Apache-2.0"
] | null | null | null | sdc-os-chef/sdc-backend/chef-repo/cookbooks/sdc-catalog-be/files/default/consumers.py | onapdemo/sdc | 3f1fee2ca76332b48e6f36662b32f2b5096c25e7 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
import subprocess
#from time import sleep
import time
from datetime import datetime
class bcolors:
HEADER = '\033[95m'
OKBLUE = '\033[94m'
OKGREEN = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
##############################
# Functions
##############################
def checkBackend():
command="curl -s -o /dev/null -I -w \"%{http_code}\" -i http://localhost:8080/sdc2/rest/v1/user/jh0003"
proc = subprocess.Popen( command , shell=True , stdout=subprocess.PIPE )
(out, err) = proc.communicate()
result = out.strip()
return result
def checkConsumer(consumerName):
command="curl -s -o /dev/null -I -w \"%{http_code}\" -i -H \"Accept: application/json; charset=UTF-8\" -H \"Content-Type: application/json\" -H \"USER_ID: jh0003\" http://localhost:8080/sdc2/rest/v1/consumers/" + consumerName
proc = subprocess.Popen( command , shell=True , stdout=subprocess.PIPE )
(out, err) = proc.communicate()
result = out.strip()
return result
def createConsumer( consumerName, consumerSalt, consumerPass ):
print '[INFO] ' + consumerName
command="curl -s -o /dev/null -w \"%{http_code}\" -X POST -i -H \"Accept: application/json; charset=UTF-8\" -H \"Content-Type: application/json\" -H \"USER_ID: jh0003\" http://localhost:8080/sdc2/rest/v1/consumers/ -d '{\"consumerName\": '" + consumerName + "', \"consumerSalt\": '" + consumerSalt + "',\"consumerPassword\": '" + consumerPass + "'}'"
proc = subprocess.Popen( command , shell=True , stdout=subprocess.PIPE)
(out, err) = proc.communicate()
result = out.strip()
return result
##############################
# Definitions
##############################
consumersList = [ "aai" , "appc" , "dcae" , "mso" , "sdnc" , "vid" , "cognita", "clamp" , "vfc" ]
salt = "9cd4c3ad2a6f6ce3f3414e68b5157e63"
password = "35371c046f88c603ccba152cb3db34ec4475cb2e5713f2fc0a43bf18a5243495"
beStat=0
##############################
# Main
##############################
for i in range(1,10):
myResult = checkBackend()
if myResult == '200':
print '[INFO]: Backend is up and running'
beStat=1
break
else:
currentTime = datetime.now()
print '[ERROR]: ' + currentTime.strftime('%Y/%m/%d %H:%M:%S') + bcolors.FAIL + ' Backend not responding, try #' + str(i) + bcolors.ENDC
time.sleep(10)
if beStat == 0:
print '[ERROR]: ' + time.strftime('%Y/%m/%d %H:%M:%S') + bcolors.FAIL + 'Backend is DOWN :-(' + bcolors.ENDC
exit()
for consumer in consumersList:
myResult = checkConsumer(consumer)
if myResult == '200':
print '[INFO]: ' + consumer + ' already exists'
else:
myResult = createConsumer( consumer, salt, password )
if myResult == '201':
print '[INFO]: ' + consumer + ' created, result: [' + myResult + ']'
else:
print '[ERROR]: ' + bcolors.FAIL + consumer + bcolors.ENDC + ' error creating , result: [' + myResult + ']'
| 34.455556 | 354 | 0.579168 | 334 | 3,101 | 5.362275 | 0.398204 | 0.033501 | 0.020101 | 0.021776 | 0.431044 | 0.406477 | 0.391401 | 0.366834 | 0.366834 | 0.366834 | 0 | 0.059157 | 0.204128 | 3,101 | 89 | 355 | 34.842697 | 0.666532 | 0.023863 | 0 | 0.293103 | 0 | 0 | 0.255454 | 0.033779 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.068966 | 0.051724 | null | null | 0.12069 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
2b9d5adb0a3291018a51f70e3d35d8e7f172b1b0 | 720 | py | Python | running_modes/lib_invent/logging/reinforcement_logger.py | truehanwj/REINVENT | b36b9d206e76590c7d584683fc45de8a74ce6033 | [
"Apache-2.0"
] | null | null | null | running_modes/lib_invent/logging/reinforcement_logger.py | truehanwj/REINVENT | b36b9d206e76590c7d584683fc45de8a74ce6033 | [
"Apache-2.0"
] | null | null | null | running_modes/lib_invent/logging/reinforcement_logger.py | truehanwj/REINVENT | b36b9d206e76590c7d584683fc45de8a74ce6033 | [
"Apache-2.0"
] | null | null | null | from running_modes.lib_invent.configurations.log_configuration import LogConfiguration
from running_modes.lib_invent.logging.base_reinforcement_logger import BaseReinforcementLogger
from running_modes.lib_invent.logging.local_reinforcement_logger import LocalReinforcementLogger
from running_modes.enums.logging_mode_enum import LoggingModeEnum
class ReinforcementLogger:
def __new__(cls, configuration: LogConfiguration) -> BaseReinforcementLogger:
logging_mode_enum = LoggingModeEnum()
if configuration.recipient == logging_mode_enum.LOCAL:
return LocalReinforcementLogger(configuration)
else:
            raise NotImplementedError("Remote logging mode is not implemented yet!")
| 48 | 96 | 0.815278 | 73 | 720 | 7.739726 | 0.506849 | 0.077876 | 0.113274 | 0.100885 | 0.157522 | 0.113274 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136111 | 720 | 14 | 97 | 51.428571 | 0.90836 | 0 | 0 | 0 | 0 | 0 | 0.061111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
2ba0d0a1dc94baae9ed177b912ee153f7bd3243c | 1,737 | py | Python | UCube/models/base.py | MujyKun/united-cube | 0aa69c81a66b7fec11ade5fe3d83f717e20d65e2 | [
"MIT"
] | 4 | 2021-07-26T10:18:28.000Z | 2021-08-13T23:41:06.000Z | UCube/models/base.py | MujyKun/united-cube | 0aa69c81a66b7fec11ade5fe3d83f717e20d65e2 | [
"MIT"
] | null | null | null | UCube/models/base.py | MujyKun/united-cube | 0aa69c81a66b7fec11ade5fe3d83f717e20d65e2 | [
"MIT"
] | null | null | null | import re
from typing import Optional
class BaseModel:
r"""
The Base Class for Model objects.
.. container:: operations
.. describe:: x == y
Checks if two models have the same slug.
.. describe:: x != y
Checks if two models do not have the same slug.
.. describe:: str(x)
Returns the model's name.
Parameters
----------
slug: :class:`str`
The unique identifier of the model.
Other Parameters
----------------
name: Optional[:class:`str`]
The name of the object.
Attributes
----------
slug: :class:`str`
The unique identifier.
name: Optional[:class:`str`]
The name of the object.
"""
def __init__(self, slug: str, name: str = None):
self.slug: str = slug
self.name: Optional[str] = name
def __eq__(self, other):
return self.slug == other.slug
def __ne__(self, other):
return not self == other
def __str__(self):
return self.name
@staticmethod
def remove_html(content: str) -> str:
"""
Removes HTML tags of the html content and returns a cleansed version.
Parameters
----------
content: :class:`str`
The raw html content to remove html tags from.
Returns
-------
A cleansed string with no HTML.: :class:`str`
"""
if not content:
return ""
content = content.replace("<br>", "\n") # replace new line tags before they get replaced.
html_cleaner = re.compile('<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});')
clean_text = re.sub(html_cleaner, '', content)
return clean_text

# mi/dataset/parser/test/test_dosta_abcdjm_dcl.py
# (rmanoni/mi-dataset, BSD-2-Clause license)
#!/usr/bin/env python
"""
@package mi.dataset.parser.test.test_dosta_abcdjm_dcl
@file marine-integrations/mi/dataset/parser/test/test_dosta_abcdjm_dcl.py
@author Steve Myerson
@brief Test code for a Dosta_abcdjm_dcl data parser
In the following files, Metadata consists of 4 records
and Garbled consists of 3 records.
There is 1 group of Sensor Data records for each set of metadata.
Files used for testing:
20000101.dosta0.log
Metadata - 1 set, Sensor Data - 0 records, Garbled - 0, Newline - \n
20010121.dosta1.log
Metadata - 1 set, Sensor Data - 21 records, Garbled - 0, Newline - \n
20020222.dosta2.log
Metadata - 2 sets, Sensor Data - 22 records, Garbled - 0, Newline - \r\n
20030314.dosta3.log
Metadata - 3 sets, Sensor Data - 14 records, Garbled - 0, Newline - \n
20041225.dosta4.log
Metadata - 2 sets, Sensor Data - 250 records, Garbled - 0, Newline - \n
20050103.dosta5.log
Metadata - 1 set, Sensor Data - 3 records, Garbled - 1, Newline - \n
20060207.dosta6.log
Metadata - 2 sets, Sensor Data - 7 records, Garbled - 2, Newline \r\n
20070114.dosta7.log
This file contains a boatload of invalid sensor data records. Newline - \r\n
1. invalid year
2. invalid month
3. invalid day
4. invalid hour
5. invalid minute
6. invalid second
7. invalid product
8. spaces instead of tabs
9. a 2-digit serial number
10. floating point number missing the decimal point
11. serial number missing
12. one of the floating point numbers missing
13. Date in form YYYY-MM-DD
14. time field missing milliseconds
15. extra floating point number in sensor data
"""
import unittest
import os

from nose.plugins.attrib import attr

from mi.core.log import get_logger
log = get_logger()

from mi.dataset.test.test_parser import ParserUnitTestCase
from mi.dataset.parser.dosta_abcdjm_dcl import \
    DostaAbcdjmDclParser, \
    DostaAbcdjmDclRecoveredParser, \
    DostaAbcdjmDclTelemeteredParser, \
    DostaAbcdjmDclRecoveredInstrumentDataParticle, \
    DostaAbcdjmDclTelemeteredInstrumentDataParticle, \
    DostaStateKey
from mi.dataset.dataset_parser import DataSetDriverConfigKeys
from mi.idk.config import Config

RESOURCE_PATH = os.path.join(Config().base_dir(), 'mi', 'dataset', 'driver',
                             'dosta_abcdjm', 'dcl', 'resource')
# File 20000101.dosta0.log has no sensor data records
# and should not produce any particles.

# Expected tuples for data in file 20010121.dosta1.log
EXPECTED_20010121_dosta1 = [
('2001/01/21 03:06:02.329', '2001', '01', '21', '03', '06', '02', '329', '4831', '2001', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'),
('2001/01/21 03:06:15.908', '2001', '01', '21', '03', '06', '15', '908', '4831', '2001', '25.000', '6.000', '5.000', '4.500', '4.500', '4.500', '4.500', '75.000', '75.000', '12.500'),
('2001/01/21 03:06:29.487', '2001', '01', '21', '03', '06', '29', '487', '4831', '2001', '50.000', '12.000', '10.000', '9.000', '9.000', '9.000', '9.000', '150.000', '150.000', '25.000'),
('2001/01/21 03:06:43.066', '2001', '01', '21', '03', '06', '43', '066', '4831', '2001', '75.000', '18.000', '15.000', '13.500', '13.500', '13.500', '13.500', '225.000', '225.000', '37.500'),
('2001/01/21 03:06:56.645', '2001', '01', '21', '03', '06', '56', '645', '4831', '2001', '100.000', '24.000', '20.000', '18.000', '18.000', '18.000', '18.000', '300.000', '300.000', '50.000'),
('2001/01/21 03:07:10.224', '2001', '01', '21', '03', '07', '10', '224', '4831', '2001', '125.000', '30.000', '25.000', '22.500', '22.500', '22.500', '22.500', '375.000', '375.000', '62.500'),
('2001/01/21 03:07:23.803', '2001', '01', '21', '03', '07', '23', '803', '4831', '2001', '150.000', '36.000', '30.000', '27.000', '27.000', '27.000', '27.000', '450.000', '450.000', '75.000'),
('2001/01/21 03:07:37.382', '2001', '01', '21', '03', '07', '37', '382', '4831', '2001', '175.000', '42.000', '35.000', '31.500', '31.500', '31.500', '31.500', '525.000', '525.000', '87.500'),
('2001/01/21 03:07:50.961', '2001', '01', '21', '03', '07', '50', '961', '4831', '2001', '200.000', '48.000', '40.000', '36.000', '36.000', '36.000', '36.000', '600.000', '600.000', '100.000'),
('2001/01/21 03:08:04.540', '2001', '01', '21', '03', '08', '04', '540', '4831', '2001', '225.000', '54.000', '45.000', '40.500', '40.500', '40.500', '40.500', '675.000', '675.000', '112.500'),
('2001/01/21 03:08:18.119', '2001', '01', '21', '03', '08', '18', '119', '4831', '2001', '250.000', '60.000', '50.000', '45.000', '45.000', '45.000', '45.000', '750.000', '750.000', '125.000'),
('2001/01/21 03:08:31.698', '2001', '01', '21', '03', '08', '31', '698', '4831', '2001', '275.000', '66.000', '55.000', '49.500', '49.500', '49.500', '49.500', '825.000', '825.000', '137.500'),
('2001/01/21 03:08:45.277', '2001', '01', '21', '03', '08', '45', '277', '4831', '2001', '300.000', '72.000', '60.000', '54.000', '54.000', '54.000', '54.000', '900.000', '900.000', '150.000'),
('2001/01/21 03:08:58.856', '2001', '01', '21', '03', '08', '58', '856', '4831', '2001', '325.000', '78.000', '65.000', '58.500', '58.500', '58.500', '58.500', '975.000', '975.000', '162.500'),
('2001/01/21 03:09:12.435', '2001', '01', '21', '03', '09', '12', '435', '4831', '2001', '350.000', '84.000', '70.000', '63.000', '63.000', '63.000', '63.000', '1050.000', '1050.000', '175.000'),
('2001/01/21 03:09:26.014', '2001', '01', '21', '03', '09', '26', '014', '4831', '2001', '375.000', '90.000', '75.000', '67.500', '67.500', '67.500', '67.500', '1125.000', '1125.000', '187.500'),
('2001/01/21 03:09:39.593', '2001', '01', '21', '03', '09', '39', '593', '4831', '2001', '400.000', '96.000', '80.000', '72.000', '72.000', '72.000', '72.000', '1200.000', '1200.000', '200.000'),
('2001/01/21 03:09:53.172', '2001', '01', '21', '03', '09', '53', '172', '4831', '2001', '425.000', '102.000', '85.000', '76.500', '76.500', '76.500', '76.500', '1275.000', '1275.000', '212.500'),
('2001/01/21 03:10:06.751', '2001', '01', '21', '03', '10', '06', '751', '4831', '2001', '450.000', '108.000', '90.000', '81.000', '81.000', '81.000', '81.000', '1350.000', '1350.000', '225.000'),
('2001/01/21 03:10:20.330', '2001', '01', '21', '03', '10', '20', '330', '4831', '2001', '475.000', '114.000', '95.000', '85.500', '85.500', '85.500', '85.500', '1425.000', '1425.000', '237.500'),
('2001/01/21 03:10:33.909', '2001', '01', '21', '03', '10', '33', '909', '4831', '2001', '500.000', '120.000', '100.000', '90.000', '90.000', '90.000', '90.000', '1500.000', '1500.000', '250.000'),
]
# Expected tuples for data in file 20020222.dosta2.log
EXPECTED_20020222_dosta2 = [
('2002/02/22 05:08:04.331', '2002', '02', '22', '05', '08', '04', '331', '4831', '2002', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'),
('2002/02/22 05:08:17.910', '2002', '02', '22', '05', '08', '17', '910', '4831', '2002', '11.628', '2.791', '2.326', '2.093', '2.093', '2.093', '2.093', '34.884', '34.884', '5.814'),
('2002/02/22 05:08:31.489', '2002', '02', '22', '05', '08', '31', '489', '4831', '2002', '23.256', '5.581', '4.651', '4.186', '4.186', '4.186', '4.186', '69.767', '69.767', '11.628'),
('2002/02/22 05:08:45.068', '2002', '02', '22', '05', '08', '45', '068', '4831', '2002', '34.884', '8.372', '6.977', '6.279', '6.279', '6.279', '6.279', '104.651', '104.651', '17.442'),
('2002/02/22 05:08:58.647', '2002', '02', '22', '05', '08', '58', '647', '4831', '2002', '46.512', '11.163', '9.302', '8.372', '8.372', '8.372', '8.372', '139.535', '139.535', '23.256'),
('2002/02/22 05:09:12.226', '2002', '02', '22', '05', '09', '12', '226', '4831', '2002', '58.140', '13.953', '11.628', '10.465', '10.465', '10.465', '10.465', '174.419', '174.419', '29.070'),
('2002/02/22 05:09:25.805', '2002', '02', '22', '05', '09', '25', '805', '4831', '2002', '69.767', '16.744', '13.953', '12.558', '12.558', '12.558', '12.558', '209.302', '209.302', '34.884'),
('2002/02/22 05:09:39.384', '2002', '02', '22', '05', '09', '39', '384', '4831', '2002', '81.395', '19.535', '16.279', '14.651', '14.651', '14.651', '14.651', '244.186', '244.186', '40.698'),
('2002/02/22 05:09:52.963', '2002', '02', '22', '05', '09', '52', '963', '4831', '2002', '93.023', '22.326', '18.605', '16.744', '16.744', '16.744', '16.744', '279.070', '279.070', '46.512'),
('2002/02/22 05:10:06.542', '2002', '02', '22', '05', '10', '06', '542', '4831', '2002', '104.651', '25.116', '20.930', '18.837', '18.837', '18.837', '18.837', '313.953', '313.953', '52.326'),
('2002/02/22 05:10:20.121', '2002', '02', '22', '05', '10', '20', '121', '4831', '2002', '116.279', '27.907', '23.256', '20.930', '20.930', '20.930', '20.930', '348.837', '348.837', '58.140'),
('2002/02/22 05:10:33.700', '2002', '02', '22', '05', '10', '33', '700', '4831', '2002', '127.907', '30.698', '25.581', '23.023', '23.023', '23.023', '23.023', '383.721', '383.721', '63.953'),
('2002/02/22 05:10:47.279', '2002', '02', '22', '05', '10', '47', '279', '4831', '2002', '139.535', '33.488', '27.907', '25.116', '25.116', '25.116', '25.116', '418.605', '418.605', '69.767'),
('2002/02/22 05:11:00.858', '2002', '02', '22', '05', '11', '00', '858', '4831', '2002', '151.163', '36.279', '30.233', '27.209', '27.209', '27.209', '27.209', '453.488', '453.488', '75.581'),
('2002/02/22 05:11:14.437', '2002', '02', '22', '05', '11', '14', '437', '4831', '2002', '162.791', '39.070', '32.558', '29.302', '29.302', '29.302', '29.302', '488.372', '488.372', '81.395'),
('2002/02/22 05:11:28.016', '2002', '02', '22', '05', '11', '28', '016', '4831', '2002', '174.419', '41.860', '34.884', '31.395', '31.395', '31.395', '31.395', '523.256', '523.256', '87.209'),
('2002/02/22 05:11:41.595', '2002', '02', '22', '05', '11', '41', '595', '4831', '2002', '186.047', '44.651', '37.209', '33.488', '33.488', '33.488', '33.488', '558.140', '558.140', '93.023'),
('2002/02/22 05:11:55.174', '2002', '02', '22', '05', '11', '55', '174', '4831', '2002', '197.674', '47.442', '39.535', '35.581', '35.581', '35.581', '35.581', '593.023', '593.023', '98.837'),
('2002/02/22 05:12:08.753', '2002', '02', '22', '05', '12', '08', '753', '4831', '2002', '209.302', '50.233', '41.860', '37.674', '37.674', '37.674', '37.674', '627.907', '627.907', '104.651'),
('2002/02/22 05:12:22.332', '2002', '02', '22', '05', '12', '22', '332', '4831', '2002', '220.930', '53.023', '44.186', '39.767', '39.767', '39.767', '39.767', '662.791', '662.791', '110.465'),
('2002/02/22 05:12:35.911', '2002', '02', '22', '05', '12', '35', '911', '4831', '2002', '232.558', '55.814', '46.512', '41.860', '41.860', '41.860', '41.860', '697.674', '697.674', '116.279'),
('2002/02/22 05:12:49.490', '2002', '02', '22', '05', '12', '49', '490', '4831', '2002', '244.186', '58.605', '48.837', '43.953', '43.953', '43.953', '43.953', '732.558', '732.558', '122.093'),
('2002/02/22 05:13:57.385', '2002', '02', '22', '05', '13', '57', '385', '4831', '2002', '255.814', '61.395', '51.163', '46.047', '46.047', '46.047', '46.047', '767.442', '767.442', '127.907'),
('2002/02/22 05:14:10.964', '2002', '02', '22', '05', '14', '10', '964', '4831', '2002', '267.442', '64.186', '53.488', '48.140', '48.140', '48.140', '48.140', '802.326', '802.326', '133.721'),
('2002/02/22 05:14:24.543', '2002', '02', '22', '05', '14', '24', '543', '4831', '2002', '279.070', '66.977', '55.814', '50.233', '50.233', '50.233', '50.233', '837.209', '837.209', '139.535'),
('2002/02/22 05:14:38.122', '2002', '02', '22', '05', '14', '38', '122', '4831', '2002', '290.698', '69.767', '58.140', '52.326', '52.326', '52.326', '52.326', '872.093', '872.093', '145.349'),
('2002/02/22 05:14:51.701', '2002', '02', '22', '05', '14', '51', '701', '4831', '2002', '302.326', '72.558', '60.465', '54.419', '54.419', '54.419', '54.419', '906.977', '906.977', '151.163'),
('2002/02/22 05:15:05.280', '2002', '02', '22', '05', '15', '05', '280', '4831', '2002', '313.953', '75.349', '62.791', '56.512', '56.512', '56.512', '56.512', '941.860', '941.860', '156.977'),
('2002/02/22 05:15:18.859', '2002', '02', '22', '05', '15', '18', '859', '4831', '2002', '325.581', '78.140', '65.116', '58.605', '58.605', '58.605', '58.605', '976.744', '976.744', '162.791'),
('2002/02/22 05:15:32.438', '2002', '02', '22', '05', '15', '32', '438', '4831', '2002', '337.209', '80.930', '67.442', '60.698', '60.698', '60.698', '60.698', '1011.628', '1011.628', '168.605'),
('2002/02/22 05:15:46.017', '2002', '02', '22', '05', '15', '46', '017', '4831', '2002', '348.837', '83.721', '69.767', '62.791', '62.791', '62.791', '62.791', '1046.512', '1046.512', '174.419'),
('2002/02/22 05:15:59.596', '2002', '02', '22', '05', '15', '59', '596', '4831', '2002', '360.465', '86.512', '72.093', '64.884', '64.884', '64.884', '64.884', '1081.395', '1081.395', '180.233'),
('2002/02/22 05:16:13.175', '2002', '02', '22', '05', '16', '13', '175', '4831', '2002', '372.093', '89.302', '74.419', '66.977', '66.977', '66.977', '66.977', '1116.279', '1116.279', '186.047'),
('2002/02/22 05:16:26.754', '2002', '02', '22', '05', '16', '26', '754', '4831', '2002', '383.721', '92.093', '76.744', '69.070', '69.070', '69.070', '69.070', '1151.163', '1151.163', '191.860'),
('2002/02/22 05:16:40.333', '2002', '02', '22', '05', '16', '40', '333', '4831', '2002', '395.349', '94.884', '79.070', '71.163', '71.163', '71.163', '71.163', '1186.047', '1186.047', '197.674'),
('2002/02/22 05:16:53.912', '2002', '02', '22', '05', '16', '53', '912', '4831', '2002', '406.977', '97.674', '81.395', '73.256', '73.256', '73.256', '73.256', '1220.930', '1220.930', '203.488'),
('2002/02/22 05:17:07.491', '2002', '02', '22', '05', '17', '07', '491', '4831', '2002', '418.605', '100.465', '83.721', '75.349', '75.349', '75.349', '75.349', '1255.814', '1255.814', '209.302'),
('2002/02/22 05:17:21.070', '2002', '02', '22', '05', '17', '21', '070', '4831', '2002', '430.233', '103.256', '86.047', '77.442', '77.442', '77.442', '77.442', '1290.698', '1290.698', '215.116'),
('2002/02/22 05:17:34.649', '2002', '02', '22', '05', '17', '34', '649', '4831', '2002', '441.860', '106.047', '88.372', '79.535', '79.535', '79.535', '79.535', '1325.581', '1325.581', '220.930'),
('2002/02/22 05:17:48.228', '2002', '02', '22', '05', '17', '48', '228', '4831', '2002', '453.488', '108.837', '90.698', '81.628', '81.628', '81.628', '81.628', '1360.465', '1360.465', '226.744'),
('2002/02/22 05:18:01.807', '2002', '02', '22', '05', '18', '01', '807', '4831', '2002', '465.116', '111.628', '93.023', '83.721', '83.721', '83.721', '83.721', '1395.349', '1395.349', '232.558'),
('2002/02/22 05:18:15.386', '2002', '02', '22', '05', '18', '15', '386', '4831', '2002', '476.744', '114.419', '95.349', '85.814', '85.814', '85.814', '85.814', '1430.233', '1430.233', '238.372'),
('2002/02/22 05:18:28.965', '2002', '02', '22', '05', '18', '28', '965', '4831', '2002', '488.372', '117.209', '97.674', '87.907', '87.907', '87.907', '87.907', '1465.116', '1465.116', '244.186'),
('2002/02/22 05:18:42.544', '2002', '02', '22', '05', '18', '42', '544', '4831', '2002', '500.000', '120.000', '100.000', '90.000', '90.000', '90.000', '90.000', '1500.000', '1500.000', '250.000'),
]
# Expected tuples for data in file 20030314.dosta3.log
EXPECTED_20030314_dosta3 = [
('2003/03/14 07:10:06.333', '2003', '03', '14', '07', '10', '06', '333', '4831', '2003', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'),
('2003/03/14 07:10:19.912', '2003', '03', '14', '07', '10', '19', '912', '4831', '2003', '12.195', '2.927', '2.439', '2.195', '2.195', '2.195', '2.195', '36.585', '36.585', '6.098'),
('2003/03/14 07:10:33.491', '2003', '03', '14', '07', '10', '33', '491', '4831', '2003', '24.390', '5.854', '4.878', '4.390', '4.390', '4.390', '4.390', '73.171', '73.171', '12.195'),
('2003/03/14 07:10:47.070', '2003', '03', '14', '07', '10', '47', '070', '4831', '2003', '36.585', '8.780', '7.317', '6.585', '6.585', '6.585', '6.585', '109.756', '109.756', '18.293'),
('2003/03/14 07:11:00.649', '2003', '03', '14', '07', '11', '00', '649', '4831', '2003', '48.780', '11.707', '9.756', '8.780', '8.780', '8.780', '8.780', '146.341', '146.341', '24.390'),
('2003/03/14 07:11:14.228', '2003', '03', '14', '07', '11', '14', '228', '4831', '2003', '60.976', '14.634', '12.195', '10.976', '10.976', '10.976', '10.976', '182.927', '182.927', '30.488'),
('2003/03/14 07:11:27.807', '2003', '03', '14', '07', '11', '27', '807', '4831', '2003', '73.171', '17.561', '14.634', '13.171', '13.171', '13.171', '13.171', '219.512', '219.512', '36.585'),
('2003/03/14 07:11:41.386', '2003', '03', '14', '07', '11', '41', '386', '4831', '2003', '85.366', '20.488', '17.073', '15.366', '15.366', '15.366', '15.366', '256.098', '256.098', '42.683'),
('2003/03/14 07:11:54.965', '2003', '03', '14', '07', '11', '54', '965', '4831', '2003', '97.561', '23.415', '19.512', '17.561', '17.561', '17.561', '17.561', '292.683', '292.683', '48.780'),
('2003/03/14 07:12:08.544', '2003', '03', '14', '07', '12', '08', '544', '4831', '2003', '109.756', '26.341', '21.951', '19.756', '19.756', '19.756', '19.756', '329.268', '329.268', '54.878'),
('2003/03/14 07:12:22.123', '2003', '03', '14', '07', '12', '22', '123', '4831', '2003', '121.951', '29.268', '24.390', '21.951', '21.951', '21.951', '21.951', '365.854', '365.854', '60.976'),
('2003/03/14 07:12:35.702', '2003', '03', '14', '07', '12', '35', '702', '4831', '2003', '134.146', '32.195', '26.829', '24.146', '24.146', '24.146', '24.146', '402.439', '402.439', '67.073'),
('2003/03/14 07:12:49.281', '2003', '03', '14', '07', '12', '49', '281', '4831', '2003', '146.341', '35.122', '29.268', '26.341', '26.341', '26.341', '26.341', '439.024', '439.024', '73.171'),
('2003/03/14 07:13:02.860', '2003', '03', '14', '07', '13', '02', '860', '4831', '2003', '158.537', '38.049', '31.707', '28.537', '28.537', '28.537', '28.537', '475.610', '475.610', '79.268'),
('2003/03/14 07:14:10.755', '2003', '03', '14', '07', '14', '10', '755', '4831', '2003', '170.732', '40.976', '34.146', '30.732', '30.732', '30.732', '30.732', '512.195', '512.195', '85.366'),
('2003/03/14 07:14:24.334', '2003', '03', '14', '07', '14', '24', '334', '4831', '2003', '182.927', '43.902', '36.585', '32.927', '32.927', '32.927', '32.927', '548.780', '548.780', '91.463'),
('2003/03/14 07:14:37.913', '2003', '03', '14', '07', '14', '37', '913', '4831', '2003', '195.122', '46.829', '39.024', '35.122', '35.122', '35.122', '35.122', '585.366', '585.366', '97.561'),
('2003/03/14 07:14:51.492', '2003', '03', '14', '07', '14', '51', '492', '4831', '2003', '207.317', '49.756', '41.463', '37.317', '37.317', '37.317', '37.317', '621.951', '621.951', '103.659'),
('2003/03/14 07:15:05.071', '2003', '03', '14', '07', '15', '05', '071', '4831', '2003', '219.512', '52.683', '43.902', '39.512', '39.512', '39.512', '39.512', '658.537', '658.537', '109.756'),
('2003/03/14 07:15:18.650', '2003', '03', '14', '07', '15', '18', '650', '4831', '2003', '231.707', '55.610', '46.341', '41.707', '41.707', '41.707', '41.707', '695.122', '695.122', '115.854'),
('2003/03/14 07:15:32.229', '2003', '03', '14', '07', '15', '32', '229', '4831', '2003', '243.902', '58.537', '48.780', '43.902', '43.902', '43.902', '43.902', '731.707', '731.707', '121.951'),
('2003/03/14 07:15:45.808', '2003', '03', '14', '07', '15', '45', '808', '4831', '2003', '256.098', '61.463', '51.220', '46.098', '46.098', '46.098', '46.098', '768.293', '768.293', '128.049'),
('2003/03/14 07:15:59.387', '2003', '03', '14', '07', '15', '59', '387', '4831', '2003', '268.293', '64.390', '53.659', '48.293', '48.293', '48.293', '48.293', '804.878', '804.878', '134.146'),
('2003/03/14 07:16:12.966', '2003', '03', '14', '07', '16', '12', '966', '4831', '2003', '280.488', '67.317', '56.098', '50.488', '50.488', '50.488', '50.488', '841.463', '841.463', '140.244'),
('2003/03/14 07:16:26.545', '2003', '03', '14', '07', '16', '26', '545', '4831', '2003', '292.683', '70.244', '58.537', '52.683', '52.683', '52.683', '52.683', '878.049', '878.049', '146.341'),
('2003/03/14 07:16:40.124', '2003', '03', '14', '07', '16', '40', '124', '4831', '2003', '304.878', '73.171', '60.976', '54.878', '54.878', '54.878', '54.878', '914.634', '914.634', '152.439'),
('2003/03/14 07:16:53.703', '2003', '03', '14', '07', '16', '53', '703', '4831', '2003', '317.073', '76.098', '63.415', '57.073', '57.073', '57.073', '57.073', '951.220', '951.220', '158.537'),
('2003/03/14 07:17:07.282', '2003', '03', '14', '07', '17', '07', '282', '4831', '2003', '329.268', '79.024', '65.854', '59.268', '59.268', '59.268', '59.268', '987.805', '987.805', '164.634'),
('2003/03/14 07:18:15.177', '2003', '03', '14', '07', '18', '15', '177', '4831', '2003', '341.463', '81.951', '68.293', '61.463', '61.463', '61.463', '61.463', '1024.390', '1024.390', '170.732'),
('2003/03/14 07:18:28.756', '2003', '03', '14', '07', '18', '28', '756', '4831', '2003', '353.659', '84.878', '70.732', '63.659', '63.659', '63.659', '63.659', '1060.976', '1060.976', '176.829'),
('2003/03/14 07:18:42.335', '2003', '03', '14', '07', '18', '42', '335', '4831', '2003', '365.854', '87.805', '73.171', '65.854', '65.854', '65.854', '65.854', '1097.561', '1097.561', '182.927'),
('2003/03/14 07:18:55.914', '2003', '03', '14', '07', '18', '55', '914', '4831', '2003', '378.049', '90.732', '75.610', '68.049', '68.049', '68.049', '68.049', '1134.146', '1134.146', '189.024'),
('2003/03/14 07:19:09.493', '2003', '03', '14', '07', '19', '09', '493', '4831', '2003', '390.244', '93.659', '78.049', '70.244', '70.244', '70.244', '70.244', '1170.732', '1170.732', '195.122'),
('2003/03/14 07:19:23.072', '2003', '03', '14', '07', '19', '23', '072', '4831', '2003', '402.439', '96.585', '80.488', '72.439', '72.439', '72.439', '72.439', '1207.317', '1207.317', '201.220'),
('2003/03/14 07:19:36.651', '2003', '03', '14', '07', '19', '36', '651', '4831', '2003', '414.634', '99.512', '82.927', '74.634', '74.634', '74.634', '74.634', '1243.902', '1243.902', '207.317'),
('2003/03/14 07:19:50.230', '2003', '03', '14', '07', '19', '50', '230', '4831', '2003', '426.829', '102.439', '85.366', '76.829', '76.829', '76.829', '76.829', '1280.488', '1280.488', '213.415'),
('2003/03/14 07:20:03.809', '2003', '03', '14', '07', '20', '03', '809', '4831', '2003', '439.024', '105.366', '87.805', '79.024', '79.024', '79.024', '79.024', '1317.073', '1317.073', '219.512'),
('2003/03/14 07:20:17.388', '2003', '03', '14', '07', '20', '17', '388', '4831', '2003', '451.220', '108.293', '90.244', '81.220', '81.220', '81.220', '81.220', '1353.659', '1353.659', '225.610'),
('2003/03/14 07:20:30.967', '2003', '03', '14', '07', '20', '30', '967', '4831', '2003', '463.415', '111.220', '92.683', '83.415', '83.415', '83.415', '83.415', '1390.244', '1390.244', '231.707'),
('2003/03/14 07:20:44.546', '2003', '03', '14', '07', '20', '44', '546', '4831', '2003', '475.610', '114.146', '95.122', '85.610', '85.610', '85.610', '85.610', '1426.829', '1426.829', '237.805'),
('2003/03/14 07:20:58.125', '2003', '03', '14', '07', '20', '58', '125', '4831', '2003', '487.805', '117.073', '97.561', '87.805', '87.805', '87.805', '87.805', '1463.415', '1463.415', '243.902'),
('2003/03/14 07:21:11.704', '2003', '03', '14', '07', '21', '11', '704', '4831', '2003', '500.000', '120.000', '100.000', '90.000', '90.000', '90.000', '90.000', '1500.000', '1500.000', '250.000'),
]
# No expected tuples for data in file 20041225.dosta4.log.
# That file produces 500 particles and is used for the large import test only.
# No verification of particles from that file is done in the unit tests;
# particle results will be verified in integration and qualification tests.

# Expected tuples for data in file 20050103.dosta5.log
EXPECTED_20050103_dosta5 = [
('2005/01/03 11:14:10.337', '2005', '01', '03', '11', '14', '10', '337', '4831', '205', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'),
('2005/01/03 11:14:23.916', '2005', '01', '03', '11', '14', '23', '916', '4831', '205', '250.000', '60.000', '50.000', '45.000', '45.000', '45.000', '45.000', '750.000', '750.000', '125.000'),
('2005/01/03 11:14:37.495', '2005', '01', '03', '11', '14', '37', '495', '4831', '205', '500.000', '120.000', '100.000', '90.000', '90.000', '90.000', '90.000', '1500.000', '1500.000', '250.000'),
]
# Expected tuples for data in file 20060207.dosta6.log
EXPECTED_20060207_dosta6 = [
('2006/02/07 13:16:12.339', '2006', '02', '07', '13', '16', '12', '339', '4831', '2006', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'),
('2006/02/07 13:16:25.918', '2006', '02', '07', '13', '16', '25', '918', '4831', '2006', '38.462', '9.231', '7.692', '6.923', '6.923', '6.923', '6.923', '115.385', '115.385', '19.231'),
('2006/02/07 13:16:39.497', '2006', '02', '07', '13', '16', '39', '497', '4831', '2006', '76.923', '18.462', '15.385', '13.846', '13.846', '13.846', '13.846', '230.769', '230.769', '38.462'),
('2006/02/07 13:16:53.076', '2006', '02', '07', '13', '16', '53', '076', '4831', '2006', '115.385', '27.692', '23.077', '20.769', '20.769', '20.769', '20.769', '346.154', '346.154', '57.692'),
('2006/02/07 13:17:06.655', '2006', '02', '07', '13', '17', '06', '655', '4831', '2006', '153.846', '36.923', '30.769', '27.692', '27.692', '27.692', '27.692', '461.538', '461.538', '76.923'),
('2006/02/07 13:17:20.234', '2006', '02', '07', '13', '17', '20', '234', '4831', '2006', '192.308', '46.154', '38.462', '34.615', '34.615', '34.615', '34.615', '576.923', '576.923', '96.154'),
('2006/02/07 13:17:33.813', '2006', '02', '07', '13', '17', '33', '813', '4831', '2006', '230.769', '55.385', '46.154', '41.538', '41.538', '41.538', '41.538', '692.308', '692.308', '115.385'),
('2006/02/07 13:19:22.445', '2006', '02', '07', '13', '19', '22', '445', '4831', '2006', '269.231', '64.615', '53.846', '48.462', '48.462', '48.462', '48.462', '807.692', '807.692', '134.615'),
('2006/02/07 13:19:36.024', '2006', '02', '07', '13', '19', '36', '024', '4831', '2006', '307.692', '73.846', '61.538', '55.385', '55.385', '55.385', '55.385', '923.077', '923.077', '153.846'),
('2006/02/07 13:19:49.603', '2006', '02', '07', '13', '19', '49', '603', '4831', '2006', '346.154', '83.077', '69.231', '62.308', '62.308', '62.308', '62.308', '1038.462', '1038.462', '173.077'),
('2006/02/07 13:20:03.182', '2006', '02', '07', '13', '20', '03', '182', '4831', '2006', '384.615', '92.308', '76.923', '69.231', '69.231', '69.231', '69.231', '1153.846', '1153.846', '192.308'),
('2006/02/07 13:20:16.761', '2006', '02', '07', '13', '20', '16', '761', '4831', '2006', '423.077', '101.538', '84.615', '76.154', '76.154', '76.154', '76.154', '1269.231', '1269.231', '211.538'),
('2006/02/07 13:20:30.340', '2006', '02', '07', '13', '20', '30', '340', '4831', '2006', '461.538', '110.769', '92.308', '83.077', '83.077', '83.077', '83.077', '1384.615', '1384.615', '230.769'),
('2006/02/07 13:20:43.919', '2006', '02', '07', '13', '20', '43', '919', '4831', '2006', '500.000', '120.000', '100.000', '90.000', '90.000', '90.000', '90.000', '1500.000', '1500.000', '250.000'),
]
FILE0 = '20000101.dosta0.log'
FILE1 = '20010121.dosta1.log'
FILE2 = '20020222.dosta2.log'
FILE3 = '20030314.dosta3.log'
FILE4 = '20041225.dosta4.log'
FILE5 = '20050103.dosta5.log'
FILE6 = '20060207.dosta6.log'
FILE7 = '20070114.dosta7.log'
EXPECTED_FILE1 = EXPECTED_20010121_dosta1
EXPECTED_FILE2 = EXPECTED_20020222_dosta2
EXPECTED_FILE3 = EXPECTED_20030314_dosta3
EXPECTED_FILE5 = EXPECTED_20050103_dosta5
EXPECTED_FILE6 = EXPECTED_20060207_dosta6
@attr('UNIT', group='mi')
class DostaAbcdjmDclParserUnitTestCase(ParserUnitTestCase):

    def create_rec_parser(self, file_handle, new_state=None):
        """
        This function creates a DostaAbcdjmDcl parser for recovered data.
        """
        parser = DostaAbcdjmDclRecoveredParser(self.rec_config,
            file_handle, new_state, self.rec_state_callback,
            self.rec_pub_callback, self.rec_exception_callback)
        return parser

    def create_tel_parser(self, file_handle, new_state=None):
        """
        This function creates a DostaAbcdjmDcl parser for telemetered data.
        """
        parser = DostaAbcdjmDclTelemeteredParser(self.tel_config,
            file_handle, new_state, self.tel_state_callback,
            self.tel_pub_callback, self.tel_exception_callback)
        return parser

    def open_file(self, filename):
        return open(os.path.join(RESOURCE_PATH, filename), mode='r')
    def rec_state_callback(self, state, file_ingested):
        """ Callback method to watch what comes in via the state callback """
        self.rec_state_callback_value = state
        self.rec_file_ingested_value = file_ingested

    def tel_state_callback(self, state, file_ingested):
        """ Callback method to watch what comes in via the state callback """
        self.tel_state_callback_value = state
        self.tel_file_ingested_value = file_ingested

    def rec_pub_callback(self, pub):
        """ Callback method to watch what comes in via the publish callback """
        self.rec_publish_callback_value = pub

    def tel_pub_callback(self, pub):
        """ Callback method to watch what comes in via the publish callback """
        self.tel_publish_callback_value = pub

    def rec_exception_callback(self, exception):
        """ Callback method to watch what comes in via the exception callback """
        self.rec_exception_callback_value = exception
        self.rec_exceptions_detected += 1

    def tel_exception_callback(self, exception):
        """ Callback method to watch what comes in via the exception callback """
        self.tel_exception_callback_value = exception
        self.tel_exceptions_detected += 1
    def setUp(self):
        ParserUnitTestCase.setUp(self)

        self.rec_config = {
            DataSetDriverConfigKeys.PARTICLE_MODULE:
                'mi.dataset.parser.dosta_abcdjm_dcl',
            DataSetDriverConfigKeys.PARTICLE_CLASS: None
        }
        self.tel_config = {
            DataSetDriverConfigKeys.PARTICLE_MODULE:
                'mi.dataset.parser.dosta_abcdjm_dcl',
            DataSetDriverConfigKeys.PARTICLE_CLASS: None
        }

        self.rec_state_callback_value = None
        self.rec_file_ingested_value = False
        self.rec_publish_callback_value = None
        self.rec_exception_callback_value = None
        self.rec_exceptions_detected = 0

        self.tel_state_callback_value = None
        self.tel_file_ingested_value = False
        self.tel_publish_callback_value = None
        self.tel_exception_callback_value = None
        self.tel_exceptions_detected = 0

        self.maxDiff = None
    def test_big_giant_input(self):
        """
        Read a large file and verify that all expected particles can be read.
        Verification is not done at this time, but will be done during
        integration and qualification testing.
        The file used for this test has 500 total particles.
        """
        log.debug('===== START TEST BIG GIANT INPUT RECOVERED =====')
        in_file = self.open_file(FILE4)
        parser = self.create_rec_parser(in_file)
        number_expected_results = 500

        # In a single read, get all particles in this file.
        result = parser.get_records(number_expected_results)
        self.assertEqual(len(result), number_expected_results)
        in_file.close()
        self.assertEqual(self.rec_exception_callback_value, None)

        log.debug('===== START TEST BIG GIANT INPUT TELEMETERED =====')
        in_file = self.open_file(FILE4)
        parser = self.create_tel_parser(in_file)

        # In a single read, get all particles in this file.
        result = parser.get_records(number_expected_results)
        self.assertEqual(len(result), number_expected_results)
        in_file.close()
        self.assertEqual(self.tel_exception_callback_value, None)

        log.debug('===== END TEST BIG GIANT INPUT =====')
    def test_get_many(self):
        """
        Read a file and pull out multiple data particles at one time.
        Verify that the results are those we expected.
        """
        log.debug('===== START TEST GET MANY RECOVERED =====')
        in_file = self.open_file(FILE3)
        parser = self.create_rec_parser(in_file)

        # Generate a list of expected result particles.
        expected_particle = []
        for expected in EXPECTED_FILE3:
            particle = DostaAbcdjmDclRecoveredInstrumentDataParticle(expected)
            expected_particle.append(particle)

        # In a single read, get all particles for this file.
        result = parser.get_records(len(expected_particle))
        self.assertEqual(result, expected_particle)
        self.assertEqual(self.rec_exception_callback_value, None)
        in_file.close()

        log.debug('===== START TEST GET MANY TELEMETERED =====')
        in_file = self.open_file(FILE3)
        parser = self.create_tel_parser(in_file)

        # Generate a list of expected result particles.
        expected_particle = []
        for expected in EXPECTED_FILE3:
            particle = DostaAbcdjmDclTelemeteredInstrumentDataParticle(expected)
            expected_particle.append(particle)

        # In a single read, get all particles for this file.
        result = parser.get_records(len(expected_particle))
        self.assertEqual(result, expected_particle)
        self.assertEqual(self.tel_exception_callback_value, None)
        in_file.close()

        log.debug('===== END TEST GET MANY =====')
def test_invalid_metadata_records(self):
"""
Read data from a file containing invalid metadata records as well
as valid metadata records and sensor data records.
Verify that the sensor data records can be read correctly
and that invalid metadata records are detected.
File 5 has 3 invalid metadata records.
File 6 has 6 invalid metadata records.
"""
log.debug('===== START TEST INVALID METADATA RECOVERED =====')
in_file = self.open_file(FILE5)
parser = self.create_rec_parser(in_file)
for expected in EXPECTED_FILE5:
# Generate expected particle
expected_particle = DostaAbcdjmDclRecoveredInstrumentDataParticle(expected)
# Get record and verify.
result = parser.get_records(1)
self.assertEqual(result, [expected_particle])
self.assertEqual(self.rec_exceptions_detected, 3)
in_file.close()
log.debug('===== START TEST INVALID METADATA TELEMETERED =====')
in_file = self.open_file(FILE6)
parser = self.create_tel_parser(in_file)
# Generate a list of expected result particles.
expected_particle = []
for expected in EXPECTED_FILE6:
particle = DostaAbcdjmDclTelemeteredInstrumentDataParticle(expected)
expected_particle.append(particle)
# In a single read, get all particles for this file.
result = parser.get_records(len(expected_particle))
self.assertEqual(result, expected_particle)
self.assertEqual(self.tel_exceptions_detected, 6)
in_file.close()
log.debug('===== END TEST INVALID METADATA =====')
def test_invalid_sensor_data_records(self):
"""
Read data from a file containing invalid sensor data records.
Verify that no instrument particles are produced
and the correct number of exceptions are detected.
"""
log.debug('===== START TEST INVALID SENSOR DATA RECOVERED =====')
in_file = self.open_file(FILE7)
parser = self.create_rec_parser(in_file)
expected_exceptions = 15
# Try to get records and verify that none are returned.
result = parser.get_records(1)
self.assertEqual(result, [])
self.assertEqual(self.rec_exceptions_detected, expected_exceptions)
in_file.close()
log.debug('===== START TEST INVALID SENSOR DATA TELEMETERED =====')
in_file = self.open_file(FILE7)
parser = self.create_tel_parser(in_file)
# Try to get records and verify that none are returned.
result = parser.get_records(1)
self.assertEqual(result, [])
self.assertEqual(self.tel_exceptions_detected, expected_exceptions)
in_file.close()
log.debug('===== END TEST INVALID SENSOR DATA =====')
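The exception-counting flow these tests rely on — the parser reports each malformed record through a callback instead of raising — can be sketched as follows. `parse_records` and its `KEY=VALUE` record format are hypothetical stand-ins, not the mi-dataset API:

```python
def parse_records(lines, exception_callback):
    """Illustrative sketch: valid records look like 'KEY=VALUE'; anything else
    is reported through the callback, mirroring how the tests above count
    exceptions detected without any particles being produced."""
    particles = []
    for line in lines:
        if "=" in line:
            key, _, value = line.partition("=")
            particles.append((key, value))
        else:
            exception_callback(ValueError("bad record: %r" % line))
    return particles

detected = []
result = parse_records(["garbled", "oxygen=7.1", "noise"], detected.append)
assert result == [("oxygen", "7.1")]
assert len(detected) == 2
```

With input containing only invalid records, the returned particle list is empty while the callback count equals the number of bad records — exactly the shape asserted above.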
def test_mid_state_start(self):
"""
Test starting a parser with a state in the middle of processing.
"""
log.debug('===== START TEST MID-STATE START RECOVERED =====')
in_file = self.open_file(FILE3)
# Start at the beginning of the 21st record (of 42 total).
initial_state = {
DostaStateKey.POSITION: 2738
}
parser = self.create_rec_parser(in_file, new_state=initial_state)
# Generate a list of expected result particles.
expected_particle = []
for expected in EXPECTED_FILE3[-22:]:
particle = DostaAbcdjmDclRecoveredInstrumentDataParticle(expected)
expected_particle.append(particle)
# In a single read, get all particles for this file.
result = parser.get_records(len(expected_particle))
self.assertEqual(result, expected_particle)
self.assertEqual(self.rec_exception_callback_value, None)
in_file.close()
log.debug('===== START TEST MID-STATE START TELEMETERED =====')
in_file = self.open_file(FILE2)
# Start at the beginning of the 33rd record (of 44 total).
initial_state = {
DostaStateKey.POSITION: 4079
}
parser = self.create_tel_parser(in_file, new_state=initial_state)
# Generate a list of expected result particles.
expected_particle = []
for expected in EXPECTED_FILE2[-12:]:
particle = DostaAbcdjmDclTelemeteredInstrumentDataParticle(expected)
expected_particle.append(particle)
# In a single read, get all particles for this file.
result = parser.get_records(len(expected_particle))
self.assertEqual(result, expected_particle)
self.assertEqual(self.tel_exception_callback_value, None)
in_file.close()
log.debug('===== END TEST MID-STATE START =====')
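Starting a parser mid-file, as above, amounts to applying a saved byte POSITION before reading. A minimal sketch, assuming the state is just a seek offset (the key name `position` here is illustrative, not the real `DostaStateKey`):

```python
import io

def create_parser(stream, new_state=None):
    """Toy model of mid-state start: a state dict carrying a byte position
    is applied by seeking before any records are parsed."""
    if new_state is not None:
        stream.seek(new_state["position"])
    return stream.read().split()

data = io.StringIO("r1 r2 r3 r4")
# Offset 6 is the start of the third record in this toy input.
assert create_parser(data, new_state={"position": 6}) == ["r3", "r4"]
```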
def test_no_sensor_data(self):
"""
Read a file containing no sensor data records
and verify that no particles are produced.
"""
log.debug('===== START TEST NO SENSOR DATA RECOVERED =====')
in_file = self.open_file(FILE0)
parser = self.create_rec_parser(in_file)
# Try to get a record and verify that none are produced.
result = parser.get_records(1)
self.assertEqual(result, [])
self.assertEqual(self.rec_exception_callback_value, None)
in_file.close()
log.debug('===== START TEST NO SENSOR DATA TELEMETERED =====')
in_file = self.open_file(FILE0)
parser = self.create_tel_parser(in_file)
# Try to get a record and verify that none are produced.
result = parser.get_records(1)
self.assertEqual(result, [])
self.assertEqual(self.tel_exception_callback_value, None)
in_file.close()
log.debug('===== END TEST NO SENSOR DATA =====')
def test_set_state(self):
"""
This test verifies that the state can be changed after starting.
Some particles are read and then the parser state is modified to
skip ahead or back.
"""
log.debug('===== START TEST SET STATE RECOVERED =====')
in_file = self.open_file(FILE2)
parser = self.create_rec_parser(in_file)
# Read and verify 5 particles (of the 44).
for expected in EXPECTED_FILE2[:5]:
# Generate expected particle
expected_particle = DostaAbcdjmDclRecoveredInstrumentDataParticle(expected)
# Get record and verify.
result = parser.get_records(1)
self.assertEqual(result, [expected_particle])
# Skip ahead in the file so that we get the last 10 particles.
new_state = {
DostaStateKey.POSITION: 4301
}
# Set the state.
parser.set_state(new_state)
# Read and verify the last 10 particles.
for expected in EXPECTED_FILE2[-10:]:
# Generate expected particle
expected_particle = DostaAbcdjmDclRecoveredInstrumentDataParticle(expected)
# Get record and verify.
result = parser.get_records(1)
self.assertEqual(result, [expected_particle])
log.debug('===== START TEST SET STATE TELEMETERED =====')
in_file = self.open_file(FILE3)
parser = self.create_tel_parser(in_file)
# Read and verify 30 particles (of the 42).
for expected in EXPECTED_FILE3[:30]:
# Generate expected particle
expected_particle = DostaAbcdjmDclTelemeteredInstrumentDataParticle(expected)
# Get record and verify.
result = parser.get_records(1)
self.assertEqual(result, [expected_particle])
# Skip back in the file so that we get the last 25 particles.
new_state = {
DostaStateKey.POSITION: 2414,
}
# Set the state.
parser.set_state(new_state)
# Read and verify the last 25 particles.
for expected in EXPECTED_FILE3[-25:]:
# Generate expected particle
expected_particle = DostaAbcdjmDclTelemeteredInstrumentDataParticle(expected)
# Get record and verify.
result = parser.get_records(1)
self.assertEqual(result, [expected_particle])
log.debug('===== END TEST SET STATE =====')
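`set_state` differs from mid-state construction in that it repositions an already-running parser, which is why the test can read a few particles, jump, and keep reading. A toy model of that skip-ahead/skip-back behavior (not the real mi-dataset API):

```python
import io

class SeekableParser:
    """Toy parser whose set_state() repositions the underlying stream,
    mirroring the skip-ahead/skip-back flow exercised above."""
    def __init__(self, stream):
        self._stream = stream

    def get_records(self, n):
        out = []
        for _ in range(n):
            line = self._stream.readline()
            if not line:
                break
            out.append(line.strip())
        return out

    def set_state(self, state):
        self._stream.seek(state["position"])

p = SeekableParser(io.StringIO("a\nb\nc\n"))
assert p.get_records(2) == ["a", "b"]
p.set_state({"position": 0})      # skip back to the start
assert p.get_records(3) == ["a", "b", "c"]
```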
def test_simple(self):
"""
Read data from a file and pull out data particles
one at a time. Verify that the results are those we expected.
"""
log.debug('===== START TEST SIMPLE RECOVERED =====')
in_file = self.open_file(FILE1)
parser = self.create_rec_parser(in_file)
for expected in EXPECTED_FILE1:
# Generate expected particle
expected_particle = DostaAbcdjmDclRecoveredInstrumentDataParticle(expected)
# Get record and verify.
result = parser.get_records(1)
self.assertEqual(result, [expected_particle])
self.assertEqual(self.rec_exception_callback_value, None)
in_file.close()
log.debug('===== START TEST SIMPLE TELEMETERED =====')
in_file = self.open_file(FILE1)
parser = self.create_tel_parser(in_file)
for expected in EXPECTED_FILE1:
# Generate expected particle
expected_particle = DostaAbcdjmDclTelemeteredInstrumentDataParticle(expected)
# Get record and verify.
result = parser.get_records(1)
self.assertEqual(result, [expected_particle])
self.assertEqual(self.tel_exception_callback_value, None)
in_file.close()
log.debug('===== END TEST SIMPLE =====')
def test_many_with_yml(self):
"""
Read a file and verify that all records can be read.
Verify that the contents of the particles are correct.
There should be no exceptions generated.
"""
log.debug('===== START TEST MANY WITH YML RECOVERED =====')
num_particles = 21
in_file = self.open_file(FILE1)
parser = self.create_rec_parser(in_file)
particles = parser.get_records(num_particles)
log.debug("Num particles: %d", len(particles))
self.assert_particles(particles, "rec_20010121.dosta1.yml", RESOURCE_PATH)
self.assertEqual(self.exception_callback_value, [])
in_file.close()
log.debug('===== START TEST MANY WITH YML TELEMETERED =====')
in_file = self.open_file(FILE1)
parser = self.create_tel_parser(in_file)
particles = parser.get_records(num_particles)
log.debug("Num particles: %d", len(particles))
self.assert_particles(particles, "tel_20010121.dosta1.yml", RESOURCE_PATH)
self.assertEqual(self.exception_callback_value, [])
in_file.close()
log.debug('===== END TEST MANY WITH YML =====') | 64.147472 | 202 | 0.55731 | 6,940 | 45,673 | 3.606628 | 0.106484 | 0.021095 | 0.028126 | 0.035158 | 0.65865 | 0.61642 | 0.503915 | 0.313744 | 0.301358 | 0.278066 | 0 | 0.334804 | 0.195893 | 45,673 | 712 | 203 | 64.147472 | 0.34673 | 0.126092 | 0 | 0.346154 | 0 | 0 | 0.394611 | 0.002945 | 0 | 0 | 0 | 0 | 0.086538 | 1 | 0.045673 | false | 0 | 0.019231 | 0 | 0.074519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2bac9ab813215f865580405406ebdfe81b2abbdc | 295 | py | Python | src/test.py | nsde/manudis | 048136986cbd92ec6887a7ce8b430b83aa484c70 | [
"MIT"
] | null | null | null | src/test.py | nsde/manudis | 048136986cbd92ec6887a7ce8b430b83aa484c70 | [
"MIT"
] | null | null | null | src/test.py | nsde/manudis | 048136986cbd92ec6887a7ce8b430b83aa484c70 | [
"MIT"
] | null | null | null | import io
import os
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
img = Image.new('RGB', (300, 28))
d = ImageDraw.Draw(img)
font = ImageFont.truetype('fonts/whitney.ttf', size=20)
d.text(font=font, xy=(4, 4), text='xz72', fill=(255, 255, 255))
img.save('test.png') | 22.692308 | 63 | 0.705085 | 51 | 295 | 4.078431 | 0.607843 | 0.100962 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077821 | 0.128814 | 295 | 13 | 64 | 22.692308 | 0.731518 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
2bae91860a9f762df16c58f906ed9c4fbb6a623a | 418 | py | Python | v1/repositories/views/repository.py | jamessspanggg/Website-API | 60c65d0714145a67800d91f420351c4ae04aa7fa | [
"MIT"
] | 1 | 2021-04-19T02:28:28.000Z | 2021-04-19T02:28:28.000Z | v1/repositories/views/repository.py | emmantek/Website-API | 1f1173ffe4dcc5451b5f70bbb954983cbbd4f8f9 | [
"MIT"
] | null | null | null | v1/repositories/views/repository.py | emmantek/Website-API | 1f1173ffe4dcc5451b5f70bbb954983cbbd4f8f9 | [
"MIT"
] | null | null | null | from rest_framework.viewsets import ModelViewSet
from v1.third_party.rest_framework.permissions import IsStaffOrReadOnly
from ..models.repository import Repository
from ..serializers.repository import RepositorySerializer
class RepositoryViewSet(ModelViewSet):
queryset = Repository.objects.order_by('created_date').all()
serializer_class = RepositorySerializer
permission_classes = [IsStaffOrReadOnly]
| 34.833333 | 71 | 0.834928 | 42 | 418 | 8.142857 | 0.642857 | 0.076023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00266 | 0.100478 | 418 | 11 | 72 | 38 | 0.906915 | 0 | 0 | 0 | 0 | 0 | 0.028708 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
2bbcfa6063deed56bf9e6e5967f6a36f0071e9fe | 281 | py | Python | server/stats/__init__.py | harnash/mistats | 551a61615fc48e9cd8ead425ed709dd5860723e7 | [
"MIT"
] | 1 | 2019-01-05T22:38:33.000Z | 2019-01-05T22:38:33.000Z | server/stats/__init__.py | harnash/mistats | 551a61615fc48e9cd8ead425ed709dd5860723e7 | [
"MIT"
] | null | null | null | server/stats/__init__.py | harnash/mistats | 551a61615fc48e9cd8ead425ed709dd5860723e7 | [
"MIT"
] | null | null | null | from stats.collector import MiioDeviceCollector
DeviceCollector = None
def initialize_collector(db: 'db.SQLAlchemy', metrics_prefix: str) -> MiioDeviceCollector:
global DeviceCollector
DeviceCollector = MiioDeviceCollector(db, metrics_prefix)
return DeviceCollector
| 28.1 | 90 | 0.807829 | 26 | 281 | 8.615385 | 0.615385 | 0.116071 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131673 | 281 | 9 | 91 | 31.222222 | 0.918033 | 0 | 0 | 0 | 0 | 0 | 0.046263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2bd1d7ad940e2e4ecaf1fe36f337eb02432d5f11 | 151 | py | Python | test/test_module.py | woo-lang/woolang-project-generator | 99b156e98e81545c839e8a4b73fba94085d56f19 | [
"MIT"
] | 1 | 2021-03-08T04:19:50.000Z | 2021-03-08T04:19:50.000Z | test/test_module.py | woo-lang/woolang-project-generator | 99b156e98e81545c839e8a4b73fba94085d56f19 | [
"MIT"
] | null | null | null | test/test_module.py | woo-lang/woolang-project-generator | 99b156e98e81545c839e8a4b73fba94085d56f19 | [
"MIT"
] | null | null | null | from context import Project, ProjectFiles
project = Project("t", ProjectFiles(
{
"hello" : "frfefe"
},
["data"]
)).create() | 18.875 | 42 | 0.556291 | 13 | 151 | 6.461538 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.284768 | 151 | 8 | 43 | 18.875 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.110345 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2be4eb803a0139bb90e637af23a207ce767f266a | 1,232 | py | Python | part_6-auth-session-web/app/response.py | perogeremmer/latihan-flask | 4a0098d8f23595d2b092b35b2f9b15f8abcf8ff5 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | 1 | 2021-09-18T17:48:34.000Z | 2021-09-18T17:48:34.000Z | part_8-pattern-design/app/response.py | perogeremmer/latihan-flask | 4a0098d8f23595d2b092b35b2f9b15f8abcf8ff5 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | part_8-pattern-design/app/response.py | perogeremmer/latihan-flask | 4a0098d8f23595d2b092b35b2f9b15f8abcf8ff5 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | from flask import make_response, jsonify
class Response(object):
__instance = None
def __new__(cls):
if Response.__instance is None:
Response.__instance = object.__new__(cls)
return Response.__instance
def __init__(self):
self.status_code = None
self.payload = {
"values": None,
"message": "",
}
def create_payload_response(self, message, values):
self.payload["values"] = values
self.payload["message"] = message
self.payload["status_code"] = self.status_code
return make_response(jsonify(self.payload), self.status_code)
def ok(self, message, values):
self.status_code = 200
return self.create_payload_response(message, values)
def bad_request(self, message, values):
self.status_code = 400
return self.create_payload_response(message, values)
def not_found(self, message, values):
self.status_code = 404
return self.create_payload_response(message, values)
def un_authorized(self, message, values):
self.status_code = 401
return self.create_payload_response(message, values)
response = Response() | 25.666667 | 69 | 0.644481 | 139 | 1,232 | 5.402878 | 0.251799 | 0.155792 | 0.130493 | 0.139814 | 0.411451 | 0.411451 | 0.246338 | 0.18775 | 0 | 0 | 0 | 0.01323 | 0.263799 | 1,232 | 48 | 70 | 25.666667 | 0.814774 | 0 | 0 | 0.129032 | 0 | 0 | 0.030008 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225806 | false | 0 | 0.032258 | 0 | 0.516129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
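One property of this `__new__`-based singleton worth noting: `__init__` still runs on every `Response()` call, so each construction resets `status_code` and the payload on the shared instance. A standalone demonstration of the mechanism (a generic `Single` class, not importing the app):

```python
class Single:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = object.__new__(cls)
        return cls._instance

    def __init__(self):
        # Runs on EVERY construction, even though __new__ reuses the instance.
        self.status_code = None

a = Single()
a.status_code = 200
b = Single()                   # same object...
assert a is b
assert a.status_code is None   # ...but __init__ has reset its state
```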
2bfb0d4338ecfe2e9ced935f1589998a2129377d | 1,360 | py | Python | paths_cli/compiling/plugins.py | sroet/openpathsampling-cli | 600d361195b5a3dfe68630358554e8f3e1e99599 | [
"MIT"
] | 2 | 2020-02-11T13:32:57.000Z | 2021-07-11T13:07:01.000Z | paths_cli/compiling/plugins.py | sroet/openpathsampling-cli | 600d361195b5a3dfe68630358554e8f3e1e99599 | [
"MIT"
] | 44 | 2020-02-11T10:17:21.000Z | 2022-01-03T19:53:06.000Z | paths_cli/compiling/plugins.py | sroet/openpathsampling-cli | 600d361195b5a3dfe68630358554e8f3e1e99599 | [
"MIT"
] | 3 | 2020-02-11T10:18:25.000Z | 2021-08-15T16:03:24.000Z | from paths_cli.compiling.core import InstanceCompilerPlugin
from paths_cli.plugin_management import OPSPlugin
class CategoryPlugin(OPSPlugin):
"""
Category plugins only need to be made for top-level
"""
def __init__(self, plugin_class, aliases=None, requires_ops=(1, 0),
requires_cli=(0, 3)):
super().__init__(requires_ops, requires_cli)
self.plugin_class = plugin_class
if aliases is None:
aliases = []
self.aliases = aliases
@property
def name(self):
return self.plugin_class.category
def __repr__(self):
return (f"CategoryPlugin({self.plugin_class.__name__}, "
f"{self.aliases})")
class EngineCompilerPlugin(InstanceCompilerPlugin):
category = 'engine'
class CVCompilerPlugin(InstanceCompilerPlugin):
category = 'cv'
class VolumeCompilerPlugin(InstanceCompilerPlugin):
category = 'volume'
class NetworkCompilerPlugin(InstanceCompilerPlugin):
category = 'network'
class SchemeCompilerPlugin(InstanceCompilerPlugin):
category = 'scheme'
class StrategyCompilerPlugin(InstanceCompilerPlugin):
category = 'strategy'
class ShootingPointSelectorPlugin(InstanceCompilerPlugin):
category = 'shooting-point-selector'
class InterfaceSetPlugin(InstanceCompilerPlugin):
category = 'interface-set'
| 24.285714 | 71 | 0.715441 | 126 | 1,360 | 7.5 | 0.47619 | 0.253968 | 0.063492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003663 | 0.197059 | 1,360 | 55 | 72 | 24.727273 | 0.861722 | 0.0375 | 0 | 0 | 0 | 0 | 0.101315 | 0.051817 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.0625 | 0.0625 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
920543055a2532b967e80f523d23019d236a2ce3 | 445 | py | Python | custom/ilsgateway/tanzania/handlers/help.py | kkrampa/commcare-hq | d64d7cad98b240325ad669ccc7effb07721b4d44 | [
"BSD-3-Clause"
] | 1 | 2020-05-05T13:10:01.000Z | 2020-05-05T13:10:01.000Z | custom/ilsgateway/tanzania/handlers/help.py | kkrampa/commcare-hq | d64d7cad98b240325ad669ccc7effb07721b4d44 | [
"BSD-3-Clause"
] | 1 | 2019-12-09T14:00:14.000Z | 2019-12-09T14:00:14.000Z | custom/ilsgateway/tanzania/handlers/help.py | MaciejChoromanski/commcare-hq | fd7f65362d56d73b75a2c20d2afeabbc70876867 | [
"BSD-3-Clause"
] | 5 | 2015-11-30T13:12:45.000Z | 2019-07-01T19:27:07.000Z | from __future__ import absolute_import
from custom.ilsgateway.tanzania.handlers.keyword import KeywordHandler
from custom.ilsgateway.tanzania.reminders import HELP_REGISTERED, HELP_UNREGISTERED
class HelpHandler(KeywordHandler):
def help(self):
if self.user:
self.respond(HELP_REGISTERED)
else:
self.respond(HELP_UNREGISTERED)
return True
def handle(self):
return self.help()
| 24.722222 | 83 | 0.716854 | 49 | 445 | 6.326531 | 0.510204 | 0.064516 | 0.129032 | 0.180645 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217978 | 445 | 17 | 84 | 26.176471 | 0.890805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.25 | 0.083333 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
920a4e5831270ff6ec7b223922bf790a4761d47b | 1,163 | py | Python | Pl_02_prog2/Point.py | Tjccs/College-Python | 66186f898a5c3b23763f3110c9423427236ca4a5 | [
"MIT"
] | null | null | null | Pl_02_prog2/Point.py | Tjccs/College-Python | 66186f898a5c3b23763f3110c9423427236ca4a5 | [
"MIT"
] | null | null | null | Pl_02_prog2/Point.py | Tjccs/College-Python | 66186f898a5c3b23763f3110c9423427236ca4a5 | [
"MIT"
] | null | null | null | import math
class Point:
"""
Point in the plane.
"""
def __init__(self, x = 0, y = 0):
'''
Creates a point, by default at the origin (0, 0)
'''
self._x = x
self._y = y
def setX(self, ex):
'''
Sets the x coordinate of the point.
Requires: ex is an int, the coordinate in the x-axis
Ensures: self.getX() == ex
'''
self._x = ex
def setY(self, wye):
'''
Sets the y coordinate of the point.
Requires: wye is an int, the coordinate in the y-axis
Ensures: self.getY() == wye
'''
self._y = wye
def getX(self):
'''
The x coordinate of the point
'''
return self._x
def getY(self):
'''
The y coordinate of the point
'''
return self._y
def shift(self, xInc, yInc):
self.setX(self.getX() + xInc)
self.setY(self.getY() + yInc)
def distance(self, otherPoint):
delta_x = otherPoint.getX() - self.getX()
delta_y = otherPoint.getY() - self.getY()
return math.sqrt(delta_x**2 + delta_y**2) | 21.145455 | 69 | 0.50215 | 152 | 1,163 | 3.75 | 0.269737 | 0.04386 | 0.105263 | 0.140351 | 0.319298 | 0.291228 | 0.087719 | 0 | 0 | 0 | 0 | 0.005548 | 0.380052 | 1,163 | 55 | 70 | 21.145455 | 0.785021 | 0.305245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.368421 | false | 0 | 0.052632 | 0 | 0.631579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
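A brief usage sketch of the class above (re-declared in trimmed form so the snippet is self-contained); `math.hypot` computes the same Euclidean distance as the `sqrt` formula in `distance`:

```python
import math

class Point:
    """Trimmed copy of the class above, just enough for a usage sketch."""
    def __init__(self, x=0, y=0):
        self._x, self._y = x, y

    def distance(self, other):
        return math.sqrt((other._x - self._x) ** 2 + (other._y - self._y) ** 2)

p, q = Point(0, 0), Point(3, 4)
assert p.distance(q) == 5.0
# math.hypot computes the same Euclidean distance in one call:
assert math.hypot(3 - 0, 4 - 0) == 5.0
```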
9214ea6abeb225ca99baef4d10fb171d1b8d7bdc | 543 | py | Python | src/python/fsqio/pants/buildgen/python/register.py | stuhood/fsqio | 5f133d74e88649da336c362f1af71ca1a42a41d7 | [
"Apache-2.0"
] | null | null | null | src/python/fsqio/pants/buildgen/python/register.py | stuhood/fsqio | 5f133d74e88649da336c362f1af71ca1a42a41d7 | [
"Apache-2.0"
] | null | null | null | src/python/fsqio/pants/buildgen/python/register.py | stuhood/fsqio | 5f133d74e88649da336c362f1af71ca1a42a41d7 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# Copyright 2016 Foursquare Labs Inc. All Rights Reserved.
from __future__ import absolute_import
from pants.goal.task_registrar import TaskRegistrar as task
from fsqio.pants.buildgen.python.buildgen_python import BuildgenPython
from fsqio.pants.buildgen.python.map_python_exported_symbols import MapPythonExportedSymbols
def register_goals():
task(
name='map-python-exported-symbols',
action=MapPythonExportedSymbols,
).install()
task(
name='python',
action=BuildgenPython,
).install('buildgen')
| 23.608696 | 92 | 0.786372 | 64 | 543 | 6.5 | 0.546875 | 0.100962 | 0.067308 | 0.105769 | 0.134615 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010571 | 0.128913 | 543 | 22 | 93 | 24.681818 | 0.868922 | 0.127072 | 0 | 0.153846 | 0 | 0 | 0.087049 | 0.057325 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | true | 0 | 0.307692 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
ecf7165c1132a7ab5a2f22df0d26c718ebed8c9c | 284 | py | Python | tests/test_rdfdataset.py | surroundaustralia/ndesgateway-testclient | fe6aa6ed7bc2e3664edde16ce4f82fdf8b3b47f2 | [
"BSD-3-Clause"
] | 1 | 2022-03-08T02:32:20.000Z | 2022-03-08T02:32:20.000Z | tests/test_rdfdataset.py | surroundaustralia/ndesgateway-testclient | fe6aa6ed7bc2e3664edde16ce4f82fdf8b3b47f2 | [
"BSD-3-Clause"
] | 2 | 2022-03-16T06:40:48.000Z | 2022-03-17T00:08:05.000Z | tests/test_rdfdataset.py | surroundaustralia/ndesgateway-testclient | fe6aa6ed7bc2e3664edde16ce4f82fdf8b3b47f2 | [
"BSD-3-Clause"
] | 1 | 2022-03-09T23:10:39.000Z | 2022-03-09T23:10:39.000Z | from rdflib.namespace import OWL, RDF
from client.model import RDFDataset
from client.model._TERN import TERN
def test_basic_rdf():
r1 = RDFDataset()
rdf = r1.to_graph()
assert (None, RDF.type, OWL.Class) not in rdf
assert (None, RDF.type, TERN.RDFDataset) in rdf
| 21.846154 | 51 | 0.714789 | 44 | 284 | 4.522727 | 0.5 | 0.100503 | 0.150754 | 0.170854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008696 | 0.190141 | 284 | 12 | 52 | 23.666667 | 0.856522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
ecf97ada97cfd101dabfd04c21aaad26e128ca9f | 423 | py | Python | qual_id/categories/programminglanguage.py | AgaskarJ/qual-id | 9393600ca2e55439b2003a37ac8d55363d1c75bd | [
"MIT"
] | null | null | null | qual_id/categories/programminglanguage.py | AgaskarJ/qual-id | 9393600ca2e55439b2003a37ac8d55363d1c75bd | [
"MIT"
] | null | null | null | qual_id/categories/programminglanguage.py | AgaskarJ/qual-id | 9393600ca2e55439b2003a37ac8d55363d1c75bd | [
"MIT"
] | null | null | null | from ..category import Category
class ProgrammingLanguage(Category):
def get_values(self):
return [
"assembly",
"basic",
"cobol",
"dart",
"fortran",
"go",
"java",
"javascript",
"kotlin",
"matlab",
"php",
"python",
"ruby",
"spark",
]
| 19.227273 | 36 | 0.368794 | 26 | 423 | 5.961538 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.508274 | 423 | 21 | 37 | 20.142857 | 0.745192 | 0 | 0 | 0 | 0 | 0 | 0.177305 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0.052632 | 0.210526 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ecfda1bed78754730e6ce1a84582ef7ccd1efcb2 | 1,731 | py | Python | stubs.min/Autodesk/Revit/DB/Analysis_parts/gbXMLBuildingOperatingSchedule.py | ricardyn/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | 1 | 2021-02-02T13:39:16.000Z | 2021-02-02T13:39:16.000Z | stubs.min/Autodesk/Revit/DB/Analysis_parts/gbXMLBuildingOperatingSchedule.py | hdm-dt-fb/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | null | null | null | stubs.min/Autodesk/Revit/DB/Analysis_parts/gbXMLBuildingOperatingSchedule.py | hdm-dt-fb/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | null | null | null | class gbXMLBuildingOperatingSchedule(Enum,IComparable,IFormattable,IConvertible):
"""
Enumerations for gbXML (Green Building XML) format, used for energy
analysis, schema version 0.34.
enum gbXMLBuildingOperatingSchedule, values: DefaultOperatingSchedule (0), KindergartenThruTwelveGradeSchool (7), NoOfOperatingScheduleEnums (11), TheaterPerformingArts (9), TwelveHourFiveDayFacility (6), TwelveHourSevenDayFacility (4), TwelveHourSixDayFacility (5), TwentyFourHourHourFiveDayFacility (3), TwentyFourHourHourSixDayFacility (2), TwentyFourHourSevenDayFacility (1), Worship (10), YearRoundSchool (8)
"""
def __eq__(self,*args):
""" x.__eq__(y) <==> x==y """
pass
def __format__(self,*args):
""" __format__(formattable: IFormattable, format: str) -> str """
pass
def __ge__(self,*args):
pass
def __gt__(self,*args):
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __le__(self,*args):
pass
def __lt__(self,*args):
pass
def __ne__(self,*args):
pass
def __reduce_ex__(self,*args):
pass
def __str__(self,*args):
pass
DefaultOperatingSchedule=None
KindergartenThruTwelveGradeSchool=None
NoOfOperatingScheduleEnums=None
TheaterPerformingArts=None
TwelveHourFiveDayFacility=None
TwelveHourSevenDayFacility=None
TwelveHourSixDayFacility=None
TwentyFourHourHourFiveDayFacility=None
TwentyFourHourHourSixDayFacility=None
TwentyFourHourSevenDayFacility=None
value__=None
Worship=None
YearRoundSchool=None
| 38.466667 | 403 | 0.760254 | 172 | 1,731 | 7.098837 | 0.401163 | 0.06552 | 0.068796 | 0.07371 | 0.105651 | 0.105651 | 0.105651 | 0.092547 | 0.092547 | 0.092547 | 0 | 0.011236 | 0.125939 | 1,731 | 44 | 404 | 39.340909 | 0.79577 | 0.481225 | 0 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0.294118 | 0 | 0 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
a6094cf9e9e053f723df8ab74df5f30fd6e66f89 | 316 | py | Python | backend/app/literature/schemas/reference_comment_and_correction_type.py | dougli1sqrd/agr_literature_service | 5090c6a8f4d6ad6a58569fdfd843c0a15907b797 | [
"MIT"
] | null | null | null | backend/app/literature/schemas/reference_comment_and_correction_type.py | dougli1sqrd/agr_literature_service | 5090c6a8f4d6ad6a58569fdfd843c0a15907b797 | [
"MIT"
] | 39 | 2021-10-18T17:02:49.000Z | 2022-03-28T20:56:24.000Z | backend/app/literature/schemas/reference_comment_and_correction_type.py | dougli1sqrd/agr_literature_service | 5090c6a8f4d6ad6a58569fdfd843c0a15907b797 | [
"MIT"
] | 1 | 2021-10-21T00:11:18.000Z | 2021-10-21T00:11:18.000Z | from enum import Enum
class ReferenceCommentAndCorrectionType(str, Enum):
RetractionOf = "RetractionOf"
CommentOn = "CommentOn"
ErratumFor = "ErratumFor"
ReprintOf = "ReprintOf"
UpdateOf = "UpdateOf"
ExpressionOfConcernFor = "ExpressionOfConcernFor"
RepublishedFrom = "RepublishedFrom"
| 26.333333 | 53 | 0.737342 | 22 | 316 | 10.590909 | 0.590909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183544 | 316 | 11 | 54 | 28.727273 | 0.903101 | 0 | 0 | 0 | 0 | 0 | 0.268987 | 0.06962 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 1 | 0.111111 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a60f31052aabb158374185ffe72c5e30d582b853 | 202 | py | Python | tests/tests/icu/factories.py | maykinmedia/mobetta | 7c6ce4d9ccb41371e9a6171f35002730b841cc5c | [
"BSD-3-Clause"
] | 5 | 2017-10-26T18:40:48.000Z | 2019-04-09T21:06:33.000Z | tests/tests/icu/factories.py | maykinmedia/mobetta | 7c6ce4d9ccb41371e9a6171f35002730b841cc5c | [
"BSD-3-Clause"
] | 23 | 2017-02-10T16:23:35.000Z | 2019-05-02T11:54:28.000Z | tests/tests/icu/factories.py | maykinmedia/mobetta | 7c6ce4d9ccb41371e9a6171f35002730b841cc5c | [
"BSD-3-Clause"
] | 1 | 2017-03-10T15:05:24.000Z | 2017-03-10T15:05:24.000Z | import factory
class ICUTranslationFileFactory(factory.django.DjangoModelFactory):
name = factory.Faker('word')
language_code = 'en'
class Meta:
model = 'icu.ICUTranslationFile'
| 18.363636 | 67 | 0.712871 | 19 | 202 | 7.526316 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193069 | 202 | 10 | 68 | 20.2 | 0.877301 | 0 | 0 | 0 | 0 | 0 | 0.138614 | 0.108911 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a617b640b126e757f73718f43f196bf353d1451d | 857 | py | Python | AMAO/apps/Professor/tests/test_models.py | arruda/amao | 83648aa2c408b1450d721b3072dc9db4b53edbb8 | [
"MIT"
] | 2 | 2017-04-26T14:08:02.000Z | 2017-09-01T13:10:17.000Z | AMAO/apps/Professor/tests/test_models.py | arruda/amao | 83648aa2c408b1450d721b3072dc9db4b53edbb8 | [
"MIT"
] | null | null | null | AMAO/apps/Professor/tests/test_models.py | arruda/amao | 83648aa2c408b1450d721b3072dc9db4b53edbb8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Professor.test_models
~~~~~~~~~~~~~~
Testa coisas relacionada ao modelo.
:copyright: (c) 2011 by Felipe Arruda Pontes.
"""
from django.test import TestCase
from model_mommy import mommy
from Professor.models import Professor, Monitor
class ProfessorTest(TestCase):
def setUp(self):
self.professor = mommy.make_one(Professor)
def test_professor_save(self):
" verifica se consegue salvar um professor "
self.professor.save()
self.assertEqual(self.professor.id, 1)
class MonitorTest(TestCase):
def setUp(self):
self.monitor = mommy.make_one(Monitor)
def test_monitor_save(self):
" verifica se consegue salvar um monitor "
self.monitor.save()
self.assertEqual(self.monitor.id, 1)
| 20.404762 | 52 | 0.628938 | 98 | 857 | 5.418367 | 0.428571 | 0.060264 | 0.060264 | 0.07533 | 0.218456 | 0.12806 | 0.12806 | 0 | 0 | 0 | 0 | 0.011111 | 0.264877 | 857 | 41 | 53 | 20.902439 | 0.831746 | 0.262544 | 0 | 0.117647 | 0 | 0 | 0.118841 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.235294 | false | 0 | 0.176471 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a617db190541e071e004cbe4987ca476fbfe9782 | 6,305 | py | Python | lighting_replay_app/analysis/lux_time_utils.py | mpi-sws-rse/antevents-examples | 4f6e4abae416739d3a988e5b390e4477b6b41f75 | [
"Apache-2.0"
] | null | null | null | lighting_replay_app/analysis/lux_time_utils.py | mpi-sws-rse/antevents-examples | 4f6e4abae416739d3a988e5b390e4477b6b41f75 | [
"Apache-2.0"
] | 1 | 2017-01-15T03:46:43.000Z | 2017-01-15T03:46:43.000Z | lighting_replay_app/analysis/lux_time_utils.py | mpi-sws-rse/antevents-examples | 4f6e4abae416739d3a988e5b390e4477b6b41f75 | [
"Apache-2.0"
] | null | null | null | """
Time utilities for lux analysis and replay
"""
# Sunrise sunset data for Sunnyvale, CA, 2016.
# From http://aa.usno.navy.mil/cgi-bin/aa_rstablew.pl?ID=AA&year=2016&task=0&state=CA&place=Sunnyvale
# Eventually use ephem (https://pypi.python.org/pypi/pyephem/) to
# compute the sunrise/sunset.
sunrise_sunset_data =\
"""01 0722 1701 0712 1732 0638 1803 0553 1831 0512 1858 0449 1924 0451 1933 0513 1915 0539 1836 0604 1750 0633 1709 0704 1650
02 0723 1702 0711 1733 0637 1804 0551 1832 0511 1859 0449 1924 0452 1933 0514 1914 0540 1835 0605 1749 0635 1708 0705 1650
03 0723 1703 0710 1734 0636 1805 0550 1833 0510 1900 0448 1925 0452 1932 0515 1913 0541 1833 0606 1747 0636 1707 0706 1650
04 0723 1703 0709 1735 0634 1806 0548 1834 0509 1901 0448 1925 0453 1932 0516 1912 0542 1832 0607 1746 0637 1706 0707 1650
05 0723 1704 0708 1737 0633 1807 0547 1835 0508 1902 0448 1926 0453 1932 0516 1911 0542 1830 0608 1744 0638 1705 0708 1650
06 0723 1705 0707 1738 0631 1808 0545 1836 0507 1903 0448 1927 0454 1932 0517 1910 0543 1829 0608 1743 0639 1704 0709 1650
07 0723 1706 0706 1739 0630 1808 0544 1837 0506 1904 0447 1927 0455 1931 0518 1909 0544 1827 0609 1742 0640 1704 0710 1650
08 0723 1707 0705 1740 0629 1809 0543 1838 0505 1905 0447 1928 0455 1931 0519 1908 0545 1826 0610 1740 0641 1703 0710 1650
09 0723 1708 0704 1741 0627 1810 0541 1839 0504 1906 0447 1928 0456 1931 0520 1907 0546 1824 0611 1739 0642 1702 0711 1650
10 0723 1709 0703 1742 0626 1811 0540 1839 0503 1906 0447 1929 0456 1930 0521 1905 0547 1822 0612 1737 0643 1701 0712 1650
11 0722 1710 0702 1743 0624 1812 0538 1840 0502 1907 0447 1929 0457 1930 0521 1904 0547 1821 0613 1736 0644 1700 0713 1651
12 0722 1711 0701 1744 0623 1813 0537 1841 0501 1908 0447 1930 0458 1930 0522 1903 0548 1819 0614 1734 0645 1659 0713 1651
13 0722 1712 0700 1745 0621 1814 0535 1842 0500 1909 0447 1930 0458 1929 0523 1902 0549 1818 0615 1733 0646 1659 0714 1651
14 0722 1713 0659 1746 0620 1815 0534 1843 0500 1910 0447 1930 0459 1929 0524 1901 0550 1816 0616 1732 0647 1658 0715 1651
15 0721 1714 0658 1747 0618 1816 0533 1844 0459 1911 0447 1931 0500 1928 0525 1859 0551 1815 0617 1730 0648 1657 0716 1652
16 0721 1715 0656 1748 0617 1817 0531 1845 0458 1912 0447 1931 0501 1928 0526 1858 0551 1813 0618 1729 0649 1657 0716 1652
17 0721 1716 0655 1749 0615 1818 0530 1846 0457 1912 0447 1931 0501 1927 0527 1857 0552 1812 0619 1728 0650 1656 0717 1652
18 0720 1717 0654 1751 0614 1819 0529 1847 0456 1913 0447 1932 0502 1926 0527 1856 0553 1810 0620 1726 0651 1655 0717 1653
19 0720 1718 0653 1752 0612 1820 0527 1848 0456 1914 0447 1932 0503 1926 0528 1854 0554 1809 0620 1725 0652 1655 0718 1653
20 0719 1719 0652 1753 0611 1821 0526 1848 0455 1915 0448 1932 0503 1925 0529 1853 0555 1807 0621 1724 0653 1654 0718 1654
21 0719 1720 0650 1754 0609 1821 0525 1849 0454 1916 0448 1932 0504 1924 0530 1852 0556 1806 0622 1722 0654 1654 0719 1654
22 0718 1721 0649 1755 0608 1822 0523 1850 0454 1916 0448 1932 0505 1924 0531 1850 0556 1804 0623 1721 0655 1653 0719 1655
23 0718 1722 0648 1756 0606 1823 0522 1851 0453 1917 0448 1933 0506 1923 0532 1849 0557 1803 0624 1720 0656 1653 0720 1655
24 0717 1723 0647 1757 0605 1824 0521 1852 0453 1918 0449 1933 0507 1922 0532 1847 0558 1801 0625 1719 0657 1652 0720 1656
25 0717 1724 0645 1758 0603 1825 0520 1853 0452 1919 0449 1933 0507 1921 0533 1846 0559 1759 0626 1717 0658 1652 0721 1657
26 0716 1726 0644 1759 0602 1826 0518 1854 0451 1919 0449 1933 0508 1921 0534 1845 0600 1758 0627 1716 0659 1652 0721 1657
27 0715 1727 0643 1800 0600 1827 0517 1855 0451 1920 0450 1933 0509 1920 0535 1843 0601 1756 0628 1715 0700 1651 0721 1658
28 0715 1728 0641 1801 0559 1828 0516 1856 0450 1921 0450 1933 0510 1919 0536 1842 0602 1755 0629 1714 0701 1651 0722 1659
29 0714 1729 0640 1802 0557 1829 0515 1857 0450 1922 0451 1933 0511 1918 0537 1840 0602 1753 0630 1713 0702 1651 0722 1659
30 0713 1730 0556 1830 0514 1857 0450 1922 0451 1933 0511 1917 0537 1839 0603 1752 0631 1712 0703 1651 0722 1700
31 0712 1731 0554 1830 0449 1923 0512 1916 0538 1837 0632 1711 0722 1701"""
DST_START=(3,13)
DST_END=(11,6)
SUNRISE_SUNSET_TABLE=[[None for d in range(31)] for m in range(12)]
def parse_sunrise_sunset_table():
    day = 1
    for line in sunrise_sunset_data.split('\n'):
        assert day == int(line[0:2])
        for m in range(12):
            start = (m*11) + 4
            sunrise = line[start:start+4]
            sunset = line[start+5:start+10]
            month = m + 1
            if (month > DST_START[0] and month < DST_END[0]) or \
               (month == DST_START[0] and day >= DST_START[1]) or \
               (month == DST_END[0] and day < DST_END[1]):
                dst = 60
            else:
                dst = 0
            if sunrise != "    ":
                sunrise_minutes = int(sunrise[0:2])*60 + int(sunrise[2:4]) + dst
                sunset_minutes = int(sunset[0:2])*60 + int(sunset[2:4]) + dst
                SUNRISE_SUNSET_TABLE[m][day-1] = (sunrise_minutes, sunset_minutes)
                # print("%d day %d, sunrise=%s [%d], sunset=%s [%d]" %
                #       (m+1, day, sunrise, sunrise_minutes, sunset, sunset_minutes))
        day += 1
parse_sunrise_sunset_table()
def get_sunrise_sunset(month, day):
    return SUNRISE_SUNSET_TABLE[month-1][day-1]
# We divide the day into "zones" based on a rough idea of the amount of sunlight.
def time_of_day_to_zone(minutes, sunrise, sunset):
    if minutes < sunrise:
        return 0  # early morning
    elif minutes <= (sunset-30):
        return 1  # daytime
    elif minutes <= max(sunset+60, 21.5*60):
        return 2  # evening
    else:
        return 3  # later evening
NUM_ZONES=4
def dt_to_minutes(dt):
    return dt.time().hour*60 + dt.time().minute


def minutes_to_time(minutes):
    hrs = int(minutes/60)
    mins = minutes - (hrs*60)
    assert mins < 60
    return (hrs, mins)
| 65.677083 | 137 | 0.675654 | 1,081 | 6,305 | 3.901943 | 0.417206 | 0.033902 | 0.021337 | 0.007587 | 0.025605 | 0.01138 | 0 | 0 | 0 | 0 | 0 | 0.674874 | 0.278509 | 6,305 | 95 | 138 | 66.368421 | 0.252363 | 0.082474 | 0 | 0.045455 | 0 | 0 | 0.00381 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.113636 | false | 0 | 0 | 0.045455 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a654046012f0a0be8b5fe2ef5b6e639fae49832b | 450 | py | Python | Zadanie3/srcV2/Misc.py | letv3/NSIETE | 3e65c66ddb14cf11b4757bdc8a70d5438785dec2 | [
"BSD-2-Clause"
] | null | null | null | Zadanie3/srcV2/Misc.py | letv3/NSIETE | 3e65c66ddb14cf11b4757bdc8a70d5438785dec2 | [
"BSD-2-Clause"
] | null | null | null | Zadanie3/srcV2/Misc.py | letv3/NSIETE | 3e65c66ddb14cf11b4757bdc8a70d5438785dec2 | [
"BSD-2-Clause"
] | null | null | null | import numpy as np
def calculate_exploration_prob(loss_history, act_explor_prob, threshold):
    mean = np.mean(loss_history)
    variance = 0
    for i in loss_history:
        variance += np.square(i - mean)
    if threshold >= variance:
        act_explor_prob += 0.05
    else:
        act_explor_prob -= 0.05
    if act_explor_prob < 0:
        return 0
    elif act_explor_prob > 1:
        return 1
    else:
        return act_explor_prob
| 19.565217 | 72 | 0.635556 | 64 | 450 | 4.203125 | 0.40625 | 0.200743 | 0.289963 | 0.156134 | 0.118959 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034591 | 0.293333 | 450 | 22 | 73 | 20.454545 | 0.811321 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a6638b465bca1a00b34204f9a982bad6fabf6fbb | 1,054 | py | Python | plot_builder/lookup_time.py | SokolovVadim/Radix-Tree | 74e4a0d7fe0e0993c2266220738c902d673489ee | [
"MIT"
] | null | null | null | plot_builder/lookup_time.py | SokolovVadim/Radix-Tree | 74e4a0d7fe0e0993c2266220738c902d673489ee | [
"MIT"
] | null | null | null | plot_builder/lookup_time.py | SokolovVadim/Radix-Tree | 74e4a0d7fe0e0993c2266220738c902d673489ee | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
csa_len = [977, 879, 782, 684, 586, 489, 391, 293, 196, 98]
csa_time = [157.230, 144.460, 110.304, 137.585, 97.767, 87.997, 86.016, 83.449, 61.127, 50.558]
sa_len = [977, 879, 782, 684, 586, 489, 391, 293, 196, 98]
sa_time = [12.251, 11.857, 12.331, 12.969, 12.777, 12.680, 11.795, 11.555, 11.752, 11.657]
radix_len = [20, 50, 60, 70, 80, 90, 100, 120, 150, 200, 300, 400, 500, 600, 700, 800, 900]
radix_time = [0.220, 0.242, 0.243, 0.237, 0.243, 0.244, 0.243, 0.245, 0.256, 0.258, 0.261, 0.263, 0.267, 0.268, 0.270, 0.274, 0.277]
fig, ax = plt.subplots()
fig.suptitle('Lookup', fontsize=14)
plt.xlabel('Document size, Kbyte', fontsize=12)
plt.ylabel('Lookup time, s', fontsize=12)
ax.scatter(csa_len, csa_time, marker = 'o', c = 'r', label = 'CSA')
ax.scatter(sa_len, sa_time, marker = 'o', c = 'b', label = 'SA')
ax.scatter(radix_len, radix_time, marker = 'o', c = 'g', label = 'Radix')
ax.legend(bbox_to_anchor=(1.05, 0.6), fancybox=True, shadow=True)
fig.savefig('Lookup_time.jpg')
plt.show()
| 43.916667 | 132 | 0.644213 | 211 | 1,054 | 3.146919 | 0.56872 | 0.018072 | 0.02259 | 0.054217 | 0.096386 | 0.096386 | 0.096386 | 0.096386 | 0.096386 | 0.096386 | 0 | 0.317425 | 0.145161 | 1,054 | 23 | 133 | 45.826087 | 0.419534 | 0 | 0 | 0 | 0 | 0 | 0.067362 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a6745d9f72b29c8d7f4f02d85821fe8882e97131 | 698 | py | Python | converter_site/converter/views.py | PurelyApplied/easy-statblock | 63e5a495688e826c8f4c3dc53506cf73ee8f2f33 | [
"Apache-2.0"
] | null | null | null | converter_site/converter/views.py | PurelyApplied/easy-statblock | 63e5a495688e826c8f4c3dc53506cf73ee8f2f33 | [
"Apache-2.0"
] | null | null | null | converter_site/converter/views.py | PurelyApplied/easy-statblock | 63e5a495688e826c8f4c3dc53506cf73ee8f2f33 | [
"Apache-2.0"
] | null | null | null | from django.http import HttpResponse
from django.views import generic
from django.views.generic import TemplateView
from easy_statblock import *
from .models import Creature
class CreatureView(TemplateView):
    model = Creature
    template_name = "creature/index.html"

    def get(self, request, *args, **kwargs):
        return HttpResponse(f"Hello from the other side."
                            f"\nrequest = {request}"
                            f"\nargs = {args}"
                            f"\nkwargs = {kwargs}")


class CreatureAsHtmlView(generic.TemplateView):
    template_name = 'converter/creature/valloric.html'


def index(request):
    return HttpResponse("Hello, world.")
| 26.846154 | 57 | 0.65043 | 74 | 698 | 6.094595 | 0.513514 | 0.066519 | 0.066519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255014 | 698 | 25 | 58 | 27.92 | 0.867308 | 0 | 0 | 0 | 0 | 0 | 0.207736 | 0.045845 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.294118 | 0.117647 | 0.823529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
a67c4c6a7d017c1d2e9c55c5abd03093bad693fc | 297 | py | Python | src/my_app/urls.py | odie907665/django_ajax_guide | b017bb9dd9f59b5eb2f87f6776baad24eb045901 | [
"bzip2-1.0.6"
] | null | null | null | src/my_app/urls.py | odie907665/django_ajax_guide | b017bb9dd9f59b5eb2f87f6776baad24eb045901 | [
"bzip2-1.0.6"
] | null | null | null | src/my_app/urls.py | odie907665/django_ajax_guide | b017bb9dd9f59b5eb2f87f6776baad24eb045901 | [
"bzip2-1.0.6"
] | null | null | null | from django.urls import path, include
from . import views
from .views import *
urlpatterns = [
    path('', views.indexView, name="index"),
    path('post/ajax/friend', views.postFriend, name='post_friend'),
    path('get/ajax/validate/nickname', views.checkNickName, name='validate_nickname'),
] | 33 | 86 | 0.717172 | 37 | 297 | 5.702703 | 0.513514 | 0.151659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127946 | 297 | 9 | 87 | 33 | 0.814672 | 0 | 0 | 0 | 0 | 0 | 0.251678 | 0.087248 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
a6821dbb64562f0c8c7ccaf516e443ddcba45163 | 450 | py | Python | klanad/context_processors.py | lancea-development/product-showcase | 32bdafaa5aa474f08db1a41e4d54b215d43ca6e6 | [
"MIT"
] | 1 | 2021-08-02T12:50:04.000Z | 2021-08-02T12:50:04.000Z | klanad/context_processors.py | wlansu/product-showcase | a0589f7e79eacfc70b3b66ced08a0890c4cc39c5 | [
"MIT"
] | 11 | 2019-06-09T12:01:02.000Z | 2022-01-13T01:22:52.000Z | klanad/context_processors.py | wlansu/product-showcase | a0589f7e79eacfc70b3b66ced08a0890c4cc39c5 | [
"MIT"
] | null | null | null | from typing import Dict
from django.http import HttpRequest
from klanad.models import KlanadTranslations
def footer_processor(request: HttpRequest) -> Dict[str, str]:
    """Add the footer email me message to the context of all templates since the footer is included everywhere."""
    try:
        message = KlanadTranslations.objects.all()[0].footer_email_me
        return {"footer_email_me": message}
    except IndexError:
        return {}
| 30 | 114 | 0.728889 | 57 | 450 | 5.666667 | 0.614035 | 0.102167 | 0.120743 | 0.123839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002755 | 0.193333 | 450 | 14 | 115 | 32.142857 | 0.887052 | 0.231111 | 0 | 0 | 0 | 0 | 0.044118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
a6b270dd57e92052680a5769ecfe1bf3b6734c98 | 2,393 | py | Python | confidant/routes/static_files.py | cclauss/confidant | 85bf2980c47bced1fd71f7a515baa2c824f3ac39 | [
"Apache-2.0"
] | 4 | 2019-06-04T17:07:57.000Z | 2020-11-20T00:02:08.000Z | confidant/routes/static_files.py | cclauss/confidant | 85bf2980c47bced1fd71f7a515baa2c824f3ac39 | [
"Apache-2.0"
] | 1,601 | 2018-09-13T14:56:27.000Z | 2021-03-31T20:06:16.000Z | confidant/routes/static_files.py | cclauss/confidant | 85bf2980c47bced1fd71f7a515baa2c824f3ac39 | [
"Apache-2.0"
] | 5 | 2019-10-30T20:37:02.000Z | 2021-07-04T00:45:36.000Z | import os
import logging
from flask import send_from_directory
from werkzeug.exceptions import NotFound
from confidant import authnz
from confidant.app import app
@app.route('/')
@authnz.redirect_to_logout_if_no_auth
def index():
    return app.send_static_file('index.html')


@app.route('/loggedout')
@authnz.require_logout_for_goodbye
def goodbye():
    return app.send_static_file('goodbye.html')


@app.route('/healthcheck')
def healthcheck():
    return '', 200


@app.route('/favicon.ico')
def favicon():
    return app.send_static_file('favicon.ico')


@app.route('/404.html')
def not_found():
    return app.send_static_file('404.html')


@app.route('/robots.txt')
def robots():
    return app.send_static_file('robots.txt')


@app.route('/bower_components/<path:path>')
def components(path):
    return app.send_static_file(os.path.join('bower_components', path))


@app.route('/modules/<path:path>')
def modules(path):
    return app.send_static_file(os.path.join('modules', path))


@app.route('/styles/<path:path>')
def static_proxy(path):
    return app.send_static_file(os.path.join('styles', path))


@app.route('/scripts/<path:path>')
def scripts(path):
    return app.send_static_file(os.path.join('scripts', path))


@app.route('/fonts/<path:path>')
def fonts(path):
    return app.send_static_file(os.path.join('fonts', path))


@app.route('/custom/modules/<path:path>')
@authnz.require_auth
def custom_modules(path):
    if not app.config['CUSTOM_FRONTEND_DIRECTORY']:
        return '', 200
    try:
        return send_from_directory(
            os.path.join(app.config['CUSTOM_FRONTEND_DIRECTORY'], 'modules'),
            path
        )
    except NotFound:
        logging.warning(
            'Client requested missing custom module {0}.'.format(path)
        )
        return '', 200


@app.route('/custom/styles/<path:path>')
@authnz.require_auth
def custom_styles(path):
    if not app.config['CUSTOM_FRONTEND_DIRECTORY']:
        return '', 404
    return send_from_directory(
        os.path.join(app.config['CUSTOM_FRONTEND_DIRECTORY'], 'styles'),
        path
    )


@app.route('/custom/images/<path:path>')
@authnz.require_auth
def custom_images(path):
    if not app.config['CUSTOM_FRONTEND_DIRECTORY']:
        return '', 404
    return send_from_directory(
        os.path.join(app.config['CUSTOM_FRONTEND_DIRECTORY'], 'images'),
        path
    )
| 22.790476 | 77 | 0.687422 | 322 | 2,393 | 4.928571 | 0.189441 | 0.070573 | 0.081916 | 0.119723 | 0.468809 | 0.396345 | 0.396345 | 0.332073 | 0.332073 | 0.185885 | 0 | 0.010967 | 0.161722 | 2,393 | 104 | 78 | 23.009615 | 0.78016 | 0 | 0 | 0.22973 | 0 | 0 | 0.22733 | 0.107814 | 0 | 0 | 0 | 0 | 0 | 1 | 0.189189 | false | 0 | 0.081081 | 0.148649 | 0.513514 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
a6c47d88bc1eefe5d53bdb1de97df1cf172f04a6 | 827 | py | Python | src/byro/plugins/profile/migrations/0004_auto_20171206_1919.py | uescher/byro | e43d646dc8e833591c82b2ea1711c70b9ce7e0b2 | [
"Apache-2.0"
] | null | null | null | src/byro/plugins/profile/migrations/0004_auto_20171206_1919.py | uescher/byro | e43d646dc8e833591c82b2ea1711c70b9ce7e0b2 | [
"Apache-2.0"
] | null | null | null | src/byro/plugins/profile/migrations/0004_auto_20171206_1919.py | uescher/byro | e43d646dc8e833591c82b2ea1711c70b9ce7e0b2 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.0 on 2017-12-06 19:19
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('profile', '0003_auto_20171012_2032'),
    ]

    operations = [
        migrations.AlterField(
            model_name='memberprofile',
            name='birth_date',
            field=models.DateField(null=True, verbose_name='Birth date'),
        ),
        migrations.AlterField(
            model_name='memberprofile',
            name='nick',
            field=models.CharField(max_length=200, null=True, verbose_name='Nick'),
        ),
        migrations.AlterField(
            model_name='memberprofile',
            name='phone_number',
            field=models.CharField(blank=True, max_length=32, null=True, verbose_name='Phone number'),
        ),
    ]
| 28.517241 | 102 | 0.600967 | 85 | 827 | 5.694118 | 0.517647 | 0.123967 | 0.154959 | 0.179752 | 0.285124 | 0.285124 | 0 | 0 | 0 | 0 | 0 | 0.058923 | 0.281741 | 827 | 28 | 103 | 29.535714 | 0.755892 | 0.051995 | 0 | 0.409091 | 1 | 0 | 0.154731 | 0.029412 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a6cda55413b5fed2d8a57f2dd0cd6463691dc3b5 | 436 | py | Python | 11-MergingDataframeWithPandas/Chapter_2/07-Slicing MultiIndexed DataFrames.py | Pegasus-01/Data-manipulation-and-merging-with-pandas | 5346678d25820d9fe352bd70294484ecd96fccf7 | [
"Apache-2.0"
] | 1 | 2020-10-18T16:42:28.000Z | 2020-10-18T16:42:28.000Z | 11-MergingDataframeWithPandas/Chapter_2/07-Slicing MultiIndexed DataFrames.py | Pegasus-01/Data-manipulation-and-merging-with-pandas | 5346678d25820d9fe352bd70294484ecd96fccf7 | [
"Apache-2.0"
] | null | null | null | 11-MergingDataframeWithPandas/Chapter_2/07-Slicing MultiIndexed DataFrames.py | Pegasus-01/Data-manipulation-and-merging-with-pandas | 5346678d25820d9fe352bd70294484ecd96fccf7 | [
"Apache-2.0"
] | null | null | null | # Sort the entries of medals: medals_sorted
medals_sorted = medals.sort_index(level=0)
# Print the number of Bronze medals won by Germany
print(medals_sorted.loc[('bronze','Germany')])
# Print data about silver medals
print(medals_sorted.loc['silver'])
# Create alias for pd.IndexSlice: idx
import pandas as pd
idx = pd.IndexSlice
# Print all the data on medals won by the United Kingdom
print(medals_sorted.loc[idx[:,'United Kingdom'],:]) | 31.142857 | 57 | 0.738532 | 66 | 436 | 4.787879 | 0.439394 | 0.189873 | 0.161392 | 0.189873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00271 | 0.15367 | 436 | 14 | 58 | 31.142857 | 0.853659 | 0.486239 | 0 | 0 | 0 | 0 | 0.160194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
a6fa1482a2d7e8cec830d1b52c858bdbdd5c91d5 | 679 | py | Python | boss/__init__.py | yoshrote/boss | e82ae1e66cb0c56949c4222299437cf6c14d635d | [
"MIT"
] | null | null | null | boss/__init__.py | yoshrote/boss | e82ae1e66cb0c56949c4222299437cf6c14d635d | [
"MIT"
] | null | null | null | boss/__init__.py | yoshrote/boss | e82ae1e66cb0c56949c4222299437cf6c14d635d | [
"MIT"
] | 1 | 2018-09-07T10:59:46.000Z | 2018-09-07T10:59:46.000Z | """
The meat.

Things to do:

TaskFinder
    - queries a (file|db) for what sort of tasks to run

Task
    - initialize from task data pulled by TaskFinder
    - enumerate the combination of args and kwargs to send to this task
    - provide a function which takes *args and **kwargs to execute the task
    - specify the type of schedule to be used for this task

ScopeFinder
    - queries a (file|db) for the set of parameter combinations to send to the task

Scheduler
    - initialize from schedule data pulled by ScheduleFinder
    - determine if a task and argument combination should run

Registry
    - manage the state of all jobs in progress
"""
| 33.95 | 83 | 0.701031 | 102 | 679 | 4.666667 | 0.558824 | 0.029412 | 0.046218 | 0.054622 | 0.067227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.260677 | 679 | 19 | 84 | 35.736842 | 0.948207 | 0.986745 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a6fc942594c5b632fc0068318de32b4aae0efd14 | 1,577 | py | Python | backend/about/models.py | michaelscales88/charles_site | 2cdf85b51aaa5e81da3bbce7d391cc1475ca6561 | [
"MIT"
] | 1 | 2019-06-10T21:15:05.000Z | 2019-06-10T21:15:05.000Z | backend/about/models.py | michaelscales88/charles_site | 2cdf85b51aaa5e81da3bbce7d391cc1475ca6561 | [
"MIT"
] | null | null | null | backend/about/models.py | michaelscales88/charles_site | 2cdf85b51aaa5e81da3bbce7d391cc1475ca6561 | [
"MIT"
] | null | null | null | import os
import os.path as op
from sqlalchemy.event import listens_for
from sqlalchemy.ext.hybrid import hybrid_property
from flask_admin import form
from ..extensions import db
from .. import server
# Create directory for file fields to use
file_path = op.join(server.config['BASE_DIR'], 'static/img')
try:
    os.mkdir(file_path)
except OSError:
    pass


class AboutImageModel(db.Model):
    __tablename__ = "about_image"
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Unicode(64))
    path = db.Column(db.Unicode(128))
    about_info = db.relationship("AboutPageInfo", back_populates="image")

    def __str__(self):
        return self.name


class AboutPageInfo(db.Model):
    __tablename__ = "about_info"
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String)
    description = db.Column(db.Text)
    image_id = db.Column(db.Integer, db.ForeignKey('about_image.id'))
    image = db.relationship("AboutImageModel", back_populates="about_info")

    @hybrid_property
    def path(self):
        return self.image.path


@listens_for(AboutImageModel, 'after_delete')
def del_image(mapper, connection, target):
    if target.path:
        # Delete image
        try:
            os.remove(op.join(file_path, target.path))
        except OSError:
            pass

        # Delete thumbnail
        try:
            os.remove(
                op.join(
                    file_path,
                    form.thumbgen_filename(target.path)
                )
            )
        except OSError:
            pass
| 23.893939 | 75 | 0.642993 | 196 | 1,577 | 4.994898 | 0.392857 | 0.057201 | 0.071502 | 0.064351 | 0.203269 | 0.14811 | 0.118488 | 0.067416 | 0 | 0 | 0 | 0.004263 | 0.256183 | 1,577 | 65 | 76 | 24.261538 | 0.83035 | 0.043754 | 0 | 0.23913 | 0 | 0 | 0.071809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0.065217 | 0.152174 | 0.043478 | 0.543478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
a6ff16f23d2766253d7c8a66580dfc08907180f6 | 3,881 | py | Python | agora/retry/strategy.py | perkexchange/kin-python | 632186125b9c12db1bfb4bbec0b4c6da19413bb5 | [
"MIT"
] | 11 | 2020-09-02T00:41:29.000Z | 2022-03-22T20:01:32.000Z | agora/retry/strategy.py | perkexchange/kin-python | 632186125b9c12db1bfb4bbec0b4c6da19413bb5 | [
"MIT"
] | 12 | 2020-08-19T15:16:35.000Z | 2021-09-29T03:40:22.000Z | agora/retry/strategy.py | perkexchange/kin-python | 632186125b9c12db1bfb4bbec0b4c6da19413bb5 | [
"MIT"
] | 5 | 2020-09-02T00:43:31.000Z | 2022-03-31T15:48:51.000Z | import random
import time
from agora.retry.backoff import Backoff
class Strategy:
    """Determines whether or not an action should be retried. Strategies are allowed to delay or cause other side
    effects.
    """

    def should_retry(self, attempts: int, e: Exception) -> bool:
        """Returns whether or not to retry, based on this strategy.

        :param attempts: The number of attempts that have occurred. Starts at 1, since the action is evaluated first.
        :param e: The :class:`Exception <Exception>` that was raised.
        :return: A bool indicating whether the action should be retried, based on this strategy.
        """
        raise NotImplementedError('Strategy is an abstract class. Strategy must implement should_retry().')


class LimitStrategy(Strategy):
    """A strategy that limits the total number of retries.

    :param max_attempts: The max number of attempts. Should be greater than 1, since the action is evaluated first.
    """

    def __init__(self, max_attempts):
        self.max_attempts = max_attempts

    def should_retry(self, attempts: int, e: Exception) -> bool:
        return attempts < self.max_attempts


class RetriableErrorsStrategy(Strategy):
    """A strategy that specifies which errors can be retried.

    :param retriable_errors: A list of :class:`Exception <Exception>` classes that can be retried.
    """

    def __init__(self, retriable_errors):
        self.retriable_errors = retriable_errors

    def should_retry(self, attempts: int, e: Exception) -> bool:
        for error in self.retriable_errors:
            if isinstance(e, error):
                return True
        return False


class NonRetriableErrorsStrategy(Strategy):
    """A strategy that specifies which errors should not be retried.

    :param non_retriable_errors: A list of :class:`Exception <Exception>` classes that shouldn't be retried.
    """

    def __init__(self, non_retriable_errors):
        self.non_retriable_errors = non_retriable_errors

    def should_retry(self, attempts: int, e: Exception) -> bool:
        for error in self.non_retriable_errors:
            if isinstance(e, error):
                return False
        return True


class BackoffStrategy(Strategy):
    """A strategy that will delay the next retry, provided the action raised an error.

    :param backoff: The :class:`Backoff <agora.retry.backoff.Backoff>` to use to determine the amount of time to delay.
    :param max_backoff: The maximum backoff, in seconds.
    """

    def __init__(self, backoff: Backoff, max_backoff: float):
        self.backoff = backoff
        self.max_backoff = max_backoff

    def should_retry(self, attempts: int, e: Exception) -> bool:
        delay = min(self.max_backoff, self.backoff.get_backoff(attempts))
        time.sleep(delay)
        return True


class BackoffWithJitterStrategy(Strategy):
    """A strategy that will delay the next retry, with jitter induced on the delay provided by `backoff`.

    The jitter parameter is a percentage of the total delay (after capping) that the timing can be off by. For example,
    a capped delay of 0.1s with a jitter of 0.1 will result in a delay of 0.1s +/- 0.01s.

    :param backoff: The :class:`Backoff <agora.retry.backoff.Backoff>` to use to determine the amount of time to delay.
    :param max_backoff: The maximum backoff, in seconds.
    :param jitter: A percentage of the total delay that timing can be off by.
    """

    def __init__(self, backoff: Backoff, max_backoff: float, jitter: float):
        self.backoff = backoff
        self.max_backoff = max_backoff
        self.jitter = jitter

    def should_retry(self, attempts: int, e: Exception) -> bool:
        delay = min(self.max_backoff, self.backoff.get_backoff(attempts))
        time.sleep(delay * (1 + random.random() * self.jitter * 2 - self.jitter))
        return True
| 36.271028 | 119 | 0.691832 | 524 | 3,881 | 5.01145 | 0.232824 | 0.057121 | 0.031988 | 0.041127 | 0.539604 | 0.512186 | 0.492384 | 0.407845 | 0.379284 | 0.278751 | 0 | 0.004326 | 0.225715 | 3,881 | 106 | 120 | 36.613208 | 0.869551 | 0.441896 | 0 | 0.454545 | 0 | 0 | 0.034619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.068182 | 0.022727 | 0.613636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4704da2290b4682894633752abafe742a2ca2810 | 3,886 | py | Python | fvp-core.py | duck-master/FVP | 3a0875a2dc5bf5279837e597ac62778163580701 | [
"MIT"
] | null | null | null | fvp-core.py | duck-master/FVP | 3a0875a2dc5bf5279837e597ac62778163580701 | [
"MIT"
] | null | null | null | fvp-core.py | duck-master/FVP | 3a0875a2dc5bf5279837e597ac62778163580701 | [
"MIT"
] | null | null | null | #code written by duck_master since 2 january 2021
#licensed under MIT license
#based on Final Version Perfected, invented by willbradshaw on 27 november 2020
#as described in https://www.lesswrong.com/posts/xfcKYznQ6B9yuxB28/final-version-perfected-an-underused-execution-algorithm
#TODO: make into PyPI package or shell script
#libraries
import random #for randomizing reminder list
#functions for reading/normalizing input
#TODO: implement already having stars
#TODO: implement try-catch in case of bad filepath
def read_reminders_from_file(reminders_pathname):
    '''Reads in a list of reminders from a file.
    Notes: * indicates important reminders; filtered by no. of stars.
    ! indicates completed reminders; these are ignored.
    str -> [str]'''
    with open(reminders_pathname, mode='r') as f:
        rlines = f.readlines()  # read from reminders_pathname and normalize
    rlines = [rr.strip('\n') for rr in rlines]
    return rlines
def read_reminders_from_console():
    '''Reads in a list of reminders from text input.
    (To finish the list, the user should type nothing and enter.)
    None, str input -> [str]'''
    print('Enter your reminders here. (Type nothing and enter to stop.)')
    reminders = [] #type in reminders
    while True:
        newreminder = input('What is one of your reminders? ')
        if newreminder != '':
            reminders.append(newreminder)
        else:
            break
    return reminders
#function to add stars to reminder file lines
def add_stars_to_reminderlines(reminderlines, items_to_star):
    '''Adds stars to the marked items given by the indices in the second column.
    [str], [(str, int)] -> [str]
    '''
    result = reminderlines
    for (item, index) in items_to_star:
        result[index] = '*' + result[index]
    return result
#function to normalize reminders
def normalize_reminderlines(reminderlines):
    '''Normalizes reminderlines.
    Deletes lines starting with -.
    Filters lines not starting with *, if any exists, and eliminates the beginning *.
    [str] -> [str].'''
    result = reminderlines
    result = list(filter((lambda s: s[0] != '-'), result)) #deletes lines starting with -
    imp = list(filter((lambda s: s[0] == '*'), result)) #selects lines starting with *
    if len(imp) > 0:
        result = list(map((lambda s: s[1:]), imp))
    return result
#main code
#read in reminders
if input('Do you want to read in your reminders from a file? ') in ['yes', 'Yes']:
    reminders_pathname = input('What is the file path? ')
    reminders = read_reminders_from_file(reminders_pathname)
else:
    reminders_pathname = '' #no file to write back to later
    reminders = read_reminders_from_console()
#normalize reminders
reminders = normalize_reminderlines(reminders)
#running FVP on idea list
print('\nNow, let\'s sort your reminders!')
#keep track of indices (to make file-writing easy)
reminders_shuffled = [(reminders[index], index) for index in range(len(reminders))]
random.shuffle(reminders_shuffled) #to remove bias from idea order; TODO: allow option to keep original order
bestideas = [reminders_shuffled[0]]
for i in range(1, len(reminders_shuffled)):
    response = input(f'Would you rather do "{reminders_shuffled[i][0]}" than "{bestideas[-1][0]}"? ')
    if response in ['yes', 'Yes']:
        bestideas.append(reminders_shuffled[i])
#output
print(f'\nYou should do {bestideas[-1][0]}.')
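The comparison loop above is the whole FVP pass; the same logic as a reusable function (a sketch, with names of my own choosing):

```python
def fvp_chain(items, prefer):
    # Start the chain with the first item; append each later item that is
    # preferred over the current end of the chain. Do the last item first.
    chain = [items[0]]
    for item in items[1:]:
        if prefer(item, chain[-1]):
            chain.append(item)
    return chain

fvp_chain([3, 1, 4, 1, 5], lambda a, b: a > b)  # -> [3, 4, 5]
```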
#for file writes
if reminders_pathname != '':
    if input('Write to file? ') in ['yes', 'Yes']:
        with open(reminders_pathname, 'r') as f:
            reminderfile_contents = f.readlines()
        with open(reminders_pathname, 'w') as f:
            f.writelines(add_stars_to_reminderlines(reminderfile_contents, bestideas))
if input('\nQuit? ') in ['yes', 'Yes']:
    exit()
# --- examples/generate_battery.py (poldracklab/expfactory-python, MIT license) ---
#!/usr/bin/python
from expfactory.battery import generate
# Location of battery repo, and experiment repo
# In future, these will not be required
battery_repo = "/home/vanessa/Documents/Dropbox/Code/psiturk/psiturk-battery"
experiment_repo = "/home/vanessa/Documents/Dropbox/Code/psiturk/psiturk-experiments"
battery_dest = "/home/vanessa/Desktop/battery"
### This is the command line way to generate a battery
# config parameters are specified via dictionary
# Not specifying experiments will include all valid
generate(battery_repo,battery_dest,experiment_repo)
# --- job/route.py (AnsGoo/cronJob, MIT license) ---
from fastapi import APIRouter
from .api.v1.job import router as job_router
from .api.v1.record import router as record_router
router = APIRouter()
router.include_router(job_router)
router.include_router(record_router)

# --- parse_api/models.py (damiso15/parser_api, MIT license) ---
# from __future__ import unicode_literals
# from django.db import models
#
#
# # Create your models here.
#
#
# class LinkUpload(models.Model):
#     link = models.CharField(max_length=2083)
#     upload_at = models.DateTimeField(auto_now_add=True)
#
#     def __str__(self):
#         return self.link
# --- src/glglue/ctypesmath/__init__.py (ousttrue/glglue, MIT license) ---
from typing import NamedTuple
from .float3 import Float3
from .quaternion import Quaternion
from .mat4 import Mat4, Float4
from .camera import Camera, FrameState
from .aabb import AABB
from .hittest import Ray
class TRS(NamedTuple):
    translation: Float3
    rotation: Quaternion
    scale: Float3

    def to_matrix(self) -> Mat4:
        s = Mat4.new_scale(self.scale.x, self.scale.y, self.scale.z)
        r = Mat4.new_from_quaternion(
            self.rotation.x, self.rotation.y, self.rotation.z, self.rotation.w)
        t = Mat4.new_translation(
            self.translation.x, self.translation.y, self.translation.z)
        return s * r * t
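The scale-then-rotate-then-translate order of `to_matrix` can be checked with plain NumPy matrices. This is a sketch independent of the `Mat4` API above, assuming a row-vector convention (`p' = p @ M`):

```python
import numpy as np

def new_scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def new_translation(x, y, z):
    m = np.eye(4)
    m[3, :3] = [x, y, z]  # translation in the last row for row vectors
    return m

# scale first, then rotate (identity here), then translate: matching s * r * t
m = new_scale(2.0, 2.0, 2.0) @ np.eye(4) @ new_translation(1.0, 0.0, 0.0)
p = np.array([1.0, 0.0, 0.0, 1.0]) @ m
# p[:3] -> [3., 0., 0.]  (scaled by 2, then shifted by 1)
```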
# --- linear_binning/test/test_linear_binning.py (jhetherly/linear_binning, MIT license) ---
from linear_binning import linear_binning
import numpy as np
import logging
from timeit import default_timer as timer
logging.basicConfig(level=logging.INFO)
def generate_data(n_samples=100000, D=2):
    sample_coords = np.random.random(size=(n_samples, D))
    sample_weights = np.random.random(size=n_samples)
    # NOTE: purposely limiting the range to test over- and underflow bins
    extents = np.tile([0.02, 0.8999], D).reshape((D, 2))
    sizes = np.full(D, 51)
    return sample_coords, sample_weights, extents, sizes


def test_sum_of_weights():
    # tests that the sum of weights in the binned grid is preserved
    sample_coords, sample_weights, extents, sizes = generate_data(1000000)
    start = timer()
    coords, weights = linear_binning(sample_coords, sample_weights,
                                     extents, sizes)
    end = timer()
    logging.info('\n')
    logging.info('One million 2D points binned with linear_binning in {}s'.format(end - start))
    assert np.allclose(weights.sum(), sample_weights.sum())

    x = np.ascontiguousarray(sample_coords[:, 0])
    y = np.ascontiguousarray(sample_coords[:, 1])
    start = timer()
    np.histogram2d(x, y,
                   weights=sample_weights,
                   bins=sizes, range=extents)
    end = timer()
    logging.info('For comparison, np.histogram2d finished in {}s'.format(end - start))

    # tests specific values on the grid
    sample_coords = np.array([[0.2, 0.9], [0.5, 1.1], [-0.1, 0.7]])
    sample_weights = np.array([25, 50, 25])
    extents = np.array([[0.0, 1.0], [0.0, 1.0]])
    sizes = np.array([11, 11])
    coords, weights = linear_binning(sample_coords, sample_weights,
                                     extents, sizes)
    pass_value_test = True
    value_tests = 0
    for i in range(coords.shape[0]):
        if np.allclose(coords[i, 0], 0.0) and np.allclose(coords[i, 1], 0.7):
            pass_value_test &= np.allclose(weights[i], 25.0)
            value_tests += 1
        elif np.allclose(coords[i, 0], 0.2) and np.allclose(coords[i, 1], 0.9):
            pass_value_test &= np.allclose(weights[i], 25.0)
            value_tests += 1
        elif np.allclose(coords[i, 0], 0.5) and np.allclose(coords[i, 1], 1.0):
            pass_value_test &= np.allclose(weights[i], 50.0)
            value_tests += 1
        else:
            pass_value_test &= np.allclose(weights[i], 0.0)
    assert pass_value_test and value_tests == 3
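The two properties this test checks, conservation of the total weight and linear spreading onto the two nearest grid points, show up clearly in a tiny 1D re-implementation (my own sketch, not the package's code):

```python
import numpy as np

def linear_binning_1d(coords, weights, lo, hi, n_bins):
    grid = np.linspace(lo, hi, n_bins)
    out = np.zeros(n_bins)
    step = (hi - lo) / (n_bins - 1)
    for x, w in zip(coords, weights):
        # clamp out-of-range points onto the edge bins (over/underflow)
        pos = (min(max(x, lo), hi) - lo) / step
        i = int(pos)
        frac = pos - i
        out[i] += w * (1 - frac)       # share for the left grid point
        if i + 1 < n_bins:
            out[i + 1] += w * frac     # share for the right grid point
    return grid, out

grid, out = linear_binning_1d([0.25, 1.2], [10.0, 4.0], 0.0, 1.0, 3)
# out -> [5., 5., 4.]: 0.25 splits evenly between the grid points 0.0 and 0.5,
# the overflowing 1.2 is clamped onto the last grid point, and sum(out) == 14.
```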
# --- xambda/data_layer/base.py (xyla-io/lambda_io, MIT license) ---
import pg8000
from typing import Any, Dict, Iterable, Union


def get_connection(database: str, host: str, port: int, user: str, password: str,
                   ssl: Union[Dict[str, Any], bool] = True) -> Any:
    connection = pg8000.connect(database=database, host=host, port=port, user=user,
                                password=password, ssl=ssl)
    return connection


def run_query(connection: Any, query: str, parameters: Iterable[str] = ()) -> Any:
    cursor = connection.cursor()
    cursor.execute(query, tuple(parameters))
    return cursor
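Both helpers follow DB-API 2.0, so the same shape works against any conforming driver. A runnable sketch using the stdlib `sqlite3` (note sqlite3 uses `?` placeholders where pg8000 uses `%s`):

```python
import sqlite3

def run_query(connection, query, parameters=()):
    cursor = connection.cursor()
    cursor.execute(query, tuple(parameters))
    return cursor

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER)')
conn.executemany('INSERT INTO t VALUES (?)', [(1,), (2,)])
rows = run_query(conn, 'SELECT x FROM t WHERE x > ?', [1]).fetchall()
# rows -> [(2,)]
```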
# --- pmca/usb/driver/__init__.py (kubawolanin/Sony-PMCA-RE, MIT license) ---
import abc
from collections import namedtuple

from ...util import *

USB_CLASS_PTP = 6
USB_CLASS_MSC = 8
USB_CLASS_VENDOR_SPECIFIC = 255

UsbDeviceHandle = namedtuple('UsbDeviceHandle', 'handle, idVendor, idProduct')

MSC_SENSE_OK = (0, 0, 0)
MSC_SENSE_ERROR_UNKNOWN = (0x2, 0xff, 0xff)


def parseMscSense(buffer):
    return parse8(buffer[2:3]) & 0xf, parse8(buffer[12:13]), parse8(buffer[13:14])


class BaseUsbDriver(object):
    def reset(self):
        pass


class BaseMscDriver(BaseUsbDriver, abc.ABC):
    @abc.abstractmethod
    def sendCommand(self, command):
        pass

    @abc.abstractmethod
    def sendWriteCommand(self, command, data):
        pass

    @abc.abstractmethod
    def sendReadCommand(self, command, size):
        pass


class BaseMtpDriver(BaseUsbDriver, abc.ABC):
    @abc.abstractmethod
    def sendCommand(self, code, args):
        pass

    @abc.abstractmethod
    def sendWriteCommand(self, code, args, data):
        pass

    @abc.abstractmethod
    def sendReadCommand(self, code, args):
        pass


class BaseUsbContext(abc.ABC):
    def __init__(self, name, classType):
        self.name = name
        self.classType = classType

    def __enter__(self):
        return self

    def __exit__(self, *ex):
        pass

    @abc.abstractmethod
    def listDevices(self, vendor):
        pass

    @abc.abstractmethod
    def openDevice(self, device):
        pass
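`parseMscSense` slices a fixed-format SCSI sense buffer: the sense key is the low nibble of byte 2, with ASC and ASCQ at bytes 12 and 13. A standalone sketch with a stand-in `parse8` (the real one lives in `pmca.util` and is assumed here to read one unsigned byte):

```python
def parse8(b):
    return b[0]  # stand-in: read one unsigned byte

def parse_msc_sense(buffer):
    # sense key (low nibble of byte 2), ASC (byte 12), ASCQ (byte 13)
    return parse8(buffer[2:3]) & 0xf, parse8(buffer[12:13]), parse8(buffer[13:14])

sense = bytes([0x70, 0, 0x05, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0x24, 0x00])
parse_msc_sense(sense)  # -> (5, 36, 0): ILLEGAL REQUEST / INVALID FIELD IN CDB
```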
# --- cointracker/crawler.py (dakyskye/cointracker, MIT license) ---
from typing import TypedDict
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.remote.webelement import WebElement


def _normalise(elem: WebElement):
    return float(elem.text.replace(",", "").removeprefix("$"))


class CrawlerResult(TypedDict):
    price_current: float
    price_24h_low: float
    price_24h_high: float


class Crawler:
    _URL = ""
    _CURRENT_XPATH = ""
    _24H_LOW_XPATH = ""
    _24H_HIGH_XPATH = ""

    def __init__(self, driver: WebDriver):
        self._driver = driver

    def finish(self):
        self._driver.close()

    def _get_current(self):
        return _normalise(self._driver.find_element(by=By.XPATH, value=self._CURRENT_XPATH))

    def _get_24h_low(self):
        return _normalise(self._driver.find_element(by=By.XPATH, value=self._24H_LOW_XPATH))

    def _get_24h_high(self):
        return _normalise(self._driver.find_element(by=By.XPATH, value=self._24H_HIGH_XPATH))

    def get_all(self) -> CrawlerResult:
        self._driver.get(self._URL)
        return {
            'price_current': self._get_current(),
            'price_24h_low': self._get_24h_low(),
            'price_24h_high': self._get_24h_high()
        }


class BitcoinCrawler(Crawler):
    _URL = "https://www.coindesk.com/price/bitcoin/"
    _CURRENT_XPATH = "/html/body/div[1]/div/div[2]/div[2]/div[1]/div[1]/div/div[1]/div/div[2]/div[1]/span[2]"
    _24H_LOW_XPATH = "/html/body/div[1]/div/div[2]/div[2]/div[1]/div[1]/div/div[1]/div/div[2]/div[2]/div[2]/div[2]/span"
    _24H_HIGH_XPATH = "/html/body/div[1]/div/div[2]/div[2]/div[1]/div[1]/div/div[1]/div/div[2]/div[2]/div[3]/div[2]/span"
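`_normalise` only manipulates the scraped text, so its cleanup is easy to check without Selenium (Python 3.9+ for `str.removeprefix`):

```python
def normalise_price(text):
    # same cleanup as Crawler._normalise: drop "$" and thousands separators
    return float(text.replace(",", "").removeprefix("$"))

normalise_price("$43,210.57")  # -> 43210.57
```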
# --- python/decorator/demo.py (csparpa/gof-design-patterns, Unlicense) ---
from component import Component
from decorator import Decorator

if __name__ == "__main__":
    print("***Demo: pattern Decorator")
    print("Creating: a component with name=Bob, age=30")
    component = Component("Bob", 30)
    print("Decorating: with new state and behaviour regarding " +
          "job. Will be job=teacher")
    decorator = Decorator(component, "teacher")
    print("Testing: behaviours of decorated component")
    print("name=" + decorator.get_name())
    print("age=" + str(decorator.get_age()))
    print("job=" + decorator.get_job())

    print("***Demo: pattern Decorator via duck-typed, dynamically added methods")
    print("Defining: new behaviour as a function")

    def greet(self):
        print("Hi I am %s, I am %s and I am a %s" %
              (self.get_name(), str(self.get_age()), self.get_job()))

    print(greet)
    print("Binding: the new behaviour to the Decorator class")
    setattr(Decorator, 'greet', greet)
    print("Invoking: the new behaviour")
    decorator.greet()
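The second half of the demo leans on the fact that a plain function assigned to a class becomes a bound method on its instances. A minimal standalone illustration:

```python
class Greeter:
    def __init__(self, name):
        self.name = name

def shout(self):
    return self.name.upper() + "!"

setattr(Greeter, "shout", shout)  # functions set on a class bind as methods
Greeter("bob").shout()  # -> 'BOB!'
```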
# --- setup.py (benosteen/RecordSilo, MIT license) ---
from ez_setup import use_setuptools
use_setuptools()
from setuptools import setup, find_packages

setup(name="RecordSilo",
      version="0.4.16",
      description="An adaptation of a pairtree store, each object with simple JSON keyvalue manifest and crude versioning.",
      long_description="""An adaptation of a pairtree store, each object with simple JSON keyvalue manifest and crude versioning.
Designed to be used as a repository of harvested records from OAI-PMH based services and the like.
As of version 0.3, it now includes an RDF-enhanced version of the Silo - RDFSilo.""",
      author="Ben O'Steen, Anusha Ranganathan",
      author_email="bosteen@gmail.com / anusha3@gmail.com",
      packages=find_packages(exclude='tests'),
      #install_requires=['pairtree>0.5.4', 'rdfobject>=0.4', 'simplejson', 'datetime'],
      install_requires=['pairtree>0.5.4', 'simplejson'],
      )
# --- environment/custom/knapsack/tests/runner.py (AndreMaz/transformer-pointer-critic, MIT license) ---
import unittest
import backpack_test
import item_test
import env_test
# initialize the test suite
loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromModule(backpack_test))
suite.addTests(loader.loadTestsFromModule(item_test))
suite.addTests(loader.loadTestsFromModule(env_test))
runner = unittest.TextTestRunner(verbosity=3)
result = runner.run(suite)

# --- dr-octo/dr-octo/apis/playground/testEagleEyeVulnPush.py (cihatyildiz/vm-scripts, Apache-2.0 license) ---
from flask import Flask, request, jsonify, Blueprint, abort
from flask_api import status
import requests
import json
import os
from requests.auth import HTTPBasicAuth

from libs.eagle_eye.vulns import EagleEyeVulns

test_ee_push = Blueprint('test_ee_push', __name__)


@test_ee_push.route('/test_ee_push', methods=['GET'])
def testEagleEyeVulnPush():
    eagle_eye_vulns = EagleEyeVulns()
    response = eagle_eye_vulns.pushVuln(
        'title', 'criticality', 'status', 'app', 'details'
    )
    return response.json()
# --- mandelbrot-lambda.py (nicolargo/mandelbrot, MIT license) ---
#!/usr/bin/env python
try:
    from functools import reduce
except ImportError:
    pass


def mandelbrot(a):
    return reduce(lambda z, _: z*z + a, range(50), 0)


def step(start, step, iterations):
    return (start + (i * step) for i in range(iterations))


rows = (('*' if abs(mandelbrot(complex(x, y))) < 2 else ' '
         for x in step(-2.0, .0315, 80))
        for y in step(1, -.05, 41))
print('\n'.join(''.join(row) for row in rows))
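The one-liner `mandelbrot` is the escape-time iteration z <- z*z + a starting from z = 0; a point is drawn with '*' when |z| stays below 2 after 50 steps. The same membership test in isolation:

```python
from functools import reduce

def mandelbrot_value(a, iterations=50):
    # iterate z <- z*z + a from z = 0; bounded orbits belong to the set
    return reduce(lambda z, _: z * z + a, range(iterations), 0)

abs(mandelbrot_value(complex(-1, 0))) < 2  # True: -1 cycles 0 -> -1 -> 0 ...
abs(mandelbrot_value(complex(1, 0))) < 2   # False: the orbit of 1 blows up
```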
# --- cscmiko/tools/snmp.py (Ali-aqrabawi/cscmiko, MIT license) ---
# """
# SNMP tools
# """
#
# from easysnmp import Session
# from collections import defaultdict
#
#
# class SNMPManager:
# """
# snmp manager: takes a hostname and community string, returns a dict of results.
# example:
# snmp_client = SNMPManager('192.168.1.1', 'private')
# interfaces = snmp_client.get_fields(name='ifName', mtu='mtu', admin_status='ifAdminStatus')
# interfaces will equal: {'oid_index': '11', 'name': 'Ethernet1/0', 'mtu': '1500', 'admin_status': 'up'}
# """
#
# def __init__(self, hostname, community):
# self.snmp = Session(hostname=hostname, community=community, version=2, use_sprint_value=True)
#
# def get_fields(self, **kwargs):
# """
# method takes kwargs of field_name and oid name (name='ifName',mtu='mtu',admin_status='ifAdminStatus')
# :param kwargs: field_name and it's oid name
# :return: dict: {'oid_index':'11', 'name':'Ethernet1/0', 'mtu':'1500', 'admin_status':'up'}
# """
#
# inner_res = defaultdict(dict)
# for key in kwargs:
# res = self.snmp.walk(oids=kwargs[key])
# for item in res:
#
# if 'ipAdEntIfIndex' in item.oid:
# print(item)
# inner_res[item.value].update({key: item.oid_index})
#
# else:
# inner_res[item.oid_index].update({key: item.value})
#
# return dict(inner_res)
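The core of the commented-out `get_fields` is regrouping per-field walk results by OID index. That reshaping works without any SNMP library; the walk data below is hypothetical, mimicking what easysnmp's `Session.walk` would yield:

```python
from collections import defaultdict

# hypothetical (oid_index, value) pairs per requested field
walk_results = {
    'name': [('11', 'Ethernet1/0'), ('12', 'Ethernet1/1')],
    'mtu': [('11', '1500'), ('12', '9000')],
}

inner_res = defaultdict(dict)
for key, rows in walk_results.items():
    for oid_index, value in rows:
        inner_res[oid_index].update({key: value})

dict(inner_res)
# -> {'11': {'name': 'Ethernet1/0', 'mtu': '1500'},
#     '12': {'name': 'Ethernet1/1', 'mtu': '9000'}}
```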
# --- tests/read_file_blocks.py (jenssss/runana, Apache-2.0 license) ---
from runana.read_numbers import read_file_sev_blocks_new
# import sys
# sys.setrecursionlimit(50)
from pprint import pprint
import numpy as np

blocks = read_file_sev_blocks_new('test3.dat')
print('')
pprint(blocks)

nparray = np.asarray(blocks)
print(nparray)
print(nparray.shape)
# --- Unit 5/Assignment/Tester.py (KevinBoxuGao/ICS3UI, MIT license) ---
from FactoringToolbox import *
#input of trinomial should be in the form ax^2+bx+c
#use function factorQuadratic() on the trinomial string
Cases = [
    "x^2+18x+32",
    "x^2+17x+32",
    "x^2-16x+63",
    "x^2+5x-24",
    "x^2-5x-24",
    "x^2-9",
    "x^2-10",
    "x^2+9",
    "2x^2+11x+5",
    "12x^2-7x-10",
    "87x^2-29x+143",
    "9x^2-100",
    "9x^2+1",
    "3x^2+12x+6",
    "2x^2+10x+8",
    "5x^2-500",
    "x^2+7x",
    "-10x^2+5x",
]
for i in Cases:
    print(str(i), "=", factorQuadratic(i))
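`FactoringToolbox` itself is not shown here. For the monic cases above, factoring x^2+bx+c over the integers amounts to finding p and q with p*q = c and p+q = b; a hypothetical stand-in (not the real `factorQuadratic` API):

```python
def factor_monic_trinomial(b, c):
    # search integer pairs p, q with p*q == c and p + q == b
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0 and p + c // p == b:
            return p, c // p
    return None  # irreducible over the integers

factor_monic_trinomial(18, 32)   # -> (2, 16): x^2+18x+32 = (x+2)(x+16)
factor_monic_trinomial(-16, 63)  # -> (-9, -7): x^2-16x+63 = (x-9)(x-7)
```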
# --- payments/migrations/0002_auto_20210519_2205.py (KumarSantosh22/TheShoppingCArt, FTL license) ---
# Generated by Django 3.1.7 on 2021-05-19 16:35
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):

    dependencies = [
        ('payments', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='transaction',
            name='pay_date',
            field=models.DateTimeField(default=django.utils.timezone.now),
        ),
        migrations.AlterField(
            model_name='transaction',
            name='sign',
            field=models.TextField(max_length=100),
        ),
    ]
| 23.72 | 74 | 0.600337 | 60 | 593 | 5.85 | 0.683333 | 0.062678 | 0.108262 | 0.165242 | 0.250712 | 0.250712 | 0 | 0 | 0 | 0 | 0 | 0.052009 | 0.286678 | 593 | 24 | 75 | 24.708333 | 0.777778 | 0.075885 | 0 | 0.333333 | 1 | 0 | 0.098901 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
755a9041d7a10c6711fd28f672de7623985d4910 | 288 | py | Python | subscription/admin.py | Amechi101/indieapp | 606c1346f65c343eb2cc8f7fba9d555b8c30a7fa | [
"MIT"
] | null | null | null | subscription/admin.py | Amechi101/indieapp | 606c1346f65c343eb2cc8f7fba9d555b8c30a7fa | [
"MIT"
] | null | null | null | subscription/admin.py | Amechi101/indieapp | 606c1346f65c343eb2cc8f7fba9d555b8c30a7fa | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.contrib import admin
from subscription.models import Subscription
class SubscriptionAdmin(admin.ModelAdmin):
list_display = ["brand", "user"]
search_fields = ["user"]
admin.site.register(Subscription, SubscriptionAdmin)
| 18 | 52 | 0.78125 | 31 | 288 | 7.032258 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135417 | 288 | 15 | 53 | 19.2 | 0.875502 | 0 | 0 | 0 | 0 | 0 | 0.045296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
f32f59ff59d3b4f044ccbc23691d651040bda225 | 166 | py | Python | x_rebirth_station_calculator/station_data/wares/plasma_pumps.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | 1 | 2016-04-17T11:00:22.000Z | 2016-04-17T11:00:22.000Z | x_rebirth_station_calculator/station_data/wares/plasma_pumps.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | null | null | null | x_rebirth_station_calculator/station_data/wares/plasma_pumps.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | null | null | null | from x_rebirth_station_calculator.station_data.station_base import Ware
names = {'L044': 'Plasma Pumps',
'L049': 'Plasmapumpen'}
PlasmaPumps = Ware(names)
| 23.714286 | 71 | 0.73494 | 20 | 166 | 5.85 | 0.8 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042553 | 0.150602 | 166 | 6 | 72 | 27.666667 | 0.787234 | 0 | 0 | 0 | 0 | 0 | 0.192771 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3312fd4fc7277528ce81081ce0b15821a0a0e9d | 15,494 | py | Python | venv/lib/python3.8/site-packages/spaceone/api/statistics/v1/history_pb2.py | choonho/plugin-prometheus-mon-webhook | afa7d65d12715fd0480fb4f92a9c62da2d6128e0 | [
"Apache-2.0"
] | null | null | null | venv/lib/python3.8/site-packages/spaceone/api/statistics/v1/history_pb2.py | choonho/plugin-prometheus-mon-webhook | afa7d65d12715fd0480fb4f92a9c62da2d6128e0 | [
"Apache-2.0"
] | null | null | null | venv/lib/python3.8/site-packages/spaceone/api/statistics/v1/history_pb2.py | choonho/plugin-prometheus-mon-webhook | afa7d65d12715fd0480fb4f92a9c62da2d6128e0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: spaceone/api/statistics/v1/history.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
from spaceone.api.core.v1 import query_pb2 as spaceone_dot_api_dot_core_dot_v1_dot_query__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='spaceone/api/statistics/v1/history.proto',
package='spaceone.api.statistics.v1',
syntax='proto3',
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n(spaceone/api/statistics/v1/history.proto\x12\x1aspaceone.api.statistics.v1\x1a\x1bgoogle/protobuf/empty.proto\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1cgoogle/api/annotations.proto\x1a spaceone/api/core/v1/query.proto\">\n\x14\x43reateHistoryRequest\x12\x13\n\x0bschedule_id\x18\x01 \x01(\t\x12\x11\n\tdomain_id\x18\x02 \x01(\t\"c\n\x13QueryHistoryRequest\x12*\n\x05query\x18\x01 \x01(\x0b\x32\x1b.spaceone.api.core.v1.Query\x12\r\n\x05topic\x18\x02 \x01(\t\x12\x11\n\tdomain_id\x18\x03 \x01(\t\"q\n\x10HistoryValueInfo\x12\r\n\x05topic\x18\x01 \x01(\t\x12\'\n\x06values\x18\x02 \x01(\x0b\x32\x17.google.protobuf.Struct\x12\x11\n\tdomain_id\x18\x03 \x01(\t\x12\x12\n\ncreated_at\x18\x04 \x01(\t\"a\n\x0bHistoryInfo\x12=\n\x07results\x18\x01 \x03(\x0b\x32,.spaceone.api.statistics.v1.HistoryValueInfo\x12\x13\n\x0btotal_count\x18\x02 \x01(\x05\"l\n\x12HistoryStatRequest\x12\x34\n\x05query\x18\x01 \x01(\x0b\x32%.spaceone.api.core.v1.StatisticsQuery\x12\r\n\x05topic\x18\x02 \x01(\t\x12\x11\n\tdomain_id\x18\x03 \x01(\t2\x96\x03\n\x07History\x12r\n\x06\x63reate\x12\x30.spaceone.api.statistics.v1.CreateHistoryRequest\x1a\x16.google.protobuf.Empty\"\x1e\x82\xd3\xe4\x93\x02\x18\"\x16/statistics/v1/history\x12\xa0\x01\n\x04list\x12/.spaceone.api.statistics.v1.QueryHistoryRequest\x1a\'.spaceone.api.statistics.v1.HistoryInfo\">\x82\xd3\xe4\x93\x02\x38\x12\x16/statistics/v1/historyZ\x1e\"\x1c/statistics/v1/history/query\x12t\n\x04stat\x12..spaceone.api.statistics.v1.HistoryStatRequest\x1a\x17.google.protobuf.Struct\"#\x82\xd3\xe4\x93\x02\x1d\"\x1b/statistics/v1/history/statb\x06proto3'
,
dependencies=[google_dot_protobuf_dot_empty__pb2.DESCRIPTOR,google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,google_dot_api_dot_annotations__pb2.DESCRIPTOR,spaceone_dot_api_dot_core_dot_v1_dot_query__pb2.DESCRIPTOR,])
_CREATEHISTORYREQUEST = _descriptor.Descriptor(
name='CreateHistoryRequest',
full_name='spaceone.api.statistics.v1.CreateHistoryRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='schedule_id', full_name='spaceone.api.statistics.v1.CreateHistoryRequest.schedule_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='domain_id', full_name='spaceone.api.statistics.v1.CreateHistoryRequest.domain_id', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=195,
serialized_end=257,
)
_QUERYHISTORYREQUEST = _descriptor.Descriptor(
name='QueryHistoryRequest',
full_name='spaceone.api.statistics.v1.QueryHistoryRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='query', full_name='spaceone.api.statistics.v1.QueryHistoryRequest.query', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='topic', full_name='spaceone.api.statistics.v1.QueryHistoryRequest.topic', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='domain_id', full_name='spaceone.api.statistics.v1.QueryHistoryRequest.domain_id', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=259,
serialized_end=358,
)
_HISTORYVALUEINFO = _descriptor.Descriptor(
name='HistoryValueInfo',
full_name='spaceone.api.statistics.v1.HistoryValueInfo',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='topic', full_name='spaceone.api.statistics.v1.HistoryValueInfo.topic', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='values', full_name='spaceone.api.statistics.v1.HistoryValueInfo.values', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='domain_id', full_name='spaceone.api.statistics.v1.HistoryValueInfo.domain_id', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='created_at', full_name='spaceone.api.statistics.v1.HistoryValueInfo.created_at', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=360,
serialized_end=473,
)
_HISTORYINFO = _descriptor.Descriptor(
name='HistoryInfo',
full_name='spaceone.api.statistics.v1.HistoryInfo',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='results', full_name='spaceone.api.statistics.v1.HistoryInfo.results', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='total_count', full_name='spaceone.api.statistics.v1.HistoryInfo.total_count', index=1,
number=2, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=475,
serialized_end=572,
)
_HISTORYSTATREQUEST = _descriptor.Descriptor(
name='HistoryStatRequest',
full_name='spaceone.api.statistics.v1.HistoryStatRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='query', full_name='spaceone.api.statistics.v1.HistoryStatRequest.query', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='topic', full_name='spaceone.api.statistics.v1.HistoryStatRequest.topic', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='domain_id', full_name='spaceone.api.statistics.v1.HistoryStatRequest.domain_id', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=574,
serialized_end=682,
)
_QUERYHISTORYREQUEST.fields_by_name['query'].message_type = spaceone_dot_api_dot_core_dot_v1_dot_query__pb2._QUERY
_HISTORYVALUEINFO.fields_by_name['values'].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT
_HISTORYINFO.fields_by_name['results'].message_type = _HISTORYVALUEINFO
_HISTORYSTATREQUEST.fields_by_name['query'].message_type = spaceone_dot_api_dot_core_dot_v1_dot_query__pb2._STATISTICSQUERY
DESCRIPTOR.message_types_by_name['CreateHistoryRequest'] = _CREATEHISTORYREQUEST
DESCRIPTOR.message_types_by_name['QueryHistoryRequest'] = _QUERYHISTORYREQUEST
DESCRIPTOR.message_types_by_name['HistoryValueInfo'] = _HISTORYVALUEINFO
DESCRIPTOR.message_types_by_name['HistoryInfo'] = _HISTORYINFO
DESCRIPTOR.message_types_by_name['HistoryStatRequest'] = _HISTORYSTATREQUEST
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
CreateHistoryRequest = _reflection.GeneratedProtocolMessageType('CreateHistoryRequest', (_message.Message,), {
'DESCRIPTOR' : _CREATEHISTORYREQUEST,
'__module__' : 'spaceone.api.statistics.v1.history_pb2'
# @@protoc_insertion_point(class_scope:spaceone.api.statistics.v1.CreateHistoryRequest)
})
_sym_db.RegisterMessage(CreateHistoryRequest)
QueryHistoryRequest = _reflection.GeneratedProtocolMessageType('QueryHistoryRequest', (_message.Message,), {
'DESCRIPTOR' : _QUERYHISTORYREQUEST,
'__module__' : 'spaceone.api.statistics.v1.history_pb2'
# @@protoc_insertion_point(class_scope:spaceone.api.statistics.v1.QueryHistoryRequest)
})
_sym_db.RegisterMessage(QueryHistoryRequest)
HistoryValueInfo = _reflection.GeneratedProtocolMessageType('HistoryValueInfo', (_message.Message,), {
'DESCRIPTOR' : _HISTORYVALUEINFO,
'__module__' : 'spaceone.api.statistics.v1.history_pb2'
# @@protoc_insertion_point(class_scope:spaceone.api.statistics.v1.HistoryValueInfo)
})
_sym_db.RegisterMessage(HistoryValueInfo)
HistoryInfo = _reflection.GeneratedProtocolMessageType('HistoryInfo', (_message.Message,), {
'DESCRIPTOR' : _HISTORYINFO,
'__module__' : 'spaceone.api.statistics.v1.history_pb2'
# @@protoc_insertion_point(class_scope:spaceone.api.statistics.v1.HistoryInfo)
})
_sym_db.RegisterMessage(HistoryInfo)
HistoryStatRequest = _reflection.GeneratedProtocolMessageType('HistoryStatRequest', (_message.Message,), {
'DESCRIPTOR' : _HISTORYSTATREQUEST,
'__module__' : 'spaceone.api.statistics.v1.history_pb2'
# @@protoc_insertion_point(class_scope:spaceone.api.statistics.v1.HistoryStatRequest)
})
_sym_db.RegisterMessage(HistoryStatRequest)
_HISTORY = _descriptor.ServiceDescriptor(
name='History',
full_name='spaceone.api.statistics.v1.History',
file=DESCRIPTOR,
index=0,
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_start=685,
serialized_end=1091,
methods=[
_descriptor.MethodDescriptor(
name='create',
full_name='spaceone.api.statistics.v1.History.create',
index=0,
containing_service=None,
input_type=_CREATEHISTORYREQUEST,
output_type=google_dot_protobuf_dot_empty__pb2._EMPTY,
serialized_options=b'\202\323\344\223\002\030\"\026/statistics/v1/history',
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='list',
full_name='spaceone.api.statistics.v1.History.list',
index=1,
containing_service=None,
input_type=_QUERYHISTORYREQUEST,
output_type=_HISTORYINFO,
serialized_options=b'\202\323\344\223\0028\022\026/statistics/v1/historyZ\036\"\034/statistics/v1/history/query',
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='stat',
full_name='spaceone.api.statistics.v1.History.stat',
index=2,
containing_service=None,
input_type=_HISTORYSTATREQUEST,
output_type=google_dot_protobuf_dot_struct__pb2._STRUCT,
serialized_options=b'\202\323\344\223\002\035\"\033/statistics/v1/history/stat',
create_key=_descriptor._internal_create_key,
),
])
_sym_db.RegisterServiceDescriptor(_HISTORY)
DESCRIPTOR.services_by_name['History'] = _HISTORY
# @@protoc_insertion_point(module_scope)
| 44.395415 | 1,610 | 0.77288 | 1,960 | 15,494 | 5.791837 | 0.108163 | 0.053911 | 0.056818 | 0.085095 | 0.704898 | 0.645877 | 0.610377 | 0.540786 | 0.516297 | 0.50229 | 0 | 0.039657 | 0.104879 | 15,494 | 348 | 1,611 | 44.522989 | 0.778859 | 0.041758 | 0 | 0.624595 | 1 | 0.016181 | 0.21521 | 0.173341 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02589 | 0 | 0.02589 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f349b222cba44d989014122430ce462d3240ba22 | 2,521 | py | Python | authlete/django/handler/configuration_request_handler.py | authlete/authlete-python-django | ccdfecbac5205a7ed7c14186b5ea4552fd390d2c | [
"Apache-2.0"
] | 6 | 2019-08-10T03:07:05.000Z | 2020-11-06T13:59:29.000Z | authlete/django/handler/configuration_request_handler.py | authlete/authlete-python-django | ccdfecbac5205a7ed7c14186b5ea4552fd390d2c | [
"Apache-2.0"
] | null | null | null | authlete/django/handler/configuration_request_handler.py | authlete/authlete-python-django | ccdfecbac5205a7ed7c14186b5ea4552fd390d2c | [
"Apache-2.0"
] | null | null | null | #
# Copyright (C) 2019 Authlete, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific
# language governing permissions and limitations under the
# License.
from authlete.django.handler.base_request_handler import BaseRequestHandler
from authlete.django.web.response_utility import ResponseUtility
class ConfigurationRequestHandler(BaseRequestHandler):
"""Handler for requests to a configuration endpoint.
An OpenID Provider that supports "OpenID Connect Discovery 1.0" provides an
endpoint that returns its configuration information in JSON format. Details
    about the format are described in "3. OpenID Provider Metadata" of
"OpenID Connect Discovery 1.0".
Note that the URI of an OpenID Provider configuration endpoint is defined
in "4.1. OpenID Provider Configuration Request". In short, the URI must be
    "{Issuer-Identifier}/.well-known/openid-configuration".
"{Issuer-Identifier}" is a URL that identifies an OpenID Provider. For
example, "https://example.com". For details about Issuer Identifier, see
the description about the "issuer" metadata defined in "3. OpenID Provider
    Metadata" (OpenID Connect Discovery 1.0) and the "iss" claim in
"2. ID Token" (OpenID Connect Core 1.0).
"""
def __init__(self, api):
"""Constructor
Args:
api (authlete.api.AuthleteApi)
"""
super().__init__(api)
def handle(self, request, pretty=True):
"""Handle a request to a configuration endpoint.
This method calls Authlete's /api/service/configuration API.
Args:
            request (django.http.HttpRequest)
pretty (bool)
Returns:
django.http.HttpResponse
Raises:
authlete.api.AuthleteApiException
"""
# Call Authlete's /api/service/configuration API. The API returns
# JSON that complies with OpenID Connect Discovery 1.0.
jsn = self.api.getServiceConfiguration(pretty)
# 200 OK, application/json;charset=UTF-8
return ResponseUtility.okJson(jsn)
| 34.067568 | 79 | 0.701309 | 323 | 2,521 | 5.439628 | 0.482972 | 0.034149 | 0.025043 | 0.039271 | 0.109277 | 0.039841 | 0 | 0 | 0 | 0 | 0 | 0.013768 | 0.222134 | 2,521 | 73 | 80 | 34.534247 | 0.882203 | 0.736216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f34f15850cac7783f34cd330188e1ec68fa52503 | 352 | py | Python | toontown/chat/TTWhiteList.py | LittleNed/toontown-stride | 1252a8f9a8816c1810106006d09c8bdfe6ad1e57 | [
"Apache-2.0"
] | 3 | 2020-01-02T08:43:36.000Z | 2020-07-05T08:59:02.000Z | toontown/chat/TTWhiteList.py | NoraTT/Historical-Commits-Project-Altis-Source | fe88e6d07edf418f7de6ad5b3d9ecb3d0d285179 | [
"Apache-2.0"
] | null | null | null | toontown/chat/TTWhiteList.py | NoraTT/Historical-Commits-Project-Altis-Source | fe88e6d07edf418f7de6ad5b3d9ecb3d0d285179 | [
"Apache-2.0"
] | 4 | 2019-06-20T23:45:23.000Z | 2020-10-14T20:30:15.000Z | from otp.chat.WhiteList import WhiteList
from toontown.toonbase import TTLocalizer
from toontown.chat import WhiteListData
from direct.directnotify.DirectNotifyGlobal import directNotify
class TTWhiteList(WhiteList):
notify = directNotify.newCategory('TTWhiteList')
def __init__(self):
WhiteList.__init__(self, WhiteListData.WHITELIST)
self.defaultWord = TTLocalizer.ChatGarblerDefault[0] | 29.333333 | 60 | 0.78125 | 36 | 352 | 7.416667 | 0.555556 | 0.089888 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003322 | 0.144886 | 352 | 12 | 60 | 29.333333 | 0.883721 | 0 | 0 | 0 | 0 | 0 | 0.031161 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
f35002601cca7ab085bc4c3ff5789e912d2422dc | 865 | py | Python | ichubOpenApi/client.py | qingfenglaike/ichub-openapi-sdk-python | bbf77d17c0acfb21b1d014aeb2458d32e00a5eba | [
"MIT"
] | null | null | null | ichubOpenApi/client.py | qingfenglaike/ichub-openapi-sdk-python | bbf77d17c0acfb21b1d014aeb2458d32e00a5eba | [
"MIT"
] | null | null | null | ichubOpenApi/client.py | qingfenglaike/ichub-openapi-sdk-python | bbf77d17c0acfb21b1d014aeb2458d32e00a5eba | [
"MIT"
] | null | null | null | from ichubOpenApi import const
from ichubOpenApi.request.requestBase import requestBase
from ichubOpenApi.request.models import Method
import json
class Client:
def __init__(self, host=None, v='1.0.0', app_id='', sign_type='', key='', public_key='', private_key=''):
self.request = requestBase(host=host, v=v, app_id=app_id, sign_type=sign_type, key=key, public_key=public_key,
private_key=private_key)
def uploadsupply(self, currency_id, tax_rate, items):
"""
        Upload supply items.
        :param currency_id: U|R currency code
        :param tax_rate: 0-2 tax rate
        :param items: [] supply items
:return: bool
"""
data = {'api_code': Method.supply, 'tax_rate': tax_rate, 'currency_id': currency_id, 'items': json.dumps(items)}
return self.request.send_request(params=data, filter_param={'items'})
| 39.318182 | 120 | 0.647399 | 116 | 865 | 4.594828 | 0.422414 | 0.075047 | 0.067542 | 0.04878 | 0.082552 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007463 | 0.225434 | 865 | 21 | 121 | 41.190476 | 0.78806 | 0.104046 | 0 | 0 | 0 | 0 | 0.058414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.363636 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
f351c4efb7438a4f02fbf1f21640189777ffca52 | 143 | py | Python | helios/__init__.py | fury-gl/helios | 14e39e0350b4b9666775ba0c4840d2e9887678c2 | [
"MIT"
] | 3 | 2021-10-13T14:38:57.000Z | 2021-10-16T19:40:14.000Z | helios/__init__.py | fury-gl/helios | 14e39e0350b4b9666775ba0c4840d2e9887678c2 | [
"MIT"
] | 14 | 2021-07-04T19:00:57.000Z | 2021-10-16T18:35:45.000Z | helios/__init__.py | fury-gl/helios | 14e39e0350b4b9666775ba0c4840d2e9887678c2 | [
"MIT"
] | 3 | 2021-06-06T14:43:59.000Z | 2021-10-17T19:03:54.000Z | """
API reference
"""
from helios.backends.fury.draw import NetworkDraw
__version__ = '0.1.0'
__release__ = 'beta'
__all__ = ['NetworkDraw']
| 14.3 | 49 | 0.713287 | 17 | 143 | 5.294118 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024194 | 0.132867 | 143 | 9 | 50 | 15.888889 | 0.701613 | 0.090909 | 0 | 0 | 0 | 0 | 0.163934 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f355a8d7645825b9535772c162ad4f39137cf7a3 | 6,899 | py | Python | Validation/EventGenerator/python/BPhysicsValidation_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | Validation/EventGenerator/python/BPhysicsValidation_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | Validation/EventGenerator/python/BPhysicsValidation_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
from DQMServices.Core.DQMEDAnalyzer import DQMEDAnalyzer
JPsiMuMuValidation = DQMEDAnalyzer('BPhysicsValidation',
genparticleCollection = cms.InputTag("genParticles",""),
name = cms.string("JPsiMuMuValidation"),
pname = cms.string("J/#Psi"),
pdgid = cms.int32(443),
massmin = cms.double(3.0),
massmax = cms.double(4.0),
daughters = cms.vstring("muminus","muplus"),
muminus = cms.untracked.PSet(pname = cms.string("#mu^{-}"),
pdgid = cms.int32(13),
massmin = cms.double(0.1),
massmax = cms.double(0.2),
),
muplus = cms.untracked.PSet(pname = cms.string("#mu^{+}"),
pdgid = cms.int32(-13),
massmin = cms.double(0.1),
massmax = cms.double(0.2),
)
)
LambdabPiPiMuMuValidation = DQMEDAnalyzer('BPhysicsValidation',
genparticleCollection = cms.InputTag("genParticles",""),
name = cms.string("LambdabPiPiMuMuValidation"),
pname = cms.string("#Lambda_{b}"),
pdgid = cms.int32(5122),
massmin = cms.double(5.5),
massmax = cms.double(6.0),
daughters = cms.vstring("muminus","muplus","piminus","piplus","pminus","pplus","Lambda","Lambdabar"),
muminus = cms.untracked.PSet(pname = cms.string("#mu^{-}"),
pdgid = cms.int32(13),
massmin = cms.double(0.1),
massmax = cms.double(0.2),
),
muplus = cms.untracked.PSet(pname = cms.string("#mu^{+}"),
pdgid = cms.int32(-13),
massmin = cms.double(0.1),
massmax = cms.double(0.2),
),
piminus = cms.untracked.PSet(pname = cms.string("#pi^{-}"),
pdgid = cms.int32(-211),
massmin = cms.double(0.1),
massmax = cms.double(0.2),
),
piplus = cms.untracked.PSet(pname = cms.string("#pi^{+}"),
pdgid = cms.int32(211),
massmin = cms.double(0.1),
massmax = cms.double(0.2),
),
pminus = cms.untracked.PSet(pname = cms.string("p^{-}"),
pdgid = cms.int32(-2212),
massmin = cms.double(0.9),
massmax = cms.double(1.1),
),
pplus = cms.untracked.PSet(pname = cms.string("p^{+}"),
pdgid = cms.int32(2212),
massmin = cms.double(0.9),
massmax = cms.double(1.1),
),
Lambda = cms.untracked.PSet(pname = cms.string("#Lambda"),
pdgid = cms.int32(3122),
massmin = cms.double(1.0),
massmax = cms.double(1.2),
),
Lambdabar = cms.untracked.PSet(pname = cms.string("#bar{#Lambda}"),
pdgid = cms.int32(-3122),
massmin = cms.double(1.0),
massmax = cms.double(1.2),
)
)
PsiSpectrum = DQMEDAnalyzer('BPhysicsSpectrum',
genparticleCollection = cms.InputTag("genParticles",""),
name = cms.string("JPsiSpectrum"),
pdgids = cms.vint32(443,100443,30443,9000443,9010443,9020443),
massmin = cms.double(3.0),
massmax = cms.double(4.5)
)
LambdaSpectrum = DQMEDAnalyzer('BPhysicsSpectrum',
genparticleCollection = cms.InputTag("genParticles",""),
name = cms.string("LambdaSpectrum"),
pdgids = cms.vint32(5122),
massmin = cms.double(5.5),
massmax = cms.double(6.0)
)
| 74.98913 | 144 | 0.271634 | 364 | 6,899 | 5.145604 | 0.186813 | 0.134544 | 0.119594 | 0.11212 | 0.766151 | 0.766151 | 0.699413 | 0.699413 | 0.699413 | 0.476241 | 0 | 0.065466 | 0.645746 | 6,899 | 91 | 145 | 75.813187 | 0.7009 | 0 | 0 | 0.52381 | 0 | 0 | 0.049138 | 0.003624 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.02381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3662a9af638d1c21d22d4ef1738b5af0598f737 | 523 | py | Python | hw/hw07/tests/q1d.py | surajrampure/data-94-sp21 | 074543103579c28d796c681f78f3c38449825328 | [
"BSD-3-Clause"
] | 1 | 2020-11-21T09:42:52.000Z | 2020-11-21T09:42:52.000Z | hw/hw07/tests/q1d.py | surajrampure/data-94-sp21 | 074543103579c28d796c681f78f3c38449825328 | [
"BSD-3-Clause"
] | null | null | null | hw/hw07/tests/q1d.py | surajrampure/data-94-sp21 | 074543103579c28d796c681f78f3c38449825328 | [
"BSD-3-Clause"
] | null | null | null | test = { 'name': 'q1d',
'points': 1,
'suites': [ { 'cases': [ {'code': ">>> species_by_island.labels == ('species', 'Biscoe', 'Dream', 'Torgersen')\nTrue", 'hidden': False, 'locked': False},
{'code': ">>> np.all(species_by_island.column('Biscoe') == np.array([44, 0, 119]))\nTrue", 'hidden': False, 'locked': False}],
'scored': True,
'setup': '',
'teardown': '',
'type': 'doctest'}]}
| 58.111111 | 163 | 0.424474 | 45 | 523 | 4.844444 | 0.711111 | 0.082569 | 0.137615 | 0.201835 | 0.247706 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02346 | 0.347992 | 523 | 8 | 164 | 65.375 | 0.615836 | 0 | 0 | 0 | 0 | 0.125 | 0.468451 | 0.124283 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3788c43294c384644a9a5f0d996e273b6275d4f | 77 | py | Python | ml_model_evaluation/models/model_registry.py | nkaenzig/ml_model_evaluation | 0064a223b3a6362b7e281d9241cb9ffe97247bb0 | [
"MIT"
] | null | null | null | ml_model_evaluation/models/model_registry.py | nkaenzig/ml_model_evaluation | 0064a223b3a6362b7e281d9241cb9ffe97247bb0 | [
"MIT"
] | null | null | null | ml_model_evaluation/models/model_registry.py | nkaenzig/ml_model_evaluation | 0064a223b3a6362b7e281d9241cb9ffe97247bb0 | [
"MIT"
] | null | null | null | from enum import Enum
class ModelRegistry(Enum):
mlflow = 1
s3 = 2
| 11 | 26 | 0.649351 | 11 | 77 | 4.545455 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 0.285714 | 77 | 6 | 27 | 12.833333 | 0.854545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3832e66d8bf28392c623224704242e896a3abd3 | 216 | py | Python | Section 3/jinja_template/jinja_simple_template.py | cleonb-packt/Enterprise-Automation-with-Python- | 37b8a28a722e76d300a4e3568102315ed0dd8a4e | [
"MIT"
] | 14 | 2018-04-19T01:41:24.000Z | 2022-01-06T02:15:56.000Z | Section 3/jinja_template/jinja_simple_template.py | cleonb-packt/Enterprise-Automation-with-Python- | 37b8a28a722e76d300a4e3568102315ed0dd8a4e | [
"MIT"
] | null | null | null | Section 3/jinja_template/jinja_simple_template.py | cleonb-packt/Enterprise-Automation-with-Python- | 37b8a28a722e76d300a4e3568102315ed0dd8a4e | [
"MIT"
] | 12 | 2018-06-06T06:43:44.000Z | 2021-04-19T05:17:38.000Z | from jinja2 import Template
template = Template('Hello {{ name }}!')
print(template.render(name='John Doe'))
t = Template("My favorite numbers: {% for n in range(1,10) %}{{n}} " "{% endfor %}")
print(t.render())
| 21.6 | 84 | 0.638889 | 30 | 216 | 4.6 | 0.7 | 0.231884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.148148 | 216 | 9 | 85 | 24 | 0.728261 | 0 | 0 | 0 | 0 | 0 | 0.418605 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.4 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f39ed79db830e5192b23b278fde252069241efbe | 1,187 | py | Python | api/subscription_api/serializers.py | vmariiechko/sport-club | 613c35b35526b1d69fb5f1ae48140db25f371bf6 | [
"MIT"
] | null | null | null | api/subscription_api/serializers.py | vmariiechko/sport-club | 613c35b35526b1d69fb5f1ae48140db25f371bf6 | [
"MIT"
] | null | null | null | api/subscription_api/serializers.py | vmariiechko/sport-club | 613c35b35526b1d69fb5f1ae48140db25f371bf6 | [
"MIT"
] | null | null | null | from django.utils import timezone
from rest_framework import serializers
from ..accounts_api.models import Member
from ..cards_api.models import Pass
from .models import Subscription
class SubscriptionSerializer(serializers.ModelSerializer):
member = serializers.SlugRelatedField(queryset=Member.objects.all(), slug_field='email')
card = serializers.SlugRelatedField(queryset=Pass.objects.all(), slug_field='name')
class Meta:
model = Subscription
fields = ('member', 'card', 'visits_count', 'purchased', 'expires')
extra_kwargs = {
'visits_count': {'read_only': True},
'purchased': {'read_only': True},
'expires': {'read_only': True}
}
def validate_member(self, value):
if Subscription.objects.filter(member=value, expires__gt=timezone.now(), visits_count__gt=0).exists():
raise serializers.ValidationError("You already have an active subscription")
return value
def create(self, validated_data):
instance = super().create(validated_data)
instance.visits_count = validated_data['card'].visits_count
instance.save()
return instance
| 35.969697 | 110 | 0.689132 | 129 | 1,187 | 6.170543 | 0.488372 | 0.069095 | 0.045226 | 0.047739 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001054 | 0.200505 | 1,187 | 32 | 111 | 37.09375 | 0.837724 | 0 | 0 | 0 | 0 | 0 | 0.122157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0.08 | 0.2 | 0 | 0.52 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
f3a0ae16ce41e91f85073ea1c4ec66acccc60720 | 2,236 | py | Python | tools/optimization/data_logging/a_data_recorder.py | jmartin4nrel/HOPP-1 | c66ff2d97b43a785ac15004958615290a76477c5 | [
"BSD-3-Clause"
] | 3 | 2021-03-10T20:03:42.000Z | 2022-03-18T17:10:04.000Z | tools/optimization/data_logging/a_data_recorder.py | jmartin4nrel/HOPP-1 | c66ff2d97b43a785ac15004958615290a76477c5 | [
"BSD-3-Clause"
] | 14 | 2020-12-28T22:32:07.000Z | 2022-03-17T15:33:04.000Z | tools/optimization/data_logging/a_data_recorder.py | jmartin4nrel/HOPP-1 | c66ff2d97b43a785ac15004958615290a76477c5 | [
"BSD-3-Clause"
] | 8 | 2021-01-19T02:39:01.000Z | 2022-01-31T18:04:39.000Z | from abc import abstractmethod
class ADataRecorder:
"""
Abstract class defining an interface for accumulating data from an experimental run in a tabular format and
possibly writing that data out to disk. Data is accumulated in a tabular format, and is expected to always match
the columns defined.
"""
@abstractmethod
def add_columns(self, *column_names) -> None:
"""
Adds columns to the schema. Add columns in the same order you will record them in.
"""
pass
@abstractmethod
def set_schema(self) -> None:
"""
Call this after all columns have been defined via add_columns().
Schema changes can only happen before this point.
Data can only be accumulated after this point.
"""
pass
@abstractmethod
def accumulate(self, *data, **kwdata) -> None:
"""
Accumulates data into the recorder.
Data must be either accumulated in the same order as defined with add_columns() or as keywords using kwdata.
Don't mix these two approaches or you will get undefined behavior.
:return:
"""
pass
@abstractmethod
def store(self) -> None:
"""
Closes the accumulated record, adds it to self.records and logs it to the logger
"""
pass
@abstractmethod
def is_setup(self) -> bool:
"""
:return: true if set_schema() has been called
"""
pass
@abstractmethod
def get_column(self, name) -> []:
"""
gets a column from the recorded data
:param name: column name
:return: iterable column
"""
pass
@abstractmethod
def get_record(self, index) -> []:
"""
:param index:
:return: record at given index in the recorded data.
"""
pass
@abstractmethod
def get_records(self) -> []:
"""
:return: all records
"""
pass
@abstractmethod
def get_column_map(self) -> {}:
pass
@abstractmethod
def close(self) -> None:
"""
Must be called to dispose of an instance
"""
pass
| 26.305882 | 116 | 0.575134 | 255 | 2,236 | 4.996078 | 0.435294 | 0.133438 | 0.148352 | 0.075353 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.347496 | 2,236 | 84 | 117 | 26.619048 | 0.873201 | 0.469589 | 0 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3125 | false | 0.3125 | 0.03125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f3b972d699485ddc96c142d3dc0a6a10ae10591e | 1,470 | py | Python | utility/format.py | ghowland/deployman | ada4d94677919113f4ac23b337015ba02f2082ce | [
"MIT"
] | null | null | null | utility/format.py | ghowland/deployman | ada4d94677919113f4ac23b337015ba02f2082ce | [
"MIT"
] | null | null | null | utility/format.py | ghowland/deployman | ada4d94677919113f4ac23b337015ba02f2082ce | [
"MIT"
] | null | null | null | """
sysync: utility: format
Methods for formatting data for errors and output
"""
import sys
import pprint
import yaml
import json
def FormatAndOuput(data, options):
"""Format and output the result (pprint/json/yaml to stdout/file)"""
#TODO(g): Format with data or whatever... Output to file or whatever... Whatever...
if options.get('format', None) == 'json':
output = json.dumps(data)
elif options.get('format', None) == 'yaml':
output = yaml.safe_dump(data)
elif type(data) == dict:
# Wrap the top level of dicts on a key per block basis, then pretty print to clean it up
output = ''
for key in data:
output += '%s: %s\n\n' % (key, pprint.pformat(data[key]).replace("\\'", "'"))
elif type(data) == list:
# Wrap the top level of dicts on a key per block basis, then pretty print to clean it up
output = ''
for item in data:
output += '%s\n\n' % pprint.pformat(item)
else:
output = pprint.pformat(data)
# If we are outputting to a file...
if 'output' in options:
try:
fp = open(options['output'], 'w')
fp.write(output)
fp.close()
except Exception, e:
#NOTE(g): Cant call Error(), because Error calls this...
print 'error: Failed to write output to: %s: %s' % (options['output'], e)
# Output the data to STDOUT, as a last resort
print 'Output:'
print data
sys.exit(1)
# Else, output to STDOUT
else:
print output
| 27.222222 | 92 | 0.620408 | 216 | 1,470 | 4.217593 | 0.402778 | 0.026345 | 0.035126 | 0.043908 | 0.16685 | 0.16685 | 0.16685 | 0.16685 | 0.16685 | 0.16685 | 0 | 0.000903 | 0.246259 | 1,470 | 53 | 93 | 27.735849 | 0.8213 | 0.281633 | 0 | 0.129032 | 0 | 0 | 0.118172 | 0 | 0 | 0 | 0 | 0.018868 | 0 | 0 | null | null | 0 | 0.129032 | null | null | 0.258065 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3b9c2e07107f17091d441ba1b7dadde56e93a61 | 232 | py | Python | use_default_dict.py | tobyqin/py_quiz | e84d72fcecf4e02aaa4a12e96ab17dfffa075b74 | [
"Apache-2.0"
] | null | null | null | use_default_dict.py | tobyqin/py_quiz | e84d72fcecf4e02aaa4a12e96ab17dfffa075b74 | [
"Apache-2.0"
] | null | null | null | use_default_dict.py | tobyqin/py_quiz | e84d72fcecf4e02aaa4a12e96ab17dfffa075b74 | [
"Apache-2.0"
] | null | null | null | from collections import defaultdict
strings = ('puppy', 'kitten', 'puppy', 'puppy',
'weasel', 'puppy', 'kitten', 'puppy')
counts = defaultdict(int)  # int as the default factory: missing keys start at 0
for s in strings:
counts[s] += 1
print(counts) | 23.2 | 48 | 0.646552 | 26 | 232 | 5.769231 | 0.615385 | 0.146667 | 0.213333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005348 | 0.193966 | 232 | 10 | 49 | 23.2 | 0.796791 | 0.068966 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.142857 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f3c8d720781b0fbcbc89ff107edaee16fcc27f47 | 440 | py | Python | mylib/my_sum.py | sijohans/sample-project | 0f9d90a945b85d5792e5b5a7d3813c4e0524d6b9 | [
"MIT"
] | null | null | null | mylib/my_sum.py | sijohans/sample-project | 0f9d90a945b85d5792e5b5a7d3813c4e0524d6b9 | [
"MIT"
] | null | null | null | mylib/my_sum.py | sijohans/sample-project | 0f9d90a945b85d5792e5b5a7d3813c4e0524d6b9 | [
"MIT"
] | null | null | null | """
Test module for learning python packaging.
"""
def my_sum(arg):
"""
Sums the arguments and returns the sum.
"""
total = 0
for val in arg:
total += val
return total
class MySum(object):
# pylint: disable=too-few-public-methods
"""
MySum class
"""
@staticmethod
def my_sum(arg):
"""
Sums the arguments and returns the sum.
"""
return my_sum(arg)
| 16.296296 | 47 | 0.556818 | 54 | 440 | 4.481481 | 0.555556 | 0.061983 | 0.099174 | 0.090909 | 0.355372 | 0.355372 | 0.355372 | 0.355372 | 0.355372 | 0.355372 | 0 | 0.003401 | 0.331818 | 440 | 26 | 48 | 16.923077 | 0.819728 | 0.395455 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f3c9793ee8583e704fde7d8a3a837ea08131bb5e | 495 | py | Python | tests/test_basic.py | mubashshirjamal/code | d9c7adf7efed8e9c1ab3ff8cdeb94e7eb1a45382 | [
"BSD-3-Clause"
] | 1,582 | 2015-01-05T02:41:44.000Z | 2022-03-30T20:03:22.000Z | tests/test_basic.py | mubashshirjamal/code | d9c7adf7efed8e9c1ab3ff8cdeb94e7eb1a45382 | [
"BSD-3-Clause"
] | 66 | 2015-01-23T07:58:04.000Z | 2021-11-12T02:23:27.000Z | tests/test_basic.py | mubashshirjamal/code | d9c7adf7efed8e9c1ab3ff8cdeb94e7eb1a45382 | [
"BSD-3-Clause"
] | 347 | 2015-01-05T07:47:07.000Z | 2021-09-20T21:22:32.000Z | import os
import shutil
from vilya.libs.permdir import get_repo_root
from vilya.models.project import CodeDoubanProject
from tests.base import TestCase
class TestBasic(TestCase):
def test_create_git_repo(self):
git_path = os.path.join(get_repo_root(), 'abc.git')
CodeDoubanProject.create_git_repo(git_path)
assert os.path.exists(git_path)
info_file = os.path.join(git_path, 'refs')
assert os.path.exists(info_file)
shutil.rmtree(git_path)
| 27.5 | 59 | 0.729293 | 72 | 495 | 4.791667 | 0.444444 | 0.101449 | 0.063768 | 0.104348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 495 | 17 | 60 | 29.117647 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.076923 | false | 0 | 0.384615 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
45f1279cf55dd9ec05dde027b64ea0c5d76ed09c | 499 | py | Python | setup.py | zhanglabtools/CIRCLET | 51e9a4555258af31e2baba04759303ceeaf6c367 | [
"MIT"
] | 1 | 2019-08-20T06:43:58.000Z | 2019-08-20T06:43:58.000Z | setup.py | zhanglabtools/CIRCLET | 51e9a4555258af31e2baba04759303ceeaf6c367 | [
"MIT"
] | null | null | null | setup.py | zhanglabtools/CIRCLET | 51e9a4555258af31e2baba04759303ceeaf6c367 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@author: Yusen Ye
"""
import os
import sys
import shutil
from subprocess import call
from warnings import warn
from setuptools import setup
setup(name='CIRCLET',
version='1.0',
package_dir={'': 'src'},
packages=['CIRCLET'],
package_data={
        # include the data files bundled with the CIRCLET package:
'CIRCLET': ['DATA/*','*.txt','DATA/RNA-seq/*','DATA/Hi-Cmaps/*','DATA/Nagano et al/*'],
},
include_package_data=True
)
| 19.96 | 96 | 0.617234 | 65 | 499 | 4.676923 | 0.707692 | 0.072368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007538 | 0.202405 | 499 | 24 | 97 | 20.791667 | 0.756281 | 0.206413 | 0 | 0 | 0 | 0 | 0.237569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
45f6dd118f01ba4811e0c7d41bb54b4184ecd518 | 1,155 | py | Python | Tests/Test_FetchUserStatuses.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null | Tests/Test_FetchUserStatuses.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null | Tests/Test_FetchUserStatuses.py | rohitgs28/FindMyEmployer | d4b369eb488f44e40ef371ac09847f8ccc39994c | [
"MIT"
] | null | null | null |
import unittest
import mock
from mock import MagicMock,patch
import os.path
import logging
import sys,os
from Test_Config import *
from MockData import Emailid,Password,statusData,Messages2
sys.path.append(os.path.abspath(os.path.join('..', 'extensions/')))
import extensions
sys.path.append(os.path.abspath(os.path.join('..', 'LoggingDatabase/')))
import LoggingErrorsinDatabase
sys.path.append(os.path.abspath(os.path.join('..', 'Databaselayer/')))
import FetchUserStatuses
class Test_FetchUserStatuses(unittest.TestCase):
def test_getUserStatuses_1(self):
statusData = ['My first status', 'felling thoughtful']
fetchuserstatuses = FetchUserStatuses.FetchUserStatuses(mysql,statusData,Messages2[0])
statusData,result = fetchuserstatuses.getUserStatuses()
#assert result == "pass"
def test_getUserStatuses_2(self):
statusData = []
fetchuserstatuses = FetchUserStatuses.FetchUserStatuses(mysql,statusData,Messages2[1])
statusData,result = fetchuserstatuses.getUserStatuses()
#assert result == "fail"
if __name__ == '__main__':
unittest.main()
| 26.25 | 94 | 0.741126 | 124 | 1,155 | 6.790323 | 0.370968 | 0.049881 | 0.046318 | 0.053444 | 0.448931 | 0.448931 | 0.128266 | 0.128266 | 0.128266 | 0 | 0 | 0.007078 | 0.143723 | 1,155 | 43 | 95 | 26.860465 | 0.844287 | 0.039827 | 0 | 0.148148 | 0 | 0 | 0.079566 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0.037037 | 0.481481 | 0 | 0.592593 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
3412c932814605d011bbaaf7912311ede4fd8184 | 544 | py | Python | setup.py | jayvdb/dotenv | 81923e7e901ed512ec91fa27058e0cb9d6b4ac6c | [
"MIT"
] | 24 | 2015-02-14T04:27:38.000Z | 2021-12-28T03:25:05.000Z | setup.py | gfunkmonk/dotenv | 81923e7e901ed512ec91fa27058e0cb9d6b4ac6c | [
"MIT"
] | 12 | 2015-02-16T18:35:04.000Z | 2022-01-17T19:35:57.000Z | setup.py | gfunkmonk/dotenv | 81923e7e901ed512ec91fa27058e0cb9d6b4ac6c | [
"MIT"
] | 12 | 2015-02-14T04:30:14.000Z | 2020-09-24T20:02:56.000Z | #!/usr/bin/env python
# encoding=UTF-8
from setuptools import setup
try: # fix nose error
import multiprocessing
except ImportError:
pass
setup(name='dotenv',
version=__import__('dotenv').__version__,
description='Handle .env files',
author='Pedro Burón',
author_email='pedro@witoi.com',
url='https://github.com/pedroburon/dotenv',
test_suite='nose.collector',
packages=['dotenv'],
tests_require=['nose'],
setup_requires=['distribute'],
scripts=['scripts/dotenv']
)
| 21.76 | 49 | 0.652574 | 60 | 544 | 5.716667 | 0.733333 | 0.075802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002309 | 0.204044 | 544 | 24 | 50 | 22.666667 | 0.789838 | 0.091912 | 0 | 0 | 0 | 0 | 0.283096 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.058824 | 0.235294 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
34187cf900d631bc137fae1f18a9cc059b8a90a1 | 1,808 | py | Python | cloudferrylib/base/storage.py | toha10/CloudFerry | 5f844a480d3326d1fea74cca35b648c32d390fab | [
"Apache-2.0"
] | null | null | null | cloudferrylib/base/storage.py | toha10/CloudFerry | 5f844a480d3326d1fea74cca35b648c32d390fab | [
"Apache-2.0"
] | null | null | null | cloudferrylib/base/storage.py | toha10/CloudFerry | 5f844a480d3326d1fea74cca35b648c32d390fab | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cloudferrylib.base import resource
class Storage(resource.Resource):
def __init__(self, config):
self.config = config
super(Storage, self).__init__()
def get_backend(self):
return self.config.storage.backend
def attach_volume_to_instance(self, volume_info):
        raise NotImplementedError("it's base class")
def get_volumes_list(self, detailed=True, search_opts=None):
        raise NotImplementedError("it's base class")
def create_volume(self, size, **kwargs):
        raise NotImplementedError("it's base class")
def delete_volume(self, volume_id):
        raise NotImplementedError("it's base class")
def get_volume_by_id(self, volume_id):
        raise NotImplementedError("it's base class")
def update_volume(self, volume_id, **kwargs):
        raise NotImplementedError("it's base class")
def attach_volume(self, volume_id, instance_id, mountpoint, mode='rw'):
        raise NotImplementedError("it's base class")
def detach_volume(self, volume_id):
        raise NotImplementedError("it's base class")
def upload_volume_to_image(self, volume_id, force, image_name,
container_format, disk_format):
        raise NotImplementedError("it's base class")
| 33.481481 | 75 | 0.705752 | 251 | 1,808 | 4.936255 | 0.442231 | 0.138015 | 0.152542 | 0.159806 | 0.297821 | 0.297821 | 0.272801 | 0.217918 | 0.121065 | 0.121065 | 0 | 0.005571 | 0.205752 | 1,808 | 53 | 76 | 34.113208 | 0.857242 | 0.301991 | 0 | 0.346154 | 0 | 0 | 0.109864 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.423077 | false | 0 | 0.038462 | 0.038462 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
341cc0c275ac415948a9df316620ed91ebfa4473 | 710 | py | Python | 021/euler021.py | MayankAgarwal/euler_py | 4cf32879f8f7746af6ba31b87b7d4f20a0c91f3f | [
"MIT"
] | null | null | null | 021/euler021.py | MayankAgarwal/euler_py | 4cf32879f8f7746af6ba31b87b7d4f20a0c91f3f | [
"MIT"
] | null | null | null | 021/euler021.py | MayankAgarwal/euler_py | 4cf32879f8f7746af6ba31b87b7d4f20a0c91f3f | [
"MIT"
] | null | null | null | from math import sqrt
def get_divisor_sum (num):
sum = 1
for i in xrange(2, int(sqrt(num)) + 1):
if num%i == 0:
sum += i
if i*i != num:
sum += (num/i)
return sum
def precompute_amicable_numbers(N):
amicable_numbers = []
for i in xrange(N+1):
temp = get_divisor_sum(i)
# sum of divisors == original number. Ex: 6 = 1+2+3
if temp == i:
continue
if get_divisor_sum(temp) == i:
amicable_numbers.append(temp)
amicable_numbers.append(i)
return set(amicable_numbers)
__MAX_N = 10**5
amicable_numbers = precompute_amicable_numbers(__MAX_N)
tests = int(raw_input())
for test in xrange(tests):
N = int(raw_input())
print sum(filter(lambda x: x<N, amicable_numbers)) | 16.136364 | 55 | 0.669014 | 118 | 710 | 3.822034 | 0.381356 | 0.266075 | 0.086475 | 0.053215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021201 | 0.202817 | 710 | 44 | 56 | 16.136364 | 0.775618 | 0.069014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.04 | null | null | 0.04 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
34223280c738e676330c7196237188da1181eb03 | 184 | py | Python | webChat/urls.py | anokata/pythonPetProjects | 245c3ff11ae560b17830970061d8d60013948fd7 | [
"MIT"
] | 3 | 2017-04-30T17:44:53.000Z | 2018-02-03T06:02:11.000Z | webChat/urls.py | anokata/pythonPetProjects | 245c3ff11ae560b17830970061d8d60013948fd7 | [
"MIT"
] | 10 | 2021-03-18T20:17:19.000Z | 2022-03-11T23:14:19.000Z | webChat/urls.py | anokata/pythonPetProjects | 245c3ff11ae560b17830970061d8d60013948fd7 | [
"MIT"
] | null | null | null |
urlChatAdd = '/chat/add'
urlUserAdd = '/chat/adduser'
urlGetUsers = '/chat/getusers/'
urlGetChats = '/chat/chats'
urlPost = '/chat/post'
urlHist = '/chat/hist'
urlAuth = '/chat/auth'
| 20.444444 | 31 | 0.684783 | 21 | 184 | 6 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119565 | 184 | 8 | 32 | 23 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.42623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3423b6e3c6bda0e3ad227080be19007f3f8c0d46 | 5,176 | py | Python | 2018/09/sol/bike.py | lfrommelt/monty | e8cabf0e4ac01ab3d97eecee5e699139076d6544 | [
"MIT"
] | null | null | null | 2018/09/sol/bike.py | lfrommelt/monty | e8cabf0e4ac01ab3d97eecee5e699139076d6544 | [
"MIT"
] | null | null | null | 2018/09/sol/bike.py | lfrommelt/monty | e8cabf0e4ac01ab3d97eecee5e699139076d6544 | [
"MIT"
] | 1 | 2019-02-10T23:52:25.000Z | 2019-02-10T23:52:25.000Z | """
This module contains the definitions for Bike and its subclasses Bicycle and
Motorbike.
"""
class Bike:
"""
Class defining a bike that can be ridden and have its gear changed.
Attributes:
seats: number of seats the bike has
gears: number of gears the bike has
"""
def __init__(self, seats, gears):
"""
Creates a new Bike object.
Args:
seats: number of seats the bike has
gears: number of gears the bike has
"""
self.seats = seats
self.gears = gears
self._curr_gear = 1 # current gear, private
self._riding = False # bike is per default not ridden
@property
def curr_gear(self):
"""
        Purpose of this function is to let the user check the gear
        status while only being able to change it with a specific method.
(was not necessary to implement it this way)
"""
return self._curr_gear
def start_ride(self):
"""
Starts a bike ride.
Returns:
True if successful.
False if bike is already on a ride.
"""
# can't ride a bike if already ridden
if self._riding:
return False
self._riding = True
return True
def end_ride(self):
"""
Ends a bike ride.
Returns:
True if successful.
False if bike is not currently ridden.
"""
# can't stop a bike ride if the bike is already standing
if not self._riding:
return False
self._riding = False
return True
def change_gear(self, new_gear):
"""
Changes bike gear to a new gear.
Args:
new_gear: gear to be changed to
Returns:
True if gear was successfully changed.
Raises:
ValueError if current gear is same as new gear or new gear is <= 0
or not in range of available gears.
"""
if self._curr_gear == new_gear or not 0 < new_gear <= self.gears:
raise ValueError("Already in this gear or invalid gear number.")
self._curr_gear = new_gear
return True
class Bicycle(Bike):
"""
Class defining a Bicycle (extending Bike) that can be ridden, have its
gear changed and has a bell that can be rung.
Attributes:
seats: number of seats the bike has
gears: number of gears the bike has
bell_sound: sound the bell makes when rung
"""
def __init__(self, seats=1, gears=7, bell_sound="ring ring"):
"""
Creates a new Bike object.
Args:
seats: number of seats the bicycle has, defaults to 1
gears: number of gears the bicycle has, defaults to 7
bell_sound: sound the bell makes when rung
"""
super().__init__(seats, gears)
self.bell_sound = bell_sound
def ring_bell(self):
""" Rings bicycle bell."""
print(self.bell_sound)
class Motorbike(Bike):
"""
Class defining a Motorbike (extending Bike) that can be ridden, have its
gear changed and has a tank that can be filled.
Attributes:
seats: number of seats the bike has
gears: number of gears the bike has
"""
def __init__(self, seats=2, gears=5):
"""
Creates a new Motorbike object.
Args:
seats: number of seats the motorbike has, defaults to 2
gears: number of gears the motorbike has, defaults to 5
"""
super().__init__(seats, gears)
# True means full tank. Private so it can only be changed in
# a controlled manner
self._tank = True
@property
def tank(self):
"""
        Purpose of this function is to let the user check the tank
        status while only being able to fill/empty the tank with specific methods.
This was not necessary to implement.
"""
return self._tank
def start_ride(self):
"""
Starts a motorbike ride.
Returns:
True if successful.
False if motorbike is already on a ride or tank is empty
"""
# can't ride a motorbike if tank is empty or it is already ridden
if not self._tank or not super().start_ride():
return False
return True
def end_ride(self):
"""
Ends a motorbike ride and empties tank.
Returns:
True if successful.
False if motorbike is not currently ridden.
"""
if not super().end_ride():
return False
self._tank = False # tank is empty after riding
return True
# the following method was not necessary to implement, but we want to be
# able to ride more than once.
def fill_tank(self):
"""
Fills motorbike tank with fuel.
Returns:
True if successful.
False if tank already full.
"""
# can't fill tank if already full
if self._tank:
return False
self._tank = True
return True
| 25.004831 | 78 | 0.575155 | 683 | 5,176 | 4.270864 | 0.187408 | 0.032911 | 0.027425 | 0.037024 | 0.472746 | 0.391498 | 0.308193 | 0.296195 | 0.225231 | 0.225231 | 0 | 0.003336 | 0.363022 | 5,176 | 206 | 79 | 25.126214 | 0.881407 | 0.536901 | 0 | 0.45098 | 0 | 0 | 0.030477 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0 | 0 | 0.54902 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
342502ea39cf6dd76f8813f1ffa78b4300f49df1 | 2,982 | py | Python | harvest_backup.py | bopopescu/daily_harvest | 7809011cb739dafd583e45b3be43c61a101178e3 | [
"MIT"
] | null | null | null | harvest_backup.py | bopopescu/daily_harvest | 7809011cb739dafd583e45b3be43c61a101178e3 | [
"MIT"
] | 3 | 2015-04-23T14:21:17.000Z | 2016-05-16T19:24:51.000Z | harvest_backup.py | bopopescu/daily_harvest | 7809011cb739dafd583e45b3be43c61a101178e3 | [
"MIT"
] | 1 | 2020-07-24T05:30:15.000Z | 2020-07-24T05:30:15.000Z | import sys
reload(sys) # Reload does the trick!
sys.setdefaultencoding('UTF8')
sys.path.append("packages")
#from Harvest.harvest import Harvest, HarvestError
import os
from harvest import Harvest, HarvestError
from datetime import datetime, timedelta
import time
#import simplejson as json
import json
import mysql.connector
from mysql.connector import errorcode
#import pprint
# Harvest Setup
harvest_creds = {'uri': os.getenv("HARVEST_URI"),
'email': os.getenv("HARVEST_EMAIL"),
'password': os.getenv("HARVEST_PASSWORD")}
URI = harvest_creds['uri']
EMAIL = harvest_creds['email']
PASS = harvest_creds['password']
h = Harvest(URI,EMAIL,PASS)
# Var Setup
user_hours={}
user_names={}
project_hours={}
timesheet_punches={}
email_html=""
# Yesterday - adjust to your liking
end = datetime.today().replace( hour=0, minute=0, second=0 )
start = end + timedelta(-1)
#mysql_creds = json.loads(open('mysql.json').read())
#mysql_creds = json.loads(os.getenv("MYSQL"))
mysql_creds = {'user': os.getenv("MYSQL_USER"),
'password': os.getenv("MYSQL_PASSWORD"),
'database': os.getenv("MYSQL_DATABASE"),
'host': os.getenv("MYSQL_HOST"),
'port': os.getenv("MYSQL_PORT", 3306)}
cnx = mysql.connector.connect(user=mysql_creds['user'],
password=mysql_creds['password'],
host=mysql_creds['host'],
database=mysql_creds['database'],
port=mysql_creds['port'])
cursor = cnx.cursor()
try:
for user in h.users():
user_hours[user.email] = 0
user_names[user.email] = user.first_name + " " + user.last_name
for entry in user.entries( start, end ):
if(not entry.adjustment_record):
user_hours[user.email] += entry.hours
project = h.project(entry.project_id)
client = h.client(project.client_id)
task = h.task(entry.task_id)
if(project_hours.has_key(project.name)):
project_hours[project.name] += entry.hours
else:
project_hours[project.name] = entry.hours
add_entry = ("INSERT INTO timesheet "
"(id, project_id, task_id, user_id, hours, "
"notes, client, created_at, updated_at, project, task) "
"VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s )")
entry_data = (entry.id, project.id, entry.task_id, entry.user_id, entry.hours,
entry.notes, client.name, entry.created_at, entry.updated_at, project.name, task.name)
cursor.execute(add_entry, entry_data)
#print cursor.lastrowid
cnx.commit()
except HarvestError:
print "error"
cursor.close()
cnx.close()
| 35.082353 | 116 | 0.579477 | 345 | 2,982 | 4.86087 | 0.292754 | 0.011926 | 0.0161 | 0.019082 | 0.045915 | 0.045915 | 0.006559 | 0.006559 | 0.006559 | 0.006559 | 0 | 0.004757 | 0.295104 | 2,982 | 84 | 117 | 35.5 | 0.793054 | 0.095238 | 0 | 0 | 0 | 0.016129 | 0.139189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.080645 | 0.129032 | null | null | 0.016129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
3456e6ba16d04ef3c0acc920008e899b8fcaefe3 | 412 | py | Python | schemas/movie_schema.py | AmeyMore98/movie_api | afc06ba71d98e873b91de17d2773cd820e89facd | [
"MIT"
] | null | null | null | schemas/movie_schema.py | AmeyMore98/movie_api | afc06ba71d98e873b91de17d2773cd820e89facd | [
"MIT"
] | null | null | null | schemas/movie_schema.py | AmeyMore98/movie_api | afc06ba71d98e873b91de17d2773cd820e89facd | [
"MIT"
] | null | null | null | from typing import List
from pydantic import BaseModel
from schemas import genre_schema
class MovieBase(BaseModel):
name: str
director: str
popularity: float
imdb_score: float
class MovieCreate(MovieBase):
genre: List[str]
class MovieUpdate(MovieCreate):
pass
class Movie(MovieBase):
movie_id: int
genre: List[genre_schema.Genre]
class Config:
orm_mode = True
import json
def text_file_to_list(filename):
with open(filename, 'r') as file:
return file.read().splitlines()
def list_to_text_file(filename, string_list):
with open(filename, 'w') as text_file:
for line in string_list:
text_file.write(f'{line}\n')
def json_to_dict(filename):
with open(filename, 'r') as file:
return json.load(file)
def dict_to_json(filename, dictionary):
with open(filename, 'w') as json_file:
json.dump(dictionary, json_file)
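# A self-contained round-trip check of the text-file pattern the helpers above
# implement (the two definitions are copied inline, and `tempfile` is used so
# nothing is left on disk):

```python
import os
import tempfile

def list_to_text_file(filename, string_list):
    # One list element per line, as in the helper above.
    with open(filename, 'w') as text_file:
        for line in string_list:
            text_file.write(f'{line}\n')

def text_file_to_list(filename):
    # splitlines() drops the trailing newlines, restoring the original list.
    with open(filename, 'r') as file:
        return file.read().splitlines()

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'demo.txt')
    list_to_text_file(path, ['alpha', 'beta'])
    assert text_file_to_list(path) == ['alpha', 'beta']
```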
#!/usr/bin/env python
import sys
menu_list_api = {
'list_module': 'API_listmodule',
'use_module': 'API_startmodule',
'requirement_module': 'API_requirementModule'
}
# Future
from __future__ import annotations
# Standard Library
from typing import Any
class File:
def __init__(self, data: dict[str, Any], /) -> None:
self._id: int = data["id"]
self._account_id: int = data["account_id"]
self._identifier: str = data["identifier"]
self._format: str = data["format"]
self._private: bool = data["private"]
def __repr__(self) -> str:
return f"<cdn.File identifier={self.identifier}, format={self.format}>"
# Properties
@property
def id(self) -> int:
return self._id
@property
def account_id(self) -> int:
return self._account_id
@property
def identifier(self) -> str:
return self._identifier
@property
def format(self) -> str:
return self._format
@property
def private(self) -> bool:
return self._private
#
@property
def filename(self) -> str:
return f"{self.identifier}.{self.format}"
#
@property
def info(self) -> dict[str, Any]:
return {
"id": self.id,
"account_id": self.account_id,
"identifier": self.identifier,
"format": self.format,
"private": self.private,
"filename": self.filename
}
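# For reference, a sketch of the payload shape File.__init__ expects and how
# `filename` is derived from it (the sample values are hypothetical):

```python
# Hypothetical payload in the shape File.__init__ consumes.
data = {
    "id": 1,
    "account_id": 42,
    "identifier": "abc123",
    "format": "png",
    "private": False,
}

# filename is derived as "<identifier>.<format>", mirroring File.filename.
filename = f"{data['identifier']}.{data['format']}"
assert filename == "abc123.png"
```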
class Solution(object):
def calculate(self, s):
"""
:type s: str
:rtype: int
"""
if not s:
return 0
s = s.replace(" ", "")
n = len(s)
stack = []
num = 0
sign = '+'
for i in range(n):
if s[i].isdigit():
num = num * 10 + int(s[i])
if not s[i].isdigit() or i == n - 1:
if sign == '-':
stack.append(-num)
elif sign == '+':
stack.append(num)
elif sign == '*':
stack.append(stack.pop() * num)
elif sign == '/':
d = stack.pop()
r = d // num
if r < 0 and d % num != 0:
r += 1
stack.append(r)
sign = s[i]
num = 0
res = 0
for i in stack:
res += i
return res
s = Solution()
print(s.calculate(""))                     # -> 0
print(s.calculate("123"))                  # -> 123
print(s.calculate("3+2*2"))                # -> 7
print(s.calculate(" 3/2 "))                # -> 1
print(s.calculate("3+5 / 2"))              # -> 5
print(s.calculate("14/3*2"))               # -> 8
print(s.calculate("14-3/2"))               # -> 13
print(s.calculate("10000-1000/10+100*1"))  # -> 10000
"""Anagram utilities"""
def find_anagrams(word: str, candidates: list) -> list:
"""Detect anagrams on a list against a reference word
Args:
word: the reference word
candidates: the list of words to be compared
Returns:
A new list with the anagrams found.
"""
low_word = sorted(word.lower())
return [candidate for candidate in candidates if is_anagram(low_word, candidate)]
def is_anagram(low_sort_word: str, candidate: str) -> bool:
"""Determine whether two words are anagrams of each other.
Args:
        low_sort_word: the original word, sorted and lowered.
candidate: the word to be compared
Returns:
A boolean True if the two words have the same letters in different order."""
    return sorted(candidate.lower()) == low_sort_word and candidate.lower() != low_sort_word
import numpy as np
from scipy.stats import truncnorm, norm
def soft_threshold(r, gamma):
"""
soft-thresholding function
"""
return np.maximum(np.abs(r) - gamma, 0.0) * np.sign(r)
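# A scalar sketch of the same soft-thresholding rule, without NumPy, to make
# the shrink-toward-zero behaviour concrete:

```python
def soft_threshold_scalar(r: float, gamma: float) -> float:
    # Shrink |r| by gamma, clamp at zero, and keep the sign of r.
    magnitude = max(abs(r) - gamma, 0.0)
    return magnitude if r >= 0 else -magnitude

assert soft_threshold_scalar(3.0, 1.0) == 2.0
assert soft_threshold_scalar(-3.0, 1.0) == -2.0
assert soft_threshold_scalar(0.5, 1.0) == 0.0  # inputs inside the threshold go to zero
```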
def df(r, gamma):
"""
divergence-free function
"""
eta = soft_threshold(r, gamma)
return eta - np.mean(eta != 0) * r
def GCAMP(w, beta, log=False):
shita = 0.7
communication_cost = 0
P, N, _ = w.shape
T = beta * shita / (P-1)
R = np.zeros((P, N, 1))
z = np.zeros((N, 1))
#STEP1
for p in range(1, P):
R[p] = np.abs(w[p]) > T
candidate = np.where(R[p])[0]
for n in candidate:
communication_cost += 1
send_to1(n, w[p, n])
#STEP2
S = [np.where(R[:, n])[0] for n in range(N)]
m = np.sum(R, axis=0)
U = np.empty((N, 1))
for n in range(N):
upper = (P - 1 - m[n]) * T
z[n] = w[0, n] + np.sum([w[p, n] for p in S[n]])
U[n] = np.abs(z[n]) + upper
F = (U > beta) * (m < (P-1))
candidate = np.where(F)[0]
for n in candidate:
communication_cost += 1
broadcast_others(n)
#STEP3
F_R = F * np.logical_not(R)
for p in range(1, P):
#print("p: {}".format(p))
candidate = np.where(F_R[p])[0]
for n in candidate:
communication_cost += 1
send_to1(n ,w[p, n])
if log:
print("Rp: {} \t F: {} \t F\\Rp: {}".format(np.sum(R), np.sum(F), np.sum(F_R)-np.sum(F)))
print("Total Communication Cost: {}".format(communication_cost))
print("="*50)
#STEP4
s = np.zeros((N, 1))
b = np.zeros((N, 1))
V = np.where(U > beta)[0].tolist()
for n in V:
b[n] = np.sum(w[:, n])
s[n] = soft_threshold(b[n], beta)
return s.real, communication_cost
def GCAMP_exp(w, tau_p, log=False):
shita = 0.7
tau = np.sum(tau_p)
communication_cost = 0
P, N, _ = w.shape
R = np.zeros((P, N, 1))
#STEP1
for p in range(1, P):
R[p] = np.square(w[p]) > tau_p[p] * shita
candidate = np.where(R[p])[0]
for n in candidate:
communication_cost += 1
send_to1(n, w[p, n])
#STEP2
S = [np.where(R[:, n])[0] for n in range(N)]
m = np.sum(R, axis=0)
U = np.empty((N, 1))
for n in range(N):
        upper = np.sum([tau_p[p] for p in range(1, P) if p not in S[n]])  # S is indexed per feature n, not p
        U[n] = (w[0, n] + np.sum([w[p, n] for p in S[n]]))**2 + upper * shita
F = (U > tau) * (m < (P-1))
candidate = np.where(F)[0]
for n in candidate:
communication_cost += 1
broadcast_others(n)
#STEP3
F_R = F * np.logical_not(R)
for p in range(1, P):
#print("p: {}".format(p))
candidate = np.where(F_R[p])[0]
for n in candidate:
communication_cost += 1
send_to1(n ,w[p, n])
if log:
print("Rp: {} \t F: {} \t F\\Rp: {}".format(np.sum(R), np.sum(F), np.sum(F_R)-np.sum(F)))
print("Total Communication Cost: {}".format(communication_cost))
print("="*50)
#STEP4
s = np.zeros((N, 1))
V = np.where(U > tau)[0].tolist()
for n in V:
w_sum = np.sum(w[:, n])
s[n] = soft_threshold(w_sum, tau**0.5)
return s.real, communication_cost
def send_to1(n, w):
#print("n: {}, w: {}".format(n, w))
pass
def broadcast_others(n):
#print("n: {}".format(n))
pass
def GCOAMP(w, tau_p, log=False):
shita = 0.7
tau = np.sum(tau_p)
communication_cost = 0
P, N, _ = w.shape
R = np.zeros((P, N, 1))
z = [0] * N
#STEP1
for p in range(1, P):
R[p] = np.square(w[p]) > tau_p[p] * shita
candidate = np.where(R[p])[0]
for n in candidate:
communication_cost += 1
send_to1(n, w[p, n])
#STEP2
S = [np.where(R[:, n])[0] for n in range(N)]
m = np.sum(R, axis=0)
U = np.empty((N, 1))
for n in range(N):
        upper = np.sum([tau_p[p] for p in range(1, P) if p not in S[n]])  # S is indexed per feature n, not p
z[n] = w[0, n] + np.sum([w[p, n] for p in S[n]])
U[n] = z[n]**2 + upper * shita
F = (U > tau) * (m < (P-1))
candidate = np.where(F)[0]
for n in candidate:
communication_cost += 1
broadcast_others(n)
#STEP3
F_R = F * np.logical_not(R)
for p in range(1, P):
#print("p: {}".format(p))
candidate = np.where(F_R[p])[0]
for n in candidate:
communication_cost += 1
send_to1(n ,w[p, n])
if log:
print("Rp: {} \t F: {} \t F\\Rp: {}".format(np.sum(R), np.sum(F), np.sum(F_R)-np.sum(F)))
print("Total Communication Cost: {}".format(communication_cost))
print("="*50)
#STEP4
u = np.zeros((N, 1))
b = np.zeros((N, 1))
V = np.where(U > tau)[0].tolist()
for n in V:
b[n] = np.sum(w[:, n])
u[n] = soft_threshold(b[n], tau**0.5)
#STEP5
#if approx: rand = beta * truncnorm.rvs(-1, 1, loc=0, scale=1, size=N-K)
#else : rand = Rrandom(u, beta, K)#(tau - tau_p[0])**0.5 * truncnorm.rvs(-1, 1, loc=0, scale=1, size=N-K)
Vc = [n for n in range(N) if n not in V]
for n in Vc:
b[n] = z[n]
b[n] += np.sum([rand(shita * tau_p[p]) for p in range(1, P) if p not in S[n]])
s = u - np.mean(u != 0)*b
return s.real, communication_cost
def rand(tau):
return tau**0.5 * truncnorm.rvs(-1, 1, loc=0, scale=1, size=1)
def Rrandom(u, t, K):
N = u.shape[0]
u0 = np.histogram(u, bins=N)
Pu = u0[0]/N
Pu = np.append(Pu, 0)
u1 = u0[1]
phi = lambda x: norm.pdf((x-u1)/t)/t
maxu = np.argmax(Pu)
phi_x = phi(u1[maxu])
max = np.max(np.sum(Pu * phi_x))
rand = np.empty(N-K)
for i in range(N-K):
while True:
x, y = np.random.rand(2)
a = -t + 2*t*x
phi_a = phi(a)
A = np.sum(Pu * phi_a)
if max*y <= A:
rand[i] = a
break
    return rand
"""This is the Solution for Year 2021 Day 05"""
import itertools
from collections import Counter
from dataclasses import dataclass
from aoc.abstracts.solver import Answers, StrLines
@dataclass(frozen=True)
class Point:
"""Immutable point that will define x and y on 2D plane"""
x: int
y: int
@dataclass
class LineSegment:
"""Define a line object that takes a start and end point"""
start: Point
end: Point
@property
def slope(self) -> int:
return int((self.start.y - self.end.y) / (self.start.x - self.end.x))
@property
def intercept(self) -> int:
return int(self.start.y - (self.start.x * self.slope))
def is_vertical(self) -> bool:
return self.start.x == self.end.x
def is_horizontal(self) -> bool:
return self.start.y == self.end.y
def y_range(self) -> range:
coords = self.start.y, self.end.y
return range(min(coords), max(coords) + 1)
def x_range(self) -> range:
coords = self.start.x, self.end.x
return range(min(coords), max(coords) + 1)
def calculate_y(self, x: int) -> int:
return int(self.slope * x + self.intercept)
def parse_point(raw_point: str) -> Point:
"""Parse point from raw string"""
x, y = raw_point.split(",")
return Point(x=int(x), y=int(y))
def parse_lines(lines: StrLines) -> list[LineSegment]:
"""Parse raw lines into Lines and Points"""
parsed_lines = []
for raw_line in lines:
raw_start, raw_end = raw_line.split(" -> ")
start_point = parse_point(raw_start)
end_point = parse_point(raw_end)
line = LineSegment(start=start_point, end=end_point)
parsed_lines.append(line)
return parsed_lines
def get_horizontal_vertical_lines(lines: list[LineSegment]) -> list[LineSegment]:
"""Filter for only horizontal or vertical lines"""
return [line for line in lines if line.is_horizontal() or line.is_vertical()]
def get_point_segment(line: LineSegment) -> list[Point]:
"""Get a list of points in a given line"""
if line.is_vertical():
return [Point(x=line.start.x, y=y) for y in line.y_range()]
return [Point(x=x, y=line.calculate_y(x)) for x in line.x_range()]
def get_point_occurences(lines: list[LineSegment]) -> dict[Point, int]:
"""Count up the number of occurences for a given point"""
segment_points = (get_point_segment(line) for line in lines)
return Counter(itertools.chain.from_iterable(segment_points))
class Solver:
def __init__(self, data: str) -> None:
self.data = data
def _preprocess(self) -> StrLines:
return self.data.splitlines()
def _solve_part_one(self, lines: StrLines) -> int:
parsed_lines = parse_lines(lines)
filtered_lines = get_horizontal_vertical_lines(parsed_lines)
point_count = get_point_occurences(filtered_lines)
return sum(1 for n_occurences in point_count.values() if n_occurences >= 2)
def _solve_part_two(self, lines: StrLines) -> int:
parsed_lines = parse_lines(lines)
point_count = get_point_occurences(parsed_lines)
return sum(1 for n_occurences in point_count.values() if n_occurences >= 2)
def solve(self) -> Answers:
lines = self._preprocess()
ans_one = self._solve_part_one(lines)
ans_two = self._solve_part_two(lines)
return Answers(part_one=ans_one, part_two=ans_two)
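# A self-contained sketch of the "x,y -> x,y" line format the parser above
# consumes (mirrors parse_point/parse_lines without the dataclasses):

```python
raw_line = "0,9 -> 5,9"  # sample line in the Advent of Code day 5 input format
raw_start, raw_end = raw_line.split(" -> ")
start = tuple(int(v) for v in raw_start.split(","))
end = tuple(int(v) for v in raw_end.split(","))
assert start == (0, 9)
assert end == (5, 9)
# Equal y coordinates mean a horizontal segment, as LineSegment.is_horizontal checks.
assert start[1] == end[1]
```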
def test_add_to_basket(browser):
link = 'http://selenium1py.pythonanywhere.com/catalogue/coders-at-work_207/'
browser.get(link)
    assert browser.find_element_by_class_name('btn-add-to-basket').is_displayed(), 'Basket button not found'
import os
from distutils.util import strtobool
from os.path import dirname, join
from dotenv import load_dotenv
dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path, verbose=True)
# Server
HOST = os.environ.get('HOST', '0.0.0.0')
PORT = int(os.environ.get('PORT', 9000))
ENV = os.environ.get('ENV')
IS_LOCAL = ENV == 'local'
IS_HEROKU = ENV == 'heroku'
DEBUG = strtobool(os.environ.get('DEBUG', 'false'))  # default avoids a crash when DEBUG is unset
# Databases
DB_HOST = os.environ.get('DB_HOST')
DB_PORT = os.environ.get('DB_PORT')
DB_DATABASE = os.environ.get('DB_DATABASE')
DB_USERNAME = os.environ.get('DB_USERNAME')
DB_PASSWORD = os.environ.get('DB_PASSWORD')
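# Note: distutils (and with it strtobool) is removed in Python 3.12. A minimal
# local helper with the same truthy/falsy table looks like this — a sketch, not
# part of the module above, and it returns bool rather than strtobool's 0/1:

```python
def str_to_bool(value: str) -> bool:
    # Mirrors the spellings distutils.util.strtobool accepts.
    truthy = {"y", "yes", "t", "true", "on", "1"}
    falsy = {"n", "no", "f", "false", "off", "0"}
    normalized = value.strip().lower()
    if normalized in truthy:
        return True
    if normalized in falsy:
        return False
    raise ValueError(f"invalid truth value {value!r}")

assert str_to_bool("True") is True
assert str_to_bool("0") is False
```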
from .block import Block
from .column import Column
from .file_org import FileOrg
from .record import Record
from .relation import Relation
from .schema import Schema
__all__ = ["Block", "Column", "FileOrg", "Record", "Relation", "Schema"]
class TaskAnswer:
    def __init__(self):
        # list of tuples: (vertice_source, vertice_destination, moved_value);
        # kept per-instance — a class-level `_steps = []` would be shared
        # by every TaskAnswer instance (a common Python pitfall)
        self._steps = []
def get_steps(self) -> list:
return self._steps
def add_step(self, source: int, destination: int, value: float):
step = (source, destination, value)
self._steps.append(step)
def print(self):
for step in self._steps:
(source, destination, value) = step
print("from", source, "to", destination, "move", value)
import AudioHash.fingerprint as fingerprint
import AudioHash.decoder as decoder
import numpy as np
import pyaudio
import time
class BaseRecognizer(object):
def __init__(self, AudioHash):
self.AudioHash = AudioHash
self.Fs = fingerprint.DEFAULT_FS
def _recognize(self, *data):
matches = []
for d in data:
matches.extend(self.AudioHash.find_matches(d, Fs=self.Fs))
return self.AudioHash.align_matches(matches)
def recognize(self):
pass # base class does nothing
class FileRecognizer(BaseRecognizer):
def __init__(self, AudioHash):
super(FileRecognizer, self).__init__(AudioHash)
def recognize_file(self, filename):
frames, self.Fs, file_hash = decoder.read(filename, self.AudioHash.limit)
t = time.time()
match = self._recognize(*frames)
t = time.time() - t
if match:
match['match_time'] = t
return match
def recognize(self, filename):
return self.recognize_file(filename)
# psycopg2 library is necessary
import pandas as pd
from sqlalchemy import create_engine
import os
engine = create_engine(
f"postgresql://neylsoncrepalde:{os.environ['PGPASS']}@database-igti.cfowiwu0gidv.us-east-2.rds.amazonaws.com:5432/postgres"
)
df = pd.read_csv("data/pnadc20203.csv", sep=';')
df.to_sql('pnadc20203', con=engine, if_exists='replace', index=False, chunksize=10000)
# GENERATED BY KOMAND SDK - DO NOT EDIT
import insightconnect_plugin_runtime
import json
class Component:
DESCRIPTION = "Query a routable IPv4 address in the GreyNoise Context API endpoint"
class Input:
IP_ADDRESS = "ip_address"
class Output:
ACTOR = "actor"
BOT = "bot"
CLASSIFICATION = "classification"
CVE = "cve"
FIRST_SEEN = "first_seen"
IP = "ip"
LAST_SEEN = "last_seen"
METADATA = "metadata"
RAW_DATA = "raw_data"
SEEN = "seen"
SPOOFABLE = "spoofable"
TAGS = "tags"
VIZ_URL = "viz_url"
VPN = "vpn"
VPN_SERVICE = "vpn_service"
class ContextLookupInput(insightconnect_plugin_runtime.Input):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"ip_address": {
"type": "string",
"title": "IP Address",
"description": "Routable IPv4 address to query",
"order": 1
}
},
"required": [
"ip_address"
]
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
class ContextLookupOutput(insightconnect_plugin_runtime.Output):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"actor": {
"type": "string",
"title": "GreyNoise Actor",
"description": "GreyNoise Actor Associated with IP",
"order": 6
},
"bot": {
"type": "boolean",
"title": "GreyNoise Bot",
"description": "GreyNoise has identified this as a Bot",
"order": 10
},
"classification": {
"type": "string",
"title": "GreyNoise Classification",
"description": "GreyNoise Classification",
"order": 8
},
"cve": {
"type": "array",
"title": "GreyNoise CVEs",
"description": "CVEs associated with GreyNoise Tags",
"items": {
"type": "string"
},
"order": 9
},
"first_seen": {
"type": "string",
"title": "GreyNoise First Seen",
"displayType": "date",
"description": "First Seen By GreyNoise",
"format": "date-time",
"order": 2
},
"ip": {
"type": "string",
"title": "IP Queried",
"description": "Value that was Queried",
"order": 1
},
"last_seen": {
"type": "string",
"title": "GreyNoise Last Seen",
"description": "Last Seen By GreyNoise",
"order": 3
},
"metadata": {
"$ref": "#/definitions/metadata",
"title": "GreyNoise Metadata",
"description": "GreyNoise IP Metadata",
"order": 13
},
"raw_data": {
"$ref": "#/definitions/raw_data",
"title": "GreyNoise Raw Data",
"description": "GreyNoise IP Raw Data",
"order": 14
},
"seen": {
"type": "boolean",
"title": "GreyNoise Seen",
"description": "Has this IP been Seen by GreyNoise",
"order": 4
},
"spoofable": {
"type": "boolean",
"title": "GreyNoise Spoofable",
"description": "IP address may be spoofed",
"order": 7
},
"tags": {
"type": "array",
"title": "GreyNoise Tags",
"description": "GreyNoise Tags Associated with IP",
"items": {
"type": "string"
},
"order": 5
},
"viz_url": {
"type": "string",
"title": "GreyNoise Visualizer Link",
"description": "Link to GreyNoise Visualizer for IP Details",
"order": 15
},
"vpn": {
"type": "boolean",
"title": "GreyNoise VPN",
"description": "GreyNoise has identified this as a VPN",
"order": 11
},
"vpn_service": {
"type": "string",
"title": "GreyNoise VPN Service",
"description": "Name of VPN Service",
"order": 12
}
},
"definitions": {
"metadata": {
"type": "object",
"title": "metadata",
"properties": {
"asn": {
"type": "string",
"title": "ASN",
"description": "ASN",
"order": 1
},
"category": {
"type": "string",
"title": "Category",
"description": "Category",
"order": 2
},
"city": {
"type": "string",
"title": "City",
"description": "City",
"order": 3
},
"country": {
"type": "string",
"title": "Country",
"description": "Country",
"order": 4
},
"country_code": {
"type": "string",
"title": "Country Code",
"description": "Country Code",
"order": 5
},
"organization": {
"type": "string",
"title": "Organization",
"description": "Organization",
"order": 6
},
"os": {
"type": "string",
"title": "OS",
"description": "OS",
"order": 7
},
"rdns": {
"type": "string",
"title": "rDNS",
"description": "rDNS",
"order": 8
},
"region": {
"type": "string",
"title": "Region",
"description": "Region",
"order": 9
},
"tor": {
"type": "boolean",
"title": "TOR",
"description": "TOR",
"order": 10
}
}
},
"raw_data": {
"type": "object",
"title": "raw_data",
"properties": {
"hassh": {
"type": "array",
"title": "HASSH",
"description": "HASSH",
"items": {
"type": "object"
},
"order": 1
},
"ja3": {
"type": "array",
"title": "JA3",
"description": "Ja3",
"items": {
"type": "object"
},
"order": 2
},
"scan": {
"type": "array",
"title": "Scan",
"description": "Scan",
"items": {
"$ref": "#/definitions/scan"
},
"order": 3
},
"web": {
"type": "object",
"title": "Web",
"description": "Web",
"order": 4
}
},
"definitions": {
"scan": {
"type": "object",
"title": "scan",
"properties": {
"port": {
"type": "integer",
"title": "Port",
"description": "Port",
"order": 1
},
"protocol": {
"type": "string",
"title": "Protocol",
"description": "Protocol",
"order": 2
}
}
}
}
},
"scan": {
"type": "object",
"title": "scan",
"properties": {
"port": {
"type": "integer",
"title": "Port",
"description": "Port",
"order": 1
},
"protocol": {
"type": "string",
"title": "Protocol",
"description": "Protocol",
"order": 2
}
}
}
}
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
| 23.032258 | 87 | 0.447619 | 576 | 7,140 | 5.456597 | 0.215278 | 0.066815 | 0.090678 | 0.045816 | 0.187082 | 0.169265 | 0.169265 | 0.143812 | 0.112631 | 0.112631 | 0 | 0.010356 | 0.377871 | 7,140 | 309 | 88 | 23.106796 | 0.697208 | 0.005182 | 0 | 0.388514 | 1 | 0 | 0.892832 | 0.007041 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006757 | false | 0 | 0.006757 | 0 | 0.094595 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
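The `raw_data` definition in the schema above points at a nested `scan` definition through `"$ref": "#/definitions/scan"`. As an illustrative aside (not part of the plugin code, and simplified — a real validator such as jsonschema handles `$ref` generically, including JSON Pointer escaping), a local same-document lookup can be sketched with the standard library alone:

```python
def resolve_local_ref(schema: dict, ref: str) -> dict:
    """Resolve a same-document reference like '#/definitions/scan'."""
    if not ref.startswith("#/"):
        raise ValueError("only local refs are supported in this sketch")
    node = schema
    for part in ref[2:].split("/"):
        node = node[part]  # walk one path segment at a time
    return node

# Trimmed-down version of the nested definition from the schema above.
schema = {
    "definitions": {
        "scan": {
            "type": "object",
            "properties": {
                "port": {"type": "integer"},
                "protocol": {"type": "string"},
            },
        }
    }
}

scan = resolve_local_ref(schema, "#/definitions/scan")
print(sorted(scan["properties"]))  # -> ['port', 'protocol']
```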
1b258480c816d560ab00fdd5668199664013a80f | 1,368 | py | Python | demoparser/fields.py | brunoalano/demoparser | 2ca295c2363531e7cd00af197c5ac2042b981bc3 | [
"Apache-2.0"
] | 36 | 2018-03-19T00:14:19.000Z | 2021-12-21T13:11:04.000Z | demoparser/fields.py | brunoalano/demoparser | 2ca295c2363531e7cd00af197c5ac2042b981bc3 | [
"Apache-2.0"
] | 18 | 2018-09-21T15:34:50.000Z | 2021-05-14T20:36:39.000Z | demoparser/fields.py | nolior/demoparser | 208e3669342410168580a7d71a63a865116bb944 | [
"Apache-2.0"
] | 13 | 2018-08-17T21:26:23.000Z | 2021-12-30T01:31:44.000Z | import struct
from suitcase.fields import BaseField
from suitcase.fields import BaseStructField
from suitcase.fields import BaseFixedByteSequence
class SLFloat32(BaseStructField):
"""Signed Little Endian 32-bit float field."""
PACK_FORMAT = UNPACK_FORMAT = b"<f"
def unpack(self, data, **kwargs):
self._value = struct.unpack(self.UNPACK_FORMAT, data)[0]
class UBInt32Sequence(BaseFixedByteSequence):
"""A sequence of unsigned, big-endian 32 bit integers.
:param length: Number of 32-bit integers in sequence.
:type length: Integer
"""
def __init__(self, length, **kwargs):
super().__init__(lambda l: ">" + "I" * l, length, **kwargs)
self.bytes_required = length * 4
class FixedLengthString(BaseField):
"""A string of a fixed number of bytes.
The specified number of bytes are read and then any null
bytes are stripped from the result.
:param length: Number of bytes to read.
:type length: Integer
"""
def __init__(self, length, **kwargs):
super().__init__(**kwargs)
self.length = length
@property
def bytes_required(self):
"""Number of bytes to read from stream."""
return self.length
def pack(self, stream):
stream.write(self._value.strip(b'\0'))
def unpack(self, data):
self._value = data.strip(b'\0')
| 26.307692 | 67 | 0.665936 | 174 | 1,368 | 5.097701 | 0.390805 | 0.045096 | 0.058625 | 0.081172 | 0.153326 | 0.110485 | 0.110485 | 0.110485 | 0.110485 | 0.110485 | 0 | 0.013195 | 0.224415 | 1,368 | 51 | 68 | 26.823529 | 0.822809 | 0.292398 | 0 | 0.086957 | 0 | 0 | 0.008762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.26087 | false | 0 | 0.173913 | 0 | 0.695652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
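`SLFloat32` and `FixedLengthString` in `fields.py` above wrap two small operations behind the suitcase field interface: a little-endian `struct` round trip and null-byte stripping. The same two operations can be sketched standalone, without the suitcase dependency (assumed equivalent behavior):

```python
import struct

# Little-endian 32-bit float round trip, as in SLFloat32 (PACK_FORMAT = b"<f").
# 1.5 is exactly representable in float32, so the round trip is exact.
packed = struct.pack(b"<f", 1.5)
value = struct.unpack(b"<f", packed)[0]
print(value)  # -> 1.5

# Fixed-length string handling, as in FixedLengthString.unpack:
# read a fixed number of bytes, then strip the null padding.
raw = b"demo\x00\x00\x00\x00"
print(raw.strip(b"\x00"))  # -> b'demo'
```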
1b25df27d60e18c7df2f20930d4ebc25de1b9c8e | 2,590 | py | Python | tests/test_executors.py | elGrandeGato/maigret | d7f94076bf7ea865981f6bca63d4990e2082f353 | [
"MIT"
] | 3 | 2022-01-31T18:50:58.000Z | 2022-02-01T13:48:07.000Z | tests/test_executors.py | elGrandeGato/maigret | d7f94076bf7ea865981f6bca63d4990e2082f353 | [
"MIT"
] | null | null | null | tests/test_executors.py | elGrandeGato/maigret | d7f94076bf7ea865981f6bca63d4990e2082f353 | [
"MIT"
] | null | null | null | """Maigret checking logic test functions"""
import pytest
import asyncio
import logging
from maigret.executors import (
AsyncioSimpleExecutor,
AsyncioProgressbarExecutor,
AsyncioProgressbarSemaphoreExecutor,
AsyncioProgressbarQueueExecutor,
)
logger = logging.getLogger(__name__)
async def func(n):
await asyncio.sleep(0.1 * (n % 3))
return n
@pytest.mark.asyncio
async def test_simple_asyncio_executor():
tasks = [(func, [n], {}) for n in range(10)]
executor = AsyncioSimpleExecutor(logger=logger)
assert await executor.run(tasks) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
assert executor.execution_time > 0.2
assert executor.execution_time < 0.3
@pytest.mark.asyncio
async def test_asyncio_progressbar_executor():
tasks = [(func, [n], {}) for n in range(10)]
executor = AsyncioProgressbarExecutor(logger=logger)
# no guarantees for the results order
assert sorted(await executor.run(tasks)) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
assert executor.execution_time > 0.2
assert executor.execution_time < 0.3
@pytest.mark.asyncio
async def test_asyncio_progressbar_semaphore_executor():
tasks = [(func, [n], {}) for n in range(10)]
executor = AsyncioProgressbarSemaphoreExecutor(logger=logger, in_parallel=5)
# no guarantees for the results order
assert sorted(await executor.run(tasks)) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
assert executor.execution_time > 0.2
assert executor.execution_time < 0.4
@pytest.mark.asyncio
async def test_asyncio_progressbar_queue_executor():
tasks = [(func, [n], {}) for n in range(10)]
executor = AsyncioProgressbarQueueExecutor(logger=logger, in_parallel=2)
assert await executor.run(tasks) == [0, 1, 3, 2, 4, 6, 7, 5, 9, 8]
assert executor.execution_time > 0.5
assert executor.execution_time < 0.6
executor = AsyncioProgressbarQueueExecutor(logger=logger, in_parallel=3)
assert await executor.run(tasks) == [0, 3, 1, 4, 6, 2, 7, 9, 5, 8]
assert executor.execution_time > 0.4
assert executor.execution_time < 0.5
executor = AsyncioProgressbarQueueExecutor(logger=logger, in_parallel=5)
assert await executor.run(tasks) in (
[0, 3, 6, 1, 4, 7, 9, 2, 5, 8],
[0, 3, 6, 1, 4, 9, 7, 2, 5, 8],
)
assert executor.execution_time > 0.3
assert executor.execution_time < 0.4
executor = AsyncioProgressbarQueueExecutor(logger=logger, in_parallel=10)
assert await executor.run(tasks) == [0, 3, 6, 9, 1, 4, 7, 2, 5, 8]
assert executor.execution_time > 0.2
assert executor.execution_time < 0.3
| 33.636364 | 80 | 0.689575 | 363 | 2,590 | 4.818182 | 0.157025 | 0.112064 | 0.184105 | 0.216124 | 0.748428 | 0.697541 | 0.493997 | 0.423671 | 0.396798 | 0.376215 | 0 | 0.059495 | 0.188803 | 2,590 | 76 | 81 | 34.078947 | 0.772965 | 0.042471 | 0 | 0.339286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.089286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
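The tests above pin down ordering and timing guarantees of maigret's executors. A minimal sketch of the semaphore-bounded pattern those executors implement (heavily simplified — the real classes also track `execution_time` and drive a progress bar):

```python
import asyncio

async def run_bounded(coros, in_parallel):
    """Run coroutines with at most `in_parallel` active at once."""
    sem = asyncio.Semaphore(in_parallel)

    async def guarded(coro):
        async with sem:
            return await coro

    # gather returns results in submission order regardless of finish order.
    return await asyncio.gather(*(guarded(c) for c in coros))

async def work(n):
    await asyncio.sleep(0)  # yield control, as the real tasks do
    return n

results = asyncio.run(run_bounded([work(n) for n in range(10)], in_parallel=3))
print(results)  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```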
1b31737dea875bf3bbd9f5a47979e6c49651734c | 1,319 | py | Python | lib/python/frugal/tornado/transport/transport.py | ariasheets-wk/frugal | 81d41af7fb573c1f97afea99a1b4dfa6ccae29e8 | [
"Apache-2.0"
] | 144 | 2017-08-17T15:51:58.000Z | 2022-01-14T21:36:55.000Z | lib/python/frugal/tornado/transport/transport.py | ariasheets-wk/frugal | 81d41af7fb573c1f97afea99a1b4dfa6ccae29e8 | [
"Apache-2.0"
] | 930 | 2017-08-17T17:53:30.000Z | 2022-03-28T14:04:49.000Z | lib/python/frugal/tornado/transport/transport.py | ariasheets-wk/frugal | 81d41af7fb573c1f97afea99a1b4dfa6ccae29e8 | [
"Apache-2.0"
] | 77 | 2017-08-17T15:54:31.000Z | 2021-12-25T15:18:34.000Z | # Copyright 2017 Workiva
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tornado import gen
from frugal.transport import FTransport
class FTransportBase(FTransport):
"""
FTransportBase extends FTransport using the coroutine decorators used by
all tornado FTransports.
"""
def is_open(self):
raise NotImplementedError("You must override this.")
@gen.coroutine
def open(self):
raise NotImplementedError("You must override this.")
@gen.coroutine
def close(self):
raise NotImplementedError("You must override this.")
@gen.coroutine
def oneway(self, context, payload):
raise NotImplementedError('You must override this.')
@gen.coroutine
def request(self, context, payload):
raise NotImplementedError('You must override this.')
| 32.975 | 76 | 0.727824 | 171 | 1,319 | 5.608187 | 0.526316 | 0.062565 | 0.140772 | 0.161627 | 0.345151 | 0.345151 | 0.345151 | 0.345151 | 0.345151 | 0.202294 | 0 | 0.007569 | 0.198635 | 1,319 | 39 | 77 | 33.820513 | 0.899716 | 0.487491 | 0 | 0.529412 | 0 | 0 | 0.177743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0 | 0.117647 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
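`FTransportBase` above defines coroutine stubs that every concrete tornado transport must override. The same contract can be sketched dependency-free (tornado's `@gen.coroutine` omitted; `MemoryTransport` is a hypothetical subclass for illustration):

```python
class TransportBase:
    """Each method raises until a concrete transport overrides it."""
    def open(self):
        raise NotImplementedError("You must override this.")

class MemoryTransport(TransportBase):
    def __init__(self):
        self.opened = False

    def open(self):
        self.opened = True

t = MemoryTransport()
t.open()
print(t.opened)  # -> True
```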
1b31c827c670ca662cac87f9eec9b641a8873ebc | 8,124 | py | Python | src/lib/plot_curves.py | giannislelekas/topdown | c247b39dcc1a74f2a459346cf71fe36e45764baf | [
"MIT"
] | 11 | 2020-04-25T12:06:51.000Z | 2021-06-26T10:49:44.000Z | src/lib/plot_curves.py | giannislelekas/topdown | c247b39dcc1a74f2a459346cf71fe36e45764baf | [
"MIT"
] | null | null | null | src/lib/plot_curves.py | giannislelekas/topdown | c247b39dcc1a74f2a459346cf71fe36e45764baf | [
"MIT"
] | 4 | 2020-04-18T15:27:10.000Z | 2021-06-26T10:50:53.000Z | import numpy as np
from matplotlib import pyplot as plt
'''
Function for plotting training and validation curves.
'''
def learning_curves(history, multival=None, model_n=None, filepath=None, plot_from_epoch=0, plot_to_epoch=None):
n = len(history)
num_epochs = len(history[0]['loss'])
if plot_to_epoch is None:
plot_to_epoch = num_epochs
IoU_in_history = 'mIoU' in history[0].keys()
train_loss = np.zeros((n, num_epochs))
train_acc = np.zeros_like(train_loss)
val_loss = np.zeros_like(train_loss)
val_acc = np.zeros_like(train_loss)
if IoU_in_history:
train_mIoU = np.zeros_like(train_loss)
val_mIoU = np.zeros_like(train_loss)
# For the additional validation curves, which are at different scales
if multival is not None:
m = len(multival[0]['val_loss']) // num_epochs
val_loss_scales = np.zeros((n, num_epochs*m))
val_acc_scales = np.zeros_like(val_loss_scales)
for i in range(len(history)):
train_loss[i, :] = history[i]['loss']
train_acc[i, :] = history[i]['acc']
val_loss[i, :] = history[i]['val_loss']
val_acc[i, :] = history[i]['val_acc']
if IoU_in_history:
train_mIoU[i, :] = history[i]['mIoU']
val_mIoU[i, :] = history[i]['val_mIoU']
if multival is not None:
val_loss_scales[i, :] = multival[i]['val_loss']
val_acc_scales[i, :] = multival[i]['val_acc']
# Extracting mean values and standard deviations
mean_train_loss = np.mean(train_loss, 0)
std_train_loss = np.std(train_loss, 0, ddof=1)
mean_train_acc = np.mean(train_acc, 0)
std_train_acc = np.std(train_acc, 0, ddof=1)
mean_val_loss = np.mean(val_loss, 0)
std_val_loss = np.std(val_loss, 0, ddof=1)
mean_val_acc = np.mean(val_acc, 0)
std_val_acc = np.std(val_acc, 0, ddof=1)
if IoU_in_history:
mean_train_mIoU = np.mean(train_mIoU, 0)
std_train_mIoU = np.std(train_mIoU, 0, ddof=1)
mean_val_mIoU = np.mean(val_mIoU, 0)
std_val_mIoU = np.std(val_mIoU, 0, ddof=1)
# Third dimension corresponds to scales
if multival is not None:
val_loss_scales = np.reshape(val_loss_scales, [n, num_epochs, m])
val_acc_scales = np.reshape(val_acc_scales, [n, num_epochs, m])
mean_val_loss_scales, mean_val_acc_scales = np.mean(val_loss_scales, axis=0), np.mean(val_acc_scales, axis=0)
std_val_loss_scales, std_val_acc_scales = np.std(val_loss_scales, axis=0, ddof=1), np.std(val_acc_scales, axis=0, ddof=1)
# if n == 0:
# std_train_loss = 0
# std_train_acc = 0
# std_val_loss = 0
# std_val_acc = 0
# std_val_loss_scales = 0
# std_val_acc_scales = 0
# else:
# std_train_loss = np.std(train_loss, 0, ddof=1)
# std_train_acc = np.std(train_acc, 0, ddof=1)
# std_val_loss = np.std(val_loss, 0, ddof=1)
# std_val_acc = np.std(val_acc, 0, ddof=1)
# std_val_loss_scales = np.std(val_loss_scales, axis=0, ddof=1)
# std_val_acc_scales = np.std(val_acc_scales, axis=0, ddof=1)
if filepath is not None:
plt.ioff()
# Plotting mean Loss curves with stds
plt.figure(figsize=(10, 10))
plt.title(model_n + '_loss')
plt.plot(range(plot_to_epoch - plot_from_epoch), mean_train_loss[plot_from_epoch:plot_to_epoch], color="g", label='training')
plt.fill_between(np.arange(plot_to_epoch - plot_from_epoch), mean_train_loss[plot_from_epoch:plot_to_epoch] - std_train_loss[plot_from_epoch:plot_to_epoch],
mean_train_loss[plot_from_epoch:plot_to_epoch] + std_train_loss[plot_from_epoch:plot_to_epoch], alpha=0.2, color="g")
plt.plot(range(plot_to_epoch - plot_from_epoch), mean_val_loss[plot_from_epoch:plot_to_epoch], color='r', label='validation')
plt.fill_between(range(plot_to_epoch - plot_from_epoch), mean_val_loss[plot_from_epoch:plot_to_epoch] - std_val_loss[plot_from_epoch:plot_to_epoch],
mean_val_loss[plot_from_epoch:plot_to_epoch] + std_val_loss[plot_from_epoch:plot_to_epoch], alpha=0.2, color='r')
if multival is not None:
for i in range(m):
plt.plot(range(plot_to_epoch - plot_from_epoch), mean_val_loss_scales[plot_from_epoch:plot_to_epoch, i], label=f'validation_{i+1}')
plt.fill_between(range(plot_to_epoch - plot_from_epoch), mean_val_loss_scales[plot_from_epoch:plot_to_epoch, i] - std_val_loss_scales[plot_from_epoch:plot_to_epoch, i],
mean_val_loss_scales[plot_from_epoch:plot_to_epoch, i] + std_val_loss_scales[plot_from_epoch:plot_to_epoch, i], alpha=0.2, )
plt.xlabel("Epoch number")
plt.ylabel("Loss")
plt.legend()
if filepath is not None:
plt.savefig(filepath + model_n + '_loss' + '.png')
# Plotting mean Accuracy curves with stds
plt.figure(figsize=(10, 10))
plt.title(model_n + '_acc')
plt.ylim(0, 1.05)
plt.plot(range(plot_to_epoch - plot_from_epoch), mean_train_acc[plot_from_epoch:plot_to_epoch], color="g", label='training')
plt.fill_between(np.arange(plot_to_epoch - plot_from_epoch), np.maximum(0, mean_train_acc[plot_from_epoch:plot_to_epoch] - std_train_acc[plot_from_epoch:plot_to_epoch]),
np.minimum(1, mean_train_acc[plot_from_epoch:plot_to_epoch] + std_train_acc[plot_from_epoch:plot_to_epoch]), alpha=0.2, color="g")
plt.plot(range(plot_to_epoch - plot_from_epoch), mean_val_acc[plot_from_epoch:plot_to_epoch], color='r', label='validation')
plt.fill_between(range(plot_to_epoch - plot_from_epoch), np.maximum(0, mean_val_acc[plot_from_epoch:plot_to_epoch] - std_val_acc[plot_from_epoch:plot_to_epoch]),
np.minimum(1, mean_val_acc[plot_from_epoch:plot_to_epoch] + std_val_acc[plot_from_epoch:plot_to_epoch]), alpha=0.2, color='r')
if multival is not None:
for i in range(m):
plt.plot(range(plot_to_epoch - plot_from_epoch), mean_val_acc_scales[plot_from_epoch:plot_to_epoch, i], label=f'validation_{i+1}')
plt.fill_between(range(plot_to_epoch - plot_from_epoch), np.maximum(0, mean_val_acc_scales[plot_from_epoch:plot_to_epoch, i] - std_val_acc_scales[plot_from_epoch:plot_to_epoch, i]),
np.minimum(1, mean_val_acc_scales[plot_from_epoch:plot_to_epoch, i] + std_val_acc_scales[plot_from_epoch:plot_to_epoch, i]), alpha=0.2)
plt.xlabel("Epoch number")
plt.ylabel("Accuracy")
plt.legend()
if filepath is not None:
plt.savefig(filepath + model_n + '_acc' + '.png')
if IoU_in_history:
plt.figure(figsize=(10, 10))
plt.title(model_n + '_mIoU')
plt.ylim(0, 1.05)
plt.plot(range(plot_to_epoch - plot_from_epoch), mean_train_mIoU[plot_from_epoch:plot_to_epoch], color="g",
label='training')
plt.fill_between(np.arange(plot_to_epoch - plot_from_epoch), np.maximum(0, mean_train_mIoU[plot_from_epoch:plot_to_epoch] - std_train_mIoU[plot_from_epoch:plot_to_epoch]),
np.minimum(1, mean_train_mIoU[plot_from_epoch:plot_to_epoch] + std_train_mIoU[plot_from_epoch:plot_to_epoch]),
alpha=0.2, color="g")
plt.plot(range(plot_to_epoch - plot_from_epoch), mean_val_mIoU[plot_from_epoch:plot_to_epoch], color='r',
label='validation')
plt.fill_between(range(plot_to_epoch - plot_from_epoch), np.maximum(0, mean_val_mIoU[plot_from_epoch:plot_to_epoch] - std_val_mIoU[plot_from_epoch:plot_to_epoch]),
np.minimum(1, mean_val_mIoU[plot_from_epoch:plot_to_epoch] + std_val_mIoU[plot_from_epoch:plot_to_epoch]),
alpha=0.2, color='r')
plt.xlabel("Epoch number")
plt.ylabel("mIoU")
plt.legend()
if filepath is not None:
if IoU_in_history:
plt.savefig(filepath + model_n + '_mIoU' + '.png')
else:
plt.show()
plt.close('all')
return mean_train_loss, std_train_loss, mean_train_acc, std_train_acc, mean_val_loss, std_val_loss, mean_val_acc, std_val_acc | 48.646707 | 193 | 0.683038 | 1,333 | 8,124 | 3.772693 | 0.074269 | 0.070392 | 0.129052 | 0.135216 | 0.782263 | 0.718433 | 0.667727 | 0.655399 | 0.63273 | 0.623782 | 0 | 0.014597 | 0.198917 | 8,124 | 167 | 194 | 48.646707 | 0.758144 | 0.087149 | 0 | 0.243243 | 0 | 0 | 0.035169 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009009 | false | 0 | 0.018018 | 0 | 0.036036 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
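`learning_curves` above aggregates per-run histories with `np.mean` and `np.std(..., ddof=1)` — the sample standard deviation, which is why the commented-out guard exists: with a single run the sample std is undefined. The same per-epoch aggregation can be sketched with the standard library (hypothetical data; `statistics.stdev` matches `ddof=1`):

```python
import statistics

# Hypothetical validation losses for 3 runs x 4 epochs.
runs = [
    [0.9, 0.6, 0.4, 0.3],
    [1.0, 0.7, 0.5, 0.35],
    [0.8, 0.55, 0.45, 0.25],
]

# Transpose to per-epoch columns, then take mean and sample std per epoch.
per_epoch = list(zip(*runs))
mean_curve = [statistics.mean(e) for e in per_epoch]
std_curve = [statistics.stdev(e) for e in per_epoch]  # needs >= 2 runs

print([round(m, 3) for m in mean_curve])  # -> [0.9, 0.617, 0.45, 0.3]
```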
1b3e30ff86f2b3da11ba310c21325419d772ccde | 588 | py | Python | venv/lib/python3.6/site-packages/ansible/galaxy/dependency_resolution/resolvers.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | 1 | 2020-01-22T13:11:23.000Z | 2020-01-22T13:11:23.000Z | venv/lib/python3.6/site-packages/ansible/galaxy/dependency_resolution/resolvers.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | 12 | 2020-02-21T07:24:52.000Z | 2020-04-14T09:54:32.000Z | venv/lib/python3.6/site-packages/ansible/galaxy/dependency_resolution/resolvers.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Requirement resolver implementations."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from resolvelib import Resolver
class CollectionDependencyResolver(Resolver):
"""A dependency resolver for Ansible Collections.
This is a proxy class allowing us to abstract away importing resolvelib
outside of the `ansible.galaxy.dependency_resolution` Python package.
"""
| 32.666667 | 92 | 0.758503 | 74 | 588 | 5.878378 | 0.810811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025845 | 0.144558 | 588 | 17 | 93 | 34.588235 | 0.838966 | 0.651361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.75 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
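The docstring above describes a proxy pattern: subclass the third-party `Resolver` inside `ansible.galaxy.dependency_resolution` so the rest of the codebase never imports `resolvelib` directly. A minimal, dependency-free sketch of that indirection (all names here are hypothetical stand-ins):

```python
# Stand-in for the third-party class (resolvelib.Resolver in the real code).
class ThirdPartyResolver:
    def resolve(self, requirements):
        return list(requirements)

# Local proxy: callers import only this name, so the vendor import
# lives in exactly one module and can be swapped out centrally.
class CollectionDependencyResolver(ThirdPartyResolver):
    """Proxy abstracting away the third-party import."""

resolver = CollectionDependencyResolver()
print(resolver.resolve(["community.general"]))  # -> ['community.general']
```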