hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a777279e25e5f54d2f1baa3f5a903f9db98549ea | 305 | py | Python | scripts/find_unmatched.py | dyelax/selfie2bitmoji | 1aa292800827a250494f43246b65812cff96b921 | [
"MIT"
] | 3 | 2017-12-21T05:15:12.000Z | 2018-01-26T16:36:06.000Z | scripts/find_unmatched.py | dyelax/selfie2bitmoji | 1aa292800827a250494f43246b65812cff96b921 | [
"MIT"
] | 1 | 2018-11-30T12:02:01.000Z | 2018-12-04T12:01:19.000Z | scripts/find_unmatched.py | dyelax/selfie2bitmoji | 1aa292800827a250494f43246b65812cff96b921 | [
"MIT"
] | 2 | 2018-09-21T16:52:29.000Z | 2019-12-02T08:15:46.000Z | from glob import glob
import os
png_paths = glob('../data/bitmoji/*/*.png')
npy_paths = glob('../data/bitmoji/*/*.npy')
png_set = set([os.path.splitext(p)[0] for p in png_paths])
npy_set = set([os.path.splitext(p)[0] for p in npy_paths])
sym_diff = png_set ^ npy_set
print(len(sym_diff))
print(sym_diff) | 23.461538 | 58 | 0.698361 | 57 | 305 | 3.54386 | 0.350877 | 0.10396 | 0.128713 | 0.19802 | 0.277228 | 0.277228 | 0.277228 | 0.277228 | 0.277228 | 0.277228 | 0 | 0.007491 | 0.12459 | 305 | 13 | 59 | 23.461538 | 0.749064 | 0 | 0 | 0 | 0 | 0 | 0.150327 | 0.150327 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.222222 | null | null | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
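The `find_unmatched.py` row above hinges on set symmetric difference (`^`) over extension-stripped paths; a minimal self-contained illustration of that operation (the file names here are invented for the example):

```python
import os

# Stems on each side, mimicking os.path.splitext over glob results
png_stems = {os.path.splitext(p)[0] for p in ["a/1.png", "a/2.png", "a/3.png"]}
npy_stems = {os.path.splitext(p)[0] for p in ["a/2.npy", "a/3.npy", "a/4.npy"]}

# Symmetric difference: stems present in exactly one of the two sets
unmatched = png_stems ^ npy_stems
print(len(unmatched))     # 2
print(sorted(unmatched))  # ['a/1', 'a/4']
```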
a77e1243c0644045786db43dbd281e62079dfe76 | 1,166 | py | Python | client.py | RYAN2540/PasswordLocker | 89042d52e09b5cda4313b4511aa6899e020f0a37 | [
"MIT"
] | null | null | null | client.py | RYAN2540/PasswordLocker | 89042d52e09b5cda4313b4511aa6899e020f0a37 | [
"MIT"
] | null | null | null | client.py | RYAN2540/PasswordLocker | 89042d52e09b5cda4313b4511aa6899e020f0a37 | [
"MIT"
] | null | null | null | import credentials
class Client:
    """
    Class that generates new instances of account clients.
    """
    users_list = []

    def __init__(self, username, password):
        '''
        Initialization method for a new client.
        '''
        self.username = username
        self.password = password
        self.credential = credentials.Credentials()

    @classmethod
    def add_user(cls, new_user):
        '''
        Method to add user to users_list.
        '''
        cls.users_list.append(new_user)

    @classmethod
    def check_login(cls, login_name, login_pass):
        '''
        Method to check if user login details exist in users_list.
        '''
        for user in cls.users_list:
            if user.username == login_name:
                if user.password == login_pass:
                    return True
        return False

    @classmethod
    def return_user(cls, username, password):
        '''
        Method to return user object upon successful login.
        '''
        for user in cls.users_list:
            if user.username == username:
                if user.password == password:
                    return user | 27.116279 | 66 | 0.566895 | 126 | 1,166 | 5.095238 | 0.34127 | 0.084112 | 0.056075 | 0.037383 | 0.109034 | 0.109034 | 0.109034 | 0.109034 | 0.109034 | 0 | 0 | 0 | 0.352487 | 1,166 | 43 | 67 | 27.116279 | 0.850331 | 0.204974 | 0 | 0.217391 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0.26087 | 0.043478 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
a780ecf3b2163c60f6ef74aee73017077232925d | 464 | py | Python | source/funwithflags/gateways/db_gateway_abc.py | zywangzy/fun_with_flags | ab2fccfc8a56513d842aa7fda81f3f4224f89be3 | [
"MIT"
] | null | null | null | source/funwithflags/gateways/db_gateway_abc.py | zywangzy/fun_with_flags | ab2fccfc8a56513d842aa7fda81f3f4224f89be3 | [
"MIT"
] | 26 | 2019-08-24T05:54:37.000Z | 2020-04-06T01:23:29.000Z | source/funwithflags/gateways/db_gateway_abc.py | andrewwangzy/fun_with_flags | ab2fccfc8a56513d842aa7fda81f3f4224f89be3 | [
"MIT"
] | null | null | null | """Module for the DbGateway abstract base class."""
from abc import ABC, abstractmethod
from typing import Any
class DbGateway(ABC):
    """Abstract base class for DbGateway defining the interfaces to interact with
    a database, including read / write / update / delete database table entries.
    """

    @abstractmethod
    def query(self, command: str) -> Any:
        """Given a `command` string, do the query and return result of type `Any`.
        """
| 30.933333 | 82 | 0.6875 | 60 | 464 | 5.316667 | 0.666667 | 0.075235 | 0.106583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.221983 | 464 | 14 | 83 | 33.142857 | 0.883657 | 0.599138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
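The `DbGateway` class above relies on Python's `abc` machinery: a subclass must override the `@abstractmethod` `query` before it can be instantiated. A minimal self-contained sketch of that mechanism (the `Gateway`/`EchoGateway` names are invented for illustration):

```python
from abc import ABC, abstractmethod
from typing import Any

class Gateway(ABC):
    @abstractmethod
    def query(self, command: str) -> Any:
        """Run `command` and return the result."""

class EchoGateway(Gateway):
    def query(self, command: str) -> Any:
        # Trivial concrete implementation: echo the command back
        return f"ran: {command}"

# Instantiating the ABC itself raises TypeError; the subclass works
try:
    Gateway()  # type: ignore[abstract]
except TypeError:
    pass
print(EchoGateway().query("SELECT 1"))  # ran: SELECT 1
```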
a78832743706043814342f3e921deb0694756419 | 515 | py | Python | backend/accounts/management/commands/load.py | aibek79/Django-React-knboard | 074f4b1388a440290f9ae4a88c71fef749775932 | [
"MIT"
] | 665 | 2020-05-22T16:13:59.000Z | 2022-03-30T01:22:51.000Z | backend/accounts/management/commands/load.py | aibek79/Django-React-knboard | 074f4b1388a440290f9ae4a88c71fef749775932 | [
"MIT"
] | 70 | 2020-05-02T12:09:43.000Z | 2022-02-27T12:54:48.000Z | backend/accounts/management/commands/load.py | aibek79/Django-React-knboard | 074f4b1388a440290f9ae4a88c71fef749775932 | [
"MIT"
] | 161 | 2020-05-25T21:10:09.000Z | 2022-03-08T15:39:24.000Z | from django.conf import settings
from django.core import management
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
    help = "Helpful command to load all fixtures"

    def handle(self, *args, **options):
        if not settings.DEBUG:
            raise CommandError("Command not meant for production")
        management.call_command("loaddata", "users")
        management.call_command("loaddata", "tasks")
        management.call_command("loaddata", "avatars")
| 32.1875 | 66 | 0.712621 | 59 | 515 | 6.169492 | 0.59322 | 0.082418 | 0.173077 | 0.239011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190291 | 515 | 15 | 67 | 34.333333 | 0.872902 | 0 | 0 | 0 | 0 | 0 | 0.21165 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a78eb9e984b6a257768e25354d113445fc76d59e | 180 | py | Python | code_all/day15/my_project01/skill_system/skill_manager.py | testcg/python | 4db4bd5d0e44af807d2df80cf8c8980b40cc03c4 | [
"MIT"
] | null | null | null | code_all/day15/my_project01/skill_system/skill_manager.py | testcg/python | 4db4bd5d0e44af807d2df80cf8c8980b40cc03c4 | [
"MIT"
] | null | null | null | code_all/day15/my_project01/skill_system/skill_manager.py | testcg/python | 4db4bd5d0e44af807d2df80cf8c8980b40cc03c4 | [
"MIT"
] | null | null | null | from skill_system.skill_deplayer import SkillDeployer
class SkillManager:
    def func01(self):
        print("SkillManager-func01")
        deplayer = SkillDeployer()
        deplayer.func02()
| 20 | 53 | 0.761111 | 19 | 180 | 7.105263 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 0.15 | 180 | 8 | 54 | 22.5 | 0.843137 | 0 | 0 | 0 | 0 | 0 | 0.105556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a79753506b2643f925d9428c18fd0347d2813683 | 2,996 | py | Python | 18022022-1.py | BilalsGituation/FunctionsModulesIntro | 2024e562112798b195fac3122251e02554129d9f | [
"MIT"
] | null | null | null | 18022022-1.py | BilalsGituation/FunctionsModulesIntro | 2024e562112798b195fac3122251e02554129d9f | [
"MIT"
] | null | null | null | 18022022-1.py | BilalsGituation/FunctionsModulesIntro | 2024e562112798b195fac3122251e02554129d9f | [
"MIT"
] | null | null | null | """
Pseudocode: exercises 23, 24, 25
Post-solution REVIEW: Too much detail, to be honest
especially for someone who understands the fundamentals
Exercise 23 – Your first loops
Generate a list that contains at least 20 random integers. Write one loop that sums up all entries of the list. Write another loop that sums up only the entries of the list with even indices. Write a third loop that generates a new list that contains the values of the list in reverse order. Print out all three results
# Generate number list
import the random package
randomlist = empty list
for index in range of 0-25
number1, between 0 and 300, is generated
number1 is appended to randomlist
Loop is exited once whole list is read
# Add numbers in list to sum, by changing its value on iterations
set sum = 0
for index in randomlist
sum += index
print(sum)
Take user input (integer)
if input in list, just inform user of that
for each element:
If input bigger than list element, inform them of that then parse next element
else if input smaller than list element, inform them of that then parse next element
Exercise 24 – 3 while loops
Write a program that uses while loops to finish the tasks below:
- Searching for a specific number (e.g. 5) in an integer list of unknown length
- Multiplying all elements of an integer list of unknown length
- Printing out the contents of a string list of unknown length elementwise
# Generate list of unknown length with random elements
define empty list p
for index in range between 1 and random integer between min1 and max1
append random number between min2 and max2 to p
print p
# 1st part - search for number
get user input (number to scan for)
start at 0th position
set NumFound to False
while reading position is lower than length of list:
if reading position = user input
print confirmation for user
NumFound becomes True
reading position += 1
if NumFound remains False, inform user
# 2nd part - multiply list entries together
call list using code from earlier (recommended not to have more than 7 elements and random numbers only between 1-10)
set first parsing index (FPI) to 0
while FPI is less than list length:
set second parsing index (SPI) to 0
while SPI is less than list length
print(f'current entry {SPI} times last entry {FPI} = {SPI*FPI}')
SPI +=1 # move reading frame for reading position
FPI +=1 # move reading frame for reading position -1
# 3rd part - print contents of unknown list by element
call list using code from earlier
(STOPPED TO FOLLOW CHRISTOPH'S SOLUTION IN LESSON. It's much nicer to read,
of course, so you can take this as an example of too much detail)
"""
# after that, moved onto definition of function (see pdf)
"""Create a program that defines functions for the four mathematical basic
operations (+, -, * and /). Call the functions at least three times
with different parameters."""
| 37.924051 | 319 | 0.744326 | 493 | 2,996 | 4.527383 | 0.423935 | 0.020161 | 0.023297 | 0.03405 | 0.145161 | 0.12724 | 0.077061 | 0.045699 | 0.045699 | 0.045699 | 0 | 0.019206 | 0.217957 | 2,996 | 78 | 320 | 38.410256 | 0.932565 | 0.937917 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
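The Exercise 23 pseudocode in the row above (random list, three loops) can be sketched in Python roughly as follows; the seed value 42 is arbitrary, used only so repeated runs produce the same list:

```python
import random

random.seed(42)  # fixed seed: reproducible output only, not part of the exercise
randomlist = [random.randint(0, 300) for _ in range(20)]

total = 0
for n in randomlist:                    # loop 1: sum all entries
    total += n

even_sum = 0
for i in range(0, len(randomlist), 2):  # loop 2: entries at even indices only
    even_sum += randomlist[i]

reversed_list = []
for n in randomlist:                    # loop 3: build the reversed list
    reversed_list.insert(0, n)

print(total, even_sum, reversed_list == randomlist[::-1])
```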
a798e36cfa4a47b0ac44a3262751231e9d2fae48 | 298 | py | Python | iaso/migrations/0003_auto_20190617_1347.py | ekhalilbsq/iaso | e6400c52aeb4f67ce1ca83b03efa3cb11ef235ee | [
"MIT"
] | 29 | 2020-12-26T07:22:19.000Z | 2022-03-07T13:40:09.000Z | iaso/migrations/0003_auto_20190617_1347.py | ekhalilbsq/iaso | e6400c52aeb4f67ce1ca83b03efa3cb11ef235ee | [
"MIT"
] | 150 | 2020-11-09T15:03:27.000Z | 2022-03-07T15:36:07.000Z | iaso/migrations/0003_auto_20190617_1347.py | ekhalilbsq/iaso | e6400c52aeb4f67ce1ca83b03efa3cb11ef235ee | [
"MIT"
] | 4 | 2020-11-09T10:38:13.000Z | 2021-10-04T09:42:47.000Z | # Generated by Django 2.0 on 2019-06-17 13:47
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [("iaso", "0002_auto_20190617_1232")]
operations = [migrations.RenameField(model_name="orglevel", old_name="org_unit_type", new_name="org_unit_types")]
| 27.090909 | 117 | 0.755034 | 42 | 298 | 5.119048 | 0.809524 | 0.065116 | 0.102326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114504 | 0.120805 | 298 | 10 | 118 | 29.8 | 0.706107 | 0.144295 | 0 | 0 | 1 | 0 | 0.245059 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a79a5f8fa568547dbac91fc5a805bb411b87030b | 1,152 | py | Python | climpred/tests/__init__.py | aaronspring/climpred | 5e01fdbe893507cd683dd97e0229dffabebbda43 | [
"MIT"
] | null | null | null | climpred/tests/__init__.py | aaronspring/climpred | 5e01fdbe893507cd683dd97e0229dffabebbda43 | [
"MIT"
] | 11 | 2021-06-01T13:50:50.000Z | 2022-03-07T23:31:47.000Z | climpred/tests/__init__.py | aaronspring/climpred | 5e01fdbe893507cd683dd97e0229dffabebbda43 | [
"MIT"
] | null | null | null | import importlib
from distutils import version
import pytest
def _importorskip(modname, minversion=None):
    try:
        mod = importlib.import_module(modname)
        has = True
        if minversion is not None:
            if LooseVersion(mod.__version__) < LooseVersion(minversion):
                raise ImportError("Minimum version not satisfied")
    except ImportError:
        has = False
    func = pytest.mark.skipif(not has, reason=f"requires {modname}")
    return has, func


def LooseVersion(vstring):
    # Our development version is something like '0.10.9+aac7bfc'
    # This function just ignores the git commit id.
    vstring = vstring.split("+")[0]
    return version.LooseVersion(vstring)
has_matplotlib, requires_matplotlib = _importorskip("matplotlib")
has_nc_time_axis, requires_nc_time_axis = _importorskip("nc_time_axis", "1.4.0")
has_xclim, requires_xclim = _importorskip("xclim", "0.31")
has_bias_correction, requires_bias_correction = _importorskip("bias_correction")
has_xesmf, requires_xesmf = _importorskip("xesmf")
has_xrft, requires_xrft = _importorskip("xrft")
has_eofs, requires_eofs = _importorskip("eofs")
| 33.882353 | 80 | 0.730903 | 140 | 1,152 | 5.757143 | 0.457143 | 0.022333 | 0.037221 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012592 | 0.172743 | 1,152 | 33 | 81 | 34.909091 | 0.833158 | 0.090278 | 0 | 0 | 0 | 0 | 0.107177 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.583333 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
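The `requires_*` objects produced above are pytest `skipif` markers meant to decorate tests. A stripped-down variant of the same detection logic, without the pytest dependency, can be sketched like this (the `importorskip` helper and the fake module name `no_such_module_xyz` are invented for the example):

```python
import importlib

def importorskip(modname):
    # Same detection logic as _importorskip, but returns (has, reason)
    # instead of wrapping the result in a pytest.mark.skipif marker.
    try:
        importlib.import_module(modname)
        return True, None
    except ImportError:
        return False, f"requires {modname}"

has_math, _ = importorskip("math")                 # stdlib: always present
has_fake, reason = importorskip("no_such_module_xyz")
print(has_math, has_fake, reason)  # True False requires no_such_module_xyz
```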
a7a28ad36fb15211f320cbcb46863f110a147ebf | 25,701 | py | Python | test/test_examples.py | VDBWRAIR/pyjip | dc147afebbabd550828fa51cc052db4aa07c5d3b | [
"BSD-3-Clause"
] | 18 | 2015-05-08T06:39:09.000Z | 2020-11-30T10:51:36.000Z | test/test_examples.py | VDBWRAIR/pyjip | dc147afebbabd550828fa51cc052db4aa07c5d3b | [
"BSD-3-Clause"
] | 9 | 2015-01-02T09:55:53.000Z | 2016-02-03T18:31:10.000Z | test/test_examples.py | castlabs/pyjip | 947f615d591a940438316e6d21291f730bfcda66 | [
"BSD-3-Clause"
] | 5 | 2016-02-01T16:52:36.000Z | 2021-03-10T12:08:39.000Z | #!/usr/bin/env python
"""Test some of the examples pipelines and tools"""
import os
import jip
import unittest
import jip.tools
class BWAPipelineTest(unittest.TestCase):
    def testPipelineStructure(self):
        # load the pipeline
        tool = jip.find("examples/bwa/pileup.jip")
        assert tool is not None
        # create a pipeline
        p = jip.Pipeline()
        # create a new pipeline node and configure it
        p.run(tool, input="setup.py", reference="Makefile", output="out.txt")
        # expand the pipeline such that the internal pipeline is resolved
        p.expand(validate=False)
        # after expansion with this setup, the pipeline should have 7 nodes
        assert len(p) == 7
        # the graph should consist of 6 edges
        assert len(p.edges) == 6
        # get out the nodes. we have to use indexes here
        # because the names might have changed after expansion
        ref = p.get("ref")
        align = p.get("align")
        sam = p.get("sam")
        bam = p.get("bam")
        dups = p.get("dups")
        index = p.get("index")
        pileup = p.get("pileup")
        # check the connections
        assert not ref.has_incoming()
        assert align.has_incoming(ref)
        assert sam.has_incoming(align)
        assert bam.has_incoming(sam)
        assert dups.has_incoming(bam)
        assert index.has_incoming(dups)
        assert pileup.has_incoming(index)
        assert not pileup.has_outgoing()

    def testPipelineStructureMultiplexed(self):
        # load the pipeline
        tool = jip.find("examples/bwa/pileup.jip")
        assert tool is not None
        # create a pipeline
        p = jip.Pipeline()
        # create a new pipeline node and configure it
        p.run(tool, input=["setup.py", "README.rst"],
              reference="Makefile",
              output="${input|ext}_out.txt")
        # expand the pipeline such that the internal pipeline is resolved
        # this will also validate all nodes and raise an exception
        # if one of the node validations failed
        p.expand(validate=False)
        # after expansion with this setup, the pipeline should have 13 nodes
        assert len(p) == 13
        # the graph should consist of 12 edges
        assert len(p.edges) == 12
        # get out the nodes. we have to use indexes here
        # because the names might have changed after expansion
        ref = p.get("ref")
        align = p.get("align.0")
        sam = p.get("sam.0")
        bam = p.get("bam.0")
        dups = p.get("dups.0")
        index = p.get("index.0")
        pileup = p.get("pileup.0")
        # check the connections
        assert not ref.has_incoming()
        assert align.has_incoming(ref)
        assert sam.has_incoming(align)
        assert bam.has_incoming(sam)
        assert dups.has_incoming(bam)
        assert index.has_incoming(dups)
        assert pileup.has_incoming(index)
        assert not pileup.has_outgoing()
        # test second set
        ref = p.get("ref")
        align = p.get("align.1")
        sam = p.get("sam.1")
        bam = p.get("bam.1")
        dups = p.get("dups.1")
        index = p.get("index.1")
        pileup = p.get("pileup.1")
        # check the connections
        assert not ref.has_incoming()
        assert align.has_incoming(ref)
        assert sam.has_incoming(align)
        assert bam.has_incoming(sam)
        assert dups.has_incoming(bam)
        assert index.has_incoming(dups)
        assert pileup.has_incoming(index)
        assert not pileup.has_outgoing()
@jip.tool('gem_index')
class GemIndex(object):
    """
    The GEM Indexer tool

    Usage:
        gem_index -i <genome> [-o <genome_index>] [-t <threads>] [--no-hash]

    Options:
        --help                         Show this help message
        -o, --output-dir <output_dir>  The folder where the output GEM
                                       index is created
        -t, --threads <threads>        The number of execution threads
                                       [default: 1]
        --no-hash                      Do not produce the hash file
                                       [default: false]

    Inputs:
        -i, --input <genome>           The fasta file for the genome
    """
    def init(self):
        self.add_output('output', "${input|name|ext}.gem")

    def setup(self):
        out = "${input|name|ext}.gem"
        if self.options['output_dir']:
            out = "${output_dir}/" + out
        self.options['output'].set(out)

    def get_command(self):
        return "gemtools index -i ${input} -o ${output} -t ${threads} "\
               "${no_hash|arg}"
@jip.tool('gem_t_index')
class GemTranscriptomeIndex(object):
    """
    The GEM Transcriptome Indexer tool

    Usage:
        gem_t_index -i <genome_index> -a <annotation> [-m <max_read_length>]
                    [-o <output_folder>] [-p <output_prefix>] [-t <threads>]

    Options:
        --help                              Show this help message
        -o, --output-dir <output_dir>       The folder where the output files
                                            are created
                                            [default: ${annotation|parent}]
        -p, --prefix <output_prefix>        The name to be used for the output
                                            files [default: ${annotation|name}]
        -t, --threads <threads>             The number of execution threads
                                            [default: 1]
        -m, --max-length <max_read_length>  Maximum read length [default: 150]

    Inputs:
        -i, --index <genome_index>          The GEM index file for the genome
        -a, --annotation <annotation>       The reference annotation in GTF
                                            format
    """
    def init(self):
        # if not self.options['output_dir']:
        #     self.options['output_dir'] = "."
        # if not self.options['prefix']:
        #     self.options['prefix'] = "${annotation}"
        self.add_output('gem', "${output_dir}/${prefix}.junctions.gem")
        self.add_output('keys', "${output_dir}/${prefix}.junctions.keys")

    def get_command(self):
        return 'bash', 'gemtools t-index -i ${index} -a ${annotation} ' \
                       '-o ${output_dir|abs}/${prefix} -t ${threads} ' \
                       '-m ${max_length}'
def test_gemtools_index_command_rendering_for_options():
    p = jip.Pipeline()
    p.run('gem_index', input="Makefile", output_dir='test')
    p.expand(validate=True)
    node = p.get('gem_index')
    print(node._tool.options)
    job = jip.create_jobs(p)[0]
    infile = os.path.abspath("Makefile")
    base = os.path.dirname(infile)
    print(">>>", job.command)
    assert job.command == 'gemtools index -i %s -o %s.gem -t 1 ' % (
        infile, os.path.join(base, "test/Makefile"))
    outfiles = list(job.get_output_files())
    assert len(outfiles) == 1
    assert outfiles[0] == os.path.join(base, "test/Makefile.gem")
def test_gemtools_t_index_inputs():
    p = jip.Pipeline()
    p.run('gem_t_index', index="Makefile", annotation='setup.py',
          output_dir='test')
    p.expand(validate=False)
    job = jip.create_jobs(p)[0]
    infile = os.path.abspath("Makefile")
    annotation = os.path.abspath("setup.py")
    base = os.path.dirname(infile)
    print(">>>", job.command)
    assert job.command == 'gemtools t-index -i %s -a %s -o %s -t 1 -m 150' % (
        infile, annotation, os.path.join(base, "test/setup.py"))
    outfiles = list(job.get_output_files())
    assert len(outfiles) == 2
    assert outfiles[0] == os.path.join(base, "test/setup.py.junctions.gem")
    assert outfiles[1] == os.path.join(base, "test/setup.py.junctions.keys")
@jip.tool('grape_gem_rnatool')
class gem(object):
    """
    The GEMTools RNAseq Mapping Pipeline

    Usage:
        gem -f <fastq_file>... -i <genome_index> -a <annotation> -q <quality>
            [-n <name>] [-o <output_dir>] [-t <threads>]

    Options:
        --help                         Show this help message
        -q, --quality <quality>        The fastq offset quality
        -n, --name <name>              The output prefix name
                                       [default: ${fastq.raw()[0]|name|ext|ext|re("_[12]","")}]
        -o, --output-dir <output_dir>  The output folder
        -t, --threads <threads>        The number of execution threads [default: 1]

    Inputs:
        -f, --fastq <fastq_file>...    The input fastq
        -i, --index <genome_index>     The GEM index file for the genome
        -a, --annotation <annotation>  The reference annotation in GTF format
    """
    def init(self):
        self.add_output('map', "${output_dir}/${name}.map.gz")
        self.add_output('bam', "${output_dir}/${name}.bam")
        self.add_output('bai', "${output_dir}/${name}.bam.bai")
        self.add_option('single_end', False, long="--single-end",
                        hidden=False)

    def setup(self):
        if len(self.fastq) == 1:
            self.options['single_end'].set(True)

    def get_command(self):
        return 'bash', 'gemtools rna-pipeline ${options()}'
@jip.tool('grape_flux')
class flux(object):
    """
    The Flux Capacitor

    Usage:
        flux -i <input> -a <annotation> [-o <output_dir>]

    Options:
        --help                         Show this help message
        -o, --output-dir <output_dir>  The output folder

    Inputs:
        -i, --input <input>            The input file with mappings
        -a, --annotation <annotation>  The reference annotation in GTF format
    """
    def init(self):
        self.add_option('name', "${input|name|ext}")
        self.add_output('gtf', "${output_dir}/${name}.gtf")

    def get_command(self):
        return 'bash', 'flux-capacitor ${options()}'
@jip.pipeline('grape_gem_rnapipeline')
class GrapePipeline(object):
    """
    Run the default RNAseq pipeline

    usage:
        rnaseq -f <fastq_file>... -q <quality> -i <genome_index>
               -a <annotation> [-o <output_dir>]

    Inputs:
        -f, --fastq <fastq_file>...    The input fastq files
        -i, --index <genome_index>     The input reference genome
        -a, --annotation <annotation>  The input reference annotation

    Options:
        -q, --quality <quality>        The fastq offset quality
                                       [default: 33]
        -o, --output-dir <output_dir>  The output prefix
                                       [default: ${fastq.raw()[0]|abs|parent}]
    """
    def pipeline(self):
        p = jip.Pipeline()
        gem = p.run(
            'grape_gem_rnatool',
            index=self.index, annotation=self.annotation,
            fastq=self.fastq, quality=self.quality,
            output_dir=self.output_dir
        )
        p.run(
            'grape_flux', input=gem.bam, annotation=self.annotation,
            output_dir=self.output_dir
        )
        p.context(locals())
        return p
def test_gem_name_option_delegation():
    p = jip.Pipeline()
    p.run('grape_gem_rnapipeline', fastq='reads_1.fastq.gz', index='index.gem',
          annotation='gencode.gtf')
    jobs = jip.create_jobs(p, validate=False)
    ldir = os.getcwd()
    j = os.path.join
    assert len(jobs) == 2
    assert jobs[0].configuration['index'].get() == j(ldir, 'index.gem')
    assert jobs[0].configuration['fastq'].get() == j(ldir, 'reads_1.fastq.gz')
    assert jobs[0].configuration['annotation'].get() == j(ldir, 'gencode.gtf')
    assert jobs[0].configuration['quality'].get() == '33'
    assert jobs[0].configuration['output_dir'].get() == ldir
    assert jobs[0].configuration['name'].get() == 'reads'
    assert jobs[0].configuration['bam'].get() == j(ldir, 'reads.bam')
    assert jobs[0].configuration['bai'].get() == j(ldir, 'reads.bam.bai')
    assert jobs[0].configuration['map'].get() == j(ldir, 'reads.map.gz')
    assert jobs[1].configuration['input'].get() == j(ldir, 'reads.bam')
    assert jobs[1].configuration['name'].get() == 'reads'
    assert jobs[1].configuration['annotation'].get() == j(ldir, 'gencode.gtf')
    assert jobs[1].configuration['output_dir'].get() == ldir
    assert jobs[1].configuration['gtf'].get() == j(ldir, 'reads.gtf')
    assert len(jobs[0].children) == 1
    assert len(jobs[1].dependencies) == 1
    assert jobs[0].children[0] == jobs[1]
def test_gem_name_option_delegation_with_output_dir():
    p = jip.Pipeline()
    p.run('grape_gem_rnapipeline', fastq='reads_1.fastq.gz', index='index.gem',
          annotation='gencode.gtf', output_dir="mydir")
    jobs = jip.create_jobs(p, validate=False)
    ldir = os.getcwd()
    j = os.path.join
    assert len(jobs) == 2
    assert jobs[0].configuration['index'].get() == j(ldir, 'index.gem')
    assert jobs[0].configuration['fastq'].get() == j(ldir, 'reads_1.fastq.gz')
    assert jobs[0].configuration['annotation'].get() == j(ldir, 'gencode.gtf')
    assert jobs[0].configuration['quality'].get() == '33'
    assert jobs[0].configuration['output_dir'].get() == "mydir"
    assert jobs[0].configuration['name'].get() == 'reads'
    assert jobs[0].configuration['bam'].get() == j(ldir, 'mydir/reads.bam')
    assert jobs[0].configuration['bai'].get() == j(ldir, 'mydir/reads.bam.bai')
    assert jobs[0].configuration['map'].get() == j(ldir, 'mydir/reads.map.gz')
    assert jobs[1].configuration['input'].get() == j(ldir, 'mydir/reads.bam')
    assert jobs[1].configuration['name'].get() == 'reads'
    assert jobs[1].configuration['annotation'].get() == j(ldir, 'gencode.gtf')
    assert jobs[1].configuration['output_dir'].get() == "mydir"
    assert jobs[1].configuration['gtf'].get() == j(ldir, 'mydir/reads.gtf')
    assert len(jobs[0].children) == 1
    assert len(jobs[1].dependencies) == 1
    assert jobs[0].children[0] == jobs[1]
def test_multiple_pipelines_with_delegated_outputs():
@jip.tool('grape_gem_index')
class GemIndex(object):
"""\
The GEM Indexer tool
Usage:
gem_index -i <genome> [-o <genome_index>]
Options:
-o, --output <genome_index> The output GEM index file
[default: ${input|ext}.gem]
-i, --input <genome> The fasta file for the genome
"""
def get_command(self):
return "bash", "gemtools index ${options()}"
@jip.tool('grape_gem_rnatool')
class gem(object):
"""\
The GEMtools RNAseq Mapping Pipeline
Usage:
gem -f <fastq_file> -i <genome_index>
Inputs:
-f, --fastq <fastq_file> The input fastq
-i, --index <genome_index> The GEM index file for the genome
"""
def get_command(self):
return 'bash', 'gemtools rna-pipeline ${options()}'
@jip.pipeline('grape_gem_setup')
class SetupPipeline(object):
"""\
The GEM indexes setup pipeline
usage:
setup -i <genome>
Options:
-i, --input <genome> The input reference genome
"""
def init(self):
self.add_output('index', '${input|ext}.gem')
def pipeline(self):
p = jip.Pipeline()
index = p.run('grape_gem_index',
input=self.input, output=self.index)
p.context(locals())
return p
@jip.pipeline('grape_gem_rnapipeline')
class GrapePipeline(object):
"""\
The default GRAPE RNAseq pipeline
usage:
rnaseq -f <fastq_file> -g <genome>
Inputs:
-f, --fastq <fastq_file> The input fastq file
-g, --genome <genome> The input reference genome
"""
def pipeline(self):
p = jip.Pipeline()
gem_setup = p.run('grape_gem_setup', input=self.genome)
gem = p.run('grape_gem_rnatool', index=gem_setup.index,
fastq=self.fastq)
p.context(locals())
return p
p = jip.Pipeline()
node = p.run('grape_gem_rnapipeline')
node.fastq = 'reads_1.fastq.gz'
node.genome = 'genome.fa'
jobs = jip.create_jobs(p, validate=False)
assert len(jobs) == 2
gem_job = next((x for x in jobs if x.name == 'gem'), None)
assert gem_job is not None
assert gem_job.configuration['index'].get().endswith('genome.gem')
def test_embedded_pipelines_stage_one(tmpdir):
tmpdir = str(tmpdir)
# load the embedded example
jip.scanner.add_module('examples/embedded_submission/embedded.py')
jip.scanner.scan_modules()
p = jip.Pipeline()
p.job(dir=tmpdir).run('example_embedded')
jobs = jip.create_jobs(p)
assert jobs[0].configuration['prefix'] == 'test'
assert jobs[0].configuration['output'] == [
os.path.join(tmpdir, 'test.*')
]
assert len(jobs) == 1
def test_embedded_pipelines_stage_two(tmpdir):
tmpdir = str(tmpdir)
# create stage one output
open(os.path.join(tmpdir, 'test.1'), 'a').close()
open(os.path.join(tmpdir, 'test.2'), 'a').close()
open(os.path.join(tmpdir, 'test.3'), 'a').close()
open(os.path.join(tmpdir, 'test.4'), 'a').close()
open(os.path.join(tmpdir, 'test.5'), 'a').close()
# load the embedded example
jip.scanner.add_module('examples/embedded_submission/embedded.py')
jip.scanner.scan_modules()
p = jip.Pipeline()
p.job(dir=tmpdir).run('example_embedded')
jobs = jip.create_jobs(p)
assert len(jobs) == 8
assert jobs[0].configuration['prefix'] == 'test'
print(">>>OUTPUT %s" % jobs[0].configuration['output'].raw())
print(">>>TMPDIR %s" % tmpdir)
assert jobs[0].configuration['output'] == [
os.path.join(tmpdir, 'test.1'),
os.path.join(tmpdir, 'test.2'),
os.path.join(tmpdir, 'test.3'),
os.path.join(tmpdir, 'test.4'),
os.path.join(tmpdir, 'test.5'),
]
assert jobs[1].configuration['input'] == os.path.join(tmpdir, 'test.1')
assert jobs[2].configuration['input'] == os.path.join(tmpdir, 'test.2')
assert jobs[3].configuration['input'] == os.path.join(tmpdir, 'test.3')
assert jobs[4].configuration['input'] == os.path.join(tmpdir, 'test.4')
assert jobs[5].configuration['input'] == os.path.join(tmpdir, 'test.5')
print(jobs[6].configuration['input'].raw())
assert jobs[6].configuration['input'] == [
os.path.join(tmpdir, 'consumed_test.1'),
os.path.join(tmpdir, 'consumed_test.2'),
os.path.join(tmpdir, 'consumed_test.3'),
os.path.join(tmpdir, 'consumed_test.4'),
os.path.join(tmpdir, 'consumed_test.5'),
]
def test_dynamic_options():
script = '''#!/usr/bin/env jip
# Touch a number of files with a common prefix
#
# usage:
# touch --prefix <prefix> --count <count>
#%begin init
add_output('output')
#%end
#%begin setup
options['output'].set(["%s_%s" % (prefix, i) for i in range(1, count.get(int) + 1)])
#%end
#%begin command
for x in ${output}; do
touch $x
done
'''
tool = jip.tools.ScriptTool.from_string(script)
tool.init()
assert tool is not None
p = jip.Pipeline()
node = p.job('test').run(tool, prefix='test', count=5)
assert node is not None
p.expand()
assert len(p) == 1
node = p.get('test')
assert node.prefix == 'test'
cwd = os.getcwd()
assert node.output == [os.path.join(cwd, x) for x in
['test_1', 'test_2', 'test_3', 'test_4', 'test_5']]
def test_dynamic_options_multiplex():
script = '''#!/usr/bin/env jip
# Touch a number of files with a common prefix
#
# usage:
# touch --prefix <prefix> --count <count>
#%begin init
add_output('output')
#%end
#%begin setup
options['output'].set(["%s_%s" % (prefix, i) for i in range(1, count.get(int) + 1)])
#%end
#%begin command
for x in ${output}; do
touch $x
done
'''
tool = jip.tools.ScriptTool.from_string(script)
tool.init()
assert tool is not None
p = jip.Pipeline()
node = p.job('test').run(tool, prefix='test', count=[1, 2])
assert node is not None
p.expand()
cwd = os.getcwd()
assert len(p) == 2
node_1 = p.get('test.0')
node_2 = p.get('test.1')
assert node_1.output == os.path.join(cwd, 'test_1')
assert node_2.output == [os.path.join(cwd, x) for x in
['test_1', 'test_2']]
def test_hello_world_py_fun(tmpdir):
tmpdir = str(tmpdir)
jip.scanner.add_module('examples/hello_world/hello_world.py')
jip.scanner.scan_modules()
p = jip.Pipeline()
p.job(dir=tmpdir).run('fun_hello_world_py')
jobs = jip.create_jobs(p)
assert len(jobs) == 1
def test_hello_world_py_cls(tmpdir):
tmpdir = str(tmpdir)
jip.scanner.add_module('examples/hello_world/hello_world.py')
jip.scanner.scan_modules()
p = jip.Pipeline()
p.job(dir=tmpdir).run('cls_hello_world_py')
jobs = jip.create_jobs(p)
assert len(jobs) == 1
def test_file_touch():
p = jip.Pipeline()
node = p.run('examples/file_touch.jip', p='test', c=5)
p.expand()
cwd = os.getcwd()
j = os.path.join
assert node.output == [
j(cwd, 'test_' + x) for x in ['1', '2', '3', '4', '5']
]
def test_pipeline_to_pipeline_edge_delegation():
@jip.tool('grape_gem_index')
class GemIndex(object):
"""\
The GEM Indexer tool
Usage:
gem_index -i <genome> [-o <genome_index>]
Options:
-o, --output <genome_index> The output GEM index file
[default: ${input|ext}.gem]
-i, --input <genome> The fasta file for the genome
"""
def get_command(self):
return "bash", "gemtools index ${options()}"
@jip.tool('grape_gem_rnatool')
class gem(object):
"""\
The GEMtools RNAseq Mapping Pipeline
Usage:
gem -f <fastq_file> -i <genome_index>
Inputs:
-f, --fastq <fastq_file> The input fastq
-i, --index <genome_index> The GEM index file for the genome
"""
def get_command(self):
return 'bash', 'gemtools rna-pipeline ${options()}'
@jip.pipeline('grape_gem_setup')
class SetupPipeline(object):
"""\
The GEM indexes setup pipeline
usage:
setup -i <genome>
Options:
-i, --input <genome> The input reference genome
"""
def init(self):
self.add_output('index', '${input|ext}.gem')
def pipeline(self):
p = jip.Pipeline()
index = p.run('grape_gem_index',
input=self.input, output=self.index)
p.context(locals())
return p
@jip.pipeline('grape_gem_rnapipeline')
class GrapePipeline(object):
"""\
The default GRAPE RNAseq pipeline
usage:
rnaseq -f <fastq_file> -g <genome>
Inputs:
-f, --fastq <fastq_file> The input fastq file
-g, --genome <genome> The input reference genome
"""
def pipeline(self):
p = jip.Pipeline()
gem_setup = p.run('grape_gem_setup', input=self.genome)
gem = p.run('grape_gem_rnatool', index=gem_setup.index,
fastq=self.fastq)
p.context(locals())
return p
p = jip.Pipeline()
node = p.run('grape_gem_rnapipeline')
node.fastq = 'reads_1.fastq.gz'
node.genome = 'genome.fa'
p.expand(validate=False)
index = p.get('index')
gem_node = p.get('gem')
cwd = os.getcwd()
j = os.path.join
assert index.has_outgoing(gem_node, link=('output', 'index'),
value=j(cwd, 'genome.gem'))
def test_subpipe_incoming_edge_resolve():
@jip.pipeline()
def subedge_pipe(tool):
"""Subedge
usage:
subedge --input <input>
"""
p = jip.Pipeline()
p.job('ls').bash('ls ${input}', input=tool.input)
return p
p = jip.Pipeline()
produce = p.job('touch').bash('touch ${outfile}', outfile='out.dat')
p.run('subedge_pipe', input=produce.outfile)
p.expand(validate=False)
touch = p.get('touch')
ls = p.get('ls')
cwd = os.getcwd()
j = os.path.join
assert touch.has_outgoing(ls, link=('outfile', 'input'),
value=j(cwd, 'out.dat'))
def test_subpipe_incoming_edge_resolve_pipe_to_pipe():
@jip.pipeline()
def subedge_pipe_1(tool):
"""Subedge
usage:
subedge --input <input> --output <output>
"""
p = jip.Pipeline()
p.job('p1').bash('touch', input=tool.input, output=tool.output)
return p
@jip.pipeline()
def subedge_pipe_2(tool):
"""Subedge
usage:
subedge --input <input> --output <output>
"""
p = jip.Pipeline()
p.job('p2').bash('touch', input=tool.input, output=tool.output)
return p
@jip.pipeline()
def subedge_pipe_combine(tool):
"""Subedge
usage:
subedge --input <input> --output <output>
"""
p = jip.Pipeline()
p1 = p.run('subedge_pipe_1', input=tool.input, output='p1.out')
p.run('subedge_pipe_2', input=p1, output=tool.output)
return p
p = jip.Pipeline()
p.run('subedge_pipe_combine', input='a.txt', output="out.dat")
p.expand(validate=False)
assert len(p) == 2
p1 = p.get('p1')
p2 = p.get('p2')
cwd = os.getcwd()
j = os.path.join
assert p1.has_outgoing(p2)
assert p1.has_outgoing(p2, value=j(cwd, 'out.dat'))
assert p1.has_outgoing(p2, link=('output', 'input'),
value=j(cwd, 'p1.out'))
if __name__ == '__main__':
unittest.main()
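The dynamic-option tests above hinge on the script's setup block, `options['output'].set(["%s_%s" % (prefix, i) for i in range(1, count.get(int) + 1)])`, expanding a prefix/count pair into one output name per 1-based index. A stdlib-only sketch of just that expansion (the function name is mine; jip does this inline):

```python
def expand_outputs(prefix, count):
    """Mirror the script's setup block: 1-based names of the form '<prefix>_<i>'."""
    return ["%s_%s" % (prefix, i) for i in range(1, count + 1)]

# Same values the tests assert as basenames for prefix='test', count=5
names = expand_outputs('test', 5)
```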
| 33.291451 | 84 | 0.57776 | 3,306 | 25,701 | 4.38899 | 0.086509 | 0.01654 | 0.024121 | 0.036389 | 0.760096 | 0.72612 | 0.665334 | 0.631633 | 0.589938 | 0.580565 | 0 | 0.010114 | 0.272908 | 25,701 | 771 | 85 | 33.33463 | 0.766362 | 0.043189 | 0 | 0.578834 | 0 | 0.006479 | 0.201296 | 0.039536 | 0 | 0 | 0 | 0 | 0.231102 | 0 | null | null | 0 | 0.008639 | null | null | 0.012959 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a7a446cc031a772e0161ab65df22e6e147650f83 | 451 | py | Python | examples/async.py | roslovets/SP110E | 69b1bf71ee8a3f6355c2f6c9903f74bf1e54e9fd | [
"MIT"
] | 6 | 2021-11-09T04:51:09.000Z | 2022-02-21T20:53:12.000Z | examples/async.py | roslovets/SP110E | 69b1bf71ee8a3f6355c2f6c9903f74bf1e54e9fd | [
"MIT"
] | 1 | 2021-11-27T12:34:44.000Z | 2021-11-27T12:34:44.000Z | examples/async.py | roslovets/SP110E | 69b1bf71ee8a3f6355c2f6c9903f74bf1e54e9fd | [
"MIT"
] | null | null | null | import asyncio
from sp110e.controller import Controller
async def main():
device = Controller('AF:00:10:01:C8:AF', timeout=2, retries=1)
await device.switch_on()
await device.set_brightness(255)
await device.set_color([0, 255, 0])
await device.set_white(0)
await device.set_speed(10)
await device.set_mode(1)
await device.switch_off()
await device.disconnect()
if __name__ == "__main__":
asyncio.run(main())
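The example above awaits each `Controller` call strictly in sequence inside one coroutine. A self-contained sketch of the same pattern with a stand-in class (the `FakeController` and its recorded call list are assumptions for illustration, not part of the sp110e API):

```python
import asyncio

class FakeController:
    """Stand-in for sp110e's Controller: records the calls the script would await."""

    def __init__(self):
        self.calls = []

    async def switch_on(self):
        self.calls.append("on")

    async def set_brightness(self, value):
        self.calls.append(("brightness", value))

    async def switch_off(self):
        self.calls.append("off")

async def main(device):
    # same shape as the example above: awaits run strictly in order
    await device.switch_on()
    await device.set_brightness(255)
    await device.switch_off()

device = FakeController()
asyncio.run(main(device))
```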
| 25.055556 | 66 | 0.698448 | 66 | 451 | 4.545455 | 0.515152 | 0.293333 | 0.233333 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06469 | 0.177384 | 451 | 17 | 67 | 26.529412 | 0.743935 | 0 | 0 | 0 | 0 | 0 | 0.055432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a7a5d061bbe74a0821d3dc3b08305f6a19d6a359 | 338 | py | Python | python/truck-tour.py | gajubadge11/hackerrank-3 | 132a5019b7ed21507bb95b5063fa66c446b0eff7 | [
"MIT"
] | 21 | 2015-02-09T18:08:38.000Z | 2021-11-08T15:00:48.000Z | python/truck-tour.py | gajubadge11/hackerrank-3 | 132a5019b7ed21507bb95b5063fa66c446b0eff7 | [
"MIT"
] | 7 | 2020-04-12T23:00:19.000Z | 2021-01-30T23:44:24.000Z | python/truck-tour.py | gajubadge11/hackerrank-3 | 132a5019b7ed21507bb95b5063fa66c446b0eff7 | [
"MIT"
] | 27 | 2015-07-22T18:08:12.000Z | 2022-02-28T19:50:26.000Z | current_petrol = 0
current_position = 0
n = int(input().strip())
for i in range(n):
petrol, distance = map(int, input().strip().split(' '))
current_petrol += petrol
    if current_petrol >= distance:
        current_petrol -= distance
    else:
        current_petrol = 0
        current_position = i + 1
print(current_position)
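A function-form sketch of the same single-pass idea, using a running fuel balance and restarting the candidate tour at the next pump whenever the balance goes negative (the function name is mine):

```python
def first_valid_pump(pumps):
    """Return the 0-based index of the first pump from which the whole circle
    can be completed (assumes, as the problem guarantees, that one exists)."""
    fuel = 0
    start = 0
    for i, (petrol, distance) in enumerate(pumps):
        fuel += petrol - distance
        if fuel < 0:
            # a tour starting at `start` dies before pump i + 1; restart there
            fuel = 0
            start = i + 1
    return start

# Sample from the problem statement: only pump 1 works as a starting point.
start = first_valid_pump([(1, 5), (10, 3), (3, 4)])
```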
| 26 | 59 | 0.642012 | 43 | 338 | 4.860465 | 0.44186 | 0.311005 | 0.133971 | 0.200957 | 0.277512 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015444 | 0.233728 | 338 | 12 | 60 | 28.166667 | 0.791506 | 0 | 0 | 0.166667 | 0 | 0 | 0.002959 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a7a65e2c05d1a99b6f51a2d05a82479bc7d0ace8 | 163 | py | Python | lib/assembly/config.py | olsonanl/assembly | 6bcecac2ba7de826d2a4625964b02c348e7ce4e9 | [
"MIT"
] | null | null | null | lib/assembly/config.py | olsonanl/assembly | 6bcecac2ba7de826d2a4625964b02c348e7ce4e9 | [
"MIT"
] | null | null | null | lib/assembly/config.py | olsonanl/assembly | 6bcecac2ba7de826d2a4625964b02c348e7ce4e9 | [
"MIT"
] | null | null | null | APPNAME = 'assembly'
APPAUTHOR = 'kbase'
URL = 'http://kbase.us/services/assembly'
AUTH_SERVICE = 'KBase'
OAUTH_EXP_DAYS = 14
OAUTH_FILENAME = 'globus_oauth.prop'
| 23.285714 | 41 | 0.748466 | 22 | 163 | 5.318182 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013793 | 0.110429 | 163 | 6 | 42 | 27.166667 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0.417178 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a7b0c0f28cc26f789c7d38788bbc3f2a7fe72c41 | 210 | py | Python | desktop/core/ext-py/depender/mootools/urls.py | civascu/hue | 82f2de44789ff5a981ed725175bae7944832d1e9 | [
"Apache-2.0"
] | 2 | 2021-04-27T03:57:00.000Z | 2021-06-18T09:39:58.000Z | desktop/core/ext-py/depender/mootools/urls.py | civascu/hue | 82f2de44789ff5a981ed725175bae7944832d1e9 | [
"Apache-2.0"
] | null | null | null | desktop/core/ext-py/depender/mootools/urls.py | civascu/hue | 82f2de44789ff5a981ed725175bae7944832d1e9 | [
"Apache-2.0"
] | 2 | 2021-09-06T18:44:45.000Z | 2022-02-24T04:10:10.000Z | from django.conf.urls.defaults import patterns, include
urlpatterns = patterns('',
(r'^depender/', include('depender.urls')),
(r'^$', 'django.views.generic.simple.redirect_to', {'url': 'depender/'})
)
| 30 | 76 | 0.671429 | 24 | 210 | 5.833333 | 0.708333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 210 | 6 | 77 | 35 | 0.752688 | 0 | 0 | 0 | 0 | 0 | 0.361905 | 0.185714 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a7cf16fc9edf04485b6db4cb8b986e5c30add838 | 162 | py | Python | openpeerpower/generated/mqtt.py | pcaston/core | e74d946cef7a9d4e232ae9e0ba150d18018cfe33 | [
"Apache-2.0"
] | 1 | 2021-07-08T20:09:55.000Z | 2021-07-08T20:09:55.000Z | openpeerpower/generated/mqtt.py | pcaston/core | e74d946cef7a9d4e232ae9e0ba150d18018cfe33 | [
"Apache-2.0"
] | 47 | 2021-02-21T23:43:07.000Z | 2022-03-31T06:07:10.000Z | openpeerpower/generated/mqtt.py | OpenPeerPower/core | f673dfac9f2d0c48fa30af37b0a99df9dd6640ee | [
"Apache-2.0"
] | null | null | null | """Automatically generated by oppfest.
To update, run python3 -m script.oppfest
"""
# fmt: off
MQTT = {
"tasmota": [
"tasmota/discovery/#"
]
}
| 12.461538 | 40 | 0.592593 | 17 | 162 | 5.647059 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008264 | 0.253086 | 162 | 12 | 41 | 13.5 | 0.785124 | 0.537037 | 0 | 0 | 1 | 0 | 0.382353 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a7db9ec64f3d4ec62dc924e2e879dd99f71c7899 | 328 | py | Python | scripts/update.py | helto4real/container | b19670341117fbe2d8d9940d9838625f32f2ace7 | [
"MIT"
] | null | null | null | scripts/update.py | helto4real/container | b19670341117fbe2d8d9940d9838625f32f2ace7 | [
"MIT"
] | null | null | null | scripts/update.py | helto4real/container | b19670341117fbe2d8d9940d9838625f32f2ace7 | [
"MIT"
] | null | null | null | from scripts.helpers.packages import (
update_alpine_packages,
update_base_images,
update_python_packages,
)
from scripts.helpers.update_feature_packages import update_s6, update_netcore
update_netcore("3.1")
update_netcore("5.0")
update_s6()
update_base_images()
update_alpine_packages()
update_python_packages()
| 21.866667 | 77 | 0.814024 | 44 | 328 | 5.636364 | 0.386364 | 0.157258 | 0.145161 | 0.209677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020339 | 0.10061 | 328 | 14 | 78 | 23.428571 | 0.820339 | 0 | 0 | 0 | 0 | 0 | 0.018293 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
38f89a3ada82137ce2f707d5f6f8bad89821c4b2 | 279 | py | Python | social_tracker_library/__init__.py | jdnascim/social-tracker-library | b98da89683e5b277e0fd51cecd61d2e983f1d473 | [
"MIT"
] | 1 | 2019-10-17T16:45:02.000Z | 2019-10-17T16:45:02.000Z | social_tracker_library/__init__.py | jdnascim/social-tracker-library | b98da89683e5b277e0fd51cecd61d2e983f1d473 | [
"MIT"
] | null | null | null | social_tracker_library/__init__.py | jdnascim/social-tracker-library | b98da89683e5b277e0fd51cecd61d2e983f1d473 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Wherever smart people work, doors are unlocked. -- Steve Wozniak
"""
__title__ = 'social_tracker_library'
__author__ = 'José Nascimento'
from .collection import collection
from .extractor import extractor
from .query_expansion import query_expansion
| 25.363636 | 64 | 0.767025 | 33 | 279 | 6.121212 | 0.757576 | 0.138614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004132 | 0.132616 | 279 | 10 | 65 | 27.9 | 0.830579 | 0.311828 | 0 | 0 | 0 | 0 | 0.201087 | 0.119565 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
38f8d36d478f51367579c144ebd7a7ce3ddc1ccd | 2,526 | py | Python | tests/acceptance/staff_assistance/staff_assistance_sitedata.py | Chinchu-Thambi/YellowWorks | 161199fe5b2864287697b0b6eafd4fb6a234bb71 | [
"MIT"
] | null | null | null | tests/acceptance/staff_assistance/staff_assistance_sitedata.py | Chinchu-Thambi/YellowWorks | 161199fe5b2864287697b0b6eafd4fb6a234bb71 | [
"MIT"
] | null | null | null | tests/acceptance/staff_assistance/staff_assistance_sitedata.py | Chinchu-Thambi/YellowWorks | 161199fe5b2864287697b0b6eafd4fb6a234bb71 | [
"MIT"
] | null | null | null | # -----------------------------------------------------------------------------
# Staff Assistance Flow
# -----------------------------------------------------------------------------
staff_assistance_customer_email_inputtext = "xpath=//*[@id='emailInput']"
staff_assistance_company_input = "//*[@id='customerInput']"
staff_assistance_company_dropdown = "xpath=//*[@id='customerInput']/div/div[2]/div"
staff_assistance_company_select_button = "xpath=//button[contains(text(),'Done')]"
staff_assistance_company_dropdown_option_01 = "xpath=//div[contains(@class,' css-1n7v3ny-option')]"
staff_assistance_changed_company_dropdown_option_01 = "xpath=//div[contains(@class,' css-yt9ioa-option')]"
staff_assistance_changed_company_dropdown_option_02 = "xpath=//div[contains(@class,' css-1n7v3ny-option')]"
staff_assistance_header_banner_change_link = "xpath=//a[contains(text(),'Change')]"
staff_assistance_email_address_label = "xpath=//*[contains(@data-testid,'currentAccountEmail')]"
staff_assistance_currently_access_label = "xpath=//*[@id='gatsby-focus-wrapper']/div[4]/div/div/span"
staff_assistance_company_email_label = "xpath=//*[contains(@data-testid,'companyEmail')]"
staff_assistance_company_name_label = "xpath=//*[contains(@data-testid,'companyName')]"
staff_assistance_header_text_label = "You are currently acting on behalf of"
staff_assistance_customer_email_address_01 = "kumindu777+yellowms@gmail.com"
staff_assistance_header_company_name_label_0101 = "Yellow NZ Advertising MS"
staff_assistance_customer_email_address_02 = "kumindu777+yellowsa@gmail.com"
staff_assistance_header_company_name_label_0201 = "Test Business Search Ads Go Live"
staff_assistance_header_company_link = "xpath=//a[contains(@href, '/my-yellow/choose-customer')]"
staff_assistance_customer_email_address_03 = "kumindu777+fingoo@gmail.com"
staff_assistance_header_company_name_label_0301 = "Ian Charle Langdon"
staff_assistance_header_company_name_label_0302 = "Sriwis Foods"
staff_assistance_select_button = "//a[contains(text(),'Select')]"
| 97.153846 | 125 | 0.602534 | 241 | 2,526 | 5.892116 | 0.344398 | 0.242958 | 0.103521 | 0.098592 | 0.416197 | 0.283099 | 0.257042 | 0.217606 | 0.122535 | 0 | 0 | 0.023785 | 0.234363 | 2,526 | 25 | 126 | 101.04 | 0.710445 | 0.070071 | 0 | 0 | 0 | 0 | 0.351386 | 0.288699 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
38fb49f1875f2265e7e0b4dac0a06e367d246f3f | 3,914 | py | Python | ros/src/tl_detector/light_classification/tl_classifier.py | alexeysas/CarND-Capstone | 353ae36b9b1ec6f7d233c72197ef0d8fa00506b0 | [
"MIT"
] | null | null | null | ros/src/tl_detector/light_classification/tl_classifier.py | alexeysas/CarND-Capstone | 353ae36b9b1ec6f7d233c72197ef0d8fa00506b0 | [
"MIT"
] | null | null | null | ros/src/tl_detector/light_classification/tl_classifier.py | alexeysas/CarND-Capstone | 353ae36b9b1ec6f7d233c72197ef0d8fa00506b0 | [
"MIT"
] | null | null | null | import cv2
import numpy as np
import tensorflow as tf
from styx_msgs.msg import TrafficLight
import rospy
traffic_pattern = -1
class TLClassifier(object):
def __init__(self):
print(tf.__version__)
self.detection_graph = tf.Graph()
with self.detection_graph.as_default() as graph:
od_graph_def = tf.GraphDef()
with tf.gfile.GFile("light_classification/tlc_model/frozen_inference_graph.pb", 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
with tf.Session() as sess:
self.tensor_dict = {}
self.tensor_dict['detection_scores'] = tf.get_default_graph().get_tensor_by_name('detection_scores:0')
self.tensor_dict['detection_classes'] = tf.get_default_graph().get_tensor_by_name('detection_classes:0')
self.image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
self.sess = tf.Session(graph=graph)
def get_classification(self, image):
global traffic_pattern
image = cv2.resize(image[:,:,::-1],(400, 400), cv2.INTER_CUBIC)
final_image = np.expand_dims(image, 0)
output_dict = self.sess.run(self.tensor_dict, feed_dict={self.image_tensor: final_image})
output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.uint8)
output_dict['detection_scores'] = output_dict['detection_scores'][0]
score = output_dict['detection_scores'][0]
if score > 0.4:
num = output_dict['detection_classes'][0] -1
else:
num = 6
if score > 0.55 and traffic_pattern == -1:
if num < 3:
traffic_pattern = 1
else:
traffic_pattern = 2
print(" ", num, score)
if num == 6:
return 4
else:
if (traffic_pattern == 1 and num < 3) or (traffic_pattern == 2 and num > 2) :
return num % 3
else:
return 4
return num
def __init__1(self):
#TODO load classifier
with tf.gfile.GFile('light_classification/tlc_model/tlc_model.pb', "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(
graph_def,
input_map=None,
return_elements=None,
name='tlc',
op_dict=None,
producer_op_list=None)
self.image = graph.get_tensor_by_name('tlc/images:0')
self.keep_prob = graph.get_tensor_by_name('tlc/keep_prob:0')
self.pred = graph.get_tensor_by_name('tlc/predict:0')
self.sess = tf.Session(graph=graph)
self.counter = 0
def get_classification1(self, image):
"""Determines the color of the traffic light in the image
Args:
image (cv::Mat): image containing the traffic light
Returns:
int: ID of traffic light color (specified in styx_msgs/TrafficLight)
"""
#TODO implement light color prediction
image = cv2.resize(image, dsize=(40, 40))
image = cv2.normalize(image.astype('float'), None, -0.5, .5, cv2.NORM_MINMAX)
feed_dict = {self.image: [image],
self.keep_prob: 1.}
lid = self.sess.run(self.pred, feed_dict=feed_dict)[0]
lid = 4 if lid == 3 else lid
# For debugging
light = {2: 'green', 0: 'red', 1: 'yellow', 4: 'unknown'}
self.counter += 1
print(self.counter, light[lid])
return lid
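The second classifier rescales pixel values into [-0.5, 0.5] with `cv2.normalize(..., -0.5, .5, cv2.NORM_MINMAX)` before feeding the network. A numpy-only sketch of that min-max step (my own helper, written to match what NORM_MINMAX does to a non-constant image):

```python
import numpy as np

def minmax_normalize(image, lo=-0.5, hi=0.5):
    """Linearly rescale values into [lo, hi], like cv2.NORM_MINMAX;
    assumes the image is not constant, so max() > min()."""
    image = image.astype('float')
    lowest, highest = image.min(), image.max()
    return (image - lowest) / (highest - lowest) * (hi - lo) + lo

out = minmax_normalize(np.array([[0, 128, 255]], dtype=np.uint8))
```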
| 36.240741 | 120 | 0.568983 | 479 | 3,914 | 4.411273 | 0.263048 | 0.034075 | 0.039754 | 0.045433 | 0.241363 | 0.15381 | 0.121155 | 0.094652 | 0.038807 | 0 | 0 | 0.02427 | 0.326265 | 3,914 | 107 | 121 | 36.579439 | 0.777019 | 0.020184 | 0 | 0.105263 | 0 | 0 | 0.119599 | 0.0276 | 0 | 0 | 0 | 0.018692 | 0 | 0 | null | null | 0 | 0.092105 | null | null | 0.039474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ac026e007f5218fda7723080a658b5cc1c2aeacd | 254 | py | Python | utils.py | theeluwin/wpe | 7dd8ee281beedaa482166cf189e16fd2763beac0 | [
"MIT"
] | 6 | 2018-04-02T02:12:43.000Z | 2022-03-19T13:59:33.000Z | utils.py | theeluwin/wpe | 7dd8ee281beedaa482166cf189e16fd2763beac0 | [
"MIT"
] | 1 | 2021-07-17T13:13:43.000Z | 2021-07-22T01:06:02.000Z | utils.py | theeluwin/wpe | 7dd8ee281beedaa482166cf189e16fd2763beac0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import codecs
class Corpus(object):
def __init__(self, filepath):
self.source = codecs.open(filepath, 'r', encoding='utf-8')
def __iter__(self):
for line in self.source:
yield line.strip()
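A quick usage sketch for the class above: write a small UTF-8 file and iterate it, getting one stripped line per input line (the class is re-declared here so the snippet runs on its own):

```python
import codecs
import os
import tempfile

class Corpus(object):
    """Copy of the class above, re-declared so this snippet is self-contained."""
    def __init__(self, filepath):
        self.source = codecs.open(filepath, 'r', encoding='utf-8')

    def __iter__(self):
        for line in self.source:
            yield line.strip()

path = os.path.join(tempfile.mkdtemp(), 'corpus.txt')
with codecs.open(path, 'w', encoding='utf-8') as f:
    f.write(u'alpha beta\n  gamma  \n')

lines = list(Corpus(path))
```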
| 18.142857 | 66 | 0.594488 | 32 | 254 | 4.46875 | 0.6875 | 0.055944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010638 | 0.259843 | 254 | 13 | 67 | 19.538462 | 0.75 | 0.082677 | 0 | 0 | 0 | 0 | 0.025974 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ac39be4d156aae5c28065cc2d5032acb1f349890 | 1,169 | py | Python | klustakwik2/tests/test_partitioning.py | kwikteam/klustakwik2 | 415c945fa795f62f6dad4d017ccd59323a719d51 | [
"BSD-3-Clause"
] | 15 | 2015-07-04T05:38:48.000Z | 2021-05-28T14:01:56.000Z | klustakwik2/tests/test_partitioning.py | kwikteam/klustakwik2 | 415c945fa795f62f6dad4d017ccd59323a719d51 | [
"BSD-3-Clause"
] | 45 | 2015-05-28T15:50:16.000Z | 2022-02-23T11:43:21.000Z | klustakwik2/tests/test_partitioning.py | kwikteam/klustakwik2 | 415c945fa795f62f6dad4d017ccd59323a719d51 | [
"BSD-3-Clause"
] | 17 | 2015-05-29T16:16:35.000Z | 2020-11-16T06:52:23.000Z | from numpy import *
from klustakwik2 import *
from numpy.testing import assert_raises, assert_array_almost_equal, assert_array_equal
from nose import with_setup
from nose.tools import nottest
from numpy.random import randint, rand
from six.moves import range
from .test_mask_starts import generate_multimask_test_data
def test_partitioning():
for _ in range(100):
# actually we only need to generate some arbitrary data, the values don't matter much
data = generate_multimask_test_data(10, 10, 5)
kk = KK(data)
clusters = [0, 1, 0, 2, 0, 2, 0, 3, 0, 3]
kk.initialise_clusters(clusters)
assert_array_equal(kk.num_cluster_members, [5, 1, 2, 2])
assert_array_equal(kk.get_spikes_in_cluster(0), [0, 2, 4, 6, 8])
assert_array_equal(kk.get_spikes_in_cluster(1), [1])
assert_array_equal(kk.get_spikes_in_cluster(2), [3, 5])
assert_array_equal(kk.get_spikes_in_cluster(3), [7, 9])
assert_array_equal(kk.spikes_in_cluster, [0, 2, 4, 6, 8, 1, 3, 5, 7, 9])
assert_array_equal(kk.spikes_in_cluster_offset, [0, 5, 6, 8, 10])
if __name__=='__main__':
test_partitioning()
| 40.310345 | 93 | 0.701454 | 189 | 1,169 | 4.021164 | 0.359788 | 0.130263 | 0.168421 | 0.165789 | 0.294737 | 0.281579 | 0.281579 | 0.281579 | 0.092105 | 0 | 0 | 0.056323 | 0.195038 | 1,169 | 28 | 94 | 41.75 | 0.751328 | 0.071001 | 0 | 0 | 0 | 0 | 0.00738 | 0 | 0 | 0 | 0 | 0 | 0.347826 | 1 | 0.043478 | false | 0 | 0.347826 | 0 | 0.391304 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
ac537e1bedc49ad91c3e581ccbf1c056416dbbc6 | 106 | py | Python | InputDataExample.py | ZnoKunG/PythonProject | 388b5dfeb0161aee66094e7b2ecc2d6ed13588bd | [
"MIT"
] | null | null | null | InputDataExample.py | ZnoKunG/PythonProject | 388b5dfeb0161aee66094e7b2ecc2d6ed13588bd | [
"MIT"
] | null | null | null | InputDataExample.py | ZnoKunG/PythonProject | 388b5dfeb0161aee66094e7b2ecc2d6ed13588bd | [
"MIT"
] | null | null | null | FirstNum = input("1st Number : ")
SecondNum = input("2nd Number : ")
print(int(FirstNum) + int(SecondNum)) | 35.333333 | 37 | 0.688679 | 13 | 106 | 5.615385 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.132075 | 106 | 3 | 37 | 35.333333 | 0.771739 | 0 | 0 | 0 | 0 | 0 | 0.242991 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ac55728d5d7b0721be4a5964aa99725fa6b33cb4 | 71 | py | Python | AutomateTheBoringStuff/ex2-7-8.py | kevindeng123/Programming | a06e9f7773fc083bcb153af21e6e9942a4114b4a | [
"MIT"
] | null | null | null | AutomateTheBoringStuff/ex2-7-8.py | kevindeng123/Programming | a06e9f7773fc083bcb153af21e6e9942a4114b4a | [
"MIT"
] | null | null | null | AutomateTheBoringStuff/ex2-7-8.py | kevindeng123/Programming | a06e9f7773fc083bcb153af21e6e9942a4114b4a | [
"MIT"
] | null | null | null | total = 0
for num in range(101):
total = total + num
print(total)
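The loop above sums 0 through 100; a quick check of the result against Gauss's closed form n(n+1)/2:

```python
n = 100
loop_total = sum(range(n + 1))   # what the loop above computes: 0 + 1 + ... + 100
closed_form = n * (n + 1) // 2   # Gauss's closed form for the same sum
```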
| 14.2 | 23 | 0.633803 | 12 | 71 | 3.75 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 0.253521 | 71 | 4 | 24 | 17.75 | 0.773585 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ac5b0fd3f22b98ef853e0ab66e872be1c1011271 | 8,613 | py | Python | src/oci/bds/models/add_worker_nodes_details.py | pabs3/oci-python-sdk | 437ba18ce39af2d1090e277c4bb8750c89f83021 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | src/oci/bds/models/add_worker_nodes_details.py | pabs3/oci-python-sdk | 437ba18ce39af2d1090e277c4bb8750c89f83021 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | src/oci/bds/models/add_worker_nodes_details.py | pabs3/oci-python-sdk | 437ba18ce39af2d1090e277c4bb8750c89f83021 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
# Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel # noqa: F401
from oci.decorators import init_model_state_from_kwargs
@init_model_state_from_kwargs
class AddWorkerNodesDetails(object):
"""
The information about added nodes.
"""
#: A constant which can be used with the node_type property of a AddWorkerNodesDetails.
#: This constant has a value of "WORKER"
NODE_TYPE_WORKER = "WORKER"
#: A constant which can be used with the node_type property of a AddWorkerNodesDetails.
#: This constant has a value of "COMPUTE_ONLY_WORKER"
NODE_TYPE_COMPUTE_ONLY_WORKER = "COMPUTE_ONLY_WORKER"
def __init__(self, **kwargs):
"""
Initializes a new AddWorkerNodesDetails object with values from keyword arguments.
The following keyword arguments are supported (corresponding to the getters/setters of this class):
:param cluster_admin_password:
The value to assign to the cluster_admin_password property of this AddWorkerNodesDetails.
:type cluster_admin_password: str
:param number_of_worker_nodes:
The value to assign to the number_of_worker_nodes property of this AddWorkerNodesDetails.
:type number_of_worker_nodes: int
:param node_type:
The value to assign to the node_type property of this AddWorkerNodesDetails.
Allowed values for this property are: "WORKER", "COMPUTE_ONLY_WORKER"
:type node_type: str
:param shape:
The value to assign to the shape property of this AddWorkerNodesDetails.
:type shape: str
:param block_volume_size_in_gbs:
The value to assign to the block_volume_size_in_gbs property of this AddWorkerNodesDetails.
:type block_volume_size_in_gbs: int
:param shape_config:
The value to assign to the shape_config property of this AddWorkerNodesDetails.
:type shape_config: oci.bds.models.ShapeConfigDetails
"""
self.swagger_types = {
'cluster_admin_password': 'str',
'number_of_worker_nodes': 'int',
'node_type': 'str',
'shape': 'str',
'block_volume_size_in_gbs': 'int',
'shape_config': 'ShapeConfigDetails'
}
self.attribute_map = {
'cluster_admin_password': 'clusterAdminPassword',
'number_of_worker_nodes': 'numberOfWorkerNodes',
'node_type': 'nodeType',
'shape': 'shape',
'block_volume_size_in_gbs': 'blockVolumeSizeInGBs',
'shape_config': 'shapeConfig'
}
self._cluster_admin_password = None
self._number_of_worker_nodes = None
self._node_type = None
self._shape = None
self._block_volume_size_in_gbs = None
self._shape_config = None
@property
def cluster_admin_password(self):
"""
**[Required]** Gets the cluster_admin_password of this AddWorkerNodesDetails.
Base-64 encoded password for the cluster (and Cloudera Manager) admin user.
:return: The cluster_admin_password of this AddWorkerNodesDetails.
:rtype: str
"""
return self._cluster_admin_password
@cluster_admin_password.setter
def cluster_admin_password(self, cluster_admin_password):
"""
Sets the cluster_admin_password of this AddWorkerNodesDetails.
Base-64 encoded password for the cluster (and Cloudera Manager) admin user.
:param cluster_admin_password: The cluster_admin_password of this AddWorkerNodesDetails.
:type: str
"""
self._cluster_admin_password = cluster_admin_password
@property
def number_of_worker_nodes(self):
"""
**[Required]** Gets the number_of_worker_nodes of this AddWorkerNodesDetails.
Number of additional worker nodes for the cluster.
:return: The number_of_worker_nodes of this AddWorkerNodesDetails.
:rtype: int
"""
return self._number_of_worker_nodes
@number_of_worker_nodes.setter
def number_of_worker_nodes(self, number_of_worker_nodes):
"""
Sets the number_of_worker_nodes of this AddWorkerNodesDetails.
Number of additional worker nodes for the cluster.
:param number_of_worker_nodes: The number_of_worker_nodes of this AddWorkerNodesDetails.
:type: int
"""
self._number_of_worker_nodes = number_of_worker_nodes
@property
def node_type(self):
"""
**[Required]** Gets the node_type of this AddWorkerNodesDetails.
        Worker node type; it can be either a worker data node or a compute-only worker node.
Allowed values for this property are: "WORKER", "COMPUTE_ONLY_WORKER"
:return: The node_type of this AddWorkerNodesDetails.
:rtype: str
"""
return self._node_type
@node_type.setter
def node_type(self, node_type):
"""
Sets the node_type of this AddWorkerNodesDetails.
        Worker node type; it can be either a worker data node or a compute-only worker node.
:param node_type: The node_type of this AddWorkerNodesDetails.
:type: str
"""
allowed_values = ["WORKER", "COMPUTE_ONLY_WORKER"]
if not value_allowed_none_or_none_sentinel(node_type, allowed_values):
raise ValueError(
"Invalid value for `node_type`, must be None or one of {0}"
.format(allowed_values)
)
self._node_type = node_type
@property
def shape(self):
"""
Gets the shape of this AddWorkerNodesDetails.
        Shape of the node. This must be specified the first time a compute-only worker node is added; otherwise, it is a read-only property.
:return: The shape of this AddWorkerNodesDetails.
:rtype: str
"""
return self._shape
@shape.setter
def shape(self, shape):
"""
Sets the shape of this AddWorkerNodesDetails.
        Shape of the node. This must be specified the first time a compute-only worker node is added; otherwise, it is a read-only property.
:param shape: The shape of this AddWorkerNodesDetails.
:type: str
"""
self._shape = shape
@property
def block_volume_size_in_gbs(self):
"""
Gets the block_volume_size_in_gbs of this AddWorkerNodesDetails.
        The size, in GB, of the block volume to be attached to the given node. This must be specified the first time a compute-only worker node is added; otherwise, it is a read-only property.
:return: The block_volume_size_in_gbs of this AddWorkerNodesDetails.
:rtype: int
"""
return self._block_volume_size_in_gbs
@block_volume_size_in_gbs.setter
def block_volume_size_in_gbs(self, block_volume_size_in_gbs):
"""
Sets the block_volume_size_in_gbs of this AddWorkerNodesDetails.
        The size, in GB, of the block volume to be attached to the given node. This must be specified the first time a compute-only worker node is added; otherwise, it is a read-only property.
:param block_volume_size_in_gbs: The block_volume_size_in_gbs of this AddWorkerNodesDetails.
:type: int
"""
self._block_volume_size_in_gbs = block_volume_size_in_gbs
@property
def shape_config(self):
"""
Gets the shape_config of this AddWorkerNodesDetails.
:return: The shape_config of this AddWorkerNodesDetails.
:rtype: oci.bds.models.ShapeConfigDetails
"""
return self._shape_config
@shape_config.setter
def shape_config(self, shape_config):
"""
Sets the shape_config of this AddWorkerNodesDetails.
:param shape_config: The shape_config of this AddWorkerNodesDetails.
:type: oci.bds.models.ShapeConfigDetails
"""
self._shape_config = shape_config
def __repr__(self):
return formatted_flat_dict(self)
def __eq__(self, other):
if other is None:
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
| 36.037657 | 245 | 0.676652 | 1,085 | 8,613 | 5.105991 | 0.152074 | 0.033574 | 0.146209 | 0.061733 | 0.677617 | 0.572202 | 0.440975 | 0.355415 | 0.337726 | 0.305054 | 0 | 0.003605 | 0.259259 | 8,613 | 238 | 246 | 36.189076 | 0.864734 | 0.542436 | 0 | 0.075949 | 0 | 0 | 0.129327 | 0.042794 | 0 | 0 | 0 | 0 | 0 | 1 | 0.202532 | false | 0.101266 | 0.025316 | 0.025316 | 0.392405 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
ac891ba15179ab5fee32bcbf76fbc9ba218b09c8 | 17,073 | py | Python | pysnmp-with-texts/ZHONE-PHY-SONET-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/ZHONE-PHY-SONET-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/ZHONE-PHY-SONET-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module ZHONE-PHY-SONET-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/ZHONE-PHY-SONET-MIB
# Produced by pysmi-0.3.4 at Wed May 1 15:47:49 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
ObjectIdentifier, OctetString, Integer = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "OctetString", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsUnion, ValueSizeConstraint, ValueRangeConstraint, ConstraintsIntersection, SingleValueConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsUnion", "ValueSizeConstraint", "ValueRangeConstraint", "ConstraintsIntersection", "SingleValueConstraint")
InterfaceIndex, = mibBuilder.importSymbols("IF-MIB", "InterfaceIndex")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
Unsigned32, Bits, Counter32, Gauge32, MibIdentifier, ObjectIdentity, Counter64, NotificationType, Integer32, MibScalar, MibTable, MibTableRow, MibTableColumn, iso, IpAddress, ModuleIdentity, TimeTicks = mibBuilder.importSymbols("SNMPv2-SMI", "Unsigned32", "Bits", "Counter32", "Gauge32", "MibIdentifier", "ObjectIdentity", "Counter64", "NotificationType", "Integer32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "iso", "IpAddress", "ModuleIdentity", "TimeTicks")
TextualConvention, TruthValue, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "TruthValue", "DisplayString")
sonetMediumEntry, sonetPathCurrentStatus, sonetSectionCurrentStatus, sonetLineCurrentStatus = mibBuilder.importSymbols("SONET-MIB", "sonetMediumEntry", "sonetPathCurrentStatus", "sonetSectionCurrentStatus", "sonetLineCurrentStatus")
zhoneModules, zhoneSonet = mibBuilder.importSymbols("Zhone", "zhoneModules", "zhoneSonet")
phySonet = ModuleIdentity((1, 3, 6, 1, 4, 1, 5504, 6, 16))
phySonet.setRevisions(('2004-08-18 11:47', '2003-07-10 13:30', '2002-03-26 14:30', '2001-09-12 15:08', '2001-07-19 18:00', '2001-02-22 11:35', '2000-12-19 15:23', '2000-12-18 16:20',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
if mibBuilder.loadTexts: phySonet.setRevisionsDescriptions(('1.03.04 - Add zhoneSonetErrorStatsTable.', '1.03.02 Add sonetPathStatusChange trap.', '1.03.01 Add sonetSectionStatusChange trap and sonetLineStatusChange trap.', 'V01.03.00 Changed names for valid values for sonetClockTransmitSource', 'V01.02.00 Add Sonet clock source change trap.', 'V01.01.00 - Add Sonet Medium Extension Table.', 'V01.00.01 - Add Zhone keywords.', 'V01.00.00 - Initial Release',))
if mibBuilder.loadTexts: phySonet.setLastUpdated('200408181330Z')
if mibBuilder.loadTexts: phySonet.setOrganization('Zhone Technologies, Inc.')
if mibBuilder.loadTexts: phySonet.setContactInfo(' Postal: Zhone Technologies, Inc. @ Zhone Way 7001 Oakport Street Oakland, CA 94621 USA Toll-Free: +1 877-ZHONE20 (+1 877-946-6320) Tel: +1-510-777-7000 Fax: +1-510-777-7001 E-mail: support@zhone.com')
if mibBuilder.loadTexts: phySonet.setDescription('SONET physical MIB to configure and monitor SONET physical attributes. ')
sonetClockTable = MibTable((1, 3, 6, 1, 4, 1, 5504, 5, 9, 1), )
if mibBuilder.loadTexts: sonetClockTable.setStatus('current')
if mibBuilder.loadTexts: sonetClockTable.setDescription('MIB table for clock related configuration for SONET interfaces.')
sonetClockEntry = MibTableRow((1, 3, 6, 1, 4, 1, 5504, 5, 9, 1, 1), )
sonetMediumEntry.registerAugmentions(("ZHONE-PHY-SONET-MIB", "sonetClockEntry"))
sonetClockEntry.setIndexNames(*sonetMediumEntry.getIndexNames())
if mibBuilder.loadTexts: sonetClockEntry.setStatus('current')
if mibBuilder.loadTexts: sonetClockEntry.setDescription('An entry of the sonetClockEntry. sonetClockEntry is augmented to sonetMediumEntry defined in rfc2558.mib. Whenever a row is instantiated in the sonetMediumTable, a corresponding row will be instantiated in the sonetClockTable.')
sonetClockExternalRecovery = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 1, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sonetClockExternalRecovery.setStatus('current')
if mibBuilder.loadTexts: sonetClockExternalRecovery.setDescription("This variable indicates if external clock recovery is enabled for this SONET interface. The default value is 'enabled'.")
sonetClockTransmitSource = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("loopTiming", 1), ("throughTiming", 2), ("localTiming", 3), ("external155MHz", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sonetClockTransmitSource.setStatus('current')
if mibBuilder.loadTexts: sonetClockTransmitSource.setDescription("This variable describes the SONET transmit clock source. Valid values are: loopTiming - transmit clock synthesized from the recovered receive clock. throughTiming - transmit clock is derived from the recovered receive clock of another SONET interface. localTiming - transmit clock synthesized from a local clock source or that an external clock is attached to the box containing the interface. external155MHz - transmit clock synthesized from an external 155.52 MHz source. The default value is 'loopTiming'. 'external155MHz' option is not valid for Sechtor 100.")
sonetMediumExtTable = MibTable((1, 3, 6, 1, 4, 1, 5504, 5, 9, 2), )
if mibBuilder.loadTexts: sonetMediumExtTable.setStatus('current')
if mibBuilder.loadTexts: sonetMediumExtTable.setDescription('This is an extension of the standard Sonet MIB (RFC 2558).')
sonetMediumExtEntry = MibTableRow((1, 3, 6, 1, 4, 1, 5504, 5, 9, 2, 1), )
sonetMediumEntry.registerAugmentions(("ZHONE-PHY-SONET-MIB", "sonetMediumExtEntry"))
sonetMediumExtEntry.setIndexNames(*sonetMediumEntry.getIndexNames())
if mibBuilder.loadTexts: sonetMediumExtEntry.setStatus('current')
if mibBuilder.loadTexts: sonetMediumExtEntry.setDescription('Each row is an extension to the sonetMediumTable for Zhone specific fields. This row is created when the augmented sonetMediumEntry is created.')
sonetMediumExtScrambleEnabled = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 2, 1, 1), TruthValue().clone('true')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sonetMediumExtScrambleEnabled.setStatus('current')
if mibBuilder.loadTexts: sonetMediumExtScrambleEnabled.setDescription('This field describes the enabled status of the Sonet Scramble mode. If this field is true(1), then Scramble mode is enabled; if this field is false(2), scramble mode is disabled.')
sonetMediumExtLineScrmEnabled = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 2, 1, 2), TruthValue().clone('true')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sonetMediumExtLineScrmEnabled.setStatus('current')
if mibBuilder.loadTexts: sonetMediumExtLineScrmEnabled.setDescription('This field describes the enabled status of the Line level Sonet Scramble mode. If this field is true(1), then Scramble mode is enabled. If this field is false(2), scramble mode is disabled.')
sonetTraps = ObjectIdentity((1, 3, 6, 1, 4, 1, 5504, 5, 9, 3))
if mibBuilder.loadTexts: sonetTraps.setStatus('current')
if mibBuilder.loadTexts: sonetTraps.setDescription('All Zhone Sonet traps will be defined under this object identity.')
sonetV2Traps = ObjectIdentity((1, 3, 6, 1, 4, 1, 5504, 5, 9, 3, 0))
if mibBuilder.loadTexts: sonetV2Traps.setStatus('current')
if mibBuilder.loadTexts: sonetV2Traps.setDescription('This object identity adds a zero(0) for the next to last sub-identifier which should be used for new SNMPv2 Traps.')
sonetClockTransmitSourceChange = NotificationType((1, 3, 6, 1, 4, 1, 5504, 5, 9, 3, 0, 1)).setObjects(("ZHONE-PHY-SONET-MIB", "sonetClockExternalRecovery"), ("ZHONE-PHY-SONET-MIB", "sonetClockTransmitSource"))
if mibBuilder.loadTexts: sonetClockTransmitSourceChange.setStatus('current')
if mibBuilder.loadTexts: sonetClockTransmitSourceChange.setDescription('A notification trap is sent when either sonetClockExternalRecovery or sonetClockTransmitSource is changed. The trap object is identified by the OID instance of sonetClockTransmitSource described in the OBJECTS clause.')
sonetSectionStatusChange = NotificationType((1, 3, 6, 1, 4, 1, 5504, 5, 9, 3, 0, 2)).setObjects(("SONET-MIB", "sonetSectionCurrentStatus"))
if mibBuilder.loadTexts: sonetSectionStatusChange.setStatus('current')
if mibBuilder.loadTexts: sonetSectionStatusChange.setDescription('A notification trap is sent when there is a change in the sonet SECTION status. Currently the following are supported: NO-DEFECT = 1 LOS = 2 LOF = 4 The trap object is identified by the OID instance of sonetSectionCurrentStatus described in the OBJECTS clause.')
sonetLineStatusChange = NotificationType((1, 3, 6, 1, 4, 1, 5504, 5, 9, 3, 0, 3)).setObjects(("SONET-MIB", "sonetLineCurrentStatus"))
if mibBuilder.loadTexts: sonetLineStatusChange.setStatus('current')
if mibBuilder.loadTexts: sonetLineStatusChange.setDescription('A notification trap is sent when there is a change in the sonet LINE status. Currently the following are supported: NO-DEFECT = 1 AIS = 2 RDI = 4 The trap object is identified by the OID instance of sonetLineCurrentStatus described in the OBJECTS clause.')
sonetPathStatusChange = NotificationType((1, 3, 6, 1, 4, 1, 5504, 5, 9, 3, 0, 4)).setObjects(("SONET-MIB", "sonetPathCurrentStatus"))
if mibBuilder.loadTexts: sonetPathStatusChange.setStatus('current')
if mibBuilder.loadTexts: sonetPathStatusChange.setDescription('A notification trap is sent when there is a change in the sonetPathCurrentStatus. Currently the following are supported: 1 sonetPathNoDefect 2 sonetPathSTSLOP 4 sonetPathSTSAIS 8 sonetPathSTSRDI 16 sonetPathUnequipped 32 sonetPathSignalLabelMismatch The trap object is identified by the OID instance of sonetPathCurrentStatus described in the OBJECTS clause.')
zhoneSonetErrorStatsTable = MibTable((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6), )
if mibBuilder.loadTexts: zhoneSonetErrorStatsTable.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsTable.setDescription('Description.')
zhoneSonetErrorStatsEntry = MibTableRow((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1), ).setIndexNames((0, "ZHONE-PHY-SONET-MIB", "zhoneSonetErrorStatsIndex"))
if mibBuilder.loadTexts: zhoneSonetErrorStatsEntry.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsEntry.setDescription('Description.')
zhoneSonetErrorStatsIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 1), InterfaceIndex())
if mibBuilder.loadTexts: zhoneSonetErrorStatsIndex.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsIndex.setDescription('Description.')
zhoneSonetErrorStatsLineFebeCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsLineFebeCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsLineFebeCount.setDescription('Number of RLOP far-end block errors (FEBE).')
zhoneSonetErrorStatsPathFebeCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsPathFebeCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsPathFebeCount.setDescription('Number of RPOP far-end block errors (FEBE).')
zhoneSonetErrorStatsLineBipCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 4), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsLineBipCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsLineBipCount.setDescription('Number of Receive Line Overhead Processor (RLOP) BIP-8 errors. The RLOP is responsible for line-level alarms and for monitoring performance.')
zhoneSonetErrorStatsSectionBipCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 5), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsSectionBipCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsSectionBipCount.setDescription('Number of Receive Section Overhead Processor (RSOP) bit-interleaved parity (BIP)-8 errors. The RSOP synchronizes and descrambles frames and provides section-level alarms and performance monitoring.')
zhoneSonetErrorStatsPathBipCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 6), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsPathBipCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsPathBipCount.setDescription('Number of Receive Path Overhead Processor (RPOP) BIP-8 errors. The RPOP interprets pointers and extracts path overhead and the synchronous payload envelope. It is also responsible for path-level alarms and for monitoring performance.')
zhoneSonetErrorStatsOofCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 7), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsOofCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsOofCount.setDescription('Near end is out of frame. False indicates that the line is up and in frame.')
zhoneSonetErrorStatsRxCellCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 8), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsRxCellCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsRxCellCount.setDescription('Receive ATM Cell Processor (RACP) receive cell count.')
zhoneSonetErrorStatsTxCellCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 9), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsTxCellCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsTxCellCount.setDescription('Transmit ATM Cell Processor (TACP) transmit cell count.')
zhoneSonetErrorStatsHecCorrectedCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 10), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsHecCorrectedCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsHecCorrectedCount.setDescription('Number of corrected HEC cells.')
zhoneSonetErrorStatsHecUncorrectedCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 11), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsHecUncorrectedCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsHecUncorrectedCount.setDescription('Number of uncorrected dropped HEC cells.')
zhoneSonetErrorStatsCellFifoOverflowCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 12), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsCellFifoOverflowCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsCellFifoOverflowCount.setDescription('Number of cells dropped because of first in, first out (FIFO) overflow.')
zhoneSonetErrorStatsLocdCount = MibTableColumn((1, 3, 6, 1, 4, 1, 5504, 5, 9, 6, 1, 13), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: zhoneSonetErrorStatsLocdCount.setStatus('current')
if mibBuilder.loadTexts: zhoneSonetErrorStatsLocdCount.setDescription('Loss of cell delineation.')
mibBuilder.exportSymbols("ZHONE-PHY-SONET-MIB", sonetLineStatusChange=sonetLineStatusChange, zhoneSonetErrorStatsPathBipCount=zhoneSonetErrorStatsPathBipCount, zhoneSonetErrorStatsSectionBipCount=zhoneSonetErrorStatsSectionBipCount, sonetTraps=sonetTraps, zhoneSonetErrorStatsPathFebeCount=zhoneSonetErrorStatsPathFebeCount, zhoneSonetErrorStatsRxCellCount=zhoneSonetErrorStatsRxCellCount, sonetMediumExtEntry=sonetMediumExtEntry, sonetMediumExtTable=sonetMediumExtTable, sonetClockEntry=sonetClockEntry, sonetSectionStatusChange=sonetSectionStatusChange, sonetClockExternalRecovery=sonetClockExternalRecovery, sonetClockTransmitSource=sonetClockTransmitSource, sonetMediumExtLineScrmEnabled=sonetMediumExtLineScrmEnabled, zhoneSonetErrorStatsHecCorrectedCount=zhoneSonetErrorStatsHecCorrectedCount, zhoneSonetErrorStatsTable=zhoneSonetErrorStatsTable, zhoneSonetErrorStatsIndex=zhoneSonetErrorStatsIndex, zhoneSonetErrorStatsLineFebeCount=zhoneSonetErrorStatsLineFebeCount, phySonet=phySonet, sonetV2Traps=sonetV2Traps, sonetMediumExtScrambleEnabled=sonetMediumExtScrambleEnabled, zhoneSonetErrorStatsEntry=zhoneSonetErrorStatsEntry, zhoneSonetErrorStatsLineBipCount=zhoneSonetErrorStatsLineBipCount, PYSNMP_MODULE_ID=phySonet, zhoneSonetErrorStatsHecUncorrectedCount=zhoneSonetErrorStatsHecUncorrectedCount, sonetPathStatusChange=sonetPathStatusChange, sonetClockTransmitSourceChange=sonetClockTransmitSourceChange, sonetClockTable=sonetClockTable, zhoneSonetErrorStatsCellFifoOverflowCount=zhoneSonetErrorStatsCellFifoOverflowCount, zhoneSonetErrorStatsOofCount=zhoneSonetErrorStatsOofCount, zhoneSonetErrorStatsLocdCount=zhoneSonetErrorStatsLocdCount, zhoneSonetErrorStatsTxCellCount=zhoneSonetErrorStatsTxCellCount)
| 144.686441 | 1,720 | 0.806244 | 1,886 | 17,073 | 7.297455 | 0.208908 | 0.05493 | 0.096127 | 0.008719 | 0.343312 | 0.225096 | 0.166824 | 0.146116 | 0.137252 | 0.12272 | 0 | 0.051892 | 0.086862 | 17,073 | 117 | 1,721 | 145.923077 | 0.830917 | 0.019446 | 0 | 0 | 0 | 0.137615 | 0.356562 | 0.033887 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.082569 | 0 | 0.082569 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3bac9991d0a3f9781917014c3f31e06c304cb203 | 167 | py | Python | ToolApp/WordCloudApp/test/jieba_test.py | JimouChen/python-application | b7b16506a17e2c304d1c5fabd6385e96be211c56 | [
"Apache-2.0"
] | 1 | 2020-08-09T12:47:27.000Z | 2020-08-09T12:47:27.000Z | ToolApp/WordCloudApp/test/jieba_test.py | JimouChen/Python_Application | b7b16506a17e2c304d1c5fabd6385e96be211c56 | [
"Apache-2.0"
] | null | null | null | ToolApp/WordCloudApp/test/jieba_test.py | JimouChen/Python_Application | b7b16506a17e2c304d1c5fabd6385e96be211c56 | [
"Apache-2.0"
] | null | null | null | """
# @Time : 2020/8/31
# @Author : Jimou Chen
"""
import jieba
text_list = jieba.lcut('粉丝的芳草飞机饿哦平均分')
print(text_list)
text = '---'.join(text_list)
print(text) | 16.7 | 38 | 0.640719 | 23 | 167 | 4.521739 | 0.652174 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.161677 | 167 | 10 | 39 | 16.7 | 0.692857 | 0.287425 | 0 | 0 | 0 | 0 | 0.133929 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.4 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3bb9b49963b0a06322bfcbb10b8c228d50b8ee18 | 430 | py | Python | 2015/01/not_quite_lisp.py | GeoffRiley/AdventOfCode | 27fe8670a1923cb3b0675784f5e855ad18c29c93 | [
"Unlicense"
] | 2 | 2020-12-12T03:18:45.000Z | 2021-12-17T00:35:33.000Z | 2015/01/not_quite_lisp.py | GeoffRiley/AdventOfCode | 27fe8670a1923cb3b0675784f5e855ad18c29c93 | [
"Unlicense"
] | null | null | null | 2015/01/not_quite_lisp.py | GeoffRiley/AdventOfCode | 27fe8670a1923cb3b0675784f5e855ad18c29c93 | [
"Unlicense"
] | null | null | null | def find_floor(brackets: str) -> int:
return sum(1 if c == '(' else -1 for c in brackets)
def find_basement_entry(brackets: str) -> int:
return [n + 1 for n in range(len(brackets)) if find_floor(brackets[:n + 1]) == -1][0]
if __name__ == '__main__':
with open('input') as f:
bracket_text = f.read()
print(f'Part 1: {find_floor(bracket_text)}')
print(f'Part 2: {find_basement_entry(bracket_text)}')
| 30.714286 | 89 | 0.639535 | 70 | 430 | 3.671429 | 0.471429 | 0.105058 | 0.132296 | 0.155642 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023188 | 0.197674 | 430 | 13 | 90 | 33.076923 | 0.721739 | 0 | 0 | 0 | 0 | 0 | 0.211628 | 0.14186 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0.222222 | 0.444444 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
3bc1cff21c183110e228ec1743f045bd3aa770ee | 1,835 | py | Python | tests/results/025_complex06.py | CowboyTim/python-storable | f03cf0eae60eeb9c3345e9ddf3370a49a316e472 | [
"Zlib"
] | 8 | 2015-04-24T07:27:42.000Z | 2020-10-30T20:51:40.000Z | tests/results/025_complex06.py | CowboyTim/python-storable | f03cf0eae60eeb9c3345e9ddf3370a49a316e472 | [
"Zlib"
] | 7 | 2015-09-08T01:40:12.000Z | 2021-09-29T15:19:46.000Z | tests/results/025_complex06.py | CowboyTim/python-storable | f03cf0eae60eeb9c3345e9ddf3370a49a316e472 | [
"Zlib"
] | 10 | 2015-07-22T13:57:04.000Z | 2020-09-03T18:32:39.000Z | result = [
None,
6,
['a', 'b', 'c', {'uu': 5.6, 'oo': [Ellipsis], 'ii': {Ellipsis}}],
{'uu': 5.6, 'oo': [Ellipsis], 'ii': {Ellipsis}},
{'uu': 5.6, 'oo': [Ellipsis], 'ii': {Ellipsis}},
['a', 'b', 'c', {'uu': 5.6, 'oo': [Ellipsis], 'ii': {Ellipsis}}],
None,
6
]
def is_equal(a, b, message):
comparable_a = [
a[0],
a[1],
[a[2][0], a[2][1], a[2][2], {'uu': a[2][3]['uu'],
'oo': [Ellipsis],
'ii': [Ellipsis]}],
{'uu': a[3]['uu'], 'oo': [Ellipsis], 'ii': [Ellipsis]},
{'uu': a[4]['uu'], 'oo': [Ellipsis], 'ii': [Ellipsis]},
[a[5][0], a[5][1], a[5][2], {'uu': a[5][3]['uu'],
'oo': [Ellipsis],
'ii': [Ellipsis]}],
a[6],
a[7]
]
comparable_b = [
b[0],
b[1],
[b[2][0], b[2][1], b[2][2], {'uu': b[2][3]['uu'],
'oo': [Ellipsis],
'ii': [Ellipsis]}],
{'uu': b[3]['uu'], 'oo': [Ellipsis], 'ii': [Ellipsis]},
{'uu': b[4]['uu'], 'oo': [Ellipsis], 'ii': [Ellipsis]},
[b[5][0], b[5][1], b[5][2], {'uu': b[5][3]['uu'],
'oo': [Ellipsis],
'ii': [Ellipsis]}],
b[6],
b[7]
]
if len(a) != len(b):
raise AssertionError(message + ': Array length differs')
non_recursive_a = [a[0], a[1], a[-2], a[-1]]
non_recursive_b = [b[0], b[1], b[-2], b[-1]]
if non_recursive_a != non_recursive_b:
raise AssertionError(message + ': non-recursive part differs')
if comparable_a != comparable_b:
raise AssertionError(message + ': recursive part differs')
| 32.767857 | 70 | 0.370027 | 227 | 1,835 | 2.933921 | 0.140969 | 0.18018 | 0.216216 | 0.36036 | 0.501502 | 0.501502 | 0.426426 | 0.312312 | 0.153153 | 0.153153 | 0 | 0.054007 | 0.374387 | 1,835 | 55 | 71 | 33.363636 | 0.526132 | 0 | 0 | 0.297872 | 0 | 0 | 0.091553 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 1 | 0.021277 | false | 0 | 0 | 0 | 0.021277 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3bdb694e0e29d95359c5084b8e5002849921f98f | 8,096 | py | Python | Spark/spark_media_tipo.py | Dielam/Dielam.github.io | 19f01d693ef2c590f3ac35a3a143ae3dedf8594e | [
"MIT"
] | null | null | null | Spark/spark_media_tipo.py | Dielam/Dielam.github.io | 19f01d693ef2c590f3ac35a3a143ae3dedf8594e | [
"MIT"
] | null | null | null | Spark/spark_media_tipo.py | Dielam/Dielam.github.io | 19f01d693ef2c590f3ac35a3a143ae3dedf8594e | [
"MIT"
] | 1 | 2020-12-23T16:45:20.000Z | 2020-12-23T16:45:20.000Z | #!/usr/bin/python
import sys
from pyspark import SparkContext
from shutil import rmtree
import os.path as path
if path.exists("output"):
rmtree("output")
def media_columna(datosRDD, col, cabecera, fichero):
    # Shared helper: drop the header row and empty values of the given column,
    # sum the prices, divide by the row count, and save the mean to `fichero`.
    datosRDD = datosRDD.filter(lambda x: x[col] != cabecera)
    datosRDD = datosRDD.filter(lambda x: x[col] != '')
    precioRDD = datosRDD.map(lambda rows: (["1", float(rows[col])]))
    precioRDD = precioRDD.reduceByKey(lambda x, y: x + y)
    tamRDD = datosRDD.count()
    mediaTotal = precioRDD.map(lambda rows: ([rows[1], int(tamRDD)]))
    mediaTotal = mediaTotal.map(lambda calc: (calc[0] / calc[1]))
    mediaTotal.saveAsTextFile(fichero)
    return 0
def gasolina_95(datosRDD):
    return media_columna(datosRDD, 0, 'precio_gasolina_95', "output/media_tipo_gasolina_95.txt")
def gasoleo_a(datosRDD):
    return media_columna(datosRDD, 1, 'precio_gasleo_a', "output/media_tipo_gasoleo_a.txt")
def gasoleo_b(datosRDD):
    return media_columna(datosRDD, 2, 'precio_gasleo_b', "output/media_tipo_gasoleo_b.txt")
def bioetanol(datosRDD):
    return media_columna(datosRDD, 3, 'precio_bioetanol', "output/media_tipo_bioetanol.txt")
def nuevo_gasoleo_a(datosRDD):
    return media_columna(datosRDD, 4, 'precio_nuevo_gasleo_a', "output/media_tipo_nuevo_gasoleo_a.txt")
def biodiesel(datosRDD):
    return media_columna(datosRDD, 5, 'precio_biodiesel', "output/media_tipo_biodiesel.txt")
def ester_metilico(datosRDD):
    return media_columna(datosRDD, 6, 'f__ster_metlico', "output/media_tipo_ester_metilico.txt")
def bioalcohol(datosRDD):
datosRDD = datosRDD.filter(lambda x: x[7]!= 'f__bioalcohol')
datosRDD = datosRDD.filter(lambda x: x[7]!= '')
precioRDD = datosRDD.map(lambda rows: (["1", float(rows[7])]))
precioRDD = precioRDD.reduceByKey(lambda x,y: x+y)
tamRDD = datosRDD.count()
mediaTotal = precioRDD.map(lambda rows: ([rows[1], int(tamRDD)]))
mediaTotal = mediaTotal.map(lambda calc:(calc[0]/calc[1]))
mediaTotal.saveAsTextFile("output/media_tipo_bioalcohol.txt")
return 0
def precio_gasolina_98(datosRDD):
datosRDD = datosRDD.filter(lambda x: x[8]!= 'precio_gasolina_98')
datosRDD = datosRDD.filter(lambda x: x[8]!= '')
precioRDD = datosRDD.map(lambda rows: (["1", float(rows[8])]))
precioRDD = precioRDD.reduceByKey(lambda x,y: x+y)
tamRDD = datosRDD.count()
mediaTotal = precioRDD.map(lambda rows: ([rows[1], int(tamRDD)]))
mediaTotal = mediaTotal.map(lambda calc:(calc[0]/calc[1]))
mediaTotal.saveAsTextFile("output/media_tipo_gasolina_98.txt")
return 0
def gas_natural_comprimido(datosRDD):
datosRDD = datosRDD.filter(lambda x: x[9]!= 'precio_gas_natural_comprimido')
datosRDD = datosRDD.filter(lambda x: x[9]!= '')
precioRDD = datosRDD.map(lambda rows: (["1", float(rows[9])]))
precioRDD = precioRDD.reduceByKey(lambda x,y: x+y)
tamRDD = datosRDD.count()
mediaTotal = precioRDD.map(lambda rows: ([rows[1], int(tamRDD)]))
mediaTotal = mediaTotal.map(lambda calc:(calc[0]/calc[1]))
mediaTotal.saveAsTextFile("output/media_tipo_gas_natural_comprimido.txt")
return 0
def gas_natural_licuado(datosRDD):
datosRDD = datosRDD.filter(lambda x: x[10]!= 'precio_gas_natural_licuado')
datosRDD = datosRDD.filter(lambda x: x[10]!= '')
precioRDD = datosRDD.map(lambda rows: (["1", float(rows[10])]))
precioRDD = precioRDD.reduceByKey(lambda x,y: x+y)
tamRDD = datosRDD.count()
mediaTotal = precioRDD.map(lambda rows: ([rows[1], int(tamRDD)]))
mediaTotal = mediaTotal.map(lambda calc:(calc[0]/calc[1]))
mediaTotal.saveAsTextFile("output/media_tipo_gas_natural_licuado.txt")
return 0
def gas_licuados_del_petr(datosRDD):
datosRDD = datosRDD.filter(lambda x: x[11]!= 'precio_gases_licuados_del_petr')
datosRDD = datosRDD.filter(lambda x: x[11]!= '')
precioRDD = datosRDD.map(lambda rows: (["1", float(rows[11])]))
precioRDD = precioRDD.reduceByKey(lambda x,y: x+y)
tamRDD = datosRDD.count()
mediaTotal = precioRDD.map(lambda rows: ([rows[1], int(tamRDD)]))
mediaTotal = mediaTotal.map(lambda calc:(calc[0]/calc[1]))
mediaTotal.saveAsTextFile("output/media_tipo_gas_licuado_del_petr.txt")
return 0
if len(sys.argv) > 1:
sc = SparkContext()
tipo = sys.argv[1]
tipoRDD = sc.textFile("Gasolineras.csv")
tipoRDD = tipoRDD.map(lambda line: line.encode("ascii", "ignore"))
tipoRDD = tipoRDD.filter(lambda line: str(line.split(',')[11]) != '')
tipoRDD = tipoRDD.map(lambda line:(str(line.split(',')[11]) , str(line.split(',')[12]), str(line.split(',')[13]), str(line.split(',')[14]), str(line.split(',')[15]), str(line.split(',')[16]), str(line.split(',')[17]), str(line.split(',')[18]), str(line.split(',')[19]), str(line.split(',')[20]), str(line.split(',')[21]), str(line.split(',')[22])))
if tipoRDD.isEmpty():
result = sc.parallelize("0")
result.saveAsTextFile("output")
else:
if tipo == "gasolina_95":
gasolina_95(tipoRDD)
elif tipo == "gasoleo_a":
gasoleo_a(tipoRDD)
elif tipo == "gasoleo_b":
gasoleo_b(tipoRDD)
elif tipo == "bioetanol":
bioetanol(tipoRDD)
elif tipo == "nuevo_gasoleo_a":
nuevo_gasoleo_a(tipoRDD)
elif tipo == "biodiesel":
biodiesel(tipoRDD)
elif tipo == "ester_metilico":
ester_metilico(tipoRDD)
elif tipo == "bioalcohol":
bioalcohol(tipoRDD)
elif tipo == "gasolina_98":
gasolina_98(tipoRDD)
elif tipo == "gas_natural_comprimido":
gas_natural_comprimido(tipoRDD)
elif tipo == "gas_natural_licuado":
gas_natural_licuado(tipoRDD)
elif tipo == "glp":
gas_licuados_del_petr(tipoRDD)
else:
print "ERROR DE TIPO"
else:
print "Error no ha introducido tipo."
| 47.345029 | 352 | 0.664526 | 1,054 | 8,096 | 5.003795 | 0.104364 | 0.064846 | 0.100114 | 0.127418 | 0.769814 | 0.714448 | 0.689989 | 0.555745 | 0.450702 | 0.433257 | 0 | 0.022497 | 0.170949 | 8,096 | 170 | 353 | 47.623529 | 0.76326 | 0.001976 | 0 | 0.381818 | 0 | 0 | 0.112266 | 0.068078 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.024242 | null | null | 0.012121 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3be21c03fb4faff64d6a2faeeae62868998b7dbf | 200 | py | Python | TOPCOMP18/up2.py | AtilioA/CompProg-20211 | ebe4d5e39957c40c495f292275eacf3fbcac0799 | [
"Unlicense"
] | null | null | null | TOPCOMP18/up2.py | AtilioA/CompProg-20211 | ebe4d5e39957c40c495f292275eacf3fbcac0799 | [
"Unlicense"
] | null | null | null | TOPCOMP18/up2.py | AtilioA/CompProg-20211 | ebe4d5e39957c40c495f292275eacf3fbcac0799 | [
"Unlicense"
] | null | null | null | p1, p2 = map(int, input().split())
m = (p1 + p2) / 2
if m >= 7:
    print(f'{m} - Aprovado', end='')
elif m < 5:
    print(f'{m} - Reprovado', end='')
else:
    print(f'{m} - De Recuperacao', end='')
| 22.222222 | 42 | 0.5 | 33 | 200 | 3.030303 | 0.606061 | 0.18 | 0.21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.23 | 200 | 8 | 43 | 25 | 0.603896 | 0 | 0 | 0 | 0 | 0 | 0.245 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ce04fd03fb7f7f27c85504de9df357274f232e95 | 2,377 | py | Python | src/lib/centipede/Dispatcher/Renderfarm/RenderfarmJob/ExpandedJob.py | paulondc/centipede | 6000a6964c2ce4a1f9c5ba0fac1d5ab0fead1fbe | [
"MIT"
] | null | null | null | src/lib/centipede/Dispatcher/Renderfarm/RenderfarmJob/ExpandedJob.py | paulondc/centipede | 6000a6964c2ce4a1f9c5ba0fac1d5ab0fead1fbe | [
"MIT"
] | null | null | null | src/lib/centipede/Dispatcher/Renderfarm/RenderfarmJob/ExpandedJob.py | paulondc/centipede | 6000a6964c2ce4a1f9c5ba0fac1d5ab0fead1fbe | [
"MIT"
] | null | null | null | import os
import uuid
from .RenderfarmJob import RenderfarmJob
class ExpandedJob(RenderfarmJob):
    """
    Implements an expanded render farm job.

    An expanded job is used to run a task on the farm. The processing
    of the task can be divided into chunks; in case a task cannot
    be divided, it performs the execution of the entire task.
    """

    def __init__(self, *args, **kwargs):
        """
        Create a render farm expanded job object.
        """
        super(ExpandedJob, self).__init__(*args, **kwargs)
        self.__chunkSize = 0
        self.__currentChunk = 0
        self.__chunkTotal = 0
        self.__totalInChunk = 0
        self.__taskResultFilePath = None

    def taskResultFilePath(self):
        """
        Return the file path where the result of the task is going to be serialized.
        """
        if self.__taskResultFilePath is None:
            self.__taskResultFilePath = os.path.join(
                self.jobDirectory(),
                "result_{}.json".format(
                    str(uuid.uuid1())
                )
            )
        return self.__taskResultFilePath

    def setChunkSize(self, chunkSize):
        """
        Associate the chunk size with the job.
        """
        self.__chunkSize = chunkSize

    def chunkSize(self):
        """
        Return the job chunk size.
        """
        return self.__chunkSize

    def setTotalInChunk(self, totalInChunk):
        """
        Associate the total number of crawlers in the current chunk.
        """
        self.__totalInChunk = totalInChunk

    def totalInChunk(self):
        """
        Return the total number of crawlers in the current chunk.
        """
        return self.__totalInChunk

    def setCurrentChunk(self, currentChunk):
        """
        Associate the current chunk with the job.
        """
        self.__currentChunk = currentChunk

    def currentChunk(self):
        """
        Return information about the current chunk.
        """
        return self.__currentChunk

    def setChunkTotal(self, chunkTotal):
        """
        Associate the total number of chunks with the job.
        """
        self.__chunkTotal = chunkTotal

    def chunkTotal(self):
        """
        Return the job chunk total.
        """
        return self.__chunkTotal
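The chunk bookkeeping that these accessors carry (chunk size, chunk total, total in the current chunk) can be sketched as a plain helper. `split_into_chunks` is a hypothetical name used only for illustration; it is not part of the centipede API:

```python
def split_into_chunks(total, chunk_size):
    """Split `total` items into chunks of at most `chunk_size`.

    Returns the chunk total and a list of (current_chunk, total_in_chunk)
    pairs, mirroring the values the ExpandedJob accessors expose.
    """
    chunk_total = (total + chunk_size - 1) // chunk_size  # ceiling division
    chunks = []
    for current in range(chunk_total):
        # the last chunk may hold fewer items than chunk_size
        in_chunk = min(chunk_size, total - current * chunk_size)
        chunks.append((current, in_chunk))
    return chunk_total, chunks

chunk_total, chunks = split_into_chunks(10, 4)
# chunk_total == 3 and chunks == [(0, 4), (1, 4), (2, 2)]
```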
| 27.321839 | 90 | 0.595709 | 246 | 2,377 | 5.589431 | 0.329268 | 0.036364 | 0.037818 | 0.030545 | 0.181091 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003133 | 0.328565 | 2,377 | 86 | 91 | 27.639535 | 0.858396 | 0.316786 | 0 | 0 | 0 | 0 | 0.010249 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.083333 | 0 | 0.527778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ce1437d02347efa7dad7cc862c926e16ffce6174 | 4,336 | py | Python | pywinrt/winsdk/windows/ui/core/__init__.py | pywinrt/python-winsdk | 1e2958a712949579f5e84d38220062b2cec12511 | [
"MIT"
] | 3 | 2022-02-14T14:53:08.000Z | 2022-03-29T20:48:54.000Z | pywinrt/winsdk/windows/ui/core/__init__.py | pywinrt/python-winsdk | 1e2958a712949579f5e84d38220062b2cec12511 | [
"MIT"
] | 4 | 2022-01-28T02:53:52.000Z | 2022-02-26T18:10:05.000Z | pywinrt/winsdk/windows/ui/core/__init__.py | pywinrt/python-winsdk | 1e2958a712949579f5e84d38220062b2cec12511 | [
"MIT"
] | null | null | null | # WARNING: Please don't edit this file. It was generated by Python/WinRT v1.0.0-beta.4
import enum
import winsdk
_ns_module = winsdk._import_ns_module("Windows.UI.Core")
try:
    import winsdk.windows.foundation
except Exception:
    pass

try:
    import winsdk.windows.foundation.collections
except Exception:
    pass

try:
    import winsdk.windows.system
except Exception:
    pass

try:
    import winsdk.windows.ui
except Exception:
    pass

try:
    import winsdk.windows.ui.composition
except Exception:
    pass

try:
    import winsdk.windows.ui.input
except Exception:
    pass

class AppViewBackButtonVisibility(enum.IntEnum):
    VISIBLE = 0
    COLLAPSED = 1
    DISABLED = 2

class CoreAcceleratorKeyEventType(enum.IntEnum):
    CHARACTER = 2
    DEAD_CHARACTER = 3
    KEY_DOWN = 0
    KEY_UP = 1
    SYSTEM_CHARACTER = 6
    SYSTEM_DEAD_CHARACTER = 7
    SYSTEM_KEY_DOWN = 4
    SYSTEM_KEY_UP = 5
    UNICODE_CHARACTER = 8

class CoreCursorType(enum.IntEnum):
    ARROW = 0
    CROSS = 1
    CUSTOM = 2
    HAND = 3
    HELP = 4
    I_BEAM = 5
    SIZE_ALL = 6
    SIZE_NORTHEAST_SOUTHWEST = 7
    SIZE_NORTH_SOUTH = 8
    SIZE_NORTHWEST_SOUTHEAST = 9
    SIZE_WEST_EAST = 10
    UNIVERSAL_NO = 11
    UP_ARROW = 12
    WAIT = 13
    PIN = 14
    PERSON = 15

class CoreDispatcherPriority(enum.IntEnum):
    IDLE = -2
    LOW = -1
    NORMAL = 0
    HIGH = 1

class CoreIndependentInputFilters(enum.IntFlag):
    NONE = 0
    MOUSE_BUTTON = 0x1
    MOUSE_WHEEL = 0x2
    MOUSE_HOVER = 0x4
    PEN_WITH_BARREL = 0x8
    PEN_INVERTED = 0x10

class CoreInputDeviceTypes(enum.IntFlag):
    NONE = 0
    TOUCH = 0x1
    PEN = 0x2
    MOUSE = 0x4

class CoreProcessEventsOption(enum.IntEnum):
    PROCESS_ONE_AND_ALL_PENDING = 0
    PROCESS_ONE_IF_PRESENT = 1
    PROCESS_UNTIL_QUIT = 2
    PROCESS_ALL_IF_PRESENT = 3

class CoreProximityEvaluationScore(enum.IntEnum):
    CLOSEST = 0
    FARTHEST = 2147483647

class CoreVirtualKeyStates(enum.IntFlag):
    NONE = 0
    DOWN = 0x1
    LOCKED = 0x2

class CoreWindowActivationMode(enum.IntEnum):
    NONE = 0
    DEACTIVATED = 1
    ACTIVATED_NOT_FOREGROUND = 2
    ACTIVATED_IN_FOREGROUND = 3

class CoreWindowActivationState(enum.IntEnum):
    CODE_ACTIVATED = 0
    DEACTIVATED = 1
    POINTER_ACTIVATED = 2

class CoreWindowFlowDirection(enum.IntEnum):
    LEFT_TO_RIGHT = 0
    RIGHT_TO_LEFT = 1
CorePhysicalKeyStatus = _ns_module.CorePhysicalKeyStatus
CoreProximityEvaluation = _ns_module.CoreProximityEvaluation
AcceleratorKeyEventArgs = _ns_module.AcceleratorKeyEventArgs
AutomationProviderRequestedEventArgs = _ns_module.AutomationProviderRequestedEventArgs
BackRequestedEventArgs = _ns_module.BackRequestedEventArgs
CharacterReceivedEventArgs = _ns_module.CharacterReceivedEventArgs
ClosestInteractiveBoundsRequestedEventArgs = _ns_module.ClosestInteractiveBoundsRequestedEventArgs
CoreAcceleratorKeys = _ns_module.CoreAcceleratorKeys
CoreComponentInputSource = _ns_module.CoreComponentInputSource
CoreCursor = _ns_module.CoreCursor
CoreDispatcher = _ns_module.CoreDispatcher
CoreIndependentInputSource = _ns_module.CoreIndependentInputSource
CoreIndependentInputSourceController = _ns_module.CoreIndependentInputSourceController
CoreWindow = _ns_module.CoreWindow
CoreWindowEventArgs = _ns_module.CoreWindowEventArgs
CoreWindowResizeManager = _ns_module.CoreWindowResizeManager
IdleDispatchedHandlerArgs = _ns_module.IdleDispatchedHandlerArgs
InputEnabledEventArgs = _ns_module.InputEnabledEventArgs
KeyEventArgs = _ns_module.KeyEventArgs
PointerEventArgs = _ns_module.PointerEventArgs
SystemNavigationManager = _ns_module.SystemNavigationManager
TouchHitTestingEventArgs = _ns_module.TouchHitTestingEventArgs
VisibilityChangedEventArgs = _ns_module.VisibilityChangedEventArgs
WindowActivatedEventArgs = _ns_module.WindowActivatedEventArgs
WindowSizeChangedEventArgs = _ns_module.WindowSizeChangedEventArgs
ICoreAcceleratorKeys = _ns_module.ICoreAcceleratorKeys
ICoreInputSourceBase = _ns_module.ICoreInputSourceBase
ICorePointerInputSource = _ns_module.ICorePointerInputSource
ICorePointerInputSource2 = _ns_module.ICorePointerInputSource2
ICorePointerRedirector = _ns_module.ICorePointerRedirector
ICoreWindow = _ns_module.ICoreWindow
ICoreWindowEventArgs = _ns_module.ICoreWindowEventArgs
IInitializeWithCoreWindow = _ns_module.IInitializeWithCoreWindow
| 27.794872 | 98 | 0.794972 | 433 | 4,336 | 7.678984 | 0.387991 | 0.084211 | 0.027068 | 0.039699 | 0.07609 | 0.063459 | 0.063459 | 0.038797 | 0 | 0 | 0 | 0.025089 | 0.15429 | 4,336 | 155 | 99 | 27.974194 | 0.881647 | 0.019373 | 0 | 0.181818 | 1 | 0 | 0.003529 | 0 | 0 | 0 | 0.007294 | 0 | 0 | 1 | 0 | false | 0.045455 | 0.068182 | 0 | 0.613636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ce20ad6570d147c5a8d9b3d543288f65a9a031c7 | 3,863 | py | Python | pysnmp-with-texts/RADIUS-ACCOUNTING-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/RADIUS-ACCOUNTING-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/RADIUS-ACCOUNTING-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module RADIUS-ACCOUNTING-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/RADIUS-ACCOUNTING-MIB
# Produced by pysmi-0.3.4 at Wed May 1 14:44:47 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, ObjectIdentifier, OctetString = mibBuilder.importSymbols("ASN1", "Integer", "ObjectIdentifier", "OctetString")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, ConstraintsUnion, SingleValueConstraint, ValueSizeConstraint, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "ConstraintsUnion", "SingleValueConstraint", "ValueSizeConstraint", "ConstraintsIntersection")
dlink_common_mgmt, = mibBuilder.importSymbols("DLINK-ID-REC-MIB", "dlink-common-mgmt")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
IpAddress, Gauge32, Counter32, iso, ObjectIdentity, TimeTicks, MibScalar, MibTable, MibTableRow, MibTableColumn, MibIdentifier, Integer32, ModuleIdentity, Bits, NotificationType, Counter64, Unsigned32 = mibBuilder.importSymbols("SNMPv2-SMI", "IpAddress", "Gauge32", "Counter32", "iso", "ObjectIdentity", "TimeTicks", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "MibIdentifier", "Integer32", "ModuleIdentity", "Bits", "NotificationType", "Counter64", "Unsigned32")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
swRadiusAccountMGMTMIB = ModuleIdentity((1, 3, 6, 1, 4, 1, 171, 12, 55))
if mibBuilder.loadTexts: swRadiusAccountMGMTMIB.setLastUpdated('0712200000Z')
if mibBuilder.loadTexts: swRadiusAccountMGMTMIB.setOrganization('D-Link Corp.')
if mibBuilder.loadTexts: swRadiusAccountMGMTMIB.setContactInfo('http://support.dlink.com')
if mibBuilder.loadTexts: swRadiusAccountMGMTMIB.setDescription('The structure of RADIUS accounting for the proprietary enterprise.')
radiusAccountCtrl = MibIdentifier((1, 3, 6, 1, 4, 1, 171, 12, 55, 1))
accountingShellState = MibScalar((1, 3, 6, 1, 4, 1, 171, 12, 55, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2))).clone('disabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: accountingShellState.setStatus('current')
if mibBuilder.loadTexts: accountingShellState.setDescription('Specifies the status of the RADIUS accounting service for shell events.')
accountingSystemState = MibScalar((1, 3, 6, 1, 4, 1, 171, 12, 55, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2))).clone('disabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: accountingSystemState.setStatus('current')
if mibBuilder.loadTexts: accountingSystemState.setDescription('Specifies the status of the RADIUS accounting service for system events.')
accountingNetworkState = MibScalar((1, 3, 6, 1, 4, 1, 171, 12, 55, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2))).clone('disabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: accountingNetworkState.setStatus('current')
if mibBuilder.loadTexts: accountingNetworkState.setDescription('Specifies the status of the RADIUS accounting service for 802.1x port access control.')
mibBuilder.exportSymbols("RADIUS-ACCOUNTING-MIB", accountingShellState=accountingShellState, accountingSystemState=accountingSystemState, PYSNMP_MODULE_ID=swRadiusAccountMGMTMIB, radiusAccountCtrl=radiusAccountCtrl, swRadiusAccountMGMTMIB=swRadiusAccountMGMTMIB, accountingNetworkState=accountingNetworkState)
| 124.612903 | 477 | 0.797049 | 401 | 3,863 | 7.668329 | 0.349127 | 0.039024 | 0.068293 | 0.006504 | 0.463415 | 0.362927 | 0.362927 | 0.362927 | 0.362927 | 0.358699 | 0 | 0.047951 | 0.071447 | 3,863 | 30 | 478 | 128.766667 | 0.809311 | 0.086979 | 0 | 0 | 0 | 0 | 0.270608 | 0.018476 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.304348 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
ce248b548495603728fcec62354aedadb7cae2e2 | 305 | py | Python | print_numbers.py | sonmezbaris/python-exercises | 949934bde2a78a718fbac76906e888ca24ae4103 | [
"Apache-2.0"
] | null | null | null | print_numbers.py | sonmezbaris/python-exercises | 949934bde2a78a718fbac76906e888ca24ae4103 | [
"Apache-2.0"
] | null | null | null | print_numbers.py | sonmezbaris/python-exercises | 949934bde2a78a718fbac76906e888ca24ae4103 | [
"Apache-2.0"
] | null | null | null |
# Prints the number in matrix form
def print_numbers(number):
    for i in range(1, number + 1):
        for j in range(1, number + 1):
            print(i, end=" ")
        print()

number = int(input("please enter a number: "))
if number % 2 == 0:
    print_numbers(number + 1)
else:
    print_numbers(number - 1)
| 16.944444 | 44 | 0.613115 | 48 | 305 | 3.833333 | 0.5 | 0.152174 | 0.293478 | 0.152174 | 0.163043 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 0.252459 | 305 | 17 | 45 | 17.941176 | 0.77193 | 0.104918 | 0 | 0 | 0 | 0 | 0.089219 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.1 | 0.5 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
ce2eae9c5779b0cb8f7a528c4260f793695cbcc2 | 2,513 | py | Python | kmeans/develop/kmeans_mapr.py | stharrold/ARCHIVED_bench_mapr | cab88c0d100526f57253aaff6a7bccfb3ec9bf72 | [
"MIT"
] | null | null | null | kmeans/develop/kmeans_mapr.py | stharrold/ARCHIVED_bench_mapr | cab88c0d100526f57253aaff6a7bccfb3ec9bf72 | [
"MIT"
] | null | null | null | kmeans/develop/kmeans_mapr.py | stharrold/ARCHIVED_bench_mapr | cab88c0d100526f57253aaff6a7bccfb3ec9bf72 | [
"MIT"
] | 1 | 2020-02-26T04:22:35.000Z | 2020-02-26T04:22:35.000Z | #!/usr/bin/env python
"""
Do k-means clusting with map-reduce.
Adapted from disco/examples/datamining/kclustering.py
"""
import argparse
import csv
import os

import numpy as np

from disco.ddfs import DDFS
from disco.core import Job, result_iterator
from disco.util import kvgroup
from disco.func import chain_reader


def main(file_in="iris.csv", file_out="centers.csv", n_clusters=3):
    # TODO: Rename tag data:kcluster1 if tag exists.
    # Disco v0.4.4 requires the ./ prefix to identify the file as a local file.
    # http://disco.readthedocs.org/en/0.4.4/howto/chunk.html#chunking
    tag = "data:sort"
    DDFS().chunk(tag=tag, urls=['./' + file_in])
    try:
        # Import here since slave nodes do not have the same namespace as the master.
        from kcluster_map_reduce import KCluster
        job = KCluster().run(input=[tag], map_reader=chain_reader)
        with open(file_out, 'w') as f_out:
            writer = csv.writer(f_out, quoting=csv.QUOTE_NONNUMERIC)
            for center in result_iterator(job.wait(show=True)):
                writer.writerow([center])
    finally:
        DDFS().delete(tag=tag)
    return None


class KCluster(Job):
    # TODO: Fastest way forward is to update to the dev branch and use built-in map-reduce.
    # Simplest approach: one node per cluster.
    # Assign a cluster center to the first k data points.
    # Compute the distance of all points to each center.
    # Each point is assigned to the cluster with minimum distance.
    # With the new point assignments, each cluster center is recomputed.
    # new center - old center = diff
    # Repeat the distance calculation, assignment, center determination, and diff;
    # if (diff_old - diff_new) / diff_old > 0.05, repeat again.

    def random_init_map(element, params):
        """Assign a data element randomly to one of k clusters."""
        yield


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Do k-means clustering on a file with map-reduce.")
    parser.add_argument("--file_in", default="iris.csv", help="Input file in CSV format. Default: iris.csv")
    parser.add_argument("--file_out", default="centers.csv", help="Output file. Default: centers.csv")
    parser.add_argument("--n_clusters", default=3, type=int, help="Number of clusters. Default: 3")
    args = parser.parse_args()
    print args
    if not os.path.isfile(args.file_in):
        create_iris_csv()  # assumed helper that generates the sample data; not defined in this file
    main(file_in=args.file_in, file_out=args.file_out, n_clusters=args.n_clusters)
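The iteration described in the KCluster comments (assign each point to the nearest center, recompute centers, repeat until nothing moves) can be sketched as a self-contained routine. This is plain Python on 1-D data for illustration only, not the Disco map-reduce job:

```python
def kmeans_1d(points, k, iters=100):
    """Toy 1-D k-means following the loop described in the comments."""
    centers = list(points[:k])  # seed centers with the first k points
    for _ in range(iters):
        # assignment step: each point joins the cluster with minimum distance
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[i].append(p)
        # update step: each cluster center is recomputed from its members
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:  # no movement: converged
            break
        centers = new_centers
    return centers

centers = kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.5, 9.5], 2)
# two centers, one near 1.0 and one near 10.0
```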
| 39.265625 | 108 | 0.693593 | 371 | 2,513 | 4.587601 | 0.460916 | 0.024677 | 0.029965 | 0.024677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00699 | 0.202945 | 2,513 | 63 | 109 | 39.888889 | 0.842736 | 0.314365 | 0 | 0 | 0 | 0 | 0.155584 | 0 | 0 | 0 | 0 | 0.015873 | 0 | 0 | null | null | 0.029412 | 0.235294 | null | null | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ce3af6da348bd06d5dc937781e7eb0e3fc914d41 | 155 | py | Python | protons/app/data/environment.py | choderalab/Protons | a55d7a72d0e4f2f402a2c144c5a7fb34ccce2a7f | [
"MIT"
] | 16 | 2016-12-30T23:58:06.000Z | 2021-09-29T02:40:35.000Z | protons/app/data/environment.py | choderalab/protons | a55d7a72d0e4f2f402a2c144c5a7fb34ccce2a7f | [
"MIT"
] | 75 | 2016-10-31T19:52:32.000Z | 2021-03-03T00:01:35.000Z | protons/app/data/environment.py | choderalab/Protons | a55d7a72d0e4f2f402a2c144c5a7fb34ccce2a7f | [
"MIT"
] | 5 | 2017-01-18T23:15:07.000Z | 2021-04-01T00:50:36.000Z | import os
def before_all(context):
    context.tmpfiles = []


def after_all(context):
    for filename in context.tmpfiles:
        os.remove(filename)
| 14.090909 | 37 | 0.683871 | 20 | 155 | 5.2 | 0.6 | 0.192308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219355 | 155 | 10 | 38 | 15.5 | 0.859504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ce3f1d4b968599ae77f2fe7e5ba8b86c47f5f08e | 2,257 | py | Python | mud/world.py | erwanaubry/alamud_IUT_Escape | cc9e77203245a9b933300edc2efb9bd5fcd8abc3 | [
"Unlicense"
] | null | null | null | mud/world.py | erwanaubry/alamud_IUT_Escape | cc9e77203245a9b933300edc2efb9bd5fcd8abc3 | [
"Unlicense"
] | null | null | null | mud/world.py | erwanaubry/alamud_IUT_Escape | cc9e77203245a9b933300edc2efb9bd5fcd8abc3 | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (C) 2014 Denys Duchier, IUT d'Orléans
#==============================================================================
import os.path
class World:
    def __init__(self):
        self.database = {}
        self.types = {}

    def load_models(self):
        import mud.models as models
        Model = models.Model
        for k, v in models.__dict__.items():
            if not k.startswith("_") and type(v) is type and issubclass(v, Model):
                self.types[v.__name__] = v

    def make(self, data):
        if data is None:
            return None
        cls = self.types[data["type"]]
        obj = cls(**data)  # autoadded to database
        obj.yaml = data
        if "id" not in data:
            data["id"] = obj.id
        return obj

    def load(self, initial, current):
        from yaml import load_all
        self.load_models()
        self.autocreated = []
        # create all objects in the world (except players)
        contents = initial
        for data in contents:
            obj = self.make(data)
        # initialize them with the initial yaml data
        for obj in tuple(self.database.values()):
            obj.init_from_yaml(obj.yaml, self)
        # initialize autocreated objects with their yaml data
        while self.autocreated:
            objs = self.autocreated
            self.autocreated = []
            for obj in objs:
                obj.init_from_yaml(obj.yaml, self)
        # update them with the saved current state of the game
        for data in current:
            key = data["id"]
            obj = self.database[key]
            obj.update_from_yaml(data, self)

    def save(self):
        return [x.to_json() for x in self.database.values()]

    def __getitem__(self, id):
        return self.database[id]

    def __setitem__(self, id, val):
        if id in self.database:
            raise Exception("id collision: %s" % id)
        self.database[id] = val

    def get(self, key, default=None):
        return self.database.get(key, default)

    def keys(self):
        return self.database.keys()

    def values(self):
        return self.database.values()

    def items(self):
        return self.database.items()
def items(self):
return self.database.items()
| 28.2125 | 82 | 0.545857 | 276 | 2,257 | 4.351449 | 0.318841 | 0.109908 | 0.074938 | 0.054954 | 0.043297 | 0.043297 | 0.043297 | 0 | 0 | 0 | 0 | 0.00327 | 0.322552 | 2,257 | 79 | 83 | 28.56962 | 0.782211 | 0.162162 | 0 | 0.074074 | 0 | 0 | 0.014346 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.203704 | false | 0 | 0.055556 | 0.111111 | 0.425926 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
ce4051563a30c4aa616087680e77f0556a2ab7c3 | 665 | py | Python | core/data/astrosource/spectra_source.py | xcamilox/frastro | 466fa3758a60484ace85022e5e5d404afd5a86ea | [
"Apache-2.0"
] | 1 | 2020-02-14T15:11:57.000Z | 2020-02-14T15:11:57.000Z | core/data/astrosource/spectra_source.py | xcamilox/frastro | 466fa3758a60484ace85022e5e5d404afd5a86ea | [
"Apache-2.0"
] | null | null | null | core/data/astrosource/spectra_source.py | xcamilox/frastro | 466fa3758a60484ace85022e5e5d404afd5a86ea | [
"Apache-2.0"
] | null | null | null | import json
class SpectraSource(object):
__name=""
__cutouts=[]
__files=[]
__provider=""
__cache_date = False
def __init__(self,name,provider):
self.__name = name
self.__provider=provider
self.__cutouts = []
self.__files = []
def addCutout(self,url,size,band):
self.__cutouts.append({"url":url,"size":size,"band":band})
def addFile(self,name,url,type):
self.__files.append({"url": url, "name": name, "type": type})
def __dict__(self):
spectra = {"name": self.__name, "provider": self.__provider, "cutouts": self.__cutouts, "files": self.__files}
return spectra
| 27.708333 | 118 | 0.615038 | 75 | 665 | 4.933333 | 0.333333 | 0.086486 | 0.086486 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228571 | 665 | 23 | 119 | 28.913043 | 0.721248 | 0 | 0 | 0 | 0 | 0 | 0.069173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.052632 | 0 | 0.631579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ce420c95ad100308ad5f422a29b9789b05a39fcb | 2,488 | py | Python | keras_gym/proba_dists/test_categorical.py | KristianHolsheimer/keras-gym | 0296ddcc8685e1ce732c3173caaa0fd25af9ef58 | [
"MIT"
] | 16 | 2019-07-01T10:56:26.000Z | 2021-01-31T18:56:56.000Z | keras_gym/proba_dists/test_categorical.py | KristianHolsheimer/keras-gym | 0296ddcc8685e1ce732c3173caaa0fd25af9ef58 | [
"MIT"
] | 10 | 2019-03-10T21:56:10.000Z | 2020-09-06T21:49:55.000Z | keras_gym/proba_dists/test_categorical.py | KristianHolsheimer/keras-gym | 0296ddcc8685e1ce732c3173caaa0fd25af9ef58 | [
"MIT"
] | 5 | 2019-08-02T22:11:19.000Z | 2020-04-19T20:18:38.000Z | import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from scipy.stats import multinomial
from ..utils.array import one_hot
from .categorical import CategoricalDist
if tf.__version__ >= '2.0':
    tf.random.set_seed(11)
else:
    tf.set_random_seed(11)
rnd = np.random.RandomState(13)
x_np = rnd.randn(7, 5)
y_np = rnd.randint(3, size=7)
y_onehot_np = one_hot(y_np, 3)
x = keras.Input([5], dtype='float32')
y = keras.Input([1], dtype='int32')
y_onehot = keras.Input([3], dtype='float32')
logits = keras.layers.Dense(3)(x)
proba = keras.layers.Lambda(K.softmax)(logits)
dist = CategoricalDist(logits)
sample = keras.layers.Lambda(lambda args: dist.sample())(logits)
# test against scipy implementation
proba_np = keras.Model(x, proba).predict(x_np)
dists_np = [multinomial(n=1, p=p) for p in proba_np] # cannot broadcast
def test_sample():
    # if tf.__version__ >= '2.0':
    #     expected = one_hot(np.array([1, 1, 1, 2, 1, 0, 2]), n=3)
    # else:
    #     expected = one_hot(np.array([1, 1, 1, 2, 1, 0, 2]), n=3)
    actual = keras.Model(x, sample).predict(x_np)
    assert actual.shape == (7, 3)
    np.testing.assert_array_almost_equal(actual.sum(axis=1), np.ones(7))
    # np.testing.assert_array_almost_equal(actual, expected)


def test_log_proba():
    expected = np.stack([d.logpmf(a) for d, a in zip(dists_np, y_onehot_np)])
    out = keras.layers.Lambda(lambda args: dist.log_proba(y))(logits)
    actual = keras.Model([x, y], out).predict([x_np, y_np])
    np.testing.assert_array_almost_equal(actual, expected)


def test_log_proba_onehot():
    expected = np.stack([d.logpmf(a) for d, a in zip(dists_np, y_onehot_np)])
    out = keras.layers.Lambda(lambda args: dist.log_proba(y_onehot))(logits)
    actual = keras.Model([x, y_onehot], out).predict([x_np, y_onehot_np])
    np.testing.assert_array_almost_equal(actual, expected)


def test_entropy():
    expected = np.stack([d.entropy() for d in dists_np])
    out = keras.layers.Lambda(lambda args: dist.entropy())(logits)
    actual = keras.Model(x, out).predict(x_np)
    np.testing.assert_array_almost_equal(actual, expected)


def test_cross_entropy():
    # TODO: test this without implementing the same thing in numpy
    pass


def test_kl_divergence():
    # TODO: test this without implementing the same thing in numpy
    pass


def test_proba_ratio():
    # TODO: test this without implementing the same thing in numpy
    pass
| 27.955056 | 77 | 0.700965 | 405 | 2,488 | 4.130864 | 0.239506 | 0.029289 | 0.050807 | 0.059773 | 0.525403 | 0.479378 | 0.432158 | 0.410042 | 0.388524 | 0.388524 | 0 | 0.022073 | 0.162379 | 2,488 | 88 | 78 | 28.272727 | 0.78071 | 0.178457 | 0 | 0.163265 | 0 | 0 | 0.010821 | 0 | 0 | 0 | 0 | 0.011364 | 0.102041 | 1 | 0.142857 | false | 0.061224 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
cbf8b1f23693a4a1df0dff8d0d5aefed47672698 | 1,229 | py | Python | arcula/encode.py | aldur/Arcula | b68808ceac0f3e0bc726eb1e2366ed875d3ecb07 | [
"MIT"
] | 2 | 2021-01-15T08:32:28.000Z | 2021-01-25T14:55:56.000Z | arcula/encode.py | aldur/Arcula | b68808ceac0f3e0bc726eb1e2366ed875d3ecb07 | [
"MIT"
] | null | null | null | arcula/encode.py | aldur/Arcula | b68808ceac0f3e0bc726eb1e2366ed875d3ecb07 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Handle encoding/decoding and (de)serialization."""
import typing
if typing.TYPE_CHECKING:
import ecdsa
__author__ = 'aldur'
def int_to_bytes_32(k: int) -> bytes:
"""Serialize an integer to the corresponding big endian bytes."""
assert 0 <= k < 2 ** 256
return k.to_bytes(32, byteorder='big')
def bytes_32_to_int(b: bytes) -> int:
"""Deserialize an integer from the corresponding big endian bytes."""
assert len(b) == 32
return int.from_bytes(b, byteorder='big')
def int_to_bytes_8(k: int) -> bytes:
"""Serialize an integer to the corresponding big endian bytes."""
assert 0 <= k < 2 ** 64
return k.to_bytes(8, byteorder='big')
def bytes_8_to_int(b: bytes) -> int:
"""Deserialize an integer from the corresponding big endian bytes."""
assert len(b) == 8
return int.from_bytes(b, byteorder='big')
def verification_key_to_bytes_33(pk: 'ecdsa.VerifyingKey') -> bytes:
"""
Serialize a verification key to its compressed form.
https://github.com/bitcoinbook/bitcoinbook/blob/develop/ch04.asciidoc
"""
pk = pk.pubkey.point
return (b'\x02' if pk.y() % 2 == 0 else b'\x03') + int_to_bytes_32(pk.x())
| 27.931818 | 78 | 0.668023 | 185 | 1,229 | 4.286486 | 0.372973 | 0.052963 | 0.095839 | 0.126103 | 0.453972 | 0.453972 | 0.453972 | 0.453972 | 0.368222 | 0.368222 | 0 | 0.034102 | 0.188771 | 1,229 | 43 | 79 | 28.581395 | 0.761284 | 0.375102 | 0 | 0.105263 | 0 | 0 | 0.059557 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 1 | 0.263158 | false | 0 | 0.105263 | 0 | 0.631579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
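The fixed-width big-endian helpers in the file above are easy to exercise. A minimal round-trip sketch — the two helpers are reproduced verbatim so the snippet is self-contained; the `value` chosen is arbitrary:

```python
def int_to_bytes_32(k: int) -> bytes:
    """Serialize an integer to 32 big-endian bytes."""
    assert 0 <= k < 2 ** 256
    return k.to_bytes(32, byteorder='big')

def bytes_32_to_int(b: bytes) -> int:
    """Deserialize an integer from 32 big-endian bytes."""
    assert len(b) == 32
    return int.from_bytes(b, byteorder='big')

# Round-trip: encoding then decoding returns the original value.
value = 2 ** 200 + 7
encoded = int_to_bytes_32(value)
assert len(encoded) == 32            # always exactly 32 bytes
assert encoded[0] == 0               # high-order bytes are zero-padded
assert bytes_32_to_int(encoded) == value
```

Fixed-width serialization like this is what makes the encoded keys concatenable without length prefixes.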
cbf8dcc336158973b1a66eb34654fd1eb89dd2b7 | 2,238 | py | Python | tests/test_str.py | vbergeron/pylars | 634237f53c349a9389fef362a31cad84907a625e | [
"MIT"
] | null | null | null | tests/test_str.py | vbergeron/pylars | 634237f53c349a9389fef362a31cad84907a625e | [
"MIT"
] | null | null | null | tests/test_str.py | vbergeron/pylars | 634237f53c349a9389fef362a31cad84907a625e | [
"MIT"
] | null | null | null | from pandas import DataFrame
from tests.helpers import ExprTestCase
from pylars.dsl import _
class TestStringExpr(ExprTestCase):
def data(self):
return DataFrame({
"animal": ["dog", "cat", "horse"],
"fruit": ["banana", "orange", "apple"],
"fold": ["yo", "YO", "Fluß"],
"many": ["dog dog", "cat cat", "H H"]
})
def test_concat(self):
self.make_test_expr(_.animal + _.fruit, [
"dogbanana",
"catorange",
"horseapple"
])
def test_concat_ws(self):
self.make_test_expr(_.animal + " - " + _.fruit, [
"dog - banana",
"cat - orange",
"horse - apple"
])
def test_capitalize(self):
self.make_test_expr(_.animal.capitalize(), [
"Dog", "Cat", "Horse"
])
def test_casefold(self):
self.make_test_expr(_.fold.casefold(), [
"yo", "yo", "fluss"
])
def test_contains(self):
self.make_test_expr(_.fold.contains("y"), [
True, False, False
])
def test_contains_regex(self):
self.make_test_expr(_.fold.contains("[yY]"), [
True, True, False
])
def test_match(self):
self.make_test_expr(_.fruit.match(".+e$"), [
False, True, True
])
def test_replace(self):
self.make_test_expr(_.animal.replace("e?$", "ololo"), [
'dogololo', 'catololo', 'horsololo'
])
def test_len(self):
self.make_test_expr(_.animal.len(), [
3, 3, 5
])
def test_upper(self):
self.make_test_expr(_.animal.upper(), [
"DOG", "CAT", "HORSE"
])
def test_lower(self):
self.make_test_expr(_.animal.upper().lower(), [
"dog", "cat", "horse"
])
def test_title(self):
self.make_test_expr(_.many.title(), [
"Dog Dog", "Cat Cat", "H H"
])
def test_isupper(self):
self.make_test_expr(_.animal.upper().isupper(), [
True, True, True
])
def test_islower(self):
self.make_test_expr(_.animal.lower().islower(), [
True, True, True
])
| 25.146067 | 63 | 0.505809 | 238 | 2,238 | 4.5 | 0.252101 | 0.091503 | 0.156863 | 0.20915 | 0.45098 | 0.363212 | 0.243697 | 0.039216 | 0 | 0 | 0 | 0.002015 | 0.334674 | 2,238 | 88 | 64 | 25.431818 | 0.71726 | 0 | 0 | 0.225352 | 0 | 0 | 0.107685 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.211268 | false | 0 | 0.042254 | 0.014085 | 0.28169 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
02011ebd8aabc02169537086409607f322a73646 | 1,138 | py | Python | python/src/nnabla/utils/converter/supported_info.py | daniel-falk/nnabla | 3fe132ea52dc10521cc029a5d6ba8f565cf65ccf | [
"Apache-2.0"
] | 2,792 | 2017-06-26T13:05:44.000Z | 2022-03-28T07:55:26.000Z | python/src/nnabla/utils/converter/supported_info.py | daniel-falk/nnabla | 3fe132ea52dc10521cc029a5d6ba8f565cf65ccf | [
"Apache-2.0"
] | 138 | 2017-06-27T07:04:44.000Z | 2022-02-28T01:37:15.000Z | python/src/nnabla/utils/converter/supported_info.py | daniel-falk/nnabla | 3fe132ea52dc10521cc029a5d6ba8f565cf65ccf | [
"Apache-2.0"
] | 380 | 2017-06-26T13:23:52.000Z | 2022-03-25T16:51:30.000Z | # Copyright 2018,2019,2020,2021 Sony Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
_SupportedInfo = collections.namedtuple(
'_SupportedInfo', 'import_name export_name')
extensions = _SupportedInfo(import_name=['.nnp', '.onnx', '.ckpt', '.meta', '.pb', 'saved_model', '.tflite'], export_name=[
'.nnp', '.nnb', '.onnx', '.tflite', 'saved_model', '.pb'])
formats = _SupportedInfo(import_name=['NNP', 'ONNX', 'TF_CKPT_V1', 'TF_CKPT_V2', 'TF_PB', 'SAVED_MODEL', 'TFLITE'], export_name=[
'NNP', 'NNB', 'CSRC', 'ONNX', 'SAVED_MODEL', 'TFLITE', 'TF_PB'])
| 49.478261 | 129 | 0.686292 | 152 | 1,138 | 5.006579 | 0.572368 | 0.078844 | 0.09067 | 0.04205 | 0.1682 | 0.089356 | 0.089356 | 0.089356 | 0 | 0 | 0 | 0.023429 | 0.174868 | 1,138 | 22 | 130 | 51.727273 | 0.787007 | 0.5 | 0 | 0 | 0 | 0 | 0.353153 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
02158da90d4a65b9e1d79b297f2141449f30a2a2 | 451 | py | Python | src/umassstembot/bot_helper.py | tomking/UMass-STEM-Discord-Bot | 5d8230dd167d18138f77794f97790f492ac81e8c | [
"MIT"
] | null | null | null | src/umassstembot/bot_helper.py | tomking/UMass-STEM-Discord-Bot | 5d8230dd167d18138f77794f97790f492ac81e8c | [
"MIT"
] | null | null | null | src/umassstembot/bot_helper.py | tomking/UMass-STEM-Discord-Bot | 5d8230dd167d18138f77794f97790f492ac81e8c | [
"MIT"
] | null | null | null | import discord
import asyncio
async def del_convo(user_msg, bot_msg, delay):
"""
Deletes both the user and bot message after the specified delay in seconds
Parameters:
- user_msg: message object for the user's message
- bot_msg: message object for the bot's message
- delay: int representing the number of seconds to delay before deleting
"""
await bot_msg.delete(delay=delay)
await user_msg.delete(delay=delay)
| 28.1875 | 78 | 0.725055 | 68 | 451 | 4.705882 | 0.485294 | 0.065625 | 0.1 | 0.11875 | 0.1375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21286 | 451 | 15 | 79 | 30.066667 | 0.901408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
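The delete helper above can be exercised without a live Discord connection. A hedged sketch using a stand-in message class — `FakeMessage` is purely illustrative, and note that the real `discord.Message.delete(delay=...)` schedules the deletion in the background rather than sleeping inline as the fake does:

```python
import asyncio

class FakeMessage:
    """Stand-in for a discord.Message; only `delete` is needed here."""
    def __init__(self, content):
        self.content = content
        self.deleted = False

    async def delete(self, delay=0):
        # The real API schedules deletion; this fake just waits then flags.
        await asyncio.sleep(delay)
        self.deleted = True

async def del_convo(user_msg, bot_msg, delay):
    """Same body as the helper above: delete both messages after `delay`."""
    await bot_msg.delete(delay=delay)
    await user_msg.delete(delay=delay)

user, bot = FakeMessage("!help"), FakeMessage("Here is some help")
asyncio.run(del_convo(user, bot, delay=0))
assert user.deleted and bot.deleted
```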
021ae5012149d0452320e42b7f93d0867193e6de | 380 | py | Python | python_teste/python_aulas/aula_04.py | BrunoDantasMoreira/projectsPython | bd73ab0b3c067456407f227ed2ece42e7f21ddfc | [
"MIT"
] | 1 | 2020-07-27T14:18:08.000Z | 2020-07-27T14:18:08.000Z | python_teste/python_aulas/aula_04.py | BrunoDantasMoreira/projectsPython | bd73ab0b3c067456407f227ed2ece42e7f21ddfc | [
"MIT"
] | null | null | null | python_teste/python_aulas/aula_04.py | BrunoDantasMoreira/projectsPython | bd73ab0b3c067456407f227ed2ece42e7f21ddfc | [
"MIT"
] | null | null | null | msg = str(input('Type something: '))
print(f'Its primitive type is {type(msg)}')
print('Only whitespace?', msg.isspace())
print('Is it a number?', msg.isnumeric())
print('Is it alphabetic?', msg.isalpha())
print('Is it alphanumeric?', msg.isalnum())
print('Is it uppercase?', msg.isupper())
print('Is it lowercase?', msg.islower())
print('Is it title-cased?', msg.istitle())
| 38 | 45 | 0.678947 | 60 | 380 | 4.3 | 0.55 | 0.186047 | 0.081395 | 0.093023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113158 | 380 | 9 | 46 | 42.222222 | 0.765579 | 0 | 0 | 0 | 0 | 0 | 0.431579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.888889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
021ee5c84f78e0c433fd22bd2d72e3238620a8ad | 216 | py | Python | dgd2.py | Kshitijkrishnadas/haribol | ca45e633baaabaad3bb923f5633340ccf88d996c | [
"bzip2-1.0.6"
] | null | null | null | dgd2.py | Kshitijkrishnadas/haribol | ca45e633baaabaad3bb923f5633340ccf88d996c | [
"bzip2-1.0.6"
] | null | null | null | dgd2.py | Kshitijkrishnadas/haribol | ca45e633baaabaad3bb923f5633340ccf88d996c | [
"bzip2-1.0.6"
] | null | null | null | n=int(input("enter a number"))
m=int(input("enter a number"))
if n%m==0:
    print(n, "is divisible by", m)
else:
print(n,"is not divisible by",m)
if n%2==0:
print(n,"is even")
else:
print(n,"is odd")
| 18 | 36 | 0.583333 | 43 | 216 | 2.930233 | 0.44186 | 0.190476 | 0.253968 | 0.222222 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017442 | 0.203704 | 216 | 11 | 37 | 19.636364 | 0.715116 | 0 | 0 | 0.2 | 0 | 0 | 0.347222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0222fa91c2d5701376992eb184951fe76e9bbd3f | 514 | py | Python | metadata_gather/metadata_extractor/file_extractor.py | d-montenegro/MetadataGatherer | c7388ce026c4b66b164eaf933cc65f109f83f368 | [
"MIT"
] | null | null | null | metadata_gather/metadata_extractor/file_extractor.py | d-montenegro/MetadataGatherer | c7388ce026c4b66b164eaf933cc65f109f83f368 | [
"MIT"
] | null | null | null | metadata_gather/metadata_extractor/file_extractor.py | d-montenegro/MetadataGatherer | c7388ce026c4b66b164eaf933cc65f109f83f368 | [
"MIT"
] | null | null | null | file_extractors = {}
def file_extractor(extension):
"""
This decorator registers functions to be used as extractors.
Only functions decorated with this are considered metadata extractors, and will
be called when the file extension matches the configured one.
:param extension: the extension to match
"""
def deco(f):
assert extension not in file_extractors, f"extension {extension} already registered"
file_extractors[extension] = f
return f
return deco
| 28.555556 | 92 | 0.710117 | 64 | 514 | 5.640625 | 0.578125 | 0.116343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235409 | 514 | 17 | 93 | 30.235294 | 0.918575 | 0.476654 | 0 | 0 | 0 | 0 | 0.165975 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.285714 | false | 0 | 0 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
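The decorator above is registration-by-side-effect: decorating a function stores it in `file_extractors`, keyed by extension. A minimal sketch of how a caller might register an extractor and dispatch on a path — the `.txt` extractor and the `dispatch` helper are illustrative assumptions, not part of the module:

```python
import os

file_extractors = {}

def file_extractor(extension):
    """Register the decorated function as the extractor for `extension`."""
    def deco(f):
        assert extension not in file_extractors, f"extension {extension} already registered"
        file_extractors[extension] = f
        return f
    return deco

@file_extractor('.txt')
def extract_txt_metadata(path):
    # Illustrative extractor: return a tiny metadata dict for the file.
    return {'extension': '.txt', 'path': path}

def dispatch(path):
    """Look up the extractor matching the file's extension, if any."""
    _, ext = os.path.splitext(path)
    extractor = file_extractors.get(ext)
    return extractor(path) if extractor else None

assert dispatch('notes.txt') == {'extension': '.txt', 'path': 'notes.txt'}
assert dispatch('image.png') is None
```

The `assert` inside `deco` makes double registration fail fast at import time rather than silently overwriting an earlier extractor.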
0229083023d1161fe900bebe2e9d5a3c174b934d | 990 | py | Python | coogger/views.py | ewuoso/coogger | 11df6f8487b59bd06f9a496efde3fec998a64217 | [
"MIT"
] | null | null | null | coogger/views.py | ewuoso/coogger | 11df6f8487b59bd06f9a496efde3fec998a64217 | [
"MIT"
] | null | null | null | coogger/views.py | ewuoso/coogger | 11df6f8487b59bd06f9a496efde3fec998a64217 | [
"MIT"
] | null | null | null | #django
from django.http import HttpResponseRedirect,HttpResponse
from django.shortcuts import render
from django.db.models import F
from django.contrib import messages as ms
from django.contrib.sitemaps import Sitemap
from django.conf import settings
# class
from django.views.generic.base import TemplateView
# apps home
class AppsHome(TemplateView):
template_name = "apps.html"
def get_context_data(self, **kwargs):
context = super(AppsHome, self).get_context_data(**kwargs)
return context
# # apps sitemap
# class AppsSitemap(Sitemap):
# changefreq = "weekly"
# priority = 0.6
#
# def items(self):
# i_list = []
# for i in settings.INSTALLED_APPS:
# if i.startswith("apps."):
# i = i.split(".")
# if i[1] != "cooggerapp":
# url = i[0]+"/"+i[1]
# i_list.append(url)
# return i_list
#
# def location(self,obj):
# return "/"+obj
| 26.756757 | 66 | 0.617172 | 119 | 990 | 5.058824 | 0.495798 | 0.116279 | 0.056478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006906 | 0.268687 | 990 | 36 | 67 | 27.5 | 0.824586 | 0.460606 | 0 | 0 | 0 | 0 | 0.017544 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.583333 | 0 | 0.916667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0238ecc564241ca74e4d1ff256fc827855d96d3b | 1,554 | py | Python | galaxia/game/shared/gameconstants.py | Eugenespace/cse210-09 | 9268051d63a91834d1a54a6f4c5f3b50a5960a81 | [
"RSA-MD"
] | null | null | null | galaxia/game/shared/gameconstants.py | Eugenespace/cse210-09 | 9268051d63a91834d1a54a6f4c5f3b50a5960a81 | [
"RSA-MD"
] | 16 | 2022-03-23T19:29:21.000Z | 2022-03-31T22:50:41.000Z | galaxia/game/shared/gameconstants.py | Eugenespace/cse210-09 | 9268051d63a91834d1a54a6f4c5f3b50a5960a81 | [
"RSA-MD"
] | 4 | 2022-03-16T15:59:32.000Z | 2022-03-24T15:15:47.000Z | from pathlib import Path
from game.shared.color import Color
import pygame
pygame.mixer.init()
FRAME_RATE = 50
MAX_X = 950
MAX_Y = 600
FONT_SIZE = 20
PLAYER_SIZE = 15
MAX_PLAYER_X = 400
CENTER = "center"
COLS = 60
ROWS = 40
CAPTION = "GALAXIA"
WHITE = Color(255, 255, 255)
SPLASH_IMAGE = Path(__file__).parent.parent / "assets/images/splash.png"
SPLASH_INSTRUCTION = Path(__file__).parent.parent / "assets/images/splash_inst.png"
ACTOR_IMAGE = Path(__file__).parent.parent / "assets/images/hero.png"
ENEMY_IMAGE = Path(__file__).parent.parent / "assets/images/enemy.png"
ENEMY_IMAGE1 = Path(__file__).parent.parent / "assets/images/enemy1.png"
ENEMY_IMAGE2 = Path(__file__).parent.parent / "assets/images/enemy2.png"
ENEMY_IMAGE3 = Path(__file__).parent.parent / "assets/images/enemy3.png"
BACKGROUND_IMAGE = Path(__file__).parent.parent /"assets/images/background.png"
LOGO_IMAGE = Path(__file__).parent.parent / "assets/images/logo.png"
BULLET_IMAGE = Path(__file__).parent.parent / "assets/images/bullet.png"
BULLET_ENEMY_IMAGE = Path(__file__).parent.parent / "assets/images/bullet_enemy.png"
# Sounds for the game
ACTOR_SOUND = Path(__file__).parent.parent / "assets/sounds/actor_shoot_sound.wav"
BACKGROUND_SOUND = Path(__file__).parent.parent / "assets/sounds/background_sound.wav"
TITLE_MUSIC = Path(__file__).parent.parent / "assets/sounds/title_music.wav"
ENEMY_SHOT = pygame.mixer.Sound(Path(__file__).parent.parent / "assets/sounds/shotdown.wav")
HERO_SHOT = pygame.mixer.Sound(Path(__file__).parent.parent / "assets/sounds/crash.wav")
| 40.894737 | 92 | 0.777349 | 224 | 1,554 | 4.973214 | 0.290179 | 0.114901 | 0.201077 | 0.287253 | 0.566427 | 0.566427 | 0.451526 | 0.212747 | 0.093357 | 0.093357 | 0 | 0.023977 | 0.087516 | 1,554 | 37 | 93 | 42 | 0.761636 | 0.012227 | 0 | 0 | 0 | 0 | 0.283105 | 0.274625 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.096774 | 0 | 0.096774 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
02393a8d3b39e3472fc97d157aad1a49c225bbfd | 7,299 | py | Python | CharakterItems.py | brzGatsu/Sephrasto | 4586ffb87506fd776fc6d6f37d5b222884adb4e9 | [
"MIT"
] | null | null | null | CharakterItems.py | brzGatsu/Sephrasto | 4586ffb87506fd776fc6d6f37d5b222884adb4e9 | [
"MIT"
] | null | null | null | CharakterItems.py | brzGatsu/Sephrasto | 4586ffb87506fd776fc6d6f37d5b222884adb4e9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'CharakterItems.ui'
#
# Created by: PyQt5 UI code generator 5.12.1
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(872, 460)
self.gridLayout = QtWidgets.QGridLayout(Form)
self.gridLayout.setObjectName("gridLayout")
self.gridLayout_2 = QtWidgets.QGridLayout()
self.gridLayout_2.setObjectName("gridLayout_2")
self.lineEdit_12 = QtWidgets.QLineEdit(Form)
self.lineEdit_12.setMaxLength(52)
self.lineEdit_12.setObjectName("lineEdit_12")
self.gridLayout_2.addWidget(self.lineEdit_12, 8, 3, 1, 1)
spacerItem = QtWidgets.QSpacerItem(20, 19, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.gridLayout_2.addItem(spacerItem, 0, 1, 1, 1)
self.lineEdit_6 = QtWidgets.QLineEdit(Form)
self.lineEdit_6.setMaxLength(52)
self.lineEdit_6.setObjectName("lineEdit_6")
self.gridLayout_2.addWidget(self.lineEdit_6, 4, 3, 1, 1)
spacerItem1 = QtWidgets.QSpacerItem(163, 20, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.gridLayout_2.addItem(spacerItem1, 1, 0, 1, 1)
self.lineEdit_9 = QtWidgets.QLineEdit(Form)
self.lineEdit_9.setMaxLength(52)
self.lineEdit_9.setObjectName("lineEdit_9")
self.gridLayout_2.addWidget(self.lineEdit_9, 7, 1, 1, 1)
self.lineEdit_7 = QtWidgets.QLineEdit(Form)
self.lineEdit_7.setMaxLength(52)
self.lineEdit_7.setObjectName("lineEdit_7")
self.gridLayout_2.addWidget(self.lineEdit_7, 6, 1, 1, 1)
self.lineEdit_10 = QtWidgets.QLineEdit(Form)
self.lineEdit_10.setMaxLength(52)
self.lineEdit_10.setObjectName("lineEdit_10")
self.gridLayout_2.addWidget(self.lineEdit_10, 7, 3, 1, 1)
self.lineEdit_13 = QtWidgets.QLineEdit(Form)
self.lineEdit_13.setMaxLength(52)
self.lineEdit_13.setObjectName("lineEdit_13")
self.gridLayout_2.addWidget(self.lineEdit_13, 9, 1, 1, 1)
self.lineEdit_19 = QtWidgets.QLineEdit(Form)
self.lineEdit_19.setMaxLength(52)
self.lineEdit_19.setObjectName("lineEdit_19")
self.gridLayout_2.addWidget(self.lineEdit_19, 13, 1, 1, 1)
spacerItem2 = QtWidgets.QSpacerItem(163, 20, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.gridLayout_2.addItem(spacerItem2, 1, 4, 1, 1)
self.lineEdit_1 = QtWidgets.QLineEdit(Form)
self.lineEdit_1.setMinimumSize(QtCore.QSize(320, 0))
self.lineEdit_1.setText("")
self.lineEdit_1.setMaxLength(52)
self.lineEdit_1.setObjectName("lineEdit_1")
self.gridLayout_2.addWidget(self.lineEdit_1, 1, 1, 1, 1)
self.lineEdit_4 = QtWidgets.QLineEdit(Form)
self.lineEdit_4.setMaxLength(52)
self.lineEdit_4.setObjectName("lineEdit_4")
self.gridLayout_2.addWidget(self.lineEdit_4, 2, 3, 1, 1)
self.lineEdit_20 = QtWidgets.QLineEdit(Form)
self.lineEdit_20.setObjectName("lineEdit_20")
self.gridLayout_2.addWidget(self.lineEdit_20, 13, 3, 1, 1)
self.lineEdit_3 = QtWidgets.QLineEdit(Form)
self.lineEdit_3.setMaxLength(52)
self.lineEdit_3.setObjectName("lineEdit_3")
self.gridLayout_2.addWidget(self.lineEdit_3, 2, 1, 1, 1)
self.lineEdit_18 = QtWidgets.QLineEdit(Form)
self.lineEdit_18.setMaxLength(52)
self.lineEdit_18.setObjectName("lineEdit_18")
self.gridLayout_2.addWidget(self.lineEdit_18, 12, 3, 1, 1)
self.lineEdit_8 = QtWidgets.QLineEdit(Form)
self.lineEdit_8.setMaxLength(52)
self.lineEdit_8.setObjectName("lineEdit_8")
self.gridLayout_2.addWidget(self.lineEdit_8, 6, 3, 1, 1)
self.lineEdit_16 = QtWidgets.QLineEdit(Form)
self.lineEdit_16.setMaxLength(52)
self.lineEdit_16.setObjectName("lineEdit_16")
self.gridLayout_2.addWidget(self.lineEdit_16, 11, 3, 1, 1)
self.lineEdit_5 = QtWidgets.QLineEdit(Form)
self.lineEdit_5.setMaxLength(52)
self.lineEdit_5.setObjectName("lineEdit_5")
self.gridLayout_2.addWidget(self.lineEdit_5, 4, 1, 1, 1)
self.lineEdit_11 = QtWidgets.QLineEdit(Form)
self.lineEdit_11.setMaxLength(52)
self.lineEdit_11.setObjectName("lineEdit_11")
self.gridLayout_2.addWidget(self.lineEdit_11, 8, 1, 1, 1)
self.lineEdit_2 = QtWidgets.QLineEdit(Form)
self.lineEdit_2.setMinimumSize(QtCore.QSize(320, 0))
self.lineEdit_2.setMaxLength(52)
self.lineEdit_2.setObjectName("lineEdit_2")
self.gridLayout_2.addWidget(self.lineEdit_2, 1, 3, 1, 1)
spacerItem3 = QtWidgets.QSpacerItem(20, 19, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.gridLayout_2.addItem(spacerItem3, 14, 1, 1, 1)
self.lineEdit_14 = QtWidgets.QLineEdit(Form)
self.lineEdit_14.setMaxLength(52)
self.lineEdit_14.setObjectName("lineEdit_14")
self.gridLayout_2.addWidget(self.lineEdit_14, 9, 3, 1, 1)
self.lineEdit_17 = QtWidgets.QLineEdit(Form)
self.lineEdit_17.setMaxLength(52)
self.lineEdit_17.setObjectName("lineEdit_17")
self.gridLayout_2.addWidget(self.lineEdit_17, 12, 1, 1, 1)
self.lineEdit_15 = QtWidgets.QLineEdit(Form)
self.lineEdit_15.setMaxLength(52)
self.lineEdit_15.setObjectName("lineEdit_15")
self.gridLayout_2.addWidget(self.lineEdit_15, 11, 1, 1, 1)
self.gridLayout.addLayout(self.gridLayout_2, 0, 0, 1, 1)
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
Form.setTabOrder(self.lineEdit_1, self.lineEdit_2)
Form.setTabOrder(self.lineEdit_2, self.lineEdit_3)
Form.setTabOrder(self.lineEdit_3, self.lineEdit_4)
Form.setTabOrder(self.lineEdit_4, self.lineEdit_5)
Form.setTabOrder(self.lineEdit_5, self.lineEdit_6)
Form.setTabOrder(self.lineEdit_6, self.lineEdit_7)
Form.setTabOrder(self.lineEdit_7, self.lineEdit_8)
Form.setTabOrder(self.lineEdit_8, self.lineEdit_9)
Form.setTabOrder(self.lineEdit_9, self.lineEdit_10)
Form.setTabOrder(self.lineEdit_10, self.lineEdit_11)
Form.setTabOrder(self.lineEdit_11, self.lineEdit_12)
Form.setTabOrder(self.lineEdit_12, self.lineEdit_13)
Form.setTabOrder(self.lineEdit_13, self.lineEdit_14)
Form.setTabOrder(self.lineEdit_14, self.lineEdit_15)
Form.setTabOrder(self.lineEdit_15, self.lineEdit_16)
Form.setTabOrder(self.lineEdit_16, self.lineEdit_17)
Form.setTabOrder(self.lineEdit_17, self.lineEdit_18)
Form.setTabOrder(self.lineEdit_18, self.lineEdit_19)
Form.setTabOrder(self.lineEdit_19, self.lineEdit_20)
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "Form"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Form = QtWidgets.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
sys.exit(app.exec_())
| 48.986577 | 116 | 0.699411 | 934 | 7,299 | 5.271949 | 0.104925 | 0.292445 | 0.08225 | 0.105605 | 0.436028 | 0.246141 | 0.099919 | 0.083266 | 0.083266 | 0.083266 | 0 | 0.073179 | 0.189341 | 7,299 | 148 | 117 | 49.317568 | 0.758999 | 0.025757 | 0 | 0 | 1 | 0 | 0.035614 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015152 | false | 0 | 0.015152 | 0 | 0.037879 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0269f22a336264676deddf579a89bac9cc3a33c6 | 2,461 | py | Python | algorithms/leetcode/easy/0232_用栈实现队列.py | bigfoolliu/liu_aistuff | aa661d37c05c257ee293285dd0868fb7e8227628 | [
"MIT"
] | 1 | 2019-11-25T07:23:42.000Z | 2019-11-25T07:23:42.000Z | algorithms/leetcode/easy/0232_用栈实现队列.py | bigfoolliu/liu_aistuff | aa661d37c05c257ee293285dd0868fb7e8227628 | [
"MIT"
] | 13 | 2020-01-07T16:09:47.000Z | 2022-03-02T12:51:44.000Z | algorithms/leetcode/easy/0232_用栈实现队列.py | bigfoolliu/liu_aistuff | aa661d37c05c257ee293285dd0868fb7e8227628 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding:utf-8 -*-
# author: bigfoolliu
"""
Implement a first-in first-out (FIFO) queue using only two stacks. The implemented
queue should support all the operations of a normal queue (push, pop, peek, empty):

Implement the MyQueue class:

void push(int x) pushes element x to the back of the queue
int pop() removes the element from the front of the queue and returns it
int peek() returns the element at the front of the queue
boolean empty() returns true if the queue is empty, false otherwise

Notes:
You may only use the standard stack operations -- push to top, peek/pop from top,
size, and is empty.
Your language may not support stacks natively; you may simulate one with a list or
a deque (double-ended queue), as long as you stick to standard stack operations.

Follow-up:
Can you implement the queue with O(1) amortized time per operation? In other words,
performing n operations should take O(n) time in total, even if an individual
operation is occasionally slow.

Example:
Input:
["MyQueue", "push", "push", "peek", "pop", "empty"]
[[], [1], [2], [], [], []]
Output:
[null, null, null, 1, 1, false]
Explanation:
MyQueue myQueue = new MyQueue();
myQueue.push(1); // queue is: [1]
myQueue.push(2); // queue is: [1, 2] (leftmost is front of the queue)
myQueue.peek(); // return 1
myQueue.pop(); // return 1, queue is [2]
myQueue.empty(); // return false

Constraints:
1 <= x <= 9
At most 100 calls will be made to push, pop, peek and empty
All calls are assumed to be valid (e.g. pop and peek are never called on an empty queue)

Source: LeetCode (LeetCode-CN)
Link: https://leetcode-cn.com/problems/implement-queue-using-stacks
Copyright belongs to LingKou Network. Commercial reprints require official
authorization; non-commercial reprints must cite the source.
"""
import doctest
class MyQueue:
"""
    In practice:
    - a queue is implemented with two stacks to separate reads and writes on the same queue across processes
    - one stack is used for reading, the other for writing
    - reads and writes conflict only when the read stack is full or the write stack is empty
    - when a single thread reads and writes the stacks, one of them is always empty
    - in a multithreaded application with a single queue, every read or write must lock the whole queue
      for thread safety; with two stacks, as long as the write stack is non-empty, the lock taken by
      push does not interfere with pop
"""
def __init__(self):
"""
Initialize your data structure here.
"""
        self.stack = []  # holds the queued elements (front of the queue at the top)
        self.help_stack = []  # auxiliary stack used only during push
def push(self, x: int) -> None:
"""
Push element x to the back of queue.
        A queue pushes to its back, but a stack only pushes to its top, so each push:
        - first moves every element of the data stack onto the helper stack
        - then pushes the new element onto the helper stack as well
        - finally moves everything from the helper stack back onto the data stack
"""
while self.stack:
self.help_stack.append(self.stack.pop())
self.help_stack.append(x)
while self.help_stack:
self.stack.append(self.help_stack.pop())
def pop(self) -> int:
"""
Removes the element from in front of queue and returns that element.
"""
return self.stack.pop()
def peek(self) -> int:
"""
Get the front element.
"""
return self.stack[-1]
def empty(self) -> bool:
"""
Returns whether the queue is empty.
"""
return not bool(self.stack)
if __name__ == '__main__':
doctest.testmod()
| 21.4 | 92 | 0.621698 | 299 | 2,461 | 5.06689 | 0.505017 | 0.041584 | 0.042904 | 0.025083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011309 | 0.245429 | 2,461 | 114 | 93 | 21.587719 | 0.803446 | 0.660301 | 0 | 0 | 0 | 0 | 0.012461 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0.052632 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
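The class above keeps the front of the queue at the top of `stack`, paying O(n) on every push so that pop and peek stay O(1). A condensed, self-contained sketch of the same push-heavy scheme (the class name is shortened for illustration):

```python
class TwoStackQueue:
    """Two-stack FIFO queue: push is O(n); pop and peek are O(1)."""
    def __init__(self):
        self.stack = []       # front of the queue sits at the top
        self.help_stack = []  # scratch space used only during push

    def push(self, x):
        while self.stack:                  # empty the data stack...
            self.help_stack.append(self.stack.pop())
        self.help_stack.append(x)          # ...slide the new element underneath...
        while self.help_stack:             # ...and restore the original order
            self.stack.append(self.help_stack.pop())

    def pop(self):
        return self.stack.pop()

    def peek(self):
        return self.stack[-1]

    def empty(self):
        return not self.stack

q = TwoStackQueue()
q.push(1)
q.push(2)
assert q.peek() == 1        # 1 went in first, so it is at the front
assert q.pop() == 1
assert not q.empty()
assert q.pop() == 2
assert q.empty()
```

The follow-up in the problem statement (amortized O(1) per operation) is usually met by the opposite scheme: push onto an input stack in O(1) and only transfer to an output stack when the output stack runs empty.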
026a7cf99c24dff7410ad7673547fdcc176bd77e | 497 | py | Python | Mundo 1/ex023.py | judigunkel/judi-exercicios-python | c61bb75b1ae6141defcf42214194e141a70af15d | [
"MIT"
] | null | null | null | Mundo 1/ex023.py | judigunkel/judi-exercicios-python | c61bb75b1ae6141defcf42214194e141a70af15d | [
"MIT"
] | null | null | null | Mundo 1/ex023.py | judigunkel/judi-exercicios-python | c61bb75b1ae6141defcf42214194e141a70af15d | [
"MIT"
] | 1 | 2021-03-06T02:41:36.000Z | 2021-03-06T02:41:36.000Z | """
23 - Write a program that reads a number from 0 to 9999 and shows each of its
digits separately.
E.g.: Enter a number: 1834
units: 4
tens: 3
hundreds: 8
thousands: 1
"""
num = int(input('Enter a number: '))
print('=*='*10)
print(f'\033[33mAnalyzing the number {num}\033[m')
print('=*='*10)
print(f'Units:\t \033[31m{num % 10}\033[m\n'
      f'Tens:\t \033[31m{num // 10 % 10}\033[m\n'
      f'Hundreds: \033[31m{num // 100 % 10}\033[m\n'
      f'Thousands: \033[31m{num // 1000}\033[m')
| 27.611111 | 81 | 0.623742 | 93 | 497 | 3.333333 | 0.494624 | 0.064516 | 0.116129 | 0.067742 | 0.077419 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180488 | 0.17505 | 497 | 17 | 82 | 29.235294 | 0.57561 | 0.34004 | 0 | 0.25 | 0 | 0 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
02703221dbf93ef09b9927134af8d7d1681a34fa | 309 | py | Python | conf/settings/storage.py | pincoin/iclover | 890fcbd836ebffa0de8cf9fbabee55f068b3bc8b | [
"MIT"
] | 1 | 2019-07-20T09:51:53.000Z | 2019-07-20T09:51:53.000Z | conf/settings/storage.py | pincoin/iclover | 890fcbd836ebffa0de8cf9fbabee55f068b3bc8b | [
"MIT"
] | 11 | 2019-07-26T02:23:52.000Z | 2022-03-11T23:41:09.000Z | conf/settings/storage.py | pincoin/iclover | 890fcbd836ebffa0de8cf9fbabee55f068b3bc8b | [
"MIT"
] | 1 | 2019-07-26T02:16:49.000Z | 2019-07-26T02:16:49.000Z | from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage
class MediaStorage(S3Boto3Storage):
default_acl = 'private'
location = settings.MEDIAFILES_LOCATION
class StaticStorage(S3Boto3Storage):
default_acl = 'public-read'
location = settings.STATICFILES_LOCATION | 30.9 | 52 | 0.802589 | 32 | 309 | 7.625 | 0.625 | 0.172131 | 0.196721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029851 | 0.132686 | 309 | 10 | 53 | 30.9 | 0.880597 | 0 | 0 | 0 | 0 | 0 | 0.058065 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
027e9ac41ac004e1d3c21fa56c765f1eaa417895 | 2,011 | py | Python | tsa/celeryconfig.py | eghuro/dcat-dry | 8e669583b77814c6a7fe2b038c0011fd3619f125 | [
"MIT"
] | 1 | 2018-11-16T09:04:01.000Z | 2018-11-16T09:04:01.000Z | tsa/celeryconfig.py | eghuro/nkod-ts | 8e669583b77814c6a7fe2b038c0011fd3619f125 | [
"MIT"
] | 4 | 2019-03-31T08:58:42.000Z | 2020-02-27T16:44:20.000Z | tsa/celeryconfig.py | eghuro/dcat-dry | 8e669583b77814c6a7fe2b038c0011fd3619f125 | [
"MIT"
] | null | null | null | """Celery configuration."""
import os
broker_url = os.environ['REDIS_CELERY']
broker_pool_limit = 100
result_backend = os.environ['REDIS_CELERY']
task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
timezone = 'Europe/Prague'
enable_utc = True
include = ['tsa.tasks.analyze', 'tsa.tasks.batch', 'tsa.tasks.index', 'tsa.tasks.process', 'tsa.tasks.query', 'tsa.tasks.system']
broker_transport_options = {
'fanout_prefix': True,
'fanout_patterns': True
}
beat_schedule = {
'cleanup-batches-every-minute': {
'task': 'tsa.tasks.batch.cleanup_batches',
'schedule': 60.0,
},
}
task_create_missing_queues = True
task_default_queue = 'default'
task_routes = {
'tsa.tasks.analyze.analyze_priority': {
'queue': 'high_priority'
},
'tsa.tasks.analyze.analyze_named': {
'queue': 'high_priority'
},
'tsa.tasks.analyze.run_one_named_analyzer': {
'queue': 'high_priority'
},
'tsa.tasks.analyze.run_one_analyzer': {
'queue': 'high_priority'
},
'tsa.tasks.analyze.run_analyzer': {
'queue': 'high_priority'
},
'tsa.tasks.analyze.store_named_analysis': {
'queue': 'high_priority'
},
'tsa.tasks.index.index_named': {
'queue': 'high_priority'
},
'tsa.tasks.index.run_one_named_indexer': {
'queue': 'high_priority'
},
'tsa.tasks.index.index': {
'queue': 'high_priority'
},
'tsa.tasks.index.run_one_indexer': {
'queue': 'high_priority'
},
'tsa.tasks.batch.inspect_endpoint': {
'queue': 'low_priority'
},
'tsa.tasks.analyze.process_endpoint': {
'queue': 'low_priority'
},
'tsa.tasks.analyze.decompress': {
'queue': 'low_priority'
},
'tsa.tasks.analyze.decompress_prio': {
'queue': 'low_priority'
},
'tsa.tasks.query.*': {
'queue': 'query'
},
'tsa.tasks.batch.cleanup_batches': {
'queue': 'low_priority'
},
}
| 22.344444 | 129 | 0.609647 | 219 | 2,011 | 5.3379 | 0.30137 | 0.157399 | 0.191617 | 0.171086 | 0.510693 | 0.443969 | 0.374679 | 0.170231 | 0 | 0 | 0 | 0.003841 | 0.223272 | 2,011 | 89 | 130 | 22.595506 | 0.744558 | 0.010443 | 0 | 0.208333 | 0 | 0 | 0.515625 | 0.272177 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013889 | 0 | 0.013889 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
028763d28e17d965496b4cb3ed4c83c1e62d2afb | 3,708 | py | Python | pgsmo/objects/scripting_mixins.py | sergb213/pgtoolsservice | 6296a207e7443fe4ebd5c91d837c033ee7886cab | [
"MIT"
] | null | null | null | pgsmo/objects/scripting_mixins.py | sergb213/pgtoolsservice | 6296a207e7443fe4ebd5c91d837c033ee7886cab | [
"MIT"
] | null | null | null | pgsmo/objects/scripting_mixins.py | sergb213/pgtoolsservice | 6296a207e7443fe4ebd5c91d837c033ee7886cab | [
"MIT"
] | 1 | 2020-07-30T11:46:44.000Z | 2020-07-30T11:46:44.000Z | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from abc import ABCMeta, abstractmethod
from typing import List, Tuple
import pgsmo.utils.templating as templating
class ScriptableBase(metaclass=ABCMeta):
def __init__(self, template_root: str, macro_root: List[str], server_version: Tuple[int, int, int]):
# NOTE: These member variables have a "mxin" prefix to prevent interaction with child class implementations
self._mxin_macro_root: List[str] = macro_root
self._mxin_server_version: Tuple[int, int, int] = server_version
self._mxin_template_root: str = template_root
class ScriptableCreate(ScriptableBase, metaclass=ABCMeta):
def __init__(self, template_root: str, macro_root: List[str], server_version: Tuple[int, int, int]):
super(ScriptableCreate, self).__init__(template_root, macro_root, server_version)
def create_script(self):
"""Generates a script that creates an object of the inheriting type"""
data = self._create_query_data()
return templating.render_template(
templating.get_template_path(self._mxin_template_root, 'create.sql', self._mxin_server_version),
self._mxin_macro_root,
**data
)
@abstractmethod
def _create_query_data(self) -> dict:
pass
class ScriptableDelete(ScriptableBase, metaclass=ABCMeta):
def __init__(self, template_root: str, macro_root: List[str], server_version: Tuple[int, int, int]):
super(ScriptableDelete, self).__init__(template_root, macro_root, server_version)
def delete_script(self):
"""Generates a script that deletes an object of the inheriting type"""
data = self._delete_query_data()
return templating.render_template(
templating.get_template_path(self._mxin_template_root, 'delete.sql', self._mxin_server_version),
self._mxin_macro_root,
**data
)
@abstractmethod
def _delete_query_data(self) -> dict:
pass
class ScriptableUpdate(ScriptableBase, metaclass=ABCMeta):
def __init__(self, template_root: str, macro_root: List[str], server_version: Tuple[int, int, int]):
super(ScriptableUpdate, self).__init__(template_root, macro_root, server_version)
def update_script(self):
"""Generates a script that updates/alters an object of the inheriting type"""
data = self._update_query_data()
return templating.render_template(
templating.get_template_path(self._mxin_template_root, 'update.sql', self._mxin_server_version),
self._mxin_macro_root,
**data
)
@abstractmethod
def _update_query_data(self) -> dict:
pass
class ScriptableSelect(ScriptableBase, metaclass=ABCMeta):
def __init__(self, template_root: str, macro_root: List[str], server_version: Tuple[int, int, int]):
super(ScriptableSelect, self).__init__(template_root, macro_root, server_version)
def select_script(self):
"""Generates a script which selects the object of the inheriting type"""
data = self._select_query_data()
return templating.render_template(
templating.get_template_path(self._mxin_template_root, 'select.sql', self._mxin_server_version),
self._mxin_macro_root,
**data
)
@abstractmethod
def _select_query_data(self) -> dict:
pass
| 41.2 | 115 | 0.669633 | 427 | 3,708 | 5.461358 | 0.208431 | 0.077187 | 0.038593 | 0.041166 | 0.711407 | 0.691252 | 0.607633 | 0.593482 | 0.548456 | 0.471269 | 0 | 0 | 0.196063 | 3,708 | 89 | 116 | 41.662921 | 0.782288 | 0.192017 | 0 | 0.416667 | 0 | 0 | 0.013463 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.216667 | false | 0.066667 | 0.05 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
02897172a4365cd2733be564ca8d01501e8e26cb | 9,344 | py | Python | infobip/api/model/omni/send/OmniAdvancedRequest.py | ubidreams/infobip-api-python-client | 3e585bf00565627bd7da46a2c8f10b860faaeb8b | [
"Apache-2.0"
] | null | null | null | infobip/api/model/omni/send/OmniAdvancedRequest.py | ubidreams/infobip-api-python-client | 3e585bf00565627bd7da46a2c8f10b860faaeb8b | [
"Apache-2.0"
] | null | null | null | infobip/api/model/omni/send/OmniAdvancedRequest.py | ubidreams/infobip-api-python-client | 3e585bf00565627bd7da46a2c8f10b860faaeb8b | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""This is a generated class and is not intended for modification!
"""
from datetime import datetime
from infobip.util.models import DefaultObject, serializable
from infobip.api.model.omni.Destination import Destination
from infobip.api.model.omni.send.ParsecoData import ParsecoData
from infobip.api.model.omni.send.VoiceData import VoiceData
from infobip.api.model.omni.send.FacebookData import FacebookData
from infobip.api.model.omni.send.ViberData import ViberData
from infobip.api.model.omni.send.LineData import LineData
from infobip.api.model.omni.send.VKontakteData import VKontakteData
from infobip.api.model.omni.send.PushData import PushData
from infobip.api.model.omni.send.SmsData import SmsData
from infobip.api.model.omni.send.EmailData import EmailData
class OmniAdvancedRequest(DefaultObject):
@property
@serializable(name="destinations", type=Destination)
def destinations(self):
"""
Property is a list of: Destination
"""
return self.get_field_value("destinations")
@destinations.setter
def destinations(self, destinations):
"""
Property is a list of: Destination
"""
self.set_field_value("destinations", destinations)
def set_destinations(self, destinations):
self.destinations = destinations
return self
@property
@serializable(name="bulkId", type='basestring')
def bulk_id(self):
"""
Property is of type: 'basestring'
"""
return self.get_field_value("bulk_id")
@bulk_id.setter
def bulk_id(self, bulk_id):
"""
Property is of type: 'basestring'
"""
self.set_field_value("bulk_id", bulk_id)
def set_bulk_id(self, bulk_id):
self.bulk_id = bulk_id
return self
@property
@serializable(name="scenarioKey", type='basestring')
def scenario_key(self):
"""
Property is of type: 'basestring'
"""
return self.get_field_value("scenario_key")
@scenario_key.setter
def scenario_key(self, scenario_key):
"""
Property is of type: 'basestring'
"""
self.set_field_value("scenario_key", scenario_key)
def set_scenario_key(self, scenario_key):
self.scenario_key = scenario_key
return self
@property
@serializable(name="sms", type=SmsData)
def sms(self):
"""
Property is of type: SmsData
"""
return self.get_field_value("sms")
@sms.setter
def sms(self, sms):
"""
Property is of type: SmsData
"""
self.set_field_value("sms", sms)
def set_sms(self, sms):
self.sms = sms
return self
@property
@serializable(name="parseco", type=ParsecoData)
def parseco(self):
"""
Property is of type: ParsecoData
"""
return self.get_field_value("parseco")
@parseco.setter
def parseco(self, parseco):
"""
Property is of type: ParsecoData
"""
self.set_field_value("parseco", parseco)
def set_parseco(self, parseco):
self.parseco = parseco
return self
@property
@serializable(name="viber", type=ViberData)
def viber(self):
"""
Property is of type: ViberData
"""
return self.get_field_value("viber")
@viber.setter
def viber(self, viber):
"""
Property is of type: ViberData
"""
self.set_field_value("viber", viber)
def set_viber(self, viber):
self.viber = viber
return self
@property
@serializable(name="voice", type=VoiceData)
def voice(self):
"""
Property is of type: VoiceData
"""
return self.get_field_value("voice")
@voice.setter
def voice(self, voice):
"""
Property is of type: VoiceData
"""
self.set_field_value("voice", voice)
def set_voice(self, voice):
self.voice = voice
return self
@property
@serializable(name="email", type=EmailData)
def email(self):
"""
Property is of type: EmailData
"""
return self.get_field_value("email")
@email.setter
def email(self, email):
"""
Property is of type: EmailData
"""
self.set_field_value("email", email)
def set_email(self, email):
self.email = email
return self
@property
@serializable(name="push", type=PushData)
def push(self):
"""
Property is of type: PushData
"""
return self.get_field_value("push")
@push.setter
def push(self, push):
"""
Property is of type: PushData
"""
self.set_field_value("push", push)
def set_push(self, push):
self.push = push
return self
@property
@serializable(name="facebook", type=FacebookData)
def facebook(self):
"""
Property is of type: FacebookData
"""
return self.get_field_value("facebook")
@facebook.setter
def facebook(self, facebook):
"""
Property is of type: FacebookData
"""
self.set_field_value("facebook", facebook)
def set_facebook(self, facebook):
self.facebook = facebook
return self
@property
@serializable(name="line", type=LineData)
def line(self):
"""
Property is of type: LineData
"""
return self.get_field_value("line")
@line.setter
def line(self, line):
"""
Property is of type: LineData
"""
self.set_field_value("line", line)
def set_line(self, line):
self.line = line
return self
@property
@serializable(name="vKontakte", type=VKontakteData)
def v_kontakte(self):
"""
Property is of type: VKontakteData
"""
return self.get_field_value("v_kontakte")
@v_kontakte.setter
def v_kontakte(self, v_kontakte):
"""
Property is of type: VKontakteData
"""
self.set_field_value("v_kontakte", v_kontakte)
def set_v_kontakte(self, v_kontakte):
self.v_kontakte = v_kontakte
return self
@property
@serializable(name="notify", type=bool)
def notify(self):
"""
Property is of type: bool
"""
return self.get_field_value("notify")
@notify.setter
def notify(self, notify):
"""
Property is of type: bool
"""
self.set_field_value("notify", notify)
def set_notify(self, notify):
self.notify = notify
return self
@property
@serializable(name="intermediateReport", type=bool)
def intermediate_report(self):
"""
Property is of type: bool
"""
return self.get_field_value("intermediate_report")
@intermediate_report.setter
def intermediate_report(self, intermediate_report):
"""
Property is of type: bool
"""
self.set_field_value("intermediate_report", intermediate_report)
def set_intermediate_report(self, intermediate_report):
self.intermediate_report = intermediate_report
return self
@property
@serializable(name="notifyUrl", type='basestring')
def notify_url(self):
"""
Property is of type: 'basestring'
"""
return self.get_field_value("notify_url")
@notify_url.setter
def notify_url(self, notify_url):
"""
Property is of type: 'basestring'
"""
self.set_field_value("notify_url", notify_url)
def set_notify_url(self, notify_url):
self.notify_url = notify_url
return self
@property
@serializable(name="notifyContentType", type='basestring')
def notify_content_type(self):
"""
Property is of type: 'basestring'
"""
return self.get_field_value("notify_content_type")
@notify_content_type.setter
def notify_content_type(self, notify_content_type):
"""
Property is of type: 'basestring'
"""
self.set_field_value("notify_content_type", notify_content_type)
def set_notify_content_type(self, notify_content_type):
self.notify_content_type = notify_content_type
return self
@property
@serializable(name="callbackData", type='basestring')
def callback_data(self):
"""
Property is of type: 'basestring'
"""
return self.get_field_value("callback_data")
@callback_data.setter
def callback_data(self, callback_data):
"""
Property is of type: 'basestring'
"""
self.set_field_value("callback_data", callback_data)
def set_callback_data(self, callback_data):
self.callback_data = callback_data
return self
@property
@serializable(name="sendAt", type=datetime)
def send_at(self):
"""
Property is of type: datetime
"""
return self.get_field_value("send_at")
@send_at.setter
def send_at(self, send_at):
"""
Property is of type: datetime
"""
self.set_field_value("send_at", send_at)
def set_send_at(self, send_at):
self.send_at = send_at
return self | 25.883657 | 72 | 0.613549 | 1,064 | 9,344 | 5.210526 | 0.077068 | 0.064935 | 0.073593 | 0.098124 | 0.596861 | 0.263167 | 0.14881 | 0.131313 | 0.121212 | 0.084596 | 0 | 0.000149 | 0.28125 | 9,344 | 361 | 73 | 25.883657 | 0.825342 | 0.131421 | 0 | 0.186529 | 1 | 0 | 0.069898 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.279793 | false | 0 | 0.062176 | 0 | 0.533679 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
65f4015b6d91f4d5bd0cb4806b20508f002d1d5c | 818 | py | Python | foxylib/tools/nlp/matcher/tests/test_fulltext_matcher.py | foxytrixy-com/foxylib | 94b8c5b9f8b12423393c68f7d9f910258840ed18 | [
"BSD-3-Clause"
] | null | null | null | foxylib/tools/nlp/matcher/tests/test_fulltext_matcher.py | foxytrixy-com/foxylib | 94b8c5b9f8b12423393c68f7d9f910258840ed18 | [
"BSD-3-Clause"
] | 3 | 2019-12-12T05:17:44.000Z | 2022-03-11T23:40:50.000Z | foxylib/tools/nlp/matcher/tests/test_fulltext_matcher.py | foxytrixy-com/foxylib | 94b8c5b9f8b12423393c68f7d9f910258840ed18 | [
"BSD-3-Clause"
] | 2 | 2019-10-16T17:39:34.000Z | 2020-02-10T06:32:08.000Z | import logging
from unittest import TestCase
from foxylib.tools.collections.collections_tool import DictTool
from foxylib.tools.log.foxylib_logger import FoxylibLogger
from foxylib.tools.nlp.matcher.fulltext_matcher import FulltextMatcher
from foxylib.tools.string.string_tool import str2lower
class TestFulltextMatcher(TestCase):
@classmethod
def setUpClass(cls):
FoxylibLogger.attach_stderr2loggers(logging.DEBUG)
def test_01(self):
dict_value2texts = {"ReD": ["scarleTT", "radish", 'red']}
matcher = FulltextMatcher(dict_value2texts, config=FulltextMatcher.Config(normalizer=str2lower))
hyp1 = list(matcher.text2values("scarlett"))
self.assertEqual(hyp1, ["ReD"])
hyp2 = list(matcher.text2values("red scarlett"))
self.assertEqual(hyp2, [])
| 32.72 | 104 | 0.745721 | 88 | 818 | 6.840909 | 0.488636 | 0.07309 | 0.106312 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018786 | 0.154034 | 818 | 24 | 105 | 34.083333 | 0.851156 | 0 | 0 | 0 | 0 | 0 | 0.052567 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.117647 | false | 0 | 0.352941 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
65fdbd95550cbad72173be5dfc354e3ce7e11c59 | 347 | py | Python | esputnik/exceptions.py | urumtsev/ok-esputnik | c1eccaf9b86d02e57fe088b62ed27ac85287742e | [
"MIT"
] | null | null | null | esputnik/exceptions.py | urumtsev/ok-esputnik | c1eccaf9b86d02e57fe088b62ed27ac85287742e | [
"MIT"
] | null | null | null | esputnik/exceptions.py | urumtsev/ok-esputnik | c1eccaf9b86d02e57fe088b62ed27ac85287742e | [
"MIT"
] | 1 | 2020-04-17T15:47:19.000Z | 2020-04-17T15:47:19.000Z | __all__ = (
'ESputnikException',
'InvalidAuthDataError',
'IncorrectDataError'
)
class ESputnikException(AttributeError):
pass
class InvalidAuthDataError(ESputnikException):
def __init__(self, code, message):
self.code = code
self.message = message
class IncorrectDataError(InvalidAuthDataError):
pass
| 17.35 | 47 | 0.711816 | 26 | 347 | 9.192308 | 0.461538 | 0.066946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204611 | 347 | 19 | 48 | 18.263158 | 0.865942 | 0 | 0 | 0.153846 | 0 | 0 | 0.158501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.153846 | 0 | 0 | 0.307692 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
65fe066a82c936d9d6d722f777910157acc754ac | 232 | py | Python | ChessRender/RenderFsmCommon/text_field_fsm.py | PavelLebed20/chess_classic | 72f7d08cadae8db9c65d61411bcdc8c79bfa04c3 | [
"Apache-2.0"
] | 1 | 2019-06-04T11:08:55.000Z | 2019-06-04T11:08:55.000Z | ChessRender/RenderFsmCommon/text_field_fsm.py | PavelLebed20/chess_classic | 72f7d08cadae8db9c65d61411bcdc8c79bfa04c3 | [
"Apache-2.0"
] | 115 | 2019-03-02T08:02:50.000Z | 2019-06-02T16:28:00.000Z | ChessRender/RenderFsmCommon/text_field_fsm.py | PavelLebed20/chess_classic | 72f7d08cadae8db9c65d61411bcdc8c79bfa04c3 | [
"Apache-2.0"
] | null | null | null | class TextFieldFsm:
def __init__(self, title, position, need_hide=False, initial_text=""):
self.title = title
self.position = position
self.need_hide = need_hide
self.initial_text = initial_text
| 29 | 74 | 0.668103 | 28 | 232 | 5.178571 | 0.428571 | 0.165517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24569 | 232 | 7 | 75 | 33.142857 | 0.828571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5a2390b02576fce95a011b594cf101893ab436f6 | 5,007 | py | Python | monero_glue/messages/Features.py | ph4r05/monero-agent | 0bac0e6f33142b2bb885565bfd1ef8ac04559280 | [
"MIT"
] | 20 | 2018-04-05T22:06:10.000Z | 2021-09-18T10:43:44.000Z | monero_glue/messages/Features.py | ph4r05/monero-agent | 0bac0e6f33142b2bb885565bfd1ef8ac04559280 | [
"MIT"
] | null | null | null | monero_glue/messages/Features.py | ph4r05/monero-agent | 0bac0e6f33142b2bb885565bfd1ef8ac04559280 | [
"MIT"
] | 5 | 2018-08-06T15:06:04.000Z | 2021-07-16T01:58:43.000Z | # Automatically generated by pb2py
# fmt: off
from .. import protobuf as p
if __debug__:
try:
from typing import Dict, List # noqa: F401
from typing_extensions import Literal # noqa: F401
EnumTypeCapability = Literal[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
EnumTypeBackupType = Literal[0, 1, 2]
except ImportError:
pass
class Features(p.MessageType):
MESSAGE_WIRE_TYPE = 17
def __init__(
self,
vendor: str = None,
major_version: int = None,
minor_version: int = None,
patch_version: int = None,
bootloader_mode: bool = None,
device_id: str = None,
pin_protection: bool = None,
passphrase_protection: bool = None,
language: str = None,
label: str = None,
initialized: bool = None,
revision: bytes = None,
bootloader_hash: bytes = None,
imported: bool = None,
pin_cached: bool = None,
firmware_present: bool = None,
needs_backup: bool = None,
flags: int = None,
model: str = None,
fw_major: int = None,
fw_minor: int = None,
fw_patch: int = None,
fw_vendor: str = None,
fw_vendor_keys: bytes = None,
unfinished_backup: bool = None,
no_backup: bool = None,
recovery_mode: bool = None,
capabilities: List[EnumTypeCapability] = None,
backup_type: EnumTypeBackupType = None,
sd_card_present: bool = None,
sd_protection: bool = None,
wipe_code_protection: bool = None,
session_id: bytes = None,
passphrase_always_on_device: bool = None,
) -> None:
self.vendor = vendor
self.major_version = major_version
self.minor_version = minor_version
self.patch_version = patch_version
self.bootloader_mode = bootloader_mode
self.device_id = device_id
self.pin_protection = pin_protection
self.passphrase_protection = passphrase_protection
self.language = language
self.label = label
self.initialized = initialized
self.revision = revision
self.bootloader_hash = bootloader_hash
self.imported = imported
self.pin_cached = pin_cached
self.firmware_present = firmware_present
self.needs_backup = needs_backup
self.flags = flags
self.model = model
self.fw_major = fw_major
self.fw_minor = fw_minor
self.fw_patch = fw_patch
self.fw_vendor = fw_vendor
self.fw_vendor_keys = fw_vendor_keys
self.unfinished_backup = unfinished_backup
self.no_backup = no_backup
self.recovery_mode = recovery_mode
self.capabilities = capabilities if capabilities is not None else []
self.backup_type = backup_type
self.sd_card_present = sd_card_present
self.sd_protection = sd_protection
self.wipe_code_protection = wipe_code_protection
self.session_id = session_id
self.passphrase_always_on_device = passphrase_always_on_device
@classmethod
def get_fields(cls) -> Dict:
return {
1: ('vendor', p.UnicodeType, 0),
2: ('major_version', p.UVarintType, 0),
3: ('minor_version', p.UVarintType, 0),
4: ('patch_version', p.UVarintType, 0),
5: ('bootloader_mode', p.BoolType, 0),
6: ('device_id', p.UnicodeType, 0),
7: ('pin_protection', p.BoolType, 0),
8: ('passphrase_protection', p.BoolType, 0),
9: ('language', p.UnicodeType, 0),
10: ('label', p.UnicodeType, 0),
12: ('initialized', p.BoolType, 0),
13: ('revision', p.BytesType, 0),
14: ('bootloader_hash', p.BytesType, 0),
15: ('imported', p.BoolType, 0),
16: ('pin_cached', p.BoolType, 0),
18: ('firmware_present', p.BoolType, 0),
19: ('needs_backup', p.BoolType, 0),
20: ('flags', p.UVarintType, 0),
21: ('model', p.UnicodeType, 0),
22: ('fw_major', p.UVarintType, 0),
23: ('fw_minor', p.UVarintType, 0),
24: ('fw_patch', p.UVarintType, 0),
25: ('fw_vendor', p.UnicodeType, 0),
26: ('fw_vendor_keys', p.BytesType, 0),
27: ('unfinished_backup', p.BoolType, 0),
28: ('no_backup', p.BoolType, 0),
29: ('recovery_mode', p.BoolType, 0),
30: ('capabilities', p.EnumType("Capability", (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17)), p.FLAG_REPEATED),
31: ('backup_type', p.EnumType("BackupType", (0, 1, 2)), 0),
32: ('sd_card_present', p.BoolType, 0),
33: ('sd_protection', p.BoolType, 0),
34: ('wipe_code_protection', p.BoolType, 0),
35: ('session_id', p.BytesType, 0),
36: ('passphrase_always_on_device', p.BoolType, 0),
}
| 39.117188 | 137 | 0.580787 | 599 | 5,007 | 4.632721 | 0.220367 | 0.043243 | 0.054054 | 0.034595 | 0.018018 | 0.018018 | 0.018018 | 0.018018 | 0.018018 | 0.018018 | 0 | 0.045154 | 0.305572 | 5,007 | 127 | 138 | 39.425197 | 0.752948 | 0.012582 | 0 | 0 | 1 | 0 | 0.08524 | 0.009719 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016667 | false | 0.058333 | 0.058333 | 0.008333 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
5a3d8bde273be87216aac72a3459e446d7ba5a9c | 2,293 | py | Python | storeapp/apps/store/models.py | rohitit09/store_app | 6ee2b31b2d344757222c973be871307958745f3a | [
"MIT"
] | null | null | null | storeapp/apps/store/models.py | rohitit09/store_app | 6ee2b31b2d344757222c973be871307958745f3a | [
"MIT"
] | null | null | null | storeapp/apps/store/models.py | rohitit09/store_app | 6ee2b31b2d344757222c973be871307958745f3a | [
"MIT"
] | null | null | null | from django.db import models
from apps.user.models import StoreUser
# Create your models here.
class Category(models.Model):
name=models.CharField(max_length=255)
def __str__(self):
return self.name
class Meta:
verbose_name='Category'
verbose_name_plural='Categories'
class Products(models.Model):
product_name=models.CharField(max_length=255)
description=models.TextField()
mrp=models.DecimalField(max_digits=20,decimal_places=2)
sale_price=models.DecimalField(max_digits=20,decimal_places=2)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
def __str__(self):
return self.product_name
class Meta:
verbose_name='Product'
verbose_name_plural='Products'
class Store(models.Model):
name=models.CharField(max_length=255)
address=models.TextField()
products = models.ManyToManyField(Products,blank=True)
# store_link=models.CharField(max_length=400,primary_key=True,unique=True)
def __str__(self):
return self.name
class Meta:
verbose_name='Store'
verbose_name_plural='Stores'
def get_absolute_url(self):
return f"/view_store/{self.id}/"
class Customer(models.Model):
user=models.OneToOneField(StoreUser,on_delete=models.CASCADE)
address=models.TextField()
store = models.ManyToManyField(Store,blank=True)
def __str__(self):
return self.user.phone_number
class Meta:
verbose_name='Customer'
verbose_name_plural='Customers'
class Orders(models.Model):
choices=(
('PENDING','PENDING'),
('APPROVED','APPROVED'),
('CANCELED','CANCELED')
)
status=models.CharField(choices=choices,default='PENDING',max_length=8)
order_created=models.DateTimeField(auto_now_add=True)
amount=models.DecimalField(max_digits=20,decimal_places=2)
    quantity=models.IntegerField()
product = models.ForeignKey(Products, on_delete=models.CASCADE)
store = models.ForeignKey(Store, on_delete=models.CASCADE)
customer = models.ForeignKey(Customer, on_delete=models.CASCADE,blank=True,null=True)
def __str__(self):
return self.product.product_name
class Meta:
verbose_name='Order'
verbose_name_plural='Orders' | 28.308642 | 89 | 0.709987 | 278 | 2,293 | 5.625899 | 0.302158 | 0.070332 | 0.031969 | 0.051151 | 0.30243 | 0.30243 | 0.192455 | 0.192455 | 0.056266 | 0.056266 | 0 | 0.011727 | 0.181858 | 2,293 | 81 | 90 | 28.308642 | 0.821962 | 0.042303 | 0 | 0.275862 | 0 | 0 | 0.067001 | 0.010027 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.034483 | 0.103448 | 0.758621 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
5a477971f5c94d1086e60fd805f7524250793cf6 | 11,132 | py | Python | BookClub/views/action_views.py | amir-rahim/BookClubSocialNetwork | b69a07cd33592f700214252a64c7c1c53845625d | [
"MIT"
] | 4 | 2022-02-04T02:11:48.000Z | 2022-03-12T21:38:01.000Z | BookClub/views/action_views.py | amir-rahim/BookClubSocialNetwork | b69a07cd33592f700214252a64c7c1c53845625d | [
"MIT"
] | 51 | 2022-02-01T18:56:23.000Z | 2022-03-31T15:35:37.000Z | BookClub/views/action_views.py | amir-rahim/BookClubSocialNetwork | b69a07cd33592f700214252a64c7c1c53845625d | [
"MIT"
] | null | null | null | """Action related views."""
from django.contrib import messages
from django.contrib.auth.mixins import LoginRequiredMixin
from django.shortcuts import redirect
from django.views.generic import TemplateView, View
from django.core.exceptions import ObjectDoesNotExist
from BookClub.helpers import *
from BookClub.models import Club, User, ClubMembership
class ActionView(TemplateView):
"""Class for views that make an action."""
def get(self, request, *args, **kwargs):
return redirect('home')
def post(self, request, *args, **kwargs):
"""
Handle post request
Args:
request (Django Request): The request we have received
"""
try:
club = Club.objects.get(club_url_name=self.kwargs['club_url_name'])
current_user = self.request.user
target_user = User.objects.get(username=self.request.POST.get('user'))
except ObjectDoesNotExist:
messages.error(self.request, "Error, user or club not found.")
return redirect(self.redirect_location, kwargs['club_url_name'])
if self.is_actionable(current_user, target_user, club):
self.action(current_user, target_user, club)
else:
self.is_not_actionable(current_user, target_user, club)
return redirect(self.redirect_location, kwargs['club_url_name'])
def is_actionable(self, current_user, target_user, club):
"""Check if the action is legal."""
raise NotImplementedError("This method isn't implemented yet.") # pragma: no cover
def is_not_actionable(self, current_user, target_user, club):
"""Displays a message if the action is illegal."""
messages.error(self.request, f"You cannot do that!")
def action(self, current_user, target_user, club):
"""Runs the action."""
raise NotImplementedError("This method isn't implemented yet.") # pragma: no cover
class PromoteMemberView(LoginRequiredMixin, ActionView):
"""Promoting a member to a moderator."""
redirect_location = 'member_list'
def is_actionable(self, current_user, target_user, club):
"""Check if member can be promoted."""
return (has_owner_rank(current_user, club) and has_member_rank(target_user, club))
def action(self, current_user, target_user, club):
"""Promotes the member to moderator."""
messages.success(self.request, f"You have successfully promoted the member.")
set_rank(target_user, club, ClubMembership.UserRoles.MODERATOR)
def post(self, request, *args, **kwargs):
return super().post(self, request, *args, **kwargs)
class DemoteMemberView(LoginRequiredMixin, ActionView):
"""Demoting a moderator to a member."""
redirect_location = 'member_list'
def is_actionable(self, current_user, target_user, club):
"""Check if moderator can be demoted."""
return has_owner_rank(current_user, club) and has_moderator_rank(target_user, club)
def action(self, current_user, target_user, club):
"""Demote moderator to a member."""
messages.success(self.request, f"You have successfully demoted the moderator.")
set_rank(target_user, club, ClubMembership.UserRoles.MEMBER)
def post(self, request, *args, **kwargs):
return super().post(self, request, *args, **kwargs)
class ApproveApplicantView(LoginRequiredMixin, ActionView):
"""Approving an applicant to a member."""
redirect_location = 'applicant_list'
def is_actionable(self, current_user, target_user, club):
"""Check if applicant can be approved."""
return ((has_owner_rank(current_user, club) or has_moderator_rank(current_user, club)) and has_applicant_rank(
target_user, club))
def is_not_actionable(self, current_user, target_user, club):
"""Displays a message if the action is illegal."""
        redirect_location = 'home'  # NOTE: local variable only; does not change self.redirect_location used in post()
messages.error(self.request, f"You cannot do that!")
def action(self, current_user, target_user, club):
"""Approves the applicant to a member."""
messages.success(self.request, f"You have successfully approved the applicant.")
set_rank(target_user, club, ClubMembership.UserRoles.MEMBER)
def post(self, request, *args, **kwargs):
return super().post(self, request, *args, **kwargs)


class RejectApplicantView(LoginRequiredMixin, ActionView):
    """Rejects an applicant."""

    redirect_location = 'applicant_list'

    def is_actionable(self, current_user, target_user, club):
        """Check if the applicant can be rejected."""
        return ((has_owner_rank(current_user, club) or has_moderator_rank(current_user, club))
                and has_applicant_rank(target_user, club))

    def is_not_actionable(self, current_user, target_user, club):
        """Displays a message if the action is illegal."""
        self.redirect_location = 'home'
        messages.error(self.request, "You cannot do that!")

    def action(self, current_user, target_user, club):
        """Reject the applicant."""
        messages.success(self.request, "You have successfully rejected the applicant.")
        remove_from_club(target_user, club)

    def post(self, request, *args, **kwargs):
        return super().post(request, *args, **kwargs)


class KickMemberView(LoginRequiredMixin, ActionView):
    """Kicks the target user from the club."""

    redirect_location = 'member_list'

    def is_actionable(self, current_user, target_user, club):
        """Check if current_user can kick target_user."""
        return can_kick(club, current_user, target_user)

    def action(self, current_user, target_user, club):
        """Kick target_user from the club."""
        messages.success(self.request, "You have successfully kicked the member.")
        remove_from_club(target_user, club)

    def post(self, request, *args, **kwargs):
        return super().post(request, *args, **kwargs)


class TransferOwnershipView(LoginRequiredMixin, ActionView):
    """Transfers ownership of a club to another moderator."""

    redirect_location = 'member_list'

    def is_actionable(self, current_user, target_user, club):
        """Check if the ownership can be transferred to a valid moderator."""
        return has_owner_rank(current_user, club) and has_moderator_rank(target_user, club)

    def action(self, current_user, target_user, club):
        """Transfer ownership to the moderator and demote the owner to moderator."""
        messages.success(self.request, "You have transferred the ownership successfully.")
        set_rank(target_user, club, ClubMembership.UserRoles.OWNER)
        set_rank(current_user, club, ClubMembership.UserRoles.MODERATOR)

    def post(self, request, *args, **kwargs):
        return super().post(request, *args, **kwargs)


class JoinClubView(LoginRequiredMixin, View):
    """Lets a user join (public club) or apply to (private club) a club."""

    redirect_location = 'available_clubs'

    def get(self, request, *args, **kwargs):
        return redirect(self.redirect_location)

    def is_actionable(self, current_user, club):
        """Check if the user can join/apply to the club."""
        return not has_membership(club, current_user)

    def is_not_actionable(self, current_user, club):
        """The user already has a membership with the club."""
        if club.is_private:
            messages.info(self.request, "You have already applied to this club.")
        else:
            messages.info(self.request, "You are already a member of this club.")

    def action(self, current_user, club):
        """Create a membership for the user, depending on the club's privacy."""
        if club.is_private:
            create_membership(club, current_user, ClubMembership.UserRoles.APPLICANT)
            messages.success(self.request, "Application to club successful.")
        else:
            create_membership(club, current_user, ClubMembership.UserRoles.MEMBER)
            messages.success(self.request, "You have joined the club.")

    def post(self, request, *args, **kwargs):
        try:
            club = Club.objects.get(club_url_name=self.kwargs['club_url_name'])
            current_user = self.request.user
        except (Club.DoesNotExist, KeyError):
            messages.error(self.request, "Error, user or club not found.")
            return redirect(self.redirect_location)

        if self.is_actionable(current_user, club):
            self.action(current_user, club)
        else:
            self.is_not_actionable(current_user, club)
        return redirect(self.redirect_location)


class LeaveClubView(LoginRequiredMixin, View):
    """Lets a user leave a club."""

    redirect_location = 'my_club_memberships'

    def get(self, request, *args, **kwargs):
        return redirect(self.redirect_location)

    def is_actionable(self, current_user, club):
        """Check if current_user is in the club and is not its owner."""
        return ((has_membership(club, current_user) or has_applicant_rank(current_user, club))
                and not has_owner_rank(current_user, club))

    def is_not_actionable(self, current_user, club):
        """The user cannot leave the club if they are the owner."""
        if has_owner_rank(current_user, club):
            messages.error(self.request, "The owner of the club cannot leave.")

    def action(self, current_user, club):
        """The user leaves the club; their membership is deleted."""
        messages.success(self.request, "You have left the club.")
        remove_from_club(current_user, club)

    def post(self, request, *args, **kwargs):
        try:
            club = Club.objects.get(club_url_name=self.kwargs['club_url_name'])
            current_user = self.request.user
        except (Club.DoesNotExist, KeyError):
            messages.error(self.request, "Error, user or club not found.")
            return redirect(self.redirect_location)

        if self.is_actionable(current_user, club):
            self.action(current_user, club)
        else:
            self.is_not_actionable(current_user, club)
        return redirect(self.redirect_location)


class DeleteClubView(LoginRequiredMixin, View):
    """Lets the owner of a club delete it."""

    redirect_location = 'available_clubs'

    def is_actionable(self, current_user, club):
        """Only allow the owner of a club to delete it."""
        return has_owner_rank(current_user, club)

    def is_not_actionable(self):
        messages.error(self.request, "You are not allowed to delete the club!")

    def action(self, current_user, club):
        club.delete()
        messages.success(self.request, "You have deleted the club.")

    def post(self, request, *args, **kwargs):
        try:
            club = Club.objects.get(club_url_name=self.kwargs['club_url_name'])
            current_user = self.request.user
        except (Club.DoesNotExist, KeyError):
            messages.error(self.request, "Error, user or club not found.")
            return redirect('home')

        if self.is_actionable(current_user, club):
            self.action(current_user, club)
            return redirect(self.redirect_location)
        self.is_not_actionable()
        return redirect('home')
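All of the action views above share one template-method shape: `post()` resolves the actors, asks `is_actionable()`, then runs either `action()` or the `is_not_actionable()` fallback, and finally redirects. The sketch below is a framework-free illustration of that flow (class and method names mirror the views above, but the Django request/message/redirect plumbing and the rank helpers are replaced with plain Python over a hypothetical rank dict; it is not the project's actual `ActionView`):

```python
class ActionView:
    redirect_location = 'home'

    def is_actionable(self, current_user, target_user, club):
        raise NotImplementedError

    def is_not_actionable(self, current_user, target_user, club):
        pass  # subclasses may report an error here

    def action(self, current_user, target_user, club):
        raise NotImplementedError

    def post(self, current_user, target_user, club):
        # Template method: guard, then act or explain, then redirect.
        if self.is_actionable(current_user, target_user, club):
            self.action(current_user, target_user, club)
        else:
            self.is_not_actionable(current_user, target_user, club)
        return self.redirect_location


class PromoteMemberSketch(ActionView):
    redirect_location = 'member_list'

    def is_actionable(self, current_user, target_user, club):
        ranks = club['ranks']
        return ranks.get(current_user) == 'owner' and ranks.get(target_user) == 'member'

    def action(self, current_user, target_user, club):
        club['ranks'][target_user] = 'moderator'


club = {'ranks': {'ann': 'owner', 'bob': 'member'}}
destination = PromoteMemberSketch().post('ann', 'bob', club)
```

Only the guard and the state change vary per subclass, which is why each concrete view above is so short.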
| 35.794212 | 138 | 0.674722 | 1,382 | 11,132 | 5.272069 | 0.119392 | 0.06588 | 0.059566 | 0.060527 | 0.713972 | 0.670464 | 0.632034 | 0.594016 | 0.558057 | 0.523744 | 0 | 0 | 0.219637 | 11,132 | 310 | 139 | 35.909677 | 0.838725 | 0.129806 | 0 | 0.682635 | 0 | 0 | 0.103791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233533 | false | 0 | 0.041916 | 0.053892 | 0.54491 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
5a4f8a97fb9b4f5e805eaf44b4b9a518c3c4c41f | 387 | py | Python | examples/qt_test.py | hycool/pywebview | 4c571117b89f36f48c19c0cf638254f8ecbf3e02 | [
"BSD-3-Clause"
] | 3 | 2020-10-06T19:43:43.000Z | 2021-01-13T16:40:21.000Z | examples/qt_test.py | hycool/pywebview | 4c571117b89f36f48c19c0cf638254f8ecbf3e02 | [
"BSD-3-Clause"
] | null | null | null | examples/qt_test.py | hycool/pywebview | 4c571117b89f36f48c19c0cf638254f8ecbf3e02 | [
"BSD-3-Clause"
] | 1 | 2020-11-18T16:38:33.000Z | 2020-11-18T16:38:33.000Z | import webview
import threading
"""
This example demonstrates how to create a pywebview window using QT (normally GTK is preferred).
"""
if __name__ == '__main__':
    webview.config.use_qt = True

    # Create a non-resizable webview window with 800x600 dimensions
    # webview.create_window("Simple browser", "http://www.baidu.com")
    webview.load_html('<h1>Master Window</h1>')
| 25.8 | 96 | 0.728682 | 52 | 387 | 5.211538 | 0.769231 | 0.051661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024691 | 0.162791 | 387 | 14 | 97 | 27.642857 | 0.811728 | 0.322997 | 0 | 0 | 0 | 0 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
5a51f08f60e94f41d3a9af65581aa8ea4efe2a22 | 6,428 | py | Python | lib/gui/widgets/queue_items/__init__.py | B1d1zyaKa/mff_auto | fd3f0bd45aea45e2e8ad8e8fc73a8953c96d350a | [
"Apache-2.0"
] | 1 | 2022-03-28T07:38:16.000Z | 2022-03-28T07:38:16.000Z | lib/gui/widgets/queue_items/__init__.py | B1d1zyaKa/mff_auto | fd3f0bd45aea45e2e8ad8e8fc73a8953c96d350a | [
"Apache-2.0"
] | null | null | null | lib/gui/widgets/queue_items/__init__.py | B1d1zyaKa/mff_auto | fd3f0bd45aea45e2e8ad8e8fc73a8953c96d350a | [
"Apache-2.0"
] | null | null | null | import inspect
import sys
import lib.logger as logging
from lib.gui.widgets.queue_items import actions
from lib.gui.widgets.queue_items import events
from lib.gui.widgets.queue_items import missions
from .general import GameMode, Action, Event
logger = logging.get_logger(__name__)
def _classes_from_module(module_name):
    """Generator for all non-imported classes inside a module.

    :param str module_name: module's name.
    :rtype: typing.Generator[type[GameMode]]
    """
    if module_name not in sys.modules:
        logger.error(f"Cannot load {module_name} module.")
        return
    for name, obj in inspect.getmembers(sys.modules[module_name]):
        if obj and inspect.isclass(obj) and obj.__module__ == module_name:
            yield obj


def _init_classes(module_name, game):
    """Initializes all non-imported classes inside a module into the game.

    :param str module_name: module's name.
    :param lib.game.game.Game game: instance of the game.
    :rtype: list[GameMode]
    """
    return [class_init(game) for class_init in _classes_from_module(module_name)]
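The reflection pattern used here can be demonstrated standalone: `inspect.getmembers()` plus a `__module__` check yields only classes *defined* in a module, skipping anything imported into it. The throwaway module and class names below are purely illustrative:

```python
import inspect
import sys
import types

# Build a throwaway module with two locally defined classes and one import,
# to show what the __module__ check filters out.
mod = types.ModuleType('fake_missions')
exec("class A:\n    pass\n\nclass B:\n    pass\n\nimport os\n", mod.__dict__)
sys.modules['fake_missions'] = mod


def classes_from_module(module_name):
    # Same pattern as _classes_from_module above, minus the logging guard.
    for _name, obj in inspect.getmembers(sys.modules[module_name]):
        if inspect.isclass(obj) and obj.__module__ == module_name:
            yield obj


found = [cls.__name__ for cls in classes_from_module('fake_missions')]
```

`os` is dropped because `inspect.isclass` rejects it, and any class imported from elsewhere would be dropped by the `__module__` comparison.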


def get_actions(game):
    """Gets all initialized actions.

    :rtype: list[Action]
    """
    return _init_classes(module_name=actions.__name__, game=game)


def get_events(game):
    """Gets all initialized events.

    :rtype: list[Event]
    """
    return _init_classes(module_name=events.__name__, game=game)


def get_missions(game):
    """Gets all initialized missions.

    :rtype: list[GameMode]
    """
    return _init_classes(module_name=missions.__name__, game=game)


def get_missions_dict(mission_instances):
    """Constructs the menu dictionary for missions from their instances.

    :param list[GameMode] mission_instances: list of mission instances.
    :rtype: dict
    """

    def find_instance(class_):
        return next((inst for inst in mission_instances if inst.__class__.__name__ == class_.__name__), None)

    menu_dict = {
        "EPIC QUESTS": {
            "FATE OF MANKIND": [find_instance(missions._HeroesReunited), find_instance(missions._IndustrialComplex),
                                find_instance(missions._DeviantDiversion)],
            "DARK REIGN": [find_instance(missions._PlayingHero), find_instance(missions._GoldenGods),
                           find_instance(missions._StingOfTheScorpion), find_instance(missions._SelfDefenseProtocol),
                           find_instance(missions._LegacyOfBlood)],
            "GALACTIC IMPERATIVE": [find_instance(missions._FateOfTheUniverse), find_instance(missions._TheFault),
                                    find_instance(missions._DangerousSisters), find_instance(missions._CosmicRider),
                                    find_instance(missions._QuantumPower), find_instance(missions._WingsOfDarkness)],
            "FIRST FAMILY": [find_instance(missions._DoomsDay), find_instance(missions._TwistedWorld),
                             find_instance(missions._InhumanPrincess), find_instance(missions._MeanAndGreen),
                             find_instance(missions._ClobberinTime), find_instance(missions._Hothead)],
            "X-FORCE": [find_instance(missions._BeginningOfTheChaos), find_instance(missions._StupidXMen),
                        find_instance(missions._TheBigTwin), find_instance(missions._AwManThisGuy),
                        find_instance(missions._DominoFalls)],
            "RISE OF THE X-MEN": [find_instance(missions._MutualEnemy), find_instance(missions._VeiledSecret),
                                  find_instance(missions._GoingRogue), find_instance(missions._FriendsAndEnemies),
                                  find_instance(missions._WeatheringTheStorm), find_instance(missions._Blindsided)],
            "SORCERER SUPREME": [find_instance(missions._DarkAdvent), find_instance(missions._IncreasingDarkness),
                                 find_instance(missions._RoadToMonastery), find_instance(missions._MysteriousAmbush),
                                 find_instance(missions._MonasteryInTrouble), find_instance(missions._PowerOfTheDark)]
        },
        "STORY MISSION": find_instance(missions._StoryMission),
        "DIMENSION MISSION": find_instance(missions._DimensionMissions),
        "LEGENDARY BATTLE": find_instance(missions._LegendaryBattle),
        "SQUAD BATTLE": find_instance(missions._SquadBattles),
        "WORLD BOSS": find_instance(missions._WorldBosses),
        "TIMELINE BATTLE": find_instance(missions._TimelineBattle),
        "DANGER ROOM": find_instance(missions._DangerRoom),
        "ALLIANCE BATTLE": find_instance(missions._AllianceBattle),
        "CO-OP PLAY": find_instance(missions._CoopPlay),
        "WORLD BOSS INVASION": find_instance(missions._WorldBossInvasion),
        "GIANT BOSS RAID": find_instance(missions._GiantBossRaid),
        "SHADOWLAND": find_instance(missions._Shadowland)
    }
    return menu_dict
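The inner `find_instance()` helper resolves a class to its already-created instance by comparing class names. A standalone sketch with hypothetical classes:

```python
class Foo:
    pass


class Bar:
    pass


instances = [Foo(), Bar()]


def find_instance(class_, pool):
    # Match an instance in the pool whose class name equals the given class's
    # name; fall back to None, as in the *_dict builders above.
    return next((inst for inst in pool if inst.__class__.__name__ == class_.__name__), None)


found = find_instance(Foo, instances)
missing = find_instance(int, instances)
```

Matching by `__name__` rather than identity means the lookup class and the instance's class do not have to be the same object, only share a name.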


def get_actions_dict(action_instances):
    """Constructs the menu dictionary for actions from their instances.

    :param list[Action] action_instances: list of action instances.
    :rtype: dict
    """

    def find_instance(class_):
        return next((inst for inst in action_instances if inst.__class__.__name__ == class_.__name__), None)

    alliance = [find_instance(actions._AllianceCheckIn), find_instance(actions._AllianceDonate),
                find_instance(actions._AllianceBuyEnergy), find_instance(actions._AllianceRequestSupport),
                find_instance(actions._AllianceChallengesEnergy)]
    inventory = [find_instance(actions._ComicCards), find_instance(actions._CustomGear),
                 find_instance(actions._Iso8Upgrade), find_instance(actions._Iso8Combine),
                 find_instance(actions._Iso8Lock), find_instance(actions._ArtifactDismantle)]
    store = [find_instance(actions._CollectFreeEnergy), find_instance(actions._CollectEnergyViaAssemblePoints),
             find_instance(actions._AcquireFreeHeroChest), find_instance(actions._AcquireFreeArtifactChest),
             find_instance(actions._BuyArtifactChest)]
    menu_dict = {
        "[ACTIONS]": {
            "[QUEUE CONTROL]": [find_instance(actions._RunQueue)],
            "ALLIANCE": alliance,
            "INVENTORY": inventory,
            "STORE": store,
            **{action.mode_name: action for action in action_instances if action not in [*alliance, *inventory, *store]}
        }
    }
    return menu_dict
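The closing `**{...}` line uses dict unpacking to merge every action that was not already grouped into one of the explicit submenus. A sketch of that merge with plain strings in place of action instances (the mode names here are purely illustrative):

```python
grouped = ['check-in', 'donate']
all_modes = ['check-in', 'donate', 'comic cards', 'free energy']

menu = {
    # Grouped entries are listed explicitly...
    'ALLIANCE': grouped,
    # ...and everything not already grouped is merged in via ** unpacking.
    **{mode.upper(): mode for mode in all_modes if mode not in grouped},
}
```

Because `**` unpacking happens inside the literal, explicit keys and merged keys end up in a single flat dict with no extra update step.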
| 45.267606 | 120 | 0.691817 | 658 | 6,428 | 6.405775 | 0.268997 | 0.193594 | 0.232503 | 0.019929 | 0.176512 | 0.123132 | 0.084223 | 0.046975 | 0.029419 | 0.029419 | 0 | 0.000593 | 0.212663 | 6,428 | 141 | 121 | 45.588652 | 0.832049 | 0 | 0 | 0.071429 | 0 | 0 | 0.062894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.083333 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5a58f3d3d4606bb98158d8533bf114ae27ba40bf | 528 | py | Python | honeybee.py | Grzegorzorzorz/honeybee | 2d1ff0e75de078915fa254fcab12ff08f24413cb | [
"MIT"
] | null | null | null | honeybee.py | Grzegorzorzorz/honeybee | 2d1ff0e75de078915fa254fcab12ff08f24413cb | [
"MIT"
] | null | null | null | honeybee.py | Grzegorzorzorz/honeybee | 2d1ff0e75de078915fa254fcab12ff08f24413cb | [
"MIT"
] | null | null | null | import os
import src.main
# Check if the bin and dst directories are present, and if not, generate them.
if not os.path.isdir(os.path.join(os.path.dirname(__file__), "bin")):
    print("'bin' directory not found. Generating...")
    os.makedirs(os.path.join(os.path.dirname(__file__), "bin"))

if not os.path.isdir(os.path.join(os.path.dirname(__file__), "dst")):
    print("'dst' directory not found. Generating...")
    os.makedirs(os.path.join(os.path.dirname(__file__), "dst"))
# Start the program.
src.main.run() | 40.615385 | 78 | 0.69697 | 83 | 528 | 4.240964 | 0.361446 | 0.170455 | 0.113636 | 0.136364 | 0.625 | 0.625 | 0.625 | 0.625 | 0.590909 | 0.590909 | 0 | 0 | 0.123106 | 528 | 13 | 79 | 40.615385 | 0.760259 | 0.179924 | 0 | 0 | 1 | 0 | 0.213457 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.222222 | 0 | 0.222222 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
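The check-then-create sequence can also be written with `os.makedirs(..., exist_ok=True)`, which is idempotent and avoids the window between the existence check and the creation. A sketch in a throwaway temporary directory (the `bin`/`dst` names match the script above):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    for name in ("bin", "dst"):
        # exist_ok=True makes this a no-op when the directory already exists.
        os.makedirs(os.path.join(root, name), exist_ok=True)
        os.makedirs(os.path.join(root, name), exist_ok=True)  # safe to repeat
    names = sorted(os.listdir(root))
```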
5a6216ce5a52a06e10e358605a62681d73e42716 | 45,877 | py | Python | src/whylogs/src/whylabs/logs/proto/summaries_pb2.py | bernease/cli-demo-1 | 895d9eddc95ca3dd43b7ae8b33a8fbdedbc855f5 | [
"Apache-2.0"
] | null | null | null | src/whylogs/src/whylabs/logs/proto/summaries_pb2.py | bernease/cli-demo-1 | 895d9eddc95ca3dd43b7ae8b33a8fbdedbc855f5 | [
"Apache-2.0"
] | null | null | null | src/whylogs/src/whylabs/logs/proto/summaries_pb2.py | bernease/cli-demo-1 | 895d9eddc95ca3dd43b7ae8b33a8fbdedbc855f5 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: summaries.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import wrappers_pb2 as google_dot_protobuf_dot_wrappers__pb2
from . import messages_pb2 as messages__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='summaries.proto',
package='',
syntax='proto3',
serialized_options=b'\n com.whylabs.logging.core.messageB\tSummariesP\001',
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n\x0fsummaries.proto\x1a\x1egoogle/protobuf/wrappers.proto\x1a\x0emessages.proto\"D\n\x12UniqueCountSummary\x12\x10\n\x08\x65stimate\x18\x01 \x01(\x01\x12\r\n\x05upper\x18\x02 \x01(\x01\x12\r\n\x05lower\x18\x03 \x01(\x01\"~\n\x16\x46requentStringsSummary\x12\x33\n\x05items\x18\x01 \x03(\x0b\x32$.FrequentStringsSummary.FrequentItem\x1a/\n\x0c\x46requentItem\x12\r\n\x05value\x18\x01 \x01(\t\x12\x10\n\x08\x65stimate\x18\x02 \x01(\x01\"\x96\x02\n\x16\x46requentNumbersSummary\x12;\n\x07\x64oubles\x18\x01 \x03(\x0b\x32*.FrequentNumbersSummary.FrequentDoubleItem\x12\x37\n\x05longs\x18\x02 \x03(\x0b\x32(.FrequentNumbersSummary.FrequentLongItem\x1a\x43\n\x12\x46requentDoubleItem\x12\x10\n\x08\x65stimate\x18\x01 \x01(\x03\x12\r\n\x05value\x18\x02 \x01(\x01\x12\x0c\n\x04rank\x18\x03 \x01(\x05\x1a\x41\n\x10\x46requentLongItem\x12\x10\n\x08\x65stimate\x18\x01 \x01(\x03\x12\r\n\x05value\x18\x02 \x01(\x03\x12\x0c\n\x04rank\x18\x03 \x01(\x05\"\x7f\n\x14\x46requentItemsSummary\x12\x31\n\x05items\x18\x01 \x03(\x0b\x32\".FrequentItemsSummary.FrequentItem\x1a\x34\n\x0c\x46requentItem\x12\x10\n\x08\x65stimate\x18\x01 \x01(\x03\x12\x12\n\njson_value\x18\x02 \x01(\t\"f\n\x0eStringsSummary\x12)\n\x0cunique_count\x18\x01 \x01(\x0b\x32\x13.UniqueCountSummary\x12)\n\x08\x66requent\x18\x02 \x01(\x0b\x32\x17.FrequentStringsSummary\"\x9d\x01\n\rSchemaSummary\x12$\n\rinferred_type\x18\x01 \x01(\x0b\x32\r.InferredType\x12\x33\n\x0btype_counts\x18\x02 \x03(\x0b\x32\x1e.SchemaSummary.TypeCountsEntry\x1a\x31\n\x0fTypeCountsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x03:\x02\x38\x01\"\x80\x01\n\x10HistogramSummary\x12\r\n\x05start\x18\x01 \x01(\x01\x12\x0b\n\x03\x65nd\x18\x02 \x01(\x01\x12\r\n\x05width\x18\x03 \x01(\x01\x12\x0e\n\x06\x63ounts\x18\x04 \x03(\x03\x12\x0b\n\x03max\x18\x05 \x01(\x01\x12\x0b\n\x03min\x18\x06 \x01(\x01\x12\x0c\n\x04\x62ins\x18\x07 \x03(\x01\x12\t\n\x01n\x18\x08 \x01(\x03\"=\n\x0fQuantileSummary\x12\x11\n\tquantiles\x18\x01 \x03(\x01\x12\x17\n\x0fquantile_values\x18\x02 \x03(\x01\"\x94\x02\n\rNumberSummary\x12\r\n\x05\x63ount\x18\x01 \x01(\x04\x12\x0b\n\x03min\x18\x02 \x01(\x01\x12\x0b\n\x03max\x18\x03 \x01(\x01\x12\x0c\n\x04mean\x18\x04 \x01(\x01\x12\x0e\n\x06stddev\x18\x05 \x01(\x01\x12$\n\thistogram\x18\x06 \x01(\x0b\x32\x11.HistogramSummary\x12)\n\x0cunique_count\x18\x07 \x01(\x0b\x32\x13.UniqueCountSummary\x12#\n\tquantiles\x18\x08 \x01(\x0b\x32\x10.QuantileSummary\x12\x31\n\x10\x66requent_numbers\x18\t \x01(\x0b\x32\x17.FrequentNumbersSummary\x12\x13\n\x0bis_discrete\x18\n \x01(\x08\"\xf7\x01\n\rColumnSummary\x12\x1b\n\x08\x63ounters\x18\x01 \x01(\x0b\x32\t.Counters\x12\x1e\n\x06schema\x18\x02 \x01(\x0b\x32\x0e.SchemaSummary\x12&\n\x0enumber_summary\x18\x03 \x01(\x0b\x32\x0e.NumberSummary\x12\'\n\x0estring_summary\x18\x04 \x01(\x0b\x32\x0f.StringsSummary\x12-\n\x0e\x66requent_items\x18\x05 \x01(\x0b\x32\x15.FrequentItemsSummary\x12)\n\x0cunique_count\x18\x06 \x01(\x0b\x32\x13.UniqueCountSummary\"\xa7\x01\n\x0e\x44\x61tasetSummary\x12&\n\nproperties\x18\x01 \x01(\x0b\x32\x12.DatasetProperties\x12-\n\x07\x63olumns\x18\x02 \x03(\x0b\x32\x1c.DatasetSummary.ColumnsEntry\x1a>\n\x0c\x43olumnsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x1d\n\x05value\x18\x02 \x01(\x0b\x32\x0e.ColumnSummary:\x02\x38\x01\"\x87\x01\n\x10\x44\x61tasetSummaries\x12\x31\n\x08profiles\x18\x01 \x03(\x0b\x32\x1f.DatasetSummaries.ProfilesEntry\x1a@\n\rProfilesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x1e\n\x05value\x18\x02 \x01(\x0b\x32\x0f.DatasetSummary:\x02\x38\x01\x42/\n com.whylabs.logging.core.messageB\tSummariesP\x01\x62\x06proto3'
,
dependencies=[google_dot_protobuf_dot_wrappers__pb2.DESCRIPTOR,messages__pb2.DESCRIPTOR,])
_UNIQUECOUNTSUMMARY = _descriptor.Descriptor(
name='UniqueCountSummary',
full_name='UniqueCountSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='estimate', full_name='UniqueCountSummary.estimate', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='upper', full_name='UniqueCountSummary.upper', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='lower', full_name='UniqueCountSummary.lower', index=2,
number=3, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=67,
serialized_end=135,
)
_FREQUENTSTRINGSSUMMARY_FREQUENTITEM = _descriptor.Descriptor(
name='FrequentItem',
full_name='FrequentStringsSummary.FrequentItem',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='value', full_name='FrequentStringsSummary.FrequentItem.value', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='estimate', full_name='FrequentStringsSummary.FrequentItem.estimate', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=216,
serialized_end=263,
)
_FREQUENTSTRINGSSUMMARY = _descriptor.Descriptor(
name='FrequentStringsSummary',
full_name='FrequentStringsSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='items', full_name='FrequentStringsSummary.items', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_FREQUENTSTRINGSSUMMARY_FREQUENTITEM, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=137,
serialized_end=263,
)
_FREQUENTNUMBERSSUMMARY_FREQUENTDOUBLEITEM = _descriptor.Descriptor(
name='FrequentDoubleItem',
full_name='FrequentNumbersSummary.FrequentDoubleItem',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='estimate', full_name='FrequentNumbersSummary.FrequentDoubleItem.estimate', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='value', full_name='FrequentNumbersSummary.FrequentDoubleItem.value', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='rank', full_name='FrequentNumbersSummary.FrequentDoubleItem.rank', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=410,
serialized_end=477,
)
_FREQUENTNUMBERSSUMMARY_FREQUENTLONGITEM = _descriptor.Descriptor(
name='FrequentLongItem',
full_name='FrequentNumbersSummary.FrequentLongItem',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='estimate', full_name='FrequentNumbersSummary.FrequentLongItem.estimate', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='value', full_name='FrequentNumbersSummary.FrequentLongItem.value', index=1,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='rank', full_name='FrequentNumbersSummary.FrequentLongItem.rank', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=479,
serialized_end=544,
)
_FREQUENTNUMBERSSUMMARY = _descriptor.Descriptor(
name='FrequentNumbersSummary',
full_name='FrequentNumbersSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='doubles', full_name='FrequentNumbersSummary.doubles', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='longs', full_name='FrequentNumbersSummary.longs', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_FREQUENTNUMBERSSUMMARY_FREQUENTDOUBLEITEM, _FREQUENTNUMBERSSUMMARY_FREQUENTLONGITEM, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=266,
serialized_end=544,
)
_FREQUENTITEMSSUMMARY_FREQUENTITEM = _descriptor.Descriptor(
name='FrequentItem',
full_name='FrequentItemsSummary.FrequentItem',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='estimate', full_name='FrequentItemsSummary.FrequentItem.estimate', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='json_value', full_name='FrequentItemsSummary.FrequentItem.json_value', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=621,
serialized_end=673,
)
_FREQUENTITEMSSUMMARY = _descriptor.Descriptor(
name='FrequentItemsSummary',
full_name='FrequentItemsSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='items', full_name='FrequentItemsSummary.items', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_FREQUENTITEMSSUMMARY_FREQUENTITEM, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=546,
serialized_end=673,
)
_STRINGSSUMMARY = _descriptor.Descriptor(
name='StringsSummary',
full_name='StringsSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='unique_count', full_name='StringsSummary.unique_count', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='frequent', full_name='StringsSummary.frequent', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=675,
serialized_end=777,
)
_SCHEMASUMMARY_TYPECOUNTSENTRY = _descriptor.Descriptor(
name='TypeCountsEntry',
full_name='SchemaSummary.TypeCountsEntry',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='key', full_name='SchemaSummary.TypeCountsEntry.key', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='value', full_name='SchemaSummary.TypeCountsEntry.value', index=1,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=b'8\001',
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=888,
serialized_end=937,
)
_SCHEMASUMMARY = _descriptor.Descriptor(
name='SchemaSummary',
full_name='SchemaSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='inferred_type', full_name='SchemaSummary.inferred_type', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='type_counts', full_name='SchemaSummary.type_counts', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_SCHEMASUMMARY_TYPECOUNTSENTRY, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=780,
serialized_end=937,
)
_HISTOGRAMSUMMARY = _descriptor.Descriptor(
name='HistogramSummary',
full_name='HistogramSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='start', full_name='HistogramSummary.start', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='end', full_name='HistogramSummary.end', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='width', full_name='HistogramSummary.width', index=2,
number=3, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='counts', full_name='HistogramSummary.counts', index=3,
number=4, type=3, cpp_type=2, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='max', full_name='HistogramSummary.max', index=4,
number=5, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='min', full_name='HistogramSummary.min', index=5,
number=6, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='bins', full_name='HistogramSummary.bins', index=6,
number=7, type=1, cpp_type=5, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='n', full_name='HistogramSummary.n', index=7,
number=8, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=940,
serialized_end=1068,
)
_QUANTILESUMMARY = _descriptor.Descriptor(
name='QuantileSummary',
full_name='QuantileSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='quantiles', full_name='QuantileSummary.quantiles', index=0,
number=1, type=1, cpp_type=5, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='quantile_values', full_name='QuantileSummary.quantile_values', index=1,
number=2, type=1, cpp_type=5, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1070,
serialized_end=1131,
)
_NUMBERSUMMARY = _descriptor.Descriptor(
name='NumberSummary',
full_name='NumberSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='count', full_name='NumberSummary.count', index=0,
number=1, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='min', full_name='NumberSummary.min', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='max', full_name='NumberSummary.max', index=2,
number=3, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='mean', full_name='NumberSummary.mean', index=3,
number=4, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='stddev', full_name='NumberSummary.stddev', index=4,
number=5, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='histogram', full_name='NumberSummary.histogram', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='unique_count', full_name='NumberSummary.unique_count', index=6,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='quantiles', full_name='NumberSummary.quantiles', index=7,
number=8, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='frequent_numbers', full_name='NumberSummary.frequent_numbers', index=8,
number=9, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='is_discrete', full_name='NumberSummary.is_discrete', index=9,
number=10, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1134,
serialized_end=1410,
)
_COLUMNSUMMARY = _descriptor.Descriptor(
name='ColumnSummary',
full_name='ColumnSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='counters', full_name='ColumnSummary.counters', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='schema', full_name='ColumnSummary.schema', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='number_summary', full_name='ColumnSummary.number_summary', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='string_summary', full_name='ColumnSummary.string_summary', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='frequent_items', full_name='ColumnSummary.frequent_items', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='unique_count', full_name='ColumnSummary.unique_count', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1413,
serialized_end=1660,
)
_DATASETSUMMARY_COLUMNSENTRY = _descriptor.Descriptor(
name='ColumnsEntry',
full_name='DatasetSummary.ColumnsEntry',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='key', full_name='DatasetSummary.ColumnsEntry.key', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='value', full_name='DatasetSummary.ColumnsEntry.value', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=b'8\001',
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1768,
serialized_end=1830,
)
_DATASETSUMMARY = _descriptor.Descriptor(
name='DatasetSummary',
full_name='DatasetSummary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='properties', full_name='DatasetSummary.properties', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='columns', full_name='DatasetSummary.columns', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_DATASETSUMMARY_COLUMNSENTRY, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1663,
serialized_end=1830,
)
_DATASETSUMMARIES_PROFILESENTRY = _descriptor.Descriptor(
name='ProfilesEntry',
full_name='DatasetSummaries.ProfilesEntry',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='key', full_name='DatasetSummaries.ProfilesEntry.key', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='value', full_name='DatasetSummaries.ProfilesEntry.value', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=b'8\001',
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1904,
serialized_end=1968,
)
_DATASETSUMMARIES = _descriptor.Descriptor(
name='DatasetSummaries',
full_name='DatasetSummaries',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='profiles', full_name='DatasetSummaries.profiles', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[_DATASETSUMMARIES_PROFILESENTRY, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1833,
serialized_end=1968,
)
_FREQUENTSTRINGSSUMMARY_FREQUENTITEM.containing_type = _FREQUENTSTRINGSSUMMARY
_FREQUENTSTRINGSSUMMARY.fields_by_name['items'].message_type = _FREQUENTSTRINGSSUMMARY_FREQUENTITEM
_FREQUENTNUMBERSSUMMARY_FREQUENTDOUBLEITEM.containing_type = _FREQUENTNUMBERSSUMMARY
_FREQUENTNUMBERSSUMMARY_FREQUENTLONGITEM.containing_type = _FREQUENTNUMBERSSUMMARY
_FREQUENTNUMBERSSUMMARY.fields_by_name['doubles'].message_type = _FREQUENTNUMBERSSUMMARY_FREQUENTDOUBLEITEM
_FREQUENTNUMBERSSUMMARY.fields_by_name['longs'].message_type = _FREQUENTNUMBERSSUMMARY_FREQUENTLONGITEM
_FREQUENTITEMSSUMMARY_FREQUENTITEM.containing_type = _FREQUENTITEMSSUMMARY
_FREQUENTITEMSSUMMARY.fields_by_name['items'].message_type = _FREQUENTITEMSSUMMARY_FREQUENTITEM
_STRINGSSUMMARY.fields_by_name['unique_count'].message_type = _UNIQUECOUNTSUMMARY
_STRINGSSUMMARY.fields_by_name['frequent'].message_type = _FREQUENTSTRINGSSUMMARY
_SCHEMASUMMARY_TYPECOUNTSENTRY.containing_type = _SCHEMASUMMARY
_SCHEMASUMMARY.fields_by_name['inferred_type'].message_type = messages__pb2._INFERREDTYPE
_SCHEMASUMMARY.fields_by_name['type_counts'].message_type = _SCHEMASUMMARY_TYPECOUNTSENTRY
_NUMBERSUMMARY.fields_by_name['histogram'].message_type = _HISTOGRAMSUMMARY
_NUMBERSUMMARY.fields_by_name['unique_count'].message_type = _UNIQUECOUNTSUMMARY
_NUMBERSUMMARY.fields_by_name['quantiles'].message_type = _QUANTILESUMMARY
_NUMBERSUMMARY.fields_by_name['frequent_numbers'].message_type = _FREQUENTNUMBERSSUMMARY
_COLUMNSUMMARY.fields_by_name['counters'].message_type = messages__pb2._COUNTERS
_COLUMNSUMMARY.fields_by_name['schema'].message_type = _SCHEMASUMMARY
_COLUMNSUMMARY.fields_by_name['number_summary'].message_type = _NUMBERSUMMARY
_COLUMNSUMMARY.fields_by_name['string_summary'].message_type = _STRINGSSUMMARY
_COLUMNSUMMARY.fields_by_name['frequent_items'].message_type = _FREQUENTITEMSSUMMARY
_COLUMNSUMMARY.fields_by_name['unique_count'].message_type = _UNIQUECOUNTSUMMARY
_DATASETSUMMARY_COLUMNSENTRY.fields_by_name['value'].message_type = _COLUMNSUMMARY
_DATASETSUMMARY_COLUMNSENTRY.containing_type = _DATASETSUMMARY
_DATASETSUMMARY.fields_by_name['properties'].message_type = messages__pb2._DATASETPROPERTIES
_DATASETSUMMARY.fields_by_name['columns'].message_type = _DATASETSUMMARY_COLUMNSENTRY
_DATASETSUMMARIES_PROFILESENTRY.fields_by_name['value'].message_type = _DATASETSUMMARY
_DATASETSUMMARIES_PROFILESENTRY.containing_type = _DATASETSUMMARIES
_DATASETSUMMARIES.fields_by_name['profiles'].message_type = _DATASETSUMMARIES_PROFILESENTRY
DESCRIPTOR.message_types_by_name['UniqueCountSummary'] = _UNIQUECOUNTSUMMARY
DESCRIPTOR.message_types_by_name['FrequentStringsSummary'] = _FREQUENTSTRINGSSUMMARY
DESCRIPTOR.message_types_by_name['FrequentNumbersSummary'] = _FREQUENTNUMBERSSUMMARY
DESCRIPTOR.message_types_by_name['FrequentItemsSummary'] = _FREQUENTITEMSSUMMARY
DESCRIPTOR.message_types_by_name['StringsSummary'] = _STRINGSSUMMARY
DESCRIPTOR.message_types_by_name['SchemaSummary'] = _SCHEMASUMMARY
DESCRIPTOR.message_types_by_name['HistogramSummary'] = _HISTOGRAMSUMMARY
DESCRIPTOR.message_types_by_name['QuantileSummary'] = _QUANTILESUMMARY
DESCRIPTOR.message_types_by_name['NumberSummary'] = _NUMBERSUMMARY
DESCRIPTOR.message_types_by_name['ColumnSummary'] = _COLUMNSUMMARY
DESCRIPTOR.message_types_by_name['DatasetSummary'] = _DATASETSUMMARY
DESCRIPTOR.message_types_by_name['DatasetSummaries'] = _DATASETSUMMARIES
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
UniqueCountSummary = _reflection.GeneratedProtocolMessageType('UniqueCountSummary', (_message.Message,), {
'DESCRIPTOR' : _UNIQUECOUNTSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:UniqueCountSummary)
})
_sym_db.RegisterMessage(UniqueCountSummary)
FrequentStringsSummary = _reflection.GeneratedProtocolMessageType('FrequentStringsSummary', (_message.Message,), {
'FrequentItem' : _reflection.GeneratedProtocolMessageType('FrequentItem', (_message.Message,), {
'DESCRIPTOR' : _FREQUENTSTRINGSSUMMARY_FREQUENTITEM,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:FrequentStringsSummary.FrequentItem)
})
,
'DESCRIPTOR' : _FREQUENTSTRINGSSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:FrequentStringsSummary)
})
_sym_db.RegisterMessage(FrequentStringsSummary)
_sym_db.RegisterMessage(FrequentStringsSummary.FrequentItem)
FrequentNumbersSummary = _reflection.GeneratedProtocolMessageType('FrequentNumbersSummary', (_message.Message,), {
'FrequentDoubleItem' : _reflection.GeneratedProtocolMessageType('FrequentDoubleItem', (_message.Message,), {
'DESCRIPTOR' : _FREQUENTNUMBERSSUMMARY_FREQUENTDOUBLEITEM,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:FrequentNumbersSummary.FrequentDoubleItem)
})
,
'FrequentLongItem' : _reflection.GeneratedProtocolMessageType('FrequentLongItem', (_message.Message,), {
'DESCRIPTOR' : _FREQUENTNUMBERSSUMMARY_FREQUENTLONGITEM,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:FrequentNumbersSummary.FrequentLongItem)
})
,
'DESCRIPTOR' : _FREQUENTNUMBERSSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:FrequentNumbersSummary)
})
_sym_db.RegisterMessage(FrequentNumbersSummary)
_sym_db.RegisterMessage(FrequentNumbersSummary.FrequentDoubleItem)
_sym_db.RegisterMessage(FrequentNumbersSummary.FrequentLongItem)
FrequentItemsSummary = _reflection.GeneratedProtocolMessageType('FrequentItemsSummary', (_message.Message,), {
'FrequentItem' : _reflection.GeneratedProtocolMessageType('FrequentItem', (_message.Message,), {
'DESCRIPTOR' : _FREQUENTITEMSSUMMARY_FREQUENTITEM,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:FrequentItemsSummary.FrequentItem)
})
,
'DESCRIPTOR' : _FREQUENTITEMSSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:FrequentItemsSummary)
})
_sym_db.RegisterMessage(FrequentItemsSummary)
_sym_db.RegisterMessage(FrequentItemsSummary.FrequentItem)
StringsSummary = _reflection.GeneratedProtocolMessageType('StringsSummary', (_message.Message,), {
'DESCRIPTOR' : _STRINGSSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:StringsSummary)
})
_sym_db.RegisterMessage(StringsSummary)
SchemaSummary = _reflection.GeneratedProtocolMessageType('SchemaSummary', (_message.Message,), {
'TypeCountsEntry' : _reflection.GeneratedProtocolMessageType('TypeCountsEntry', (_message.Message,), {
'DESCRIPTOR' : _SCHEMASUMMARY_TYPECOUNTSENTRY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:SchemaSummary.TypeCountsEntry)
})
,
'DESCRIPTOR' : _SCHEMASUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:SchemaSummary)
})
_sym_db.RegisterMessage(SchemaSummary)
_sym_db.RegisterMessage(SchemaSummary.TypeCountsEntry)
HistogramSummary = _reflection.GeneratedProtocolMessageType('HistogramSummary', (_message.Message,), {
'DESCRIPTOR' : _HISTOGRAMSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:HistogramSummary)
})
_sym_db.RegisterMessage(HistogramSummary)
QuantileSummary = _reflection.GeneratedProtocolMessageType('QuantileSummary', (_message.Message,), {
'DESCRIPTOR' : _QUANTILESUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:QuantileSummary)
})
_sym_db.RegisterMessage(QuantileSummary)
NumberSummary = _reflection.GeneratedProtocolMessageType('NumberSummary', (_message.Message,), {
'DESCRIPTOR' : _NUMBERSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:NumberSummary)
})
_sym_db.RegisterMessage(NumberSummary)
ColumnSummary = _reflection.GeneratedProtocolMessageType('ColumnSummary', (_message.Message,), {
'DESCRIPTOR' : _COLUMNSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:ColumnSummary)
})
_sym_db.RegisterMessage(ColumnSummary)
DatasetSummary = _reflection.GeneratedProtocolMessageType('DatasetSummary', (_message.Message,), {
'ColumnsEntry' : _reflection.GeneratedProtocolMessageType('ColumnsEntry', (_message.Message,), {
'DESCRIPTOR' : _DATASETSUMMARY_COLUMNSENTRY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:DatasetSummary.ColumnsEntry)
})
,
'DESCRIPTOR' : _DATASETSUMMARY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:DatasetSummary)
})
_sym_db.RegisterMessage(DatasetSummary)
_sym_db.RegisterMessage(DatasetSummary.ColumnsEntry)
DatasetSummaries = _reflection.GeneratedProtocolMessageType('DatasetSummaries', (_message.Message,), {
'ProfilesEntry' : _reflection.GeneratedProtocolMessageType('ProfilesEntry', (_message.Message,), {
'DESCRIPTOR' : _DATASETSUMMARIES_PROFILESENTRY,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:DatasetSummaries.ProfilesEntry)
})
,
'DESCRIPTOR' : _DATASETSUMMARIES,
'__module__' : 'summaries_pb2'
# @@protoc_insertion_point(class_scope:DatasetSummaries)
})
_sym_db.RegisterMessage(DatasetSummaries)
_sym_db.RegisterMessage(DatasetSummaries.ProfilesEntry)
DESCRIPTOR._options = None
_SCHEMASUMMARY_TYPECOUNTSENTRY._options = None
_DATASETSUMMARY_COLUMNSENTRY._options = None
_DATASETSUMMARIES_PROFILESENTRY._options = None
# @@protoc_insertion_point(module_scope)
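The nested `TypeCountsEntry`, `ColumnsEntry`, and `ProfilesEntry` descriptors above each carry `serialized_options=b'8\001'`, the encoded `map_entry=true` option: proto3 `map<K, V>` fields have no wire format of their own, so protoc synthesizes a key/value entry message and serializes the map as a repeated list of such entries. A plain-Python sketch of that equivalence (helper names are illustrative, not part of the generated module):

```python
def dict_to_entries(mapping):
    """Flatten a dict into the repeated key/value entries used on the wire."""
    return [{'key': k, 'value': v} for k, v in mapping.items()]

def entries_to_dict(entries):
    """Rebuild the dict; a later duplicate key overwrites an earlier one,
    matching protobuf map merge semantics."""
    result = {}
    for entry in entries:
        result[entry['key']] = entry['value']
    return result

type_counts = {'INTEGRAL': 10, 'FRACTIONAL': 3}
assert entries_to_dict(dict_to_entries(type_counts)) == type_counts
```

This is why `type_counts` above is declared with `label=3` (repeated) and a message `type=11` even though the `.proto` author wrote a plain map field.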
# === File: nlp_architect/procedures/registry.py (repo: unifiedcompliance/nlp-architect, rev 5a6710bae1f11f84d66d62f387058cb7b273a5bc, license: Apache-2.0) ===
# ******************************************************************************
# Copyright 2017-2019 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ******************************************************************************
# Register procedures to be used by CLI
from nlp_architect.cli.cmd_registry import CMD_REGISTRY
from nlp_architect.procedures.procedure import Procedure
def register_cmd(registry: list, name: str, description: str):
def register_cmd_fn(cls):
if not issubclass(cls, Procedure):
raise ValueError('Registered class must be subclassed from Procedure')
        if any(cmd['name'] == name for cmd in registry):
            raise ValueError('Cannot register duplicate command {}'.format(name))
run_fn = cls.run_procedure
arg_adder_fn = cls.add_arguments
new_cmd = {'name': name,
'description': description,
'fn': run_fn,
'arg_adder': arg_adder_fn}
registry.append(new_cmd)
return cls
return register_cmd_fn
def register_train_cmd(name: str, description: str):
return register_cmd(CMD_REGISTRY['train'], name, description)
def register_infer_cmd(name: str, description: str):
return register_cmd(CMD_REGISTRY['infer'], name, description)
def register_run_cmd(name: str, description: str):
return register_cmd(CMD_REGISTRY['run'], name, description)
def register_process_cmd(name: str, description: str):
return register_cmd(CMD_REGISTRY['process'], name, description)
def register_solution_cmd(name: str, description: str):
return register_cmd(CMD_REGISTRY['solution'], name, description)
def register_serve_cmd(name: str, description: str):
return register_cmd(CMD_REGISTRY['serve'], name, description)
| 37.95 | 82 | 0.671937 | 285 | 2,277 | 5.214035 | 0.378947 | 0.066622 | 0.084791 | 0.098923 | 0.20996 | 0.20996 | 0.20996 | 0.20996 | 0.20996 | 0.20996 | 0 | 0.006421 | 0.179183 | 2,277 | 59 | 83 | 38.59322 | 0.788657 | 0.331577 | 0 | 0 | 0 | 0 | 0.095017 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.275862 | false | 0 | 0.068966 | 0.206897 | 0.62069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
# === File: tests/test_ipipy.py (repo: jaychoo/ipipy, rev 7643f3e81e60409d881b9d47be5dbe68bbc577b9, license: BSD-3-Clause) ===
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
test_ipipy
----------------------------------
Tests for `ipipy` module.
"""
import unittest
from flask import json
import requests
from ipipy import ipipy
class TestIpipy(unittest.TestCase):
def setUp(self):
self.app = ipipy.setup_web().test_client()
def is_valid(self, response):
if 'ip' not in response:
return False
if len(response['ip'].split('.')) != 4:
return False
return True
def testCmd(self):
        self.assertTrue(self.is_valid(ipipy.main()))
def testWeb(self):
response = self.app.get('/')
        self.assertTrue(self.is_valid(json.loads(response.data)))
def tearDown(self):
pass
if __name__ == '__main__':
unittest.main()
| 17.681818 | 55 | 0.577121 | 94 | 778 | 4.62766 | 0.521277 | 0.048276 | 0.055172 | 0.078161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003442 | 0.253213 | 778 | 43 | 56 | 18.093023 | 0.745267 | 0.147815 | 0 | 0.090909 | 0 | 0 | 0.021407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.227273 | false | 0.045455 | 0.181818 | 0.045455 | 0.681818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
# === File: python/tests/test_quicksort.py (repo: abdu-zeyad/data-structures-and-algorithms, rev 2f35985da2277de3aed17e391fe972b34e53e63f, license: MIT) ===
from code_challenges.QuickSort.quicksort import __version__
from code_challenges.QuickSort.quicksort.quicksort import quick_sort
def test_version():
assert __version__ == '0.1.0'
def test_quick_sort():
arr = [5, 4, 3, 7]
quick_sort(0, len(arr) - 1, arr)
actual = arr
    expected = [3, 4, 5, 7]
    assert actual == expected
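The test above exercises an in-place `quick_sort(lo, hi, arr)` imported from the repo. A minimal sketch of a function satisfying that signature, using the Lomuto partition scheme (the repo's actual implementation may differ):

```python
def quick_sort(lo, hi, arr):
    # Sort arr[lo:hi+1] in place; recursion bottoms out on 0/1-element ranges.
    if lo >= hi:
        return
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # move pivot into its final slot
    quick_sort(lo, i - 1, arr)
    quick_sort(i + 1, hi, arr)

arr = [5, 4, 3, 7]
quick_sort(0, len(arr) - 1, arr)
print(arr)  # [3, 4, 5, 7]
```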
| 20.411765 | 68 | 0.685879 | 50 | 347 | 4.46 | 0.42 | 0.242152 | 0.161435 | 0.242152 | 0.32287 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046931 | 0.201729 | 347 | 16 | 69 | 21.6875 | 0.758123 | 0 | 0 | 0 | 0 | 0 | 0.014409 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5a75d825b685c56c9084e03eb9921dae1da1ac59 | 2,380 | py | Python | tests/test_basics.py | geopython/pygml | 825a6a400e7dc9fd55365669572dcdbe62bf41e5 | [
"MIT"
] | 3 | 2021-07-15T07:46:11.000Z | 2021-09-08T17:51:52.000Z | tests/test_basics.py | geopython/pygml | 825a6a400e7dc9fd55365669572dcdbe62bf41e5 | [
"MIT"
] | 1 | 2021-09-09T08:06:08.000Z | 2021-09-17T19:46:04.000Z | tests/test_basics.py | geopython/pygml | 825a6a400e7dc9fd55365669572dcdbe62bf41e5 | [
"MIT"
] | 2 | 2021-11-02T12:15:20.000Z | 2022-03-22T06:56:49.000Z | import pytest
from pygml.basics import (
parse_coordinates, parse_poslist, parse_pos, swap_coordinate_xy,
swap_coordinates_xy
)
def test_parse_coordinates():
# basic test
result = parse_coordinates('12.34 56.7,89.10 11.12')
assert result == [(12.34, 56.7), (89.10, 11.12)]
# ignore some whitespace
result = parse_coordinates('12.34 56.7, 89.10 11.12')
assert result == [(12.34, 56.7), (89.10, 11.12)]
# custom cs
result = parse_coordinates('12.34 56.7;89.10 11.12', cs=';')
assert result == [(12.34, 56.7), (89.10, 11.12)]
# custom ts
result = parse_coordinates('12.34:56.7,89.10:11.12', ts=':')
assert result == [(12.34, 56.7), (89.10, 11.12)]
# custom cs/ts
result = parse_coordinates('12.34:56.7;89.10:11.12', cs=';', ts=':')
assert result == [(12.34, 56.7), (89.10, 11.12)]
# custom cs/ts and decimal
result = parse_coordinates(
'12,34:56,7;89,10:11,12', cs=';', ts=':', decimal=','
)
assert result == [(12.34, 56.7), (89.10, 11.12)]
def test_parse_poslist():
# basic test
result = parse_poslist('12.34 56.7 89.10 11.12')
assert result == [(12.34, 56.7), (89.10, 11.12)]
# 3D coordinates
result = parse_poslist('12.34 56.7 89.10 11.12 13.14 15.16', dimensions=3)
assert result == [(12.34, 56.7, 89.10), (11.12, 13.14, 15.16)]
# exception on wrong dimensionality
with pytest.raises(ValueError):
parse_poslist('12.34 56.7 89.10 11.12', dimensions=3)
def test_parse_pos():
# basic test
result = parse_pos('12.34 56.7')
assert result == (12.34, 56.7)
# 3D pos
result = parse_pos('12.34 56.7 89.10')
assert result == (12.34, 56.7, 89.10)
def test_swap_coordinate_xy():
# basic test
swapped = swap_coordinate_xy((12.34, 56.7))
assert swapped == (56.7, 12.34)
# 3D coords, only X/Y are to be swapped
swapped = swap_coordinate_xy((12.34, 56.7, 89.10))
assert swapped == (56.7, 12.34, 89.10)
def test_swap_coordinates_xy():
# basic test
swapped = swap_coordinates_xy(
[(12.34, 56.7), (89.10, 11.12)]
)
assert swapped == [(56.7, 12.34), (11.12, 89.10)]
# 3D coords, only X/Y are to be swapped
swapped = swap_coordinates_xy(
[(12.34, 56.7, 89.10), (11.12, 13.14, 15.16)]
)
assert swapped == [(56.7, 12.34, 89.10), (13.14, 11.12, 15.16)]
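The `parse_coordinates` behavior tested above can be sketched in a few lines; this is only an illustration consistent with the tests shown, not pygml's real implementation (which lives in `pygml.basics`):

```python
def parse_coordinates(value, cs=',', ts=' ', decimal='.'):
    # Split on the coordinate separator (cs), then on the tuple separator (ts),
    # normalizing a custom decimal mark before converting to float.
    return [
        tuple(
            float(token.replace(decimal, '.'))
            for token in coord.strip().split(ts)
        )
        for coord in value.split(cs)
    ]

print(parse_coordinates('12.34 56.7,89.10 11.12'))  # [(12.34, 56.7), (89.1, 11.12)]
```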
| 28.674699 | 78 | 0.596218 | 406 | 2,380 | 3.401478 | 0.133005 | 0.083997 | 0.108617 | 0.12672 | 0.739319 | 0.707458 | 0.66908 | 0.632875 | 0.545981 | 0.541637 | 0 | 0.228741 | 0.219328 | 2,380 | 82 | 79 | 29.02439 | 0.514532 | 0.112185 | 0 | 0.23913 | 0 | 0 | 0.116834 | 0.031474 | 0 | 0 | 0 | 0 | 0.304348 | 1 | 0.108696 | false | 0 | 0.043478 | 0 | 0.152174 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5a809828287e6e656fcaa5d30b9cc2a20b94aa56 | 130 | py | Python | homework_day_4/2.py | wxz896446511/Manyu | 5974ba3a3f27fbaf53bf9f876150ff764a70ac46 | [
"Apache-2.0"
] | 4 | 2018-09-26T00:48:02.000Z | 2020-08-21T02:06:59.000Z | homework_day_4/2.py | wxz896446511/Manyu | 5974ba3a3f27fbaf53bf9f876150ff764a70ac46 | [
"Apache-2.0"
] | null | null | null | homework_day_4/2.py | wxz896446511/Manyu | 5974ba3a3f27fbaf53bf9f876150ff764a70ac46 | [
"Apache-2.0"
] | 1 | 2020-08-21T02:07:04.000Z | 2020-08-21T02:07:04.000Z | def sumDigits(n):
s=0
while n!=0:
s=s+n%10
n=n/10
return s
x=eval(raw_input('>>'))
print(sumDigits(x))
| 13 | 23 | 0.515385 | 24 | 130 | 2.75 | 0.541667 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 0.292308 | 130 | 9 | 24 | 14.444444 | 0.652174 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0.125 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ce453122e8ec87a8f9ab8935d5e27595c30719d4 | 1,691 | py | Python | src/config.py | bcgov/mssql-csv-import-tool | ef3fe484513e6e60c6ef980c09bcef1df9e6db53 | [
"Apache-2.0"
] | 1 | 2021-06-24T20:59:02.000Z | 2021-06-24T20:59:02.000Z | src/config.py | bcgov/mssql-csv-import-tool | ef3fe484513e6e60c6ef980c09bcef1df9e6db53 | [
"Apache-2.0"
] | 3 | 2021-06-17T16:11:29.000Z | 2021-06-24T20:23:57.000Z | src/config.py | bcgov/mssql-csv-import-tool | ef3fe484513e6e60c6ef980c09bcef1df9e6db53 | [
"Apache-2.0"
] | null | null | null | import os
from dotenv import load_dotenv
basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv()
class Config:
TEST_DB_HOST = os.getenv('TEST_DB_HOST')
TEST_DB_NAME = os.getenv('TEST_DB_NAME')
TEST_DB_USERNAME = os.getenv('TEST_DB_USERNAME')
TEST_DB_PASSWORD = os.getenv('TEST_DB_PASSWORD')
# Path of destination Windows share from local perspective
TEST_SHARE_LOCAL = os.getenv('TEST_SHARE_LOCAL')
# Path of destination Windows share from database server perspective
TEST_SHARE_DB = os.getenv('TEST_SHARE_DB')
# BI PRODUCTION DATABASE
PROD_DB_HOST = os.getenv('PROD_DB_HOST')
PROD_DB_NAME = os.getenv('PROD_DB_NAME')
PROD_DB_USERNAME = os.getenv('PROD_DB_USERNAME')
PROD_DB_PASSWORD = os.getenv('PROD_DB_PASSWORD')
# Path of destination Windows share from local perspective
PROD_SHARE_LOCAL = os.getenv('PROD_SHARE_LOCAL')
# Path of destination Windows share from database server perspective
PROD_SHARE_DB = os.getenv('PROD_SHARE_DB')
# THE ODBC DRIVER MUST BE INSTALLED IN THE CONTAINER
ODBC_DRIVER = 'ODBC Driver 17 for SQL Server'
# Pandas options
CHUNK_SIZE = 5000
TEMPORARY_TABLE_NAME = '#temporary_data'
# Number of seconds to wait before bulk importing the temporary file
BULK_IMPORT_WAIT = int(os.getenv('BULK_IMPORT_WAIT', '10'))
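The `BULK_IMPORT_WAIT` line above reads an environment variable with a string default and casts it; a standalone sketch of that pattern (the variable name here is purely illustrative):

```python
import os

# Read an env var with a string default, then cast to int — the same
# pattern Config uses for BULK_IMPORT_WAIT. DEMO_WAIT is a made-up name.
os.environ.pop("DEMO_WAIT", None)  # ensure the default path is exercised
wait = int(os.getenv("DEMO_WAIT", "10"))
print(wait)  # 10
```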
| 42.275 | 83 | 0.595506 | 200 | 1,691 | 4.72 | 0.29 | 0.110169 | 0.076271 | 0.059322 | 0.269068 | 0.269068 | 0.269068 | 0.269068 | 0.269068 | 0.269068 | 0 | 0.007156 | 0.338853 | 1,691 | 39 | 84 | 43.358974 | 0.837209 | 0.238321 | 0 | 0 | 0 | 0 | 0.181392 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.095238 | 0.142857 | 0 | 0.952381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
ce500e92c8647424cdc0eb842d7769b26f865f63 | 150 | py | Python | atcoder/corp/code-festival-2017-qualc_a.py | knuu/competitive-programming | 16bc68fdaedd6f96ae24310d697585ca8836ab6e | [
"MIT"
] | 1 | 2018-11-12T15:18:55.000Z | 2018-11-12T15:18:55.000Z | atcoder/corp/code-festival-2017-qualc_a.py | knuu/competitive-programming | 16bc68fdaedd6f96ae24310d697585ca8836ab6e | [
"MIT"
] | null | null | null | atcoder/corp/code-festival-2017-qualc_a.py | knuu/competitive-programming | 16bc68fdaedd6f96ae24310d697585ca8836ab6e | [
"MIT"
] | null | null | null | def main() -> None:
S = input()
print("Yes" if "AC" in [S[i:i+2] for i in range(len(S)-1)] else "No")
if __name__ == '__main__':
main()
| 18.75 | 73 | 0.526667 | 27 | 150 | 2.62963 | 0.703704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017699 | 0.246667 | 150 | 7 | 74 | 21.428571 | 0.610619 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.2 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ce504524211e0f392f513e142031803b1b395bcd | 237 | py | Python | exercices/33.py | DegrangeM/alien-python | e0ef81bc87795a27536ce35a5117aabae78f95de | [
"MIT"
] | 4 | 2021-09-17T20:09:33.000Z | 2021-11-18T08:52:48.000Z | exercices/33.py | DegrangeM/alien-python | e0ef81bc87795a27536ce35a5117aabae78f95de | [
"MIT"
] | null | null | null | exercices/33.py | DegrangeM/alien-python | e0ef81bc87795a27536ce35a5117aabae78f95de | [
"MIT"
] | null | null | null | from libs.alien import *
gauche(6)
bas(3)
for loop in range(4) :
droite()
if colonne() > 5 :
for loop in range(4) :
haut()
else :
for loop in range(2) :
bas()
droite()
bas(2)
ss() | 14.8125 | 30 | 0.481013 | 34 | 237 | 3.352941 | 0.617647 | 0.184211 | 0.236842 | 0.368421 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047945 | 0.383966 | 237 | 16 | 31 | 14.8125 | 0.732877 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ce553cde1a42544a1356cb8a117f94a4681ef4cd | 70 | py | Python | Python/4.py | manishaverma1012/programs | dd77546219eab2f2ee81dd0d599b78ebd8f95957 | [
"MIT"
] | null | null | null | Python/4.py | manishaverma1012/programs | dd77546219eab2f2ee81dd0d599b78ebd8f95957 | [
"MIT"
] | null | null | null | Python/4.py | manishaverma1012/programs | dd77546219eab2f2ee81dd0d599b78ebd8f95957 | [
"MIT"
] | null | null | null | i= 1
while i<=3:
print("Guess:", i)
i=i+1
print("sorry you failed") | 14 | 26 | 0.6 | 15 | 70 | 2.8 | 0.6 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0.171429 | 70 | 5 | 26 | 14 | 0.672414 | 0 | 0 | 0 | 0 | 0 | 0.309859 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ce5e297899c4f0312b4e41018c90658d822fcb18 | 815 | py | Python | src/wellsfargo/fraud/__init__.py | thelabnyc/django-oscar-wfrs | 9abd4ecbdafd597407fdf60657103cb5d29c4c8b | [
"0BSD"
] | 1 | 2021-02-08T05:54:56.000Z | 2021-02-08T05:54:56.000Z | src/wellsfargo/fraud/__init__.py | thelabnyc/django-oscar-wfrs | 9abd4ecbdafd597407fdf60657103cb5d29c4c8b | [
"0BSD"
] | 24 | 2019-12-04T21:37:01.000Z | 2022-03-11T23:16:20.000Z | src/wellsfargo/fraud/__init__.py | thelabnyc/django-oscar-wfrs | 9abd4ecbdafd597407fdf60657103cb5d29c4c8b | [
"0BSD"
] | 2 | 2016-05-31T10:02:35.000Z | 2016-12-19T11:29:37.000Z | from django.core.exceptions import ImproperlyConfigured
from ..settings import WFRS_FRAUD_PROTECTION
import importlib
def screen_transaction(request, order):
klass = WFRS_FRAUD_PROTECTION["fraud_protection"]
kwargs = WFRS_FRAUD_PROTECTION.get("fraud_protection_kwargs", {})
return _get_fraud_screener(klass, kwargs).screen_transaction(request, order)
def _get_fraud_screener(klass, kwargs):
FraudScreener = _load_cls_from_abs_path(klass)
screener = FraudScreener(**kwargs)
return screener
def _load_cls_from_abs_path(path):
pkgname, fnname = path.rsplit(".", 1)
try:
pkg = importlib.import_module(pkgname)
return getattr(pkg, fnname)
except (ImportError, AttributeError):
raise ImproperlyConfigured("Could not import class at path {}".format(path))
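The `_load_cls_from_abs_path` helper above implements dotted-path loading; the same idea works standalone with only the standard library (shown here loading `json.loads` as a neutral example):

```python
import importlib

def load_object(dotted_path):
    # Split "pkg.mod.Name" into a module path and an attribute name,
    # import the module, then fetch the attribute — as in the helper above.
    modname, attrname = dotted_path.rsplit(".", 1)
    module = importlib.import_module(modname)
    return getattr(module, attrname)

loads = load_object("json.loads")
print(loads('{"a": 1}'))  # {'a': 1}
```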
| 32.6 | 84 | 0.754601 | 96 | 815 | 6.114583 | 0.447917 | 0.127768 | 0.097104 | 0.098808 | 0.153322 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001451 | 0.154601 | 815 | 24 | 85 | 33.958333 | 0.850508 | 0 | 0 | 0 | 0 | 0 | 0.089571 | 0.028221 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
ce6895f4cb84a021708a3855bf7c157c6c951976 | 629 | py | Python | output.py | codesmithyide/bootstrap | 1f2e06ee6ddebf9ec51d36832646bdf1487f12ad | [
"MIT"
] | null | null | null | output.py | codesmithyide/bootstrap | 1f2e06ee6ddebf9ec51d36832646bdf1487f12ad | [
"MIT"
] | 56 | 2017-06-04T10:54:24.000Z | 2019-02-23T18:16:58.000Z | output.py | CodeSmithyIDE/Bootstrap | 1f2e06ee6ddebf9ec51d36832646bdf1487f12ad | [
"MIT"
] | null | null | null | import json
class Output:
def __init__(self):
self.current_step = 1
with open('predefined_strings.json', 'r') as file:
self.predefined_strings = json.loads(file.read())
def print_predefined_string(self, string_id):
print(self.predefined_strings[string_id])
def print_main_title(self):
title = self.predefined_strings["title"]
print(title)
print('-' * len(title))
print("")
def print_step_title(self, title):
print("Step " + str(self.current_step) + ": " + title, flush=True)
def next_step(self):
self.current_step += 1
| 26.208333 | 74 | 0.620032 | 78 | 629 | 4.74359 | 0.371795 | 0.183784 | 0.121622 | 0.102703 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004255 | 0.252782 | 629 | 23 | 75 | 27.347826 | 0.782979 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0.036566 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0 | 0.058824 | 0 | 0.411765 | 0.470588 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
ce71bc8ddb487c4c4ee2974fa93d153940279479 | 5,545 | py | Python | algorithms/Gaussian.py | lucaspg96/pattern-recognition | ba78c90795e9074825436e5cf5d394cedf869225 | [
"MIT"
] | 1 | 2018-11-08T00:59:15.000Z | 2018-11-08T00:59:15.000Z | algorithms/Gaussian.py | lucaspg96/pattern-recognition | ba78c90795e9074825436e5cf5d394cedf869225 | [
"MIT"
] | null | null | null | algorithms/Gaussian.py | lucaspg96/pattern-recognition | ba78c90795e9074825436e5cf5d394cedf869225 | [
"MIT"
] | null | null | null | import numpy as np
from algorithms.utils import covariance
from algorithms.LearningAlgorithm import LearningAlgorithm
import math
from algorithms import utils as ut
class NormalNaiveBayes():
def fit(self,X,Y):
classes = {}
n = X.shape[0]
for (x,y) in zip(X,Y):
if not y in classes:
classes[y] = []
classes[y].append(x)
self.cells = {}
for k in classes:
data = np.array(classes[k])
m = np.mean(data,axis=0)
std = np.std(data,axis=0)
var = np.var(data,axis=0)
i_var = np.array([1/v if not v == 0 else 0 for v in var])
self.cells[k] = {
"icov": i_var,
"cov_det": np.prod(var),
"mean": m,
"std": std,
"prob_priori": data.shape[0]/n
}
def __prob__(self,x,k):
s = self.cells[k]["std"]
m = self.cells[k]["mean"]
ic = self.cells[k]["icov"]
cov_det = self.cells[k]["cov_det"]
prob_priori = self.cells[k]["prob_priori"]
z = x-m
if cov_det == 0:
cov_det = 1
return math.log(prob_priori) \
- 0.5*z.dot(ic*z) - 0.5*math.log(cov_det)
def predict(self,x):
distances = [(k,self.__prob__(x,k)) for k in self.cells]
distances = sorted(distances,key=lambda x: x[1], reverse=True)
return distances[0][0]
def score(self,X,Y):
predictions = [1 if self.predict(x)==y else 0 for x,y in zip(X,Y)]
return sum(predictions)/len(predictions)
class QuadraticGaussianClassifier(LearningAlgorithm):
def __init__(self,check_invertibility=False,pinv_mode="friedman"):
self.check_invertibility = check_invertibility
self.pinv_mode = pinv_mode
def fit(self,X,Y,itter=100):
classes = {}
n = X.shape[0]
for (x,y) in zip(X,Y):
if not y in classes:
classes[y] = []
classes[y].append(x)
self.cells = {}
if self.check_invertibility:
need_pinv = False
for k in classes:
data = np.array(classes[k])
m = np.mean(data,axis=0)
std = np.std(data,axis=0)
cov = ut.covariance(data.transpose())
invertibility, message = ut.is_invertible(cov)
if invertibility:
self.cells[k] = {
"icov": np.linalg.inv(cov),
"cov_det": np.linalg.det(cov),
"mean": m,
"std": std,
"prob_priori": data.shape[0]/n
}
else:
need_pinv = True
break
if need_pinv:
if self.pinv_mode == "friedman":
covs = ut.friedman_regularization(.5,1,classes)
                    for k in classes:
                        data = np.array(classes[k])  # recompute per class; previously reused stale data from the earlier loop
                        m = np.mean(data,axis=0)
                        std = np.std(data,axis=0)
cov = covs[k]
self.cells[k] = {
"icov": np.linalg.inv(cov),
"cov_det": np.linalg.det(cov),
"mean": m,
"std": std,
"prob_priori": data.shape[0]/n
}
elif self.pinv_mode == "pooled":
cov = ut.pooled_covariance(classes)
inv_cov = np.linalg.inv(cov)
                for k in classes:
                    data = np.array(classes[k])  # recompute per class; previously reused stale data from the earlier loop
                    m = np.mean(data,axis=0)
                    std = np.std(data,axis=0)
self.cells[k] = {
"icov":inv_cov,
"cov_det": np.linalg.det(cov),
"mean": m,
"std": std,
"prob_priori": data.shape[0]/n
}
else:
raise Exception("Invalid pinv method: {}".format(self.pinv_mode))
else:
for k in classes:
data = np.array(classes[k])
m = np.mean(data,axis=0)
std = np.std(data,axis=0)
cov = ut.covariance(data.transpose())
self.cells[k] = {
"icov": np.linalg.inv(cov),
"cov_det": np.linalg.det(cov),
"mean": m,
"std": std,
"prob_priori": data.shape[0]/n
}
def __prob__(self,x,k):
s = self.cells[k]["std"]
m = self.cells[k]["mean"]
ic = self.cells[k]["icov"]
cov_det = self.cells[k]["cov_det"]
prob_priori = self.cells[k]["prob_priori"]
z = x-m
return math.log(prob_priori) \
- 0.5*np.matmul(z, np.matmul(ic,z)) - 0.5*math.log(cov_det)
def predict(self,x):
distances = [(k,self.__prob__(x,k)) for k in self.cells]
distances = sorted(distances,key=lambda x: x[1], reverse=True)
return distances[0][0]
def score(self,X,Y):
predictions = [1 if self.predict(x)==y else 0 for x,y in zip(X,Y)]
return sum(predictions)/len(predictions) | 35.318471 | 85 | 0.435167 | 643 | 5,545 | 3.653188 | 0.144635 | 0.072797 | 0.063857 | 0.04172 | 0.672627 | 0.66241 | 0.66241 | 0.641124 | 0.641124 | 0.641124 | 0 | 0.014906 | 0.443463 | 5,545 | 157 | 86 | 35.318471 | 0.746273 | 0 | 0 | 0.701493 | 0 | 0 | 0.044717 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067164 | false | 0 | 0.037313 | 0 | 0.164179 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
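Both classifiers above score a class via the Gaussian log-posterior: log prior, minus half the squared Mahalanobis distance, minus half the log-determinant of the covariance. A self-contained numpy sketch of that score:

```python
import numpy as np

def gaussian_log_score(x, mean, cov, prior):
    # log p(k) - 0.5 * (x-m)^T C^{-1} (x-m) - 0.5 * log|C|,
    # dropping the -d/2 * log(2*pi) constant shared by all classes.
    z = x - mean
    return (np.log(prior)
            - 0.5 * z @ np.linalg.inv(cov) @ z
            - 0.5 * np.log(np.linalg.det(cov)))

x = np.array([0.0, 0.0])
s0 = gaussian_log_score(x, np.zeros(2), np.eye(2), 0.5)
s1 = gaussian_log_score(x, np.ones(2), np.eye(2), 0.5)
print(s0 > s1)  # True: x sits on the first class mean
```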
ce815d80a4756eb45c7dda47d3f543b7d704b448 | 862 | py | Python | tests/helpers.py | gnikonorov/pytest-dynamicrerun | 200f45830be5de3d088d092aaac2b11626c04668 | [
"MIT"
] | null | null | null | tests/helpers.py | gnikonorov/pytest-dynamicrerun | 200f45830be5de3d088d092aaac2b11626c04668 | [
"MIT"
] | 1 | 2020-08-10T00:58:07.000Z | 2020-08-10T03:47:55.000Z | tests/helpers.py | gnikonorov/pytest-dynamicrerun | 200f45830be5de3d088d092aaac2b11626c04668 | [
"MIT"
] | null | null | null | # Test helper functions and classes
class ParameterPassLevel:
FLAG = 0
INI_KEY = 1
MARKER = 2
def _assert_result_outcomes(
result, passed=0, skipped=0, failed=0, error=0, dynamic_rerun=0
):
outcomes = result.parseoutcomes()
_check_outcome_field(outcomes, "passed", passed)
_check_outcome_field(outcomes, "skipped", skipped)
_check_outcome_field(outcomes, "failed", failed)
_check_outcome_field(outcomes, "error", error)
_check_outcome_field(outcomes, "dynamicrerun", dynamic_rerun)
def _check_outcome_field(outcomes, field_name, expected_value):
field_value = outcomes.get(field_name, 0)
expected_value = int(expected_value)
assert (
field_value == expected_value
), "outcomes.{} has unexpected value. Expected '{}' but got '{}'".format(
field_name, expected_value, field_value
)
| 29.724138 | 77 | 0.715777 | 104 | 862 | 5.596154 | 0.375 | 0.123711 | 0.175258 | 0.257732 | 0.109966 | 0.109966 | 0 | 0 | 0 | 0 | 0 | 0.012748 | 0.180974 | 862 | 28 | 78 | 30.785714 | 0.811615 | 0.038283 | 0 | 0 | 0 | 0 | 0.116082 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.095238 | false | 0.142857 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
ce8500a76325b37723172e41d94a4b2cf48f2a9b | 1,599 | py | Python | tests/util.py | DPBayes/d3p | af3ba4eb5243494bd5e223c60e81f32a8dca1eab | [
"Apache-2.0"
] | 6 | 2021-05-07T06:42:47.000Z | 2022-03-10T13:19:54.000Z | tests/util.py | DPBayes/d3p | af3ba4eb5243494bd5e223c60e81f32a8dca1eab | [
"Apache-2.0"
] | null | null | null | tests/util.py | DPBayes/d3p | af3ba4eb5243494bd5e223c60e81f32a8dca1eab | [
"Apache-2.0"
] | 2 | 2021-03-26T04:32:17.000Z | 2022-02-09T16:30:05.000Z | # SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: © 2019- d3p Developers and their Assignees
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import jax
import jax.numpy as jnp
from functools import reduce
def _and_reduce(iterable):
return reduce(lambda x, y: x and y, iterable, True)
def do_trees_have_same_structure(a, b):
"""Returns True if two jax trees have the same structure.
"""
return jax.tree_structure(a) == jax.tree_structure(b)
def do_trees_have_same_shape(a, b):
"""Returns True if two jax trees have the same structure and the shapes of
all corresponding leaves are identical.
"""
return do_trees_have_same_structure(a, b) and _and_reduce(
jnp.shape(x) == jnp.shape(y)
for x, y, in zip(jax.tree_leaves(a), jax.tree_leaves(b))
)
def are_trees_close(a, b):
"""Returns True if two jax trees have the same structure and all values are
close.
"""
return do_trees_have_same_shape(a, b) and _and_reduce(
jnp.allclose(x, y)
for x, y, in zip(jax.tree_leaves(a), jax.tree_leaves(b))
)
| 32.632653 | 79 | 0.714822 | 259 | 1,599 | 4.301158 | 0.397683 | 0.056553 | 0.039497 | 0.05386 | 0.324955 | 0.308797 | 0.281867 | 0.199282 | 0.199282 | 0.199282 | 0 | 0.00858 | 0.198249 | 1,599 | 48 | 80 | 33.3125 | 0.859594 | 0.548468 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.176471 | 0.058824 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
ce859234f88358cc2190840873b79018e1951331 | 686 | py | Python | tests/testRunPolicyIn1DEnv.py | ningtangla/-ModellingJointInferenceOfPhysicsAndMind | 6da814c3aff77f23dd296bf0a050dfe7c69ac406 | [
"MIT"
] | null | null | null | tests/testRunPolicyIn1DEnv.py | ningtangla/-ModellingJointInferenceOfPhysicsAndMind | 6da814c3aff77f23dd296bf0a050dfe7c69ac406 | [
"MIT"
] | null | null | null | tests/testRunPolicyIn1DEnv.py | ningtangla/-ModellingJointInferenceOfPhysicsAndMind | 6da814c3aff77f23dd296bf0a050dfe7c69ac406 | [
"MIT"
] | null | null | null | import sys
sys.path.append('..')
import unittest
import numpy as np
from ddt import ddt, data, unpack
from anytree import AnyNode as Node
# Local import
from runPolicyInMujoco import evaluateMeanEpisodeLength, SampleTrajectory
@ddt
class TestRunPolicyIn1DEnv(unittest.TestCase):
@data(([[(3, 1), (4, 1), (5, 1), (6, -1)]], 4), ([[]], 0), ([[(3, 1), (4, 1), (5, 1), (6, -1)], [(3, 1), (4, 1), (5, 1)], [(3, 1), (4, 1)], [(3, 1)], []], 2))
@unpack
def testEvaluateMeanEpisodeLength(self, trajectory, groundTruthMeanEpisodeLength):
meanEpisodeLength = evaluateMeanEpisodeLength(trajectory)
self.assertEqual(meanEpisodeLength, groundTruthMeanEpisodeLength)
| 32.666667 | 162 | 0.666181 | 80 | 686 | 5.7125 | 0.45 | 0.021882 | 0.026258 | 0.035011 | 0.056893 | 0.04814 | 0.035011 | 0.035011 | 0 | 0 | 0 | 0.055363 | 0.157434 | 686 | 20 | 163 | 34.3 | 0.735294 | 0.017493 | 0 | 0 | 0 | 0 | 0.002985 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.071429 | false | 0 | 0.428571 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
cea605fedc798597b790af87fb698cba62410fcd | 3,309 | py | Python | airplayer/mediabackends/base_media_backend.py | timgates42/Airplayer | cb6f6a393710a31f934e3c09de4f689158d23b50 | [
"BSD-4-Clause"
] | 86 | 2015-01-15T14:26:49.000Z | 2021-12-25T11:01:25.000Z | airplayer/mediabackends/base_media_backend.py | timgates42/Airplayer | cb6f6a393710a31f934e3c09de4f689158d23b50 | [
"BSD-4-Clause"
] | 3 | 2016-06-22T22:17:16.000Z | 2021-06-30T21:53:25.000Z | airplayer/mediabackends/base_media_backend.py | timgates42/Airplayer | cb6f6a393710a31f934e3c09de4f689158d23b50 | [
"BSD-4-Clause"
] | 16 | 2015-04-26T14:41:41.000Z | 2021-06-30T12:05:37.000Z | import logging
import base64
import urllib2
class BaseMediaBackend(object):
def __init__(self, host, port, username=None, password=None):
self._host = host
self._port = port
self._username = username
self._password = password
self.log = logging.getLogger('airplayer')
def _http_request(self, req):
"""
        Perform an HTTP request and apply HTTP Basic authentication headers,
        if a username and password are supplied in settings.
"""
if self._username and self._password:
base64string = base64.encodestring('%s:%s' % (self._username, self._password))[:-1]
req.add_header("Authorization", "Basic %s" % base64string)
try:
return urllib2.urlopen(req).read()
except urllib2.URLError, e:
clsname = self.__class__.__name__
name = clsname.replace('MediaBackend', '')
self.log.warning("Couldn't connect to %s at %s, are you sure it's running?", name, self.host_string())
return None
def host_string(self):
"""
Convenience method, get a string with the current host and port.
@return <host>:<port>
"""
return '%s:%d' % (self._host, self._port)
def cleanup(self):
"""
Called when airplayer is about to shutdown.
"""
raise NotImplementedError
def stop_playing(self):
"""
Stop playing media.
"""
raise NotImplementedError
def show_picture(self, data):
"""
Show a picture.
@param data raw picture data.
"""
raise NotImplementedError
def play_movie(self, url):
"""
Play a movie from the given location.
"""
raise NotImplementedError
def notify_started(self):
"""
Notify the user that Airplayer has started.
"""
raise NotImplementedError
def pause(self):
"""
Pause media playback.
"""
raise NotImplementedError
def play(self):
"""
Play media
"""
raise NotImplementedError
def get_player_position(self):
"""
Get the current videoplayer positon.
@returns int current position, int total length
"""
raise NotImplementedError
def is_playing(self):
"""
Return wether the backend is currently playing any media.
@returns boolean
"""
raise NotImplementedError
def set_player_position(self, position):
"""
Set the current videoplayer position.
@param position integer in seconds
"""
raise NotImplementedError
def set_player_position_percentage(self, percentage_position):
"""
Set current videoplayer position, in percentage.
@param percentage_position float
"""
raise NotImplementedError
def set_start_position(self, percentage_position):
"""
Play media from the given location
@param percentage_position float
"""
raise NotImplementedError | 27.575 | 114 | 0.561801 | 317 | 3,309 | 5.722397 | 0.37224 | 0.158765 | 0.163727 | 0.049614 | 0.105843 | 0.105843 | 0 | 0 | 0 | 0 | 0 | 0.005636 | 0.356603 | 3,309 | 120 | 115 | 27.575 | 0.846407 | 0 | 0 | 0.255319 | 0 | 0 | 0.052124 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.085106 | 0.06383 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
cec19968392e54c845e6e9743eb47db012fb8bc8 | 2,877 | py | Python | conftest.py | StanfordVLSI/dragonphy2 | 996cc14f800b01c5ec0534e79dd2340f4de5e704 | [
"Apache-2.0"
] | 22 | 2020-04-27T18:06:50.000Z | 2022-02-12T12:29:43.000Z | conftest.py | StanfordVLSI/dragonphy2 | 996cc14f800b01c5ec0534e79dd2340f4de5e704 | [
"Apache-2.0"
] | 130 | 2020-04-27T20:05:52.000Z | 2021-07-29T22:12:57.000Z | conftest.py | StanfordVLSI/dragonphy2 | 996cc14f800b01c5ec0534e79dd2340f4de5e704 | [
"Apache-2.0"
] | 6 | 2020-08-15T16:31:55.000Z | 2021-04-13T15:47:29.000Z | import pytest
def pytest_addoption(parser):
parser.addoption(
'--dump_waveforms', action='store_true', help='Dump waveforms from test.'
)
parser.addoption(
'--simulator_name', default=None, type=str, help='Name of the simulator to use.'
)
parser.addoption(
'--board_name', default='ZC702', type=str, help='Name of the FPGA board.'
)
parser.addoption(
'--ser_port', default='/dev/ttyUSB2', type=str, help='USB serial path.'
)
parser.addoption(
'--ffe_length', default=4, type=int, help='Number of FFE coefficients per channel.'
)
parser.addoption(
'--emu_clk_freq', default=5.0e6, type=float, help='Frequency of emulator clock (Hz)'
)
parser.addoption(
'--prbs_test_dur', default=10.0, type=float, help='Length of time of the PRBS emulation test.'
)
parser.addoption(
'--fpga_sim_ctrl', default='UART_ZYNQ', type=str, help='Emulation control style (UART or VIO)'
)
parser.addoption(
'--jitter_rms', default=0, type=float, help='RMS sampling jitter (seconds)'
)
parser.addoption(
'--noise_rms', default=0, type=float, help='RMS ADC noise (Volts)'
)
parser.addoption(
'--flatten_hierarchy', default='rebuilt', type=str, help='Vivado synthesis option.'
)
parser.addoption(
'--chan_tau', default=25e-12, type=float, help='Time constant of channel used in emulation (seconds).'
)
parser.addoption(
'--chan_delay', default=31.25e-12, type=float, help='Delay of channel used in emulation (seconds).'
)
@pytest.fixture
def dump_waveforms(request):
return request.config.getoption('--dump_waveforms')
@pytest.fixture
def simulator_name(request):
return request.config.getoption('--simulator_name')
@pytest.fixture
def board_name(request):
return request.config.getoption('--board_name')
@pytest.fixture
def ser_port(request):
return request.config.getoption('--ser_port')
@pytest.fixture
def ffe_length(request):
return request.config.getoption('--ffe_length')
@pytest.fixture
def emu_clk_freq(request):
return request.config.getoption('--emu_clk_freq')
@pytest.fixture
def prbs_test_dur(request):
return request.config.getoption('--prbs_test_dur')
@pytest.fixture
def fpga_sim_ctrl(request):
return request.config.getoption('--fpga_sim_ctrl')
@pytest.fixture
def jitter_rms(request):
return request.config.getoption('--jitter_rms')
@pytest.fixture
def noise_rms(request):
return request.config.getoption('--noise_rms')
@pytest.fixture
def flatten_hierarchy(request):
return request.config.getoption('--flatten_hierarchy')
@pytest.fixture
def chan_tau(request):
return request.config.getoption('--chan_tau')
@pytest.fixture
def chan_delay(request):
return request.config.getoption('--chan_delay')
| 26.88785 | 110 | 0.688912 | 363 | 2,877 | 5.319559 | 0.253444 | 0.100984 | 0.107716 | 0.175039 | 0.346453 | 0.200932 | 0.027965 | 0 | 0 | 0 | 0 | 0.009644 | 0.171011 | 2,877 | 106 | 111 | 27.141509 | 0.8 | 0 | 0 | 0.325 | 0 | 0 | 0.280153 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.175 | false | 0 | 0.0125 | 0.1625 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
cec5e4897187a98b9f0c6e12d180ee7a9333e6d7 | 5,117 | py | Python | rpi/fish_stream.py | FishyByte/SushiRNG | 79de6aff2acf7f0165ecc282beb9adb6f96b4ab8 | [
"MIT"
] | 5 | 2016-07-23T16:13:08.000Z | 2019-09-06T10:29:03.000Z | rpi/fish_stream.py | FishyByte/SushiRNG | 79de6aff2acf7f0165ecc282beb9adb6f96b4ab8 | [
"MIT"
] | 16 | 2016-06-23T01:55:31.000Z | 2016-08-09T07:05:24.000Z | rpi/fish_stream.py | FishyByte/SushiRNG | 79de6aff2acf7f0165ecc282beb9adb6f96b4ab8 | [
"MIT"
] | 3 | 2016-06-22T04:34:50.000Z | 2021-05-14T17:17:44.000Z | #!/usr/bin/python
# generates bits from fish movements
# - Chris Asakawa
# Copyright (c) 2016 Christopher Asakawa, Nicholas McHale, Matthew O'Brien, Corey Aing
# This code is available under the "MIT License".
# Please see the file COPYING in this distribution
# for license terms.
from bitstream import BitStream
from numpy import *
import time
class FishStream:
# init a BitStream to hold the bit values
def __init__(self):
self.stream = BitStream()
self.fish_positions = []
self.zero_count = 0
self.one_count = 0
    # helper function: add a zero to the stream and bump the zero count
def add_zero(self):
self.stream.write(False)
self.zero_count += 1
    # helper function: add a one to the stream and bump the one count
def add_one(self):
self.stream.write(True)
self.one_count += 1
def add_position(self, fish_id, x, y):
        # if velocity and acceleration are greater than these values
        # then flip the bit value.
velocity_threshold = 5
acceleration_threshold = 5
        # if the current index is empty, indexing will throw an
        # IndexError, in which case we init the current fish's
        # position at that index.
try:
# grab the past position of the current fish # - PREVIOUS -
x_previous = self.fish_positions[fish_id][0] # position of x
y_previous = self.fish_positions[fish_id][1] # position of y
vx_previous = self.fish_positions[fish_id][2] # velocity of x
vy_previous = self.fish_positions[fish_id][3] # velocity of y
ax = ay = 0
            # determine the current speed (magnitude only)
            vx = abs(x - x_previous)
            vy = abs(y - y_previous)
            # determine the current acceleration (magnitude only)
            ax = abs(vx - vx_previous)
            ay = abs(vy - vy_previous)
            # current fish moved to the right: low acceleration yields a one
            if x_previous > x:
                if ax < acceleration_threshold:
                    self.add_one()
                else:
                    self.add_zero()
            # current fish moved to the left: high acceleration yields a one
            elif x_previous < x:
                if ax > acceleration_threshold:
                    self.add_one()
                else:
                    self.add_zero()
            # current fish moved up the screen
            if y_previous < y:
                if ay < acceleration_threshold:
                    self.add_one()
                else:
                    self.add_zero()
            # current fish moved down the screen
            elif y_previous > y:
                if ay > acceleration_threshold:
                    self.add_one()
                else:
                    self.add_zero()
# overwrite previous positions with current
self.fish_positions[fish_id] = [x, y, vx, vy]
except IndexError:
            # new fish found, append it to the list
self.fish_positions.append([x, y, 0, 0])
def print_stream(self):
        print(self.stream)
# returns two values, probability of zero and one
def get_probabilities(self):
# calculate total (we use it twice)
total = self.zero_count + self.one_count
if total == 0:
return 0, 0
# returns the probability of a zero and a one
return float(self.zero_count) / total, \
float(self.one_count) / total
def get_bits(self, length):
        while len(self.stream) < length:
time.sleep(0.1)
return_bits = self.stream.read(length)
self.zero_count -= str(return_bits).count('0')
self.one_count -= str(return_bits).count('1')
return return_bits
def get_length(self):
return len(str(self.stream))
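The x-axis half of `add_position` boils down to a small decision rule, sketched here as a standalone function (`prev` is a stand-in for the `(x, vx)` state kept in `fish_positions`; the threshold default matches the one above). Because both velocity branches in each direction apply the same acceleration test, only the acceleration magnitude decides the bit:

```python
def movement_bit(prev, cur_x, a_threshold=5):
    """Return a 1/0 bit from a fish's horizontal move, or None if it stayed.

    prev is (x_prev, vx_prev); cur_x is the new x position.
    """
    x_prev, vx_prev = prev
    vx = abs(cur_x - x_prev)   # current speed (direction-independent)
    ax = abs(vx - vx_prev)     # acceleration magnitude
    if x_prev > cur_x:         # moved one way: low acceleration -> 1
        return 1 if ax < a_threshold else 0
    elif x_prev < cur_x:       # moved the other way: high acceleration -> 1
        return 1 if ax > a_threshold else 0
    return None                # no horizontal movement, no bit

bit = movement_bit((10, 3), 8)   # vx=2, ax=1 below threshold -> 1
```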
| 33.012903 | 86 | 0.510455 | 576 | 5,117 | 4.375 | 0.241319 | 0.044444 | 0.079365 | 0.088889 | 0.330556 | 0.28254 | 0.233333 | 0.233333 | 0.233333 | 0.233333 | 0 | 0.008103 | 0.421145 | 5,117 | 154 | 87 | 33.227273 | 0.842674 | 0.225132 | 0 | 0.442308 | 1 | 0 | 0.000509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028846 | null | null | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
cec95d6163d06ad4abdd8d8cc63ae618e216b52f | 887 | py | Python | cognises/permissions_req.py | jrlaforge/cognises-flask | 8616a3f10eea5e0472aa8a9a16326082b9af77b5 | [
"MIT"
] | null | null | null | cognises/permissions_req.py | jrlaforge/cognises-flask | 8616a3f10eea5e0472aa8a9a16326082b9af77b5 | [
"MIT"
] | null | null | null | cognises/permissions_req.py | jrlaforge/cognises-flask | 8616a3f10eea5e0472aa8a9a16326082b9af77b5 | [
"MIT"
] | null | null | null | from functools import wraps
def permission_required(group_detail):
""" A flask decorator to check if the user has the permission to access a
particular route.
A json loaded data for the details of the groups is passed which has a key
called "allowed_functions" which contains a list of all the functions
allowed for that particular group.
"""
def decorator(t):
@wraps(t)
def decorated(details, *args, **kwargs):
for each_obj in group_detail:
if each_obj['group_name'] == details['user_group'][0]:
allow = each_obj['allowed_functions']
if t.__name__ not in allow:
return t({'message': 'Forbidden access',
'status': 401})
return t(details, *args, **kwargs)
return decorated
return decorator
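A standalone usage sketch of the decorator (the group names and the `delete_user` view are hypothetical; the decorator body is copied from above so the snippet runs by itself):

```python
from functools import wraps

def permission_required(group_detail):
    def decorator(t):
        @wraps(t)
        def decorated(details, *args, **kwargs):
            for each_obj in group_detail:
                if each_obj['group_name'] == details['user_group'][0]:
                    allow = each_obj['allowed_functions']
                    if t.__name__ not in allow:
                        return t({'message': 'Forbidden access',
                                  'status': 401})
            return t(details, *args, **kwargs)
        return decorated
    return decorator

# hypothetical group data, mirroring the expected "allowed_functions" shape
groups = [
    {'group_name': 'admin', 'allowed_functions': ['delete_user']},
    {'group_name': 'viewer', 'allowed_functions': ['list_users']},
]

@permission_required(groups)
def delete_user(details):
    return details

print(delete_user({'user_group': ['admin']}))   # passes the details through
print(delete_user({'user_group': ['viewer']}))  # blocked: 401 payload instead
```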
| 35.48 | 78 | 0.596392 | 109 | 887 | 4.724771 | 0.504587 | 0.040777 | 0.066019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0067 | 0.326945 | 887 | 24 | 79 | 36.958333 | 0.855946 | 0.302142 | 0 | 0 | 0 | 0 | 0.111864 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0c6891d7becb026d69677cb6ac7980cb1cd8fc97 | 622 | py | Python | spHNF_manip/find_equivalent_basis.py | JohnEdChristensen/NiggliOptimize | e90b8c66e7b7e560c460502ee24991af775c625b | [
"MIT"
] | null | null | null | spHNF_manip/find_equivalent_basis.py | JohnEdChristensen/NiggliOptimize | e90b8c66e7b7e560c460502ee24991af775c625b | [
"MIT"
] | null | null | null | spHNF_manip/find_equivalent_basis.py | JohnEdChristensen/NiggliOptimize | e90b8c66e7b7e560c460502ee24991af775c625b | [
"MIT"
] | null | null | null | from math_mat_code_gen import *
from opf_python import niggli_lat_id
import numpy
basis = [[1,2,2],[2,1,2],[4,3,3]]
print(niggli_lat_id.niggli_id(numpy.transpose(basis)))
print()
print(format_basis(basis))
print(format_pg(generate_pg(basis)))
label = "n24_hR"
equivalentBasis = find_equivalent_basis(basis, label)
print "-------------------------------------------------------------------------"
#if equivalentBasis[0][0] == 0:
# print "No Equivalent Basis Found"
#else:
print(niggli_lat_id.niggli_id(numpy.transpose(equivalentBasis)))
print()
print(format_basis(equivalentBasis))
print(format_pg(generate_pg(equivalentBasis)))
| 31.1 | 81 | 0.697749 | 86 | 622 | 4.802326 | 0.383721 | 0.106538 | 0.079903 | 0.077482 | 0.2954 | 0.184019 | 0.184019 | 0.184019 | 0 | 0 | 0 | 0.024518 | 0.081994 | 622 | 19 | 82 | 32.736842 | 0.698774 | 0.110932 | 0 | 0.133333 | 0 | 0 | 0.143636 | 0.132727 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.6 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
0c7166eef44aee2b2176045db3cba53d93891e65 | 393 | py | Python | modules/agents/actions/eat_cellular_grass.py | ofavre/cellulart | b4152a3f74117e7f14df51ffc56c11f0f461c427 | [
"BSD-3-Clause"
] | null | null | null | modules/agents/actions/eat_cellular_grass.py | ofavre/cellulart | b4152a3f74117e7f14df51ffc56c11f0f461c427 | [
"BSD-3-Clause"
] | null | null | null | modules/agents/actions/eat_cellular_grass.py | ofavre/cellulart | b4152a3f74117e7f14df51ffc56c11f0f461c427 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# License: See LICENSE file.
required_states = ['position']
required_matrices = ['cellular_binary_grass']
def run(name, world, matrices, states, extra_life=10):
y,x = states['position']
#if matrices['cellular_binary_grass'][int(y),int(x)]:
# Eat the cell under the agent's position
matrices['cellular_binary_grass'][int(y),int(x)] = False
#endif
| 28.071429 | 60 | 0.679389 | 55 | 393 | 4.690909 | 0.581818 | 0.186047 | 0.255814 | 0.313953 | 0.271318 | 0.271318 | 0.271318 | 0.271318 | 0 | 0 | 0 | 0.009063 | 0.157761 | 393 | 13 | 61 | 30.230769 | 0.770393 | 0.368957 | 0 | 0 | 0 | 0 | 0.239669 | 0.173554 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0c72f04ff28b954d4764000ee8337335f12eb703 | 467 | py | Python | mayan/apps/tags/tests.py | Dave360-crypto/mayan-edms | 9cd37537461347f79ff0429e4b8b16fd2446798d | [
"Apache-2.0"
] | 3 | 2020-02-03T11:58:51.000Z | 2020-10-20T03:52:21.000Z | mayan/apps/tags/tests.py | Dave360-crypto/mayan-edms | 9cd37537461347f79ff0429e4b8b16fd2446798d | [
"Apache-2.0"
] | null | null | null | mayan/apps/tags/tests.py | Dave360-crypto/mayan-edms | 9cd37537461347f79ff0429e4b8b16fd2446798d | [
"Apache-2.0"
] | 2 | 2020-10-24T11:10:06.000Z | 2021-03-03T20:05:38.000Z | from django.test import TestCase
from taggit.models import Tag
from .literals import COLOR_RED
from .models import TagProperties
class TagTestCase(TestCase):
def setUp(self):
self.tag = Tag(name='test')
self.tag.save()
self.tp = TagProperties(tag=self.tag, color=COLOR_RED)
self.tp.save()
def runTest(self):
        self.assertEqual(self.tag.name, 'test')
        self.assertEqual(self.tp.get_color_code(), 'red')
| 24.578947 | 62 | 0.678801 | 62 | 467 | 5.048387 | 0.387097 | 0.089457 | 0.070288 | 0.095847 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205567 | 467 | 18 | 63 | 25.944444 | 0.843666 | 0 | 0 | 0 | 0 | 0 | 0.023555 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.307692 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0c895a8fc739cd8fc18b77e36465f0786dfca548 | 5,262 | py | Python | YearbookRevampLibrary/AutoAlignerModule.py | RajatGupta02/YearbookRevampLibrary | 6aab8d73c7261e68a0814ad24476adaedc67e6d5 | [
"MIT"
] | null | null | null | YearbookRevampLibrary/AutoAlignerModule.py | RajatGupta02/YearbookRevampLibrary | 6aab8d73c7261e68a0814ad24476adaedc67e6d5 | [
"MIT"
] | null | null | null | YearbookRevampLibrary/AutoAlignerModule.py | RajatGupta02/YearbookRevampLibrary | 6aab8d73c7261e68a0814ad24476adaedc67e6d5 | [
"MIT"
] | 4 | 2021-07-08T15:07:07.000Z | 2021-08-17T11:51:17.000Z | import os
import cv2 as cv
import mediapipe as mp
from YearbookRevampLibrary.utils import output_image_files, collect_image_files
class AutoAligner():
def __init__(self):
self.mp_drawing = mp.solutions.drawing_utils
self.mp_pose = mp.solutions.pose
self.mp_holistic = mp.solutions.holistic
self.mp_face_detection = mp.solutions.face_detection
def align_image(self, img, min_face_detection_confidence=0.5, min_pose_detection_confidence=0.5):
"""
:param img: image to align
:param min_face_detection_confidence: confidence for face detection in the image
:param min_pose_detection_confidence: confidence for pose detection in the image
:return: aligned image
"""
with self.mp_pose.Pose(static_image_mode=True, model_complexity=2,
min_detection_confidence=min_pose_detection_confidence) as pose:
image = img
image_height, image_width, _ = image.shape
results = pose.process(cv.cvtColor(image, cv.COLOR_BGR2RGB))
if not results.pose_landmarks:
i = 0
while (True):
mp_face_detection = self.mp_face_detection
with mp_face_detection.FaceDetection(model_selection=1,
min_detection_confidence=min_face_detection_confidence) as face_detection:
if i == 4:
return image
results2 = face_detection.process(cv.cvtColor(image, cv.COLOR_BGR2RGB))
if not results2.detections:
i += 1
image = cv.rotate(image, rotateCode=0)
continue
return image
n = results.pose_landmarks.landmark[self.mp_holistic.PoseLandmark.NOSE].y
r = results.pose_landmarks.landmark[self.mp_holistic.PoseLandmark.RIGHT_SHOULDER].y
l = results.pose_landmarks.landmark[self.mp_holistic.PoseLandmark.LEFT_SHOULDER].y
if n < r and n < l:
pass
elif n > r and n > l:
image = cv.rotate(image, rotateCode=1)
elif n < r and n > l:
image = cv.rotate(image, rotateCode=0)
else:
image = cv.rotate(image, rotateCode=2)
return image
def auto_align(cv2_list=None, input_path=None, output_path=None, min_face_detection_confidence=0.5, min_pose_detection_confidence=0.5):
"""
:param cv2_list: list of cv2 objects to be aligned
:param input_path: path of the folder containing images
:param output_path: path of the folder to save aligned images
:param min_face_detection_confidence: confidence for face detection in the image
:param min_pose_detection_confidence: confidence for pose detection in the image
:return: list of aligned cv2 objects
"""
images, filenames = collect_image_files(cv2_list, input_path)
    refined_images = []
    mp_pose = mp.solutions.pose
    mp_holistic = mp.solutions.holistic
with mp_pose.Pose(static_image_mode=True, model_complexity=2,
min_detection_confidence=min_pose_detection_confidence) as pose:
for idx, file in enumerate(images):
image = images[idx]
image_height, image_width, _ = image.shape
results = pose.process(cv.cvtColor(image, cv.COLOR_BGR2RGB))
if not results.pose_landmarks:
i = 0
while (True):
mp_face_detection = mp.solutions.face_detection
with mp_face_detection.FaceDetection(model_selection=1,
min_detection_confidence=min_face_detection_confidence) as face_detection:
if i == 4:
refined_images.append(image)
break
results2 = face_detection.process(cv.cvtColor(image, cv.COLOR_BGR2RGB))
if not results2.detections:
i += 1
image = cv.rotate(image, rotateCode=0)
continue
refined_images.append(image)
break
continue
n = results.pose_landmarks.landmark[mp_holistic.PoseLandmark.NOSE].y
r = results.pose_landmarks.landmark[mp_holistic.PoseLandmark.RIGHT_SHOULDER].y
l = results.pose_landmarks.landmark[mp_holistic.PoseLandmark.LEFT_SHOULDER].y
if n < r and n < l:
pass
elif n > r and n > l:
image = cv.rotate(image, rotateCode=1)
elif n < r and n > l:
image = cv.rotate(image, rotateCode=0)
else:
image = cv.rotate(image, rotateCode=2)
refined_images.append(image)
output = output_image_files(refined_images, output_path, filenames)
return output
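Both code paths above share the same orientation rule on the landmark y-coordinates: image y grows downward, so an upright pose has the nose above (smaller y than) both shoulders. A pure-function sketch of that rule, with OpenCV's documented rotate codes inlined so it runs without cv2 (the constant values are cv2's; everything else is illustrative):

```python
# cv2's documented rotateCode values, inlined for a dependency-free sketch
ROTATE_90_CLOCKWISE, ROTATE_180, ROTATE_90_COUNTERCLOCKWISE = 0, 1, 2

def choose_rotation(nose_y, right_shoulder_y, left_shoulder_y):
    """Return the rotate code to apply, or None if the pose is upright."""
    n, r, l = nose_y, right_shoulder_y, left_shoulder_y
    if n < r and n < l:
        return None                        # nose above both shoulders
    elif n > r and n > l:
        return ROTATE_180                  # upside down
    elif n < r and n > l:
        return ROTATE_90_CLOCKWISE
    else:
        return ROTATE_90_COUNTERCLOCKWISE
```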
| 38.977778 | 141 | 0.586659 | 595 | 5,262 | 4.956303 | 0.169748 | 0.088165 | 0.054256 | 0.04883 | 0.777891 | 0.710749 | 0.689047 | 0.662597 | 0.649034 | 0.649034 | 0 | 0.011638 | 0.346826 | 5,262 | 134 | 142 | 39.268657 | 0.846378 | 0.115165 | 0 | 0.616279 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034884 | false | 0.023256 | 0.046512 | 0 | 0.139535 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0c8ac64d866c0e6106456721744f44d26d3a7ac0 | 840 | py | Python | setup.py | sellerzoncom/wmapi | 425affc400c9ced1ac563ab030f39fec69e71b3d | [
"MIT"
] | 1 | 2021-02-13T14:17:56.000Z | 2021-02-13T14:17:56.000Z | setup.py | sellerzoncom/wmapi | 425affc400c9ced1ac563ab030f39fec69e71b3d | [
"MIT"
] | null | null | null | setup.py | sellerzoncom/wmapi | 425affc400c9ced1ac563ab030f39fec69e71b3d | [
"MIT"
] | 2 | 2021-03-16T01:23:13.000Z | 2021-03-29T16:58:40.000Z | import setuptools
requirements = [
'xmltodict',
'requests',
]
setuptools.setup(
name="wmapi",
version="0.1",
url="https://github.com/sellerzoncom/wmapi",
author="SellerZon",
author_email="dev@sellerzon.com",
description="Python Client for Walmart Canada Marketplace API",
long_description=open('README.md').read(),
    long_description_content_type='text/markdown',
packages=setuptools.find_packages(),
install_requires=requirements,
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
],
)
| 28 | 67 | 0.635714 | 84 | 840 | 6.27381 | 0.630952 | 0.216319 | 0.28463 | 0.148008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016794 | 0.220238 | 840 | 29 | 68 | 28.965517 | 0.787786 | 0 | 0 | 0 | 0 | 0 | 0.480952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0c8db78eb5c47006aa011b99a0d3db3b82628a09 | 638 | py | Python | app/portfolio/migrations/0007_auto_20210224_0644.py | Arusey/Porfolio-website | cb95a17c44eb45f5e564b395ab2c73940eba27cc | [
"MIT"
] | null | null | null | app/portfolio/migrations/0007_auto_20210224_0644.py | Arusey/Porfolio-website | cb95a17c44eb45f5e564b395ab2c73940eba27cc | [
"MIT"
] | 1 | 2021-04-30T09:21:32.000Z | 2021-04-30T09:21:32.000Z | app/portfolio/migrations/0007_auto_20210224_0644.py | Arusey/Porfolio-website | cb95a17c44eb45f5e564b395ab2c73940eba27cc | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-02-24 06:44
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('portfolio', '0006_client'),
]
operations = [
migrations.RemoveField(
model_name='client',
name='icon',
),
migrations.RemoveField(
model_name='client',
name='projects',
),
migrations.AddField(
model_name='client',
name='projects',
field=models.ManyToManyField(related_name='portfolio_client_related', to='portfolio.Project'),
),
]
| 23.62963 | 106 | 0.568966 | 59 | 638 | 6.033898 | 0.59322 | 0.075843 | 0.126404 | 0.160112 | 0.323034 | 0.224719 | 0 | 0 | 0 | 0 | 0 | 0.043379 | 0.31348 | 638 | 26 | 107 | 24.538462 | 0.769406 | 0.070533 | 0 | 0.5 | 1 | 0 | 0.167513 | 0.040609 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0c9b6ebbf189a3dd4aac5a57cc271ee9dc38373d | 546 | py | Python | matador/workflows/castep/__init__.py | dquigley-warwick/matador | 729e97efb0865c4fff50af87555730ff4b7b6d91 | [
"MIT"
] | 24 | 2020-01-21T21:40:44.000Z | 2022-03-23T13:37:18.000Z | matador/workflows/castep/__init__.py | dquigley-warwick/matador | 729e97efb0865c4fff50af87555730ff4b7b6d91 | [
"MIT"
] | 234 | 2020-02-03T15:56:58.000Z | 2022-03-29T21:36:45.000Z | matador/workflows/castep/__init__.py | dquigley-warwick/matador | 729e97efb0865c4fff50af87555730ff4b7b6d91 | [
"MIT"
] | 15 | 2019-11-29T11:33:32.000Z | 2021-11-02T09:14:08.000Z | # coding: utf-8
# Distributed under the terms of the MIT License.
""" The workflows.castep submodule contains any workflows that rely on
CASTEP-specific implementation, for example electronic spectroscopy or phonons.
"""
__all__ = ('castep_full_phonon', 'castep_full_spectral', 'castep_elastic', 'castep_full_magres')
__author__ = 'Matthew Evans'
__maintainer__ = 'Matthew Evans'
from .phonons import castep_full_phonon
from .spectral import castep_full_spectral
from .elastic import castep_elastic
from .magres import castep_full_magres
| 27.3 | 96 | 0.804029 | 71 | 546 | 5.816901 | 0.535211 | 0.145278 | 0.116223 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002092 | 0.124542 | 546 | 19 | 97 | 28.736842 | 0.861925 | 0.384615 | 0 | 0 | 0 | 0 | 0.294479 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0ca16a0c9dd9af06f3688c14d65ba7f0dc160b24 | 503 | py | Python | django/wagteil_template/irk_ru/home/models.py | Netromnik/python | 630a9df63b1cade9af38de07bb9cd0c3b8694c93 | [
"Apache-2.0"
] | null | null | null | django/wagteil_template/irk_ru/home/models.py | Netromnik/python | 630a9df63b1cade9af38de07bb9cd0c3b8694c93 | [
"Apache-2.0"
] | null | null | null | django/wagteil_template/irk_ru/home/models.py | Netromnik/python | 630a9df63b1cade9af38de07bb9cd0c3b8694c93 | [
"Apache-2.0"
] | null | null | null | from main.models import AbstractArticlePage
from taggit.models import TaggedItemBase
from modelcluster.fields import ParentalKey
from modelcluster.contrib.taggit import ClusterTaggableManager
from django.db import models
class BlogPanelTag(TaggedItemBase):
content_object = ParentalKey(
'ArticleEx',
related_name='tagged_items2',
on_delete=models.CASCADE,
)
class ArticleEx(AbstractArticlePage):
tags = ClusterTaggableManager(through='BlogPanelTag', blank=True)
| 26.473684 | 69 | 0.7833 | 50 | 503 | 7.8 | 0.6 | 0.061538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002342 | 0.151093 | 503 | 18 | 70 | 27.944444 | 0.911007 | 0 | 0 | 0 | 0 | 0 | 0.067594 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.384615 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0ca5fd09db17d3dbac365bf9ccbb4338902eb460 | 731 | py | Python | tests/test_subtract.py | josemolinagarcia/maya-math-nodes | 1f83eef1d1efe0b0c3dbb1477ca31ed9f8911ee4 | [
"MIT"
] | 148 | 2018-01-12T20:30:45.000Z | 2022-02-28T05:20:46.000Z | tests/test_subtract.py | josemolinagarcia/maya-math-nodes | 1f83eef1d1efe0b0c3dbb1477ca31ed9f8911ee4 | [
"MIT"
] | 13 | 2018-01-17T18:02:13.000Z | 2021-11-23T06:06:24.000Z | tests/test_subtract.py | josemolinagarcia/maya-math-nodes | 1f83eef1d1efe0b0c3dbb1477ca31ed9f8911ee4 | [
"MIT"
] | 41 | 2018-01-16T01:41:29.000Z | 2021-08-24T01:27:56.000Z | # Copyright (c) 2018 Serguei Kalentchouk et al. All rights reserved.
# Use of this source code is governed by an MIT license that can be found in the LICENSE file.
from node_test_case import NodeTestCase
class TestSubtract(NodeTestCase):
def test_subtract(self):
self.create_node('Subtract', {'input1': 10.0, 'input2': 5.0}, 5.0)
def test_subtract_int(self):
self.create_node('SubtractInt', {'input1': 10, 'input2': 5}, 5)
def test_subtract_angle(self):
self.create_node('SubtractAngle', {'input1': 10.0, 'input2': 5.0}, 5.0)
def test_subtract_vector(self):
self.create_node('SubtractVector', {'input1': [5.0, 5.0, 5.0], 'input2': [2.0, 2.0, 2.0]}, [3.0, 3.0, 3.0])
| 40.611111 | 115 | 0.653899 | 114 | 731 | 4.078947 | 0.464912 | 0.030108 | 0.129032 | 0.154839 | 0.16129 | 0.146237 | 0.146237 | 0.146237 | 0.146237 | 0.146237 | 0 | 0.081218 | 0.191518 | 731 | 17 | 116 | 43 | 0.705584 | 0.21751 | 0 | 0 | 0 | 0 | 0.165202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.1 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0ca72bc6032bcb0c5108e5052489228973ea6cef | 513 | py | Python | tools_box/_account/doctype/asset_transfer_form/asset_transfer_form.py | maisonarmani/Tools_Box | 4f8cc3a0deac1be50a3ac80758a10608faf58454 | [
"MIT"
] | 4 | 2017-09-25T23:34:08.000Z | 2020-07-17T23:52:26.000Z | tools_box/_account/doctype/asset_transfer_form/asset_transfer_form.py | maisonarmani/Tools_Box | 4f8cc3a0deac1be50a3ac80758a10608faf58454 | [
"MIT"
] | null | null | null | tools_box/_account/doctype/asset_transfer_form/asset_transfer_form.py | maisonarmani/Tools_Box | 4f8cc3a0deac1be50a3ac80758a10608faf58454 | [
"MIT"
] | 5 | 2017-06-02T01:58:32.000Z | 2022-02-22T16:59:01.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2017, masonarmani38@gmail.com and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
from datetime import datetime
class AssetTransferForm(Document):
def validate(self):
if self.status == "Returned":
self.returned_date = datetime.now()
if self.workflow_state == "Approved" and not self.approved_by:
frappe.throw("Please indicate the approver.")
| 24.428571 | 64 | 0.758285 | 66 | 513 | 5.772727 | 0.681818 | 0.031496 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015945 | 0.14425 | 513 | 20 | 65 | 25.65 | 0.851936 | 0.253411 | 0 | 0 | 0 | 0 | 0.119681 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0cb1548ba94fbf6f522c608cd56db0adc3c530b5 | 3,317 | py | Python | QueroInternetWeb/main/models.py | quero-internet/quero-internet-web | 95f1763ecb587dcb6d09c0cd3c15c29f837ced90 | [
"MIT"
] | null | null | null | QueroInternetWeb/main/models.py | quero-internet/quero-internet-web | 95f1763ecb587dcb6d09c0cd3c15c29f837ced90 | [
"MIT"
] | 2 | 2019-08-06T01:04:37.000Z | 2019-08-27T00:26:32.000Z | QueroInternetWeb/main/models.py | quero-internet/quero-internet-web | 95f1763ecb587dcb6d09c0cd3c15c29f837ced90 | [
"MIT"
] | null | null | null | from django.db import models
from django.contrib.auth.models import User
from django import forms
# Create your models here.
class Parceiro(models.Model):
class Meta:
verbose_name = "Parceiro"
verbose_name_plural = "Parceiros"
razao_social = models.CharField(max_length=80, verbose_name="Razão social")
nome_fantasia = models.CharField(max_length=80)
cnpj = models.CharField(max_length=14, verbose_name="CNPJ")
inscricao_estadual = models.CharField(max_length=14, verbose_name="Inscrição estadual")
contato = models.CharField(max_length=60)
usuario = models.OneToOneField(User, on_delete=models.PROTECT)
inativo = models.BooleanField(verbose_name="Inativo", default=False)
def __str__(self):
return self.razao_social
class TipoAcesso(models.Model):
class Meta:
verbose_name = "Tipo de acesso"
verbose_name_plural = "Tipos de acesso"
nome = models.CharField(max_length=60)
def __str__(self):
return self.nome
class PlanoInternet(models.Model):
class Meta:
verbose_name = "Plano de Internet"
verbose_name_plural = "Plano de Internet"
nome = models.CharField(max_length=60)
def __str__(self):
return self.nome
class VelocidadeInternet(models.Model):
class Meta:
verbose_name = "Velocidade de Internet"
verbose_name_plural = "Velocidades de Internet"
nome = models.CharField(max_length=60)
def __str__(self):
return self.nome
class Solicitacao(models.Model):
class Meta:
verbose_name = "Solicitação"
verbose_name_plural = "Solicitações"
tipo_acesso = models.ForeignKey(TipoAcesso, on_delete=models.PROTECT, verbose_name="Tipo de acesso")
cep = models.CharField(max_length=9, verbose_name="CEP")
logradouro = models.CharField(max_length=60)
numero = models.CharField(max_length=10, verbose_name="Número")
complemento = models.CharField(max_length=60, null=True, blank=True)
bairro = models.CharField(max_length=60)
cidade = models.CharField(max_length=60)
uf = models.CharField(max_length=2, verbose_name="UF")
planos_internet = models.ManyToManyField(PlanoInternet, verbose_name="Planos de internet")
velocidades_internet = models.ManyToManyField(VelocidadeInternet, verbose_name="Velocidades de internet")
usuario = models.ForeignKey(User, on_delete=models.PROTECT)
data_e_hora = models.DateTimeField(auto_now_add=True)
    observacoes = models.CharField(verbose_name="Observações", max_length=300, blank=True, null=True)
parceiro_escolhido = models.ForeignKey(Parceiro, on_delete=models.PROTECT, null=True)
def __str__(self):
return "Solicitação "+ str(self.pk)
class Resposta(models.Model):
class Meta:
verbose_name = "Resposta"
verbose_name_plural = "Respostas"
resposta = models.CharField(max_length=300)
    valor_implantacao = models.DecimalField(max_digits=10, decimal_places=2, verbose_name="Valor de implantação", null=True)
valor_mensalidade = models.DecimalField(max_digits=10, decimal_places=2, verbose_name="Valor de mensalidade", null=True)
usuario = models.ForeignKey(User, on_delete=models.PROTECT)
solicitacao = models.ForeignKey(Solicitacao, on_delete=models.PROTECT, related_name='respostas')
| 38.126437 | 124 | 0.734399 | 410 | 3,317 | 5.714634 | 0.25122 | 0.117371 | 0.122919 | 0.163892 | 0.422962 | 0.288092 | 0.208707 | 0.177123 | 0.13615 | 0.13615 | 0 | 0.014482 | 0.16732 | 3,317 | 86 | 125 | 38.569767 | 0.833816 | 0.007235 | 0 | 0.287879 | 0 | 0 | 0.104528 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075758 | false | 0 | 0.045455 | 0.075758 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0cb39f065896866631dd8be1b77c8ff752a7e442 | 200 | py | Python | src/nr/util/digraph/__init__.py | NiklasRosenstein/python-nr.util | 087f2410d38006c1005a5fb330c47a56bcdb2279 | [
"MIT"
] | null | null | null | src/nr/util/digraph/__init__.py | NiklasRosenstein/python-nr.util | 087f2410d38006c1005a5fb330c47a56bcdb2279 | [
"MIT"
] | 3 | 2022-02-16T13:17:28.000Z | 2022-03-14T15:28:41.000Z | src/nr/util/digraph/__init__.py | NiklasRosenstein/python-nr.util | 087f2410d38006c1005a5fb330c47a56bcdb2279 | [
"MIT"
] | null | null | null |
from ._digraph import DiGraph, E, K, N, UnknownEdgeError, UnknownNodeError
__all__ = [
  'DiGraph',
  'E',
  'K',
  'N',
  'UnknownEdgeError',
  'UnknownNodeError',
]
| 18.181818 | 74 | 0.73 | 18 | 200 | 7.666667 | 0.777778 | 0.463768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 200 | 10 | 75 | 20 | 0.811765 | 0 | 0 | 0 | 0 | 0 | 0.396985 | 0.120603 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0cc22e5b5e2f6e38ae22551ba13d262b3cea677a | 17,147 | py | Python | tests/test_mfrm.py | sash-ko/PyRM | e06ac8c44ec30cb7e731a4653dabc7f0ca0b6ec7 | [
"MIT"
] | 31 | 2017-02-08T15:23:20.000Z | 2022-01-21T02:31:07.000Z | tests/test_mfrm.py | sash-ko/PyRM | e06ac8c44ec30cb7e731a4653dabc7f0ca0b6ec7 | [
"MIT"
] | 7 | 2017-02-08T15:38:32.000Z | 2019-08-26T20:24:34.000Z | tests/test_mfrm.py | sash-ko/PyRM | e06ac8c44ec30cb7e731a4653dabc7f0ca0b6ec7 | [
"MIT"
] | 19 | 2017-02-08T15:23:47.000Z | 2022-03-03T10:06:25.000Z | import unittest
from revpy import mfrm
from revpy.exceptions import InvalidInputParameters
class MFRMTestHost(unittest.TestCase):
def test_empty_estimate_host_level(self):
estimations = mfrm.estimate_host_level({}, {}, {}, 0.9)
self.assertEqual(estimations, (0, 0, 0))
def test_estimate_paper_ex1(self):
"""This test is based on the exaple #1 in the paper"""
utilities = {'fare1': -2.8564, 'fare2': -2.5684}
probs, nofly_prob = mfrm.selection_probs(utilities, 0.5)
estimations = mfrm.estimate_host_level({'fare1': 3, 'fare2': 0},
{'fare1': 1, 'fare2': 0.},
probs, nofly_prob)
self.assertIsInstance(estimations, tuple)
self.assertEqual(len(estimations), 3)
# round numbers
estimations = round_tuple(estimations)
self.assertTupleEqual(estimations, (5, 2.86, 0.86))
def test_demand_mass_balance_h(self):
estimations = mfrm.demand_mass_balance_c(3, 3, 1, 0.86)
self.assertTupleEqual(round_tuple(estimations), (2.14, 0., 0.86))
def test_invalid_parameters(self):
"""Should not rise any exception
"""
mfrm.estimate_host_level({'fare2': 3}, {'fare2': 1.},
{'fare1': 0.1, 'fare2': 0.4}, 0.5)
class MFRMTestClass(unittest.TestCase):

    def test_empty_estimate_class_level(self):
        estimations = mfrm.estimate_class_level({}, {}, {}, 0.9)
        self.assertEqual(estimations, {})

    def test_demand_mass_balance_c_ex1(self):
        estimations = mfrm.demand_mass_balance_c(3, 3, 1, 0.86)
        self.assertIsInstance(estimations, tuple)
        self.assertEqual(len(estimations), 3)
        self.assertTupleEqual(round_tuple(estimations), (2.14, 0., 0.86))

    def test_demand_mass_balance_c_ex2_a11(self):
        estimations = mfrm.demand_mass_balance_c(3, 0., 0.1, 0.61)
        self.assertTupleEqual(round_tuple(estimations), (0, 0, 0))

    def test_demand_mass_balance_c_ex2_a21(self):
        estimations = mfrm.demand_mass_balance_c(3, 2, 1, 0.61)
        self.assertTupleEqual(round_tuple(estimations), (1.59, 0, 0.41))

    def test_estimate_class_level_struct(self):
        utilities = {'fare1': -2.8564, 'fare2': -2.5684}
        probs, nofly_prob = mfrm.selection_probs(utilities, 0.5)
        estimations = mfrm.estimate_class_level({'fare1': 3, 'fare2': 0},
                                                {'fare1': 1, 'fare2': 0.},
                                                probs, nofly_prob)
        self.assertIsInstance(estimations, dict)
        self.assertEqual(len(estimations), 2)
        self.assertEqual(sorted(estimations.keys()), ['fare1', 'fare2'])
        self.assertEqual(sorted(estimations['fare1'].keys()),
                         ['demand', 'recapture', 'spill'])

    def test_estimate_class_level_ex1(self):
        utilities = {'fare1': -2.8564, 'fare2': -2.5684}
        probs, nofly_prob = mfrm.selection_probs(utilities, 0.5)
        estimations = mfrm.estimate_class_level({'fare1': 3, 'fare2': 0},
                                                {'fare1': 1, 'fare2': 0.},
                                                probs, nofly_prob)
        self.assertAlmostEqual(estimations['fare1']['spill'], 0, places=2)
        self.assertAlmostEqual(estimations['fare1']['recapture'], 0.86, 2)
        self.assertAlmostEqual(estimations['fare1']['demand'], 2.14, 2)
        self.assertAlmostEqual(estimations['fare2']['spill'], 2.86, places=2)
        self.assertAlmostEqual(estimations['fare2']['recapture'], 0, 2)
        self.assertAlmostEqual(estimations['fare2']['demand'], 2.86, 2)

    def test_invalid_parameters(self):
        """Should not raise any exception"""
        mfrm.estimate_class_level({'fare2': 3}, {'fare2': 1.},
                                  {'fare1': 0.1, 'fare2': 0.4}, 0.5)

    def test_non_zero_demand_zero_availability(self):
        with self.assertRaises(InvalidInputParameters):
            mfrm.estimate_class_level({'fare1': 3, 'fare2': 1},
                                      {'fare2': 1.},
                                      {'fare1': 0.1, 'fare2': 0.4}, 0.5)
    def test_estimate_class_level_ex3(self):
        """Example 3 from MFRM paper"""
        probs = {
            'a11': 0.0256,
            'a12': 0.0513,
            'a13': 0.0769,
            'a21': 0.041,
            'a22': 0.0615,
            'a23': 0.0821,
            'a31': 0.0154,
            'a32': 0.0205,
            'a33': 0.0256
        }
        observed = {
            'a11': 2,
            'a12': 5,
            'a13': 0,
            'a21': 4,
            'a22': 0,
            'a23': 0,
            'a31': 0,
            'a32': 3,
            'a33': 6
        }
        availability = {
            'a11': 1,
            'a12': 1,
            'a13': 0.25,
            'a21': 1,
            'a22': 0.5,
            'a23': 0,
            'a31': 1,
            'a32': 1,
            'a33': 0.5
        }
        nofly_prob = 0.6

        host_estimations = mfrm.estimate_host_level(observed, availability,
                                                    probs, nofly_prob)
        estimations = round_tuple(host_estimations)
        self.assertTupleEqual(estimations, (30.16, 13.83, 3.67))

        class_estimations = mfrm.estimate_class_level(observed, availability,
                                                      probs, nofly_prob)

        # ensure that class demand, spill and recapture in total
        # equal the host level estimations
        total_class = sum([v['demand'] for v in class_estimations.values()])
        self.assertAlmostEqual(total_class, host_estimations[0], 2)
        total_class = sum([v['spill'] for v in class_estimations.values()])
        self.assertAlmostEqual(total_class, host_estimations[1], 2)
        total_class = sum([v['recapture'] for v in class_estimations.values()])
        self.assertAlmostEqual(total_class, host_estimations[2], 2)

        expected = {
            'a11': {'demand': 1.63, 'spill': 0, 'recapture': 0.37},
            'a12': {'demand': 4.08, 'spill': 0, 'recapture': 0.92},
            # 'a13': {'demand': 4.35, 'spill': 4.35, 'recapture': 0.},
            'a13': {'demand': 3.02, 'spill': 3.02, 'recapture': 0.},
            'a21': {'demand': 3.27, 'spill': 0, 'recapture': 0.73},
            # 'a22': {'demand': 2.32, 'spill': 2.32, 'recapture': 0.},
            'a22': {'demand': 1.61, 'spill': 1.61, 'recapture': 0.},
            # 'a23': {'demand': 6.19, 'spill': 6.19, 'recapture': 0.},
            'a23': {'demand': 4.3, 'spill': 4.3, 'recapture': 0.},
            'a31': {'demand': 0, 'spill': 0, 'recapture': 0.},
            'a32': {'demand': 2.45, 'spill': 0, 'recapture': 0.55},
            # NOTE: this example case is different from the paper. Somehow
            # it doesn't satisfy (14): s = d*k, k33 = 0.5 ...
            # It also affects a13, a22, a23 that have 0 bookings and are
            # calibrated according to the unaccounted spill
            # 'a33': {'demand': 5.87, 'spill': 0.97, 'recapture': 1.1},
            'a33': {'demand': 9.798, 'spill': 4.899, 'recapture': 1.1}
        }
        for element, values in expected.items():
            for key, value in values.items():
                self.assertAlmostEqual(
                    class_estimations[element][key], value, 2)
    def test_estimate_class_level_regression_1(self):
        utilities = {
            '29255588_3950': 1.4659572,
            '27330948_3950': 2.16431,
            '29255588_2490': 1.1630461,
            '29255578_2990': 1.3300509,
            '29255508_3950': 0.43902999,
            '29255578_3590': 0.83872116,
            '29255578_3950': 0.70265454,
            '29255528_3590': 0.52609205,
            '29255518_3590': 0.52609205,
            '30920848_3950': -0.19642138,
            '27331928_3950': 0.096954226,
            '27337358_3590': 0.52609205,
            '27334478_3590': -0.12226862,
            '29255548_3950': 1.4128097,
            '29255558_3590': 0.2330209,
            '29255588_2990': 0.99219722,
            '29255538_3590': 0.76555932,
            '27341178_3590': 0.61577702,
            '29255548_3590': 1.3927615,
            '29255558_2990': 0.72435057
        }
        observed = {
            '29255588_3950': 3,
            '27330948_3950': 1,
            '29255578_2990': 7,
            '27331928_2450': 1,
            '27331928_3200': 4,
            # missing in 'utilities'
            '27331928_2490': 1,
            '29255588_3018': 1,
            '29255588_2490': 6,
            '29255518_3950': 1,
            '29255578_3018': 1,
            '27331928_3950': 2,
            '29255578_3950': 3,
            '29255548_3950': 2,
            '29255588_2518': 1,
            '29255538_3590': 1,
            '29255578_3590': 1,
            '29255588_2241': 1
        }
        availability = {
            '29255588_3950': 0.650909090909091,
            '27330948_3950': 0.634146341463415,
            '29255578_2990': 0.1875,
            '27331928_2450': 0.00436681222707424,
            '30920848_3950': 0.5,
            '29255538_3950': 0.0411764705882353,
            '29255528_3590': 0.796875,
            '29255518_3950': 0.032967032967033,
            '29255588_3590': 0.0254545454545455,
            '29255518_3590': 0.648351648351648,
            '29255558_3950': 0.00436681222707424,
            '27334478_3590': 0.753246753246753,
            '27334478_3950': 0.0779220779220779,
            '29255528_3950': 0.0703125,
            '29255548_3950': 0.521951219512195,
            '29255558_3590': 0.148471615720524,
            '29255578_3018': 0.00436681222707424,
            '27331928_2490': 0.00436681222707424,
            '29255548_3590': 0.268292682926829,
            '29255588_2241': 0.00436681222707424,
            '29255508_3950': 0.714285714285714,
            '27331928_3200': 0.00436681222707424,
            '29255588_2518': 0.00436681222707424,
            '29255588_3018': 0.00436681222707424,
            '29255588_2490': 0.0872727272727273,
            '27331928_3950': 0.370517928286853,
            '27337358_3590': 0.774193548387097,
            '29255578_3950': 0.213235294117647,
            '29255578_3590': 0.132352941176471,
            '29255588_2990': 0.236363636363636,
            '29255538_3590': 0.911764705882353,
            '27341178_3590': 0.971428571428571,
            '29255558_2990': 0.152838427947598
        }
        probs, nofly_prob = mfrm.selection_probs(utilities, 0.7)
        class_level = mfrm.estimate_class_level(observed, availability,
                                                probs, nofly_prob)

        # no utility
        self.assertTrue('27331928_2490' not in class_level)
        # low utility and no observed demand
        self.assertEqual(class_level['27334478_3590'],
                         {'demand': 0, 'spill': 0, 'recapture': 0})
        # one observation and low utility
        self.assertLess(class_level['29255538_3590']['demand'],
                        observed['29255538_3590'])
        self.assertLess(class_level['29255538_3590']['spill'], 0.1)
        # high utility, many bookings, low availability
        self.assertGreater(class_level['29255578_2990']['demand'],
                           observed['29255578_2990'])
        self.assertGreater(class_level['29255578_2990']['spill'],
                           observed['29255578_2990'])
        # moderate utility, many bookings, moderate availability
        self.assertGreater(class_level['29255578_3950']['demand'],
                           observed['29255578_3950'])
    def test_estimate_class_level_regression_2(self):
        utilities = {
            '25169088_1990': -0.32789022,
            '25177238_2490': 0.74647802,
            '30920878_3590': 1.1093128,
            '30921208_3590': -0.95266426,
            '25176068_2990': 0.60245919,
            '27334408_3590': 1.3004869,
            '27336048_2490': -0.72204816,
            '25176068_2490': 1.1348069,
            '27330978_3950': -0.95266426,
            '27331818_2490': 1.1348069,
            '25174968_2990': -0.95266426,
            '27331818_3590': 0.60245919,
            '25172878_2990': 1.3004869,
            '27330978_3590': -0.95266426,
            '25174968_3590': -0.95266426,
            '27334408_3950': 1.3004869,
            '30920428_2490': -0.72204816,
            '27331818_2990': 0.60245919,
            '25168168_1990': -0.86685419,
            '25168168_3590': 0.028352857,
            '25177238_2990': 0.37813818,
            '25170958_1990': 0.43109083,
            '25178118_1500': 1.557166,
            '25172878_3590': 1.3004869,
            '25173528_2990': -0.86880505,
            '25173528_2490': 0.77468485,
            '27332988_2990': 0.59399945,
            '25168168_2490': -0.72204816,
            '25170958_2490': 0.5758968,
            '27332988_2490': -0.18308425,
            '25168168_2990': 0.028352857
        }
        observed = {
            '25169088_1990': 2,
            '25177238_2490': 5,
            '25176068_2990': 1,
            '25177238_2990': 1,
            '25170958_1990': 7,
            '25176068_2490': 5,
            '25168168_1990': 1,
            '25174968_2990': 2,
            '25173528_2990': 2,
            '25173528_2490': 1,
            '25172878_2990': 3,
            '25168168_2490': 3,
            '25170958_2490': 3,
            '25178118_1500': 5
        }
        availability = {
            '25177238_2490': 0.954545454545455,
            '25169088_1990': 1.0,
            '30920878_3590': 1.0,
            '30921208_3590': 1.0,
            '25176068_2990': 0.272727272727273,
            '27334408_3590': 0.409090909090909,
            '27334408_3950': 0.590909090909091,
            '25176068_2490': 0.909090909090909,
            '27331818_2990': 0.363636363636364,
            '27331818_2490': 0.590909090909091,
            '25172878_3590': 0.136363636363636,
            '27331818_3590': 0.136363636363636,
            '25172878_2990': 0.909090909090909,
            '27330978_3590': 0.454545454545455,
            '25174968_3590': 0.0454545454545455,
            '27336048_2490': 1.0,
            '30920428_2490': 1.0,
            '27330978_3950': 0.590909090909091,
            '25168168_1990': 0.727272727272727,
            '25168168_3590': 0.0454545454545455,
            '25177238_2990': 0.0909090909090909,
            '25170958_1990': 0.590909090909091,
            '25178118_1500': 1.0,
            '25174968_2990': 1.0,
            '25173528_2990': 0.590909090909091,
            '25173528_2490': 0.5,
            '27332988_2990': 0.545454545454545,
            '25168168_2490': 0.454545454545455,
            '25170958_2490': 0.454545454545455,
            '27332988_2490': 0.681818181818182,
            '25168168_2990': 0.0454545454545455
        }
        probs, nofly_prob = mfrm.selection_probs(utilities, 0.7)
        class_level = mfrm.estimate_class_level(observed, availability,
                                                probs, nofly_prob)
        self.assertEqual(class_level['25168168_2990'],
                         {'demand': 0, 'spill': 0, 'recapture': 0})
        total_demand = sum([round(d['demand']) for d in class_level.values()])
        self.assertEqual(total_demand, 50)
        total_spill = sum([round(d['spill']) for d in class_level.values()])
        self.assertEqual(total_spill, 20)
    def test_calibrate_no_booking_balance(self):
        estimates = {
            'p1': {'demand': 0, 'spill': 0},
            'p2': {'demand': 0, 'spill': 0},
            'p3': {'demand': 0, 'spill': 0}
        }
        observed = {}
        probs = {
            'p1': 0.3,
            'p2': 0.3,
            'p3': 0.1
        }
        host_spill = 10

        availability = {}
        result = mfrm.calibrate_no_booking(estimates, observed, availability,
                                           probs, host_spill)
        self.assertAlmostEqual(sum([r['demand'] for r in result.values()]),
                               host_spill)
        self.assertAlmostEqual(sum([r['spill'] for r in result.values()]),
                               host_spill)
        self.assertEqual(result['p1']['demand'], result['p2']['demand'])
        self.assertGreater(result['p1']['demand'], result['p3']['demand'])

        availability = {'p1': 0.5, 'p2': 0.1}
        result = mfrm.calibrate_no_booking(estimates, observed,
                                           availability, probs, host_spill)
        self.assertGreater(result['p2']['demand'], result['p1']['demand'])

        availability = {'p1': 0.5}
        result = mfrm.calibrate_no_booking(estimates, observed,
                                           availability, probs, host_spill)
        self.assertGreater(result['p2']['demand'], result['p1']['demand'])


def round_tuple(tlp, level=2):
    return tuple([round(e, level) for e in tlp])
| 39.148402 | 79 | 0.536012 | 1,764 | 17,147 | 5.023243 | 0.171769 | 0.016364 | 0.028439 | 0.019862 | 0.378513 | 0.322876 | 0.254599 | 0.242298 | 0.216567 | 0.1993 | 0 | 0.331212 | 0.331078 | 17,147 | 437 | 80 | 39.237986 | 0.441325 | 0.052079 | 0 | 0.171014 | 0 | 0 | 0.174323 | 0 | 0 | 0 | 0 | 0 | 0.124638 | 1 | 0.049275 | false | 0 | 0.008696 | 0.002899 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0cc84d528b6f7e7c273b598df4a5d72c2b59c6a2 | 65 | py | Python | pygifsicle/__version__.py | LucaCappelletti94/pygifsicle | 4f05d0eead3b002295212d985fdbc9ca8e7f4a8f | [
"MIT"
] | 42 | 2019-10-14T11:30:53.000Z | 2022-03-28T06:55:20.000Z | pygifsicle/__version__.py | LucaCappelletti94/pygifsicle | 4f05d0eead3b002295212d985fdbc9ca8e7f4a8f | [
"MIT"
] | 8 | 2019-10-13T02:54:19.000Z | 2021-10-02T17:47:40.000Z | pygifsicle/__version__.py | LucaCappelletti94/pygifsicle | 4f05d0eead3b002295212d985fdbc9ca8e7f4a8f | [
"MIT"
] | 8 | 2019-09-23T18:59:39.000Z | 2022-02-25T22:38:44.000Z | """Current version of package pygifsicle"""
__version__ = "1.0.5" | 32.5 | 43 | 0.723077 | 9 | 65 | 4.777778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0.107692 | 65 | 2 | 44 | 32.5 | 0.689655 | 0.569231 | 0 | 0 | 0 | 0 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0cda10958b477ed3c5be001bc520bcbf5e7285c1 | 1,498 | py | Python | elit/server/format.py | emorynlp/el | dbe73d1ce6f2296a64fb013775d2691ae1ed90d4 | [
"Apache-2.0"
] | 40 | 2017-02-27T20:16:44.000Z | 2022-03-25T04:58:01.000Z | elit/server/format.py | emorynlp/el | dbe73d1ce6f2296a64fb013775d2691ae1ed90d4 | [
"Apache-2.0"
] | 12 | 2017-02-16T23:50:38.000Z | 2022-01-19T21:29:59.000Z | elit/server/format.py | emorynlp/el | dbe73d1ce6f2296a64fb013775d2691ae1ed90d4 | [
"Apache-2.0"
] | 6 | 2017-05-05T08:02:18.000Z | 2021-11-03T23:47:23.000Z | # ========================================================================
# Copyright 2020 Emory University
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ========================================================================
# -*- coding:utf-8 -*-
# Author: hankcs, Liyan Xu
from typing import Union, List, Tuple
from pydantic import BaseModel
class OnlineCorefContext(BaseModel):
    input_ids: List[int]
    sentence_map: List[int]
    subtoken_map: List[int]
    mentions: List[Tuple[int, int]]
    uttr_start_idx: List[int]
    speaker_ids: List[int] = None  # Others are required


class Input(BaseModel):
    text: Union[str, List[str]] = None
    tokens: List[List[str]] = None
    models: List[str] = ["lem", "pos", "ner", "con", "dep", "srl", "amr", "dcr", "ocr"]
    # For coref
    speaker_ids: Union[int, List[int]] = None
    genre: str = None
    coref_context: OnlineCorefContext = None
    return_coref_prob: bool = False
    language: str = 'en'
    verbose: bool = True
| 33.288889 | 87 | 0.628171 | 193 | 1,498 | 4.823834 | 0.606218 | 0.064447 | 0.027927 | 0.034372 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007305 | 0.17757 | 1,498 | 44 | 88 | 34.045455 | 0.748377 | 0.517356 | 0 | 0 | 0 | 0 | 0.041252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.105263 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0ce0463103b7536d8cc17a2c0d1d9d5a5782341d | 993 | py | Python | jobsapp/fields.py | GeeksCAT/anem-per-feina | 9c4fe400b8721adee88ab669cc074ffc16beca8a | [
"MIT"
] | 19 | 2020-09-29T15:22:12.000Z | 2020-10-19T17:19:44.000Z | jobsapp/fields.py | GeeksCAT/anem-per-feina | 9c4fe400b8721adee88ab669cc074ffc16beca8a | [
"MIT"
] | 162 | 2020-09-29T09:09:53.000Z | 2020-11-06T04:32:12.000Z | jobsapp/fields.py | GeeksCAT/anem-per-feina | 9c4fe400b8721adee88ab669cc074ffc16beca8a | [
"MIT"
] | 28 | 2020-09-29T14:32:53.000Z | 2020-11-02T21:52:39.000Z | from django_bleach.models import BleachField
from tinymce.models import HTMLField
from django.conf import settings
from django.core.validators import URLValidator
from django.db.models import URLField
from django.forms.fields import URLField as FormURLField
################
# JobsURLField #
################
JobsURLValidator = URLValidator(schemes=settings.URL_SCHEMES)
class JobsURLFormField(FormURLField):
    """Form URLField with custom SCHEMES from settings.URL_SCHEMES"""

    default_validators = [JobsURLValidator]


class JobsURLField(URLField):
    """URLField with custom SCHEMES from settings.URL_SCHEMES"""

    default_validators = [JobsURLValidator]

    def formfield(self, **kwargs):
        return super().formfield(
            **{
                "form_class": JobsURLFormField,
            }
        )
######################
# SanitizedHTMLField #
######################
class SanitizedHTMLField(HTMLField, BleachField):
    description = "Sanitized HTML field"
| 23.642857 | 69 | 0.677744 | 91 | 993 | 7.318681 | 0.43956 | 0.075075 | 0.081081 | 0.075075 | 0.24024 | 0.24024 | 0.24024 | 0.24024 | 0.24024 | 0.24024 | 0 | 0 | 0.175227 | 993 | 41 | 70 | 24.219512 | 0.813187 | 0.149043 | 0 | 0.105263 | 0 | 0 | 0.039788 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.315789 | 0.052632 | 0.736842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0cf5f48dc7c851840939204c21bb9b38c6783e3a | 25,119 | py | Python | caffe2/python/tutorials/py_gen/Image_Pre-Processing_Pipeline.py | AIHGF/caffe2 | b9e61b72cd460f4c9f644294a7bf0b6306df17fc | [
"Apache-2.0"
] | 585 | 2015-08-10T02:48:52.000Z | 2021-12-01T08:46:59.000Z | caffe2/python/tutorials/py_gen/Image_Pre-Processing_Pipeline.py | mingzhe09088/caffe2 | 8f41717c46d214aaf62b53e5b3b9b308b5b8db91 | [
"Apache-2.0"
] | 23 | 2015-08-30T11:54:51.000Z | 2017-03-06T03:01:07.000Z | caffe2/python/tutorials/py_gen/Image_Pre-Processing_Pipeline.py | mingzhe09088/caffe2 | 8f41717c46d214aaf62b53e5b3b9b308b5b8db91 | [
"Apache-2.0"
] | 183 | 2015-08-10T02:49:04.000Z | 2021-12-01T08:47:13.000Z | #########################################################
#
# DO NOT EDIT THIS FILE. IT IS GENERATED AUTOMATICALLY. #
# PLEASE LOOK INTO THE README FOR MORE INFORMATION. #
#
#########################################################
# coding: utf-8
# # Image Loading and Preprocessing
#
# In this tutorial we're going to look at how we can load in images from a local file or a URL which you can then utilize in other tutorials or examples. Also, we're going to go in depth on the kinds of preprocessing that is necessary to utilize Caffe2 with images.
#
# #### Mac OS X Prerequisites
#
# If you don't already have these Python modules installed you'll need to do that now.
#
# ```
# sudo pip install scikit-image scipy matplotlib
# ```
# In[1]:
import skimage
import skimage.io as io
import skimage.transform
import sys
import numpy as np
import math
from matplotlib import pyplot
import matplotlib.image as mpimg
print("Required modules imported.")
# ## Test an Image
#
# In the code block below use IMAGE_LOCATION to load what you would like to test. Just change the comment flags to go through each round of the Tutorial. In this way, you'll get to see what happens with a variety of image formats and some tips on how you might preprocess them. If you want to try your own image, drop it in the images folder or use a remote URL. When you pick a remote URL, make it easy on yourself and try to find a URL that points to a common image file type and extension versus some long identifier or query string which might just break this next step.
#
# ## Color Issues
#
# Keep in mind when you load images from smartphone cameras that you may run into color formatting issues. Below we show an example of how flipping between RGB and BGR can impact an image. This would obviously throw off detection in your model. Make sure the image data you're passing around is what you think it is!
#
# ### Caffe Uses BGR Order
#
# Due to legacy support of OpenCV in Caffe and how it handles images in Blue-Green-Red (BGR) order instead of the more commonly used Red-Green-Blue (RGB) order, Caffe2 also expects **BGR** order. In many ways this decision helps in the long run as you use different computer vision utilities and libraries, but it also can be the source of confusion.
# In[2]:
# You can load either local IMAGE_FILE or remote URL
# For Round 1 of this tutorial, try a local image.
IMAGE_LOCATION = 'images/cat.jpg'
# For Round 2 of this tutorial, try a URL image with a flower:
# IMAGE_LOCATION = "https://cdn.pixabay.com/photo/2015/02/10/21/28/flower-631765_1280.jpg"
# IMAGE_LOCATION = "images/flower.jpg"
# For Round 3 of this tutorial, try another URL image with lots of people:
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/1/18/NASA_Astronaut_Group_15.jpg"
# IMAGE_LOCATION = "images/astronauts.jpg"
# For Round 4 of this tutorial, try a URL image with a portrait!
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/9/9a/Ducreux1.jpg"
# IMAGE_LOCATION = "images/Ducreux.jpg"
img = skimage.img_as_float(skimage.io.imread(IMAGE_LOCATION)).astype(np.float32)
# test color reading
# show the original image
pyplot.figure()
pyplot.subplot(1,2,1)
pyplot.imshow(img)
pyplot.axis('on')
pyplot.title('Original image = RGB')
# show the image in BGR - just doing RGB->BGR temporarily for display
imgBGR = img[:, :, (2, 1, 0)]
#pyplot.figure()
pyplot.subplot(1,2,2)
pyplot.imshow(imgBGR)
pyplot.axis('on')
pyplot.title('OpenCV, Caffe2 = BGR')
# As you can see in the example above, the difference in order is very important to keep in mind. In the code block below we'll be taking the image and converting to BGR order for Caffe to process it appropriately.
#
# But wait, there's more color fun...
#
# ### Caffe Prefers CHW Order
#
# Now what!? What's CHW, you ask? Well, there's also HWC! Both formats come up in image processing.
#
# - H: Height
# - W: Width
# - C: Channel (as in color)
#
# Digging even deeper into how image data can be stored is the memory allocation order. You might have noticed when we first loaded the image that we forced it through some interesting transformations. These were data transformations that let us play with the image as if it were a cube. What we see is on top of the cube, and manipulating the layers below can change what we view. We can tinker with it's underlying properties and as you saw above, swap colors quite easily.
#
# For GPU processing, which is what Caffe2 excels at, this order needs to be CHW. For CPU processing, this order is generally HWC. Essentially, you're going to want to use CHW and make sure that step is included in your image pipeline. Tweak RGB to be BGR, which is encapsulated as this "C" payload, then tweak HWC, the "C" being the very same colors you just switched around.
#
# You may ask why! And the reason points to cuDNN which is what helps accelerate processing on GPUs. It uses only CHW, and we'll sum it up by saying it is faster.
#
# Given these two transformations, you might think that's enough, but it isn't. We still need to resize and/or crop and potentially look at things like orientation (rotation) and mirroring.
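
# As a minimal sketch of both reorderings (pure NumPy, with a small synthetic
# array standing in for a loaded image rather than the `img` used elsewhere):

hwc = np.random.rand(4, 5, 3).astype(np.float32)  # H, W, C with channels in RGB order
bgr = hwc[:, :, (2, 1, 0)]                        # swap the channel axis: RGB -> BGR
chw = bgr.transpose(2, 0, 1)                      # reorder the axes: HWC -> CHW
print(chw.shape)  # (3, 4, 5)

# `transpose(2, 0, 1)` moves the channel axis to the front, which is exactly
# the CHW layout the GPU path (cuDNN) expects.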
#
# ## Rotation and Mirroring
#
# This topic is usually reserved for images that are coming from a smart phone. Phones, in general, take great pictures, but do a horrible job communicating how the image was taken and what orientation it should be in. Then there's the user who does everything under the sun with their phone's cameras, making them do things its designer never expected. Cameras - right, because there are often two cameras and these two cameras take different sized pictures in both pixel count and aspect ratio, and not only that, they sometimes take them mirrored, and they sometimes take them in portrait and landscape modes, and sometimes they don't bother to tell which mode they were in.
#
# In many ways this is the first thing you need to evaluate in your pipeline, then look at sizing (described below), then figure out the color situation. If you're developing for iOS, then you're in luck, it's going to be relatively easy. If you're a super-hacker wizard developer with lead-lined shorts and developing for Android, then at least you have lead-lined shorts.
#
# The variability in the Android marketplace is wonderful and horrifying. In an ideal world, you could rely on the EXIF data in pictures coming from any camera and use that to decide orientation and mirroring and you'd have one simple case function to handle your transformations. No such luck, but you're not alone. Many have come before you and suffered for you.
#
# ### Library for Handling Mobile Images
#
# Hooray! We're going to give you something to ease your pain. These are not foolproof. Users can and will defy you and every other developer's best attempts to handle their images. Here we'll link to some resources that can be used depending on the platform.
#
# Image Preprocessing Libraries and/or Snippets
# - [iOS](#)
# - [Android](#)
# - [Python](#)
# In the meantime though, let's play with some images and show the basics for manipulations that you might need to do.
# In[3]:
# Image came in sideways - it should be a portrait image!
# How you detect this depends on the platform
# Could be a flag from the camera object
# Could be in the EXIF data
# ROTATED_IMAGE = "https://upload.wikimedia.org/wikipedia/commons/8/87/Cell_Phone_Tower_in_Ladakh_India_with_Buddhist_Prayer_Flags.jpg"
ROTATED_IMAGE = "images/cell-tower.jpg"
imgRotated = skimage.img_as_float(skimage.io.imread(ROTATED_IMAGE)).astype(np.float32)
pyplot.figure()
pyplot.imshow(imgRotated)
pyplot.axis('on')
pyplot.title('Rotated image')
# Image came in flipped or mirrored - text is backwards!
# Again detection depends on the platform
# This one is intended to be read by drivers in their rear-view mirror
# MIRROR_IMAGE = "https://upload.wikimedia.org/wikipedia/commons/2/27/Mirror_image_sign_to_be_read_by_drivers_who_are_backing_up_-b.JPG"
MIRROR_IMAGE = "images/mirror-image.jpg"
imgMirror = skimage.img_as_float(skimage.io.imread(MIRROR_IMAGE)).astype(np.float32)
pyplot.figure()
pyplot.imshow(imgMirror)
pyplot.axis('on')
pyplot.title('Mirror image')
# So you can see that we kind of have some problems. If we're detecting places, landmarks, or objects, a sideways cell tower is no good. If we're detecting text and doing automatic language translation, then mirrored text is no good. But hey, maybe you want to make a model that can detect English both ways. That would be awesome, but not for this tutorial!
#
# Let's transform these babies into something Caffe2 and the standard detection models we have around can detect. Also, this little trick might save you if, say for example, you really had to detect the cell tower but there's no EXIF data to be found: then you'd cycle through every rotation and every flip, spawning many derivatives of this photo, and running them all through. When the percentage of confidence of detection is high enough, Bam!, you found the orientation you needed and that sneaky cell tower.
#
# Anyway, to the example code:
# In[4]:
# Run me to flip the image back and forth
imgMirror = np.fliplr(imgMirror)
pyplot.figure()
pyplot.imshow(imgMirror)
pyplot.axis('off')
pyplot.title('Mirror image')
# In[5]:
# Run me to rotate the image 90 degrees
imgRotated = np.rot90(imgRotated)
pyplot.figure()
pyplot.imshow(imgRotated)
pyplot.axis('off')
pyplot.title('Rotated image')
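
# The brute-force orientation search mentioned earlier (cycle through every
# rotation and every flip, then score each candidate with the model) can be
# sketched like this. `candidate_orientations` is an illustrative helper,
# not part of Caffe2 or skimage:

def candidate_orientations(image):
    """Yield all 8 rotation/mirror variants of an image array."""
    for k in range(4):
        rotated = np.rot90(image, k)
        yield rotated
        yield np.fliplr(rotated)

variants = list(candidate_orientations(np.arange(6.0).reshape(2, 3)))
print(len(variants))  # 8 candidates to run through the model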
# ## Sizing
#
# Part of preprocessing is resizing. For reasons we won't get into here, images in the Caffe2 pipeline should be square. Also, to help with performance, they should be resized to a standard height and width which is usually going to be smaller than your original source. In the example below we're resizing to 256 x 256 pixels, however you might notice that the `input_height` and `input_width` are set to 224 x 224, which are then used to specify the crop. This is what several image-based models are expecting. They were trained on images sized to 224 x 224 and in order for the model to properly identify the suspect images you throw at it, these should also be 224 x 224.
#
# ** Make sure you double-check the input sizes for the model you're using!**
# In[6]:
# Model is expecting 224 x 224, so resize/crop needed.
# Here are the steps we use to preprocess the image.
# (1) Resize the image to 256*256, and crop out the center.
input_height, input_width = 224, 224
print("Model's input shape is %dx%d" % (input_height, input_width))
#print("Original image is %dx%d") % (skimage.)
img256 = skimage.transform.resize(img, (256, 256))
pyplot.figure()
pyplot.imshow(img256)
pyplot.axis('on')
pyplot.title('Resized image to 256x256')
print("New image shape:" + str(img256.shape))
# Note the resizing has distorted the image a little bit. It is important to recognize this effect during your processing as it can affect the results of your model. Flowers and animals might be ok with a little stretching or squeezing, but facial features may not.
#
# This can happen when the dimensions of the original image are not proportionally exact to your desired size. In this particular example it would have been better to just resize to 224x224 and not bother cropping. Let's try another strategy of rescaling the image and maintaining the aspect ratio.
#
# ### Rescaling
#
# If you imagine portrait images versus landscape images you'll know that there are a lot of things that can get messed up by doing a sloppy resize. Rescaling is assuming that you're locking down the aspect ratio to prevent distortion in the image. In this case, we'll scale down the image to the shortest side that matches with the model's input size.
#
# In our example here, the model size is 224 x 224. As you look at your monitor in 1920x1080, it is longer in width than height and if you shrunk it down to 224, you'd run out of height before you ran out of width, so...
#
# - Landscape: limit resize by the height
# - Portrait: limit resize by the width
# In[7]:
print("Original image shape:" + str(img.shape) + " and remember it should be in H, W, C!")
print("Model's input shape is %dx%d" % (input_height, input_width))
aspect = img.shape[1]/float(img.shape[0])
print("Original aspect ratio: " + str(aspect))
if aspect > 1:
    # landscape orientation - wide image
    res = int(aspect * input_height)
    imgScaled = skimage.transform.resize(img, (input_height, res))
if aspect < 1:
    # portrait orientation - tall image
    res = int(input_width / aspect)
    imgScaled = skimage.transform.resize(img, (res, input_width))
if aspect == 1:
    imgScaled = skimage.transform.resize(img, (input_height, input_width))
pyplot.figure()
pyplot.imshow(imgScaled)
pyplot.axis('on')
pyplot.title('Rescaled image')
print("New image shape:" + str(imgScaled.shape) + " in HWC")
# At this point only one dimension is set to what the model's input requires. We still need to crop one side to make a square.
#
# ### Cropping
#
# There are a variety of strategies we could utilize. In fact, we could backpedal and decide to do a center crop. So instead of scaling down to the smallest we could get on at least one side, we take a chunk out of the middle. If we had done that without scaling we would have ended up with just part of a flower petal, so we still needed some resizing of the image.
#
# Below we'll try a few strategies for cropping:
#
# 1. Just grab the exact dimensions you need from the middle!
# 2. Resize to a square that's pretty close then grab from the middle.
# 3. Use the rescaled image and grab the middle.
# In[8]:
# Compare the images and cropping strategies
# Try a center crop on the original for giggles
print("Original image shape:" + str(img.shape) + " and remember it should be in H, W, C!")
def crop_center(img, cropx, cropy):
    y, x, c = img.shape
    startx = x // 2 - (cropx // 2)
    starty = y // 2 - (cropy // 2)
    return img[starty:starty + cropy, startx:startx + cropx]
# yes, the function above should match resize and take a tuple...
pyplot.figure()
# Original image
imgCenter = crop_center(img,224,224)
pyplot.subplot(1,3,1)
pyplot.imshow(imgCenter)
pyplot.axis('on')
pyplot.title('Original')
# Now let's see what this does on the distorted image
img256Center = crop_center(img256,224,224)
pyplot.subplot(1,3,2)
pyplot.imshow(img256Center)
pyplot.axis('on')
pyplot.title('Squeezed')
# Scaled image
imgScaledCenter = crop_center(imgScaled,224,224)
pyplot.subplot(1,3,3)
pyplot.imshow(imgScaledCenter)
pyplot.axis('on')
pyplot.title('Scaled')
# As you can see, that didn't work out so well, except maybe for the last one. The middle one may be just fine too, but you won't know until you try the model on a lot of candidate images.
# At this point we can look at the difference we have, split it in half, and remove some pixels from each side. This has a drawback, however: an off-center subject of interest would get clipped.
# If you've run this tutorial a few times now and are on Round 3, you'll notice a pretty big problem. You're missing astronauts! You can still see the issue with the flower from Round 2 as well. Things go missing after the cropping, and that could cause you problems. Think of it this way: if you don't know how the model you're using was prepared, then you don't know how to conform your images, so take care to test results! If the model was trained on images with many different aspect ratios that were simply squeezed to conform to a square, then there's a good chance that, over time and lots of samples, it "learned" what things look like squeezed and can still make a match. However, if you're looking for details like facial features and landmarks, or really nuanced elements in any image, this could be dangerous and error-prone.
#
# #### Further Strategies?
#
# Another strategy would be to rescale to the best size you can, with real data, but then pad the rest of the image with information that you can safely ignore in your model. We'll save that for another tutorial though since you've been through enough here!
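# While the full padding tutorial is out of scope here, the core idea can be sketched
# in a few lines. `pad_to_square` is our own hypothetical helper (not part of any
# library), and the assumption is that the `fill` value is something the downstream
# model can safely ignore.

```python
import numpy as np

def pad_to_square(img, fill=0.0):
    """Pad the shorter side of an HWC image with a constant value so the
    result is square. `fill` is an assumed safe 'ignore' value for the model."""
    h, w, c = img.shape
    size = max(h, w)
    pad_h, pad_w = size - h, size - w
    return np.pad(img,
                  ((pad_h // 2, pad_h - pad_h // 2),
                   (pad_w // 2, pad_w - pad_w // 2),
                   (0, 0)),
                  mode='constant', constant_values=fill)

# a 100x224 stand-in image becomes 224x224 with padded rows top and bottom
square = pad_to_square(np.ones((100, 224, 3), dtype=np.float32))
print(square.shape)  # (224, 224, 3)
```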
#
# ### Upscaling
#
# What do you do when the images you want to run are "tiny"? In our example we've been prepping for Input Images with the spec of 224x224. Consider this 128x128 image below.
# 
# Now we're not talking about super-resolution or the CSI-effect where we can take blurry ATM photos and identify the tattoo on a perp's neck. Although, there are [some advances](https://github.com/david-gpu/srez) along these lines that deep learning has provided, and if you're reading this in time (before 3/1/17), go [check this out](https://developer.nvidia.com/zoom-enhance-magic-image-upscaling-using-deep-learning). What we want to do is simple, but, like cropping, it does have a variety of strategies you should consider.
#
# The most basic approach is going from a small square to a bigger square and using the defaults skimage provides for you. This `resize` method defaults the interpolation `order` parameter to 1, which happens to be bi-linear. It's worth mentioning because these might be the fine-tuning knobs you need later to fix problems, such as strange visual artifacts that can be introduced by upscaling.
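# A quick way to see those knobs in action: the three calls below upscale the same
# random image with different spline `order` values in `skimage.transform.resize`.
# This is just an illustrative sketch (the random input stands in for a real image);
# the orders shown are skimage's standard ones.

```python
import numpy as np
import skimage.transform

img = np.random.rand(128, 128, 3).astype(np.float32)

# order=0: nearest neighbor -- blocky, but never invents new pixel values
nearest = skimage.transform.resize(img, (224, 224), order=0)
# order=1: bi-linear, the default -- smooth and cheap
bilinear = skimage.transform.resize(img, (224, 224))
# order=3: bi-cubic -- smoother still, but can overshoot and produce
# values slightly outside the original range
bicubic = skimage.transform.resize(img, (224, 224), order=3)

print(nearest.shape, bilinear.shape, bicubic.shape)
```

# If an upscaled image shows ringing or halo artifacts, dropping from order=3
# to order=1 or order=0 is the first knob to try.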
# In[9]:
imgTiny = "images/Cellsx128.png"
imgTiny = skimage.img_as_float(skimage.io.imread(imgTiny)).astype(np.float32)
print("Original image shape: ", imgTiny.shape)
imgTiny224 = skimage.transform.resize(imgTiny, (224, 224))
print("Upscaled image shape: ", imgTiny224.shape)
# Plot original
pyplot.figure()
pyplot.subplot(1, 2, 1)
pyplot.imshow(imgTiny)
pyplot.axis('on')
pyplot.title('128x128')
# Plot upscaled
pyplot.subplot(1, 2, 2)
pyplot.imshow(imgTiny224)
pyplot.axis('on')
pyplot.title('224x224')
# Great, it worked! You can see from the shape outputs that you started with (128, 128, 4) and received (224, 224, 4). Wait a minute! 4? In every example so far the last value in shape has been 3! When we used a png file we entered a new reality, one where transparency is possible. This 4th value describes opacity, or transparency, depending on whether you're a glass-half-empty type. Anyway, we can handle it just fine, but keep an eye on that number.
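# If your model expects exactly 3 channels, the simplest way to handle a png's
# alpha channel is to slice it off. This is a sketch, assuming the alpha channel
# carries no information your model needs (a random array stands in for a loaded png).

```python
import numpy as np

rgba = np.random.rand(128, 128, 4).astype(np.float32)  # stand-in for a loaded png

# Keep only the first 3 channels (R, G, B) and drop alpha.
rgb = rgba[:, :, :3]
print(rgb.shape)  # (128, 128, 3)
```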
#
# It's appropriate to put this discussion towards the end, before we do further manipulations to the image, its data order, and its overall payload. You can really mess up your data and the image if you do a simple resample on the image in its current format. Remember that it is currently a cube of data and that there's more going on in there than just Red, Green, and Blue (and opacity). Depending on when you decide to resize, you'll have to account for that extra data.
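# As an aside on the axis shuffling used throughout this tutorial: the pair of
# `swapaxes` calls is equivalent to a single `np.transpose`. A small sketch to
# convince yourself (the random RGBA array is just a stand-in):

```python
import numpy as np

hwc = np.random.rand(128, 128, 4).astype(np.float32)

# Two swapaxes calls: HWC -> HCW -> CHW
chw_swap = hwc.swapaxes(1, 2).swapaxes(0, 1)
# One transpose does the same reordering in a single step
chw_transpose = np.transpose(hwc, (2, 0, 1))

print(chw_swap.shape)                           # (4, 128, 128)
print(np.array_equal(chw_swap, chw_transpose))  # True
```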
#
# Let's break stuff! Try upscaling the image after you've switched the image to CHW.
# In[10]:
imgTiny = "images/Cellsx128.png"
imgTiny = skimage.img_as_float(skimage.io.imread(imgTiny)).astype(np.float32)
print("Image shape before HWC --> CHW conversion: ", imgTiny.shape)
# swapping the axes to go from HWC to CHW
# uncomment the next line and run this block!
# imgTiny = imgTiny.swapaxes(1, 2).swapaxes(0, 1)
print("Image shape after HWC --> CHW conversion: ", imgTiny.shape)
imgTiny224 = skimage.transform.resize(imgTiny, (224, 224))
print("Image shape after resize: ", imgTiny224.shape)
# we know this is going to go wrong, so...
try:
    # Plot original
    pyplot.figure()
    pyplot.subplot(1, 2, 1)
    pyplot.imshow(imgTiny)
    pyplot.axis('on')
    pyplot.title('128x128')
except:
    print("Here come bad things!")
    # hands up if you want to see the error (uncomment next line)
    #raise
# Epic fail, right? If you let the code block above swap the axes, then resize the image, you will see this output:
#
# `Image shape after resize: (224, 224, 128)`
#
# Now you have 128 where you should still have 4. Oops. Let's revert in the code block below and try something else. We'll show an example where the image is smaller than your Input specification, and not square. Like maybe it came from a new microscope that can only take imagery in rectangular bands.
# In[11]:
imgTiny = "images/Cellsx128.png"
imgTiny = skimage.img_as_float(skimage.io.imread(imgTiny)).astype(np.float32)
imgTinySlice = crop_center(imgTiny, 128, 56)
# Plot original
pyplot.figure()
pyplot.subplot(2, 1, 1)
pyplot.imshow(imgTiny)
pyplot.axis('on')
pyplot.title('Original')
# Plot slice
pyplot.figure()
pyplot.subplot(2, 2, 1)
pyplot.imshow(imgTinySlice)
pyplot.axis('on')
pyplot.title('128x56')
# Upscale?
print("Slice image shape: ", imgTinySlice.shape)
imgTiny224 = skimage.transform.resize(imgTinySlice, (224, 224))
print("Upscaled slice image shape: ", imgTiny224.shape)
# Plot upscaled
pyplot.subplot(2, 2, 2)
pyplot.imshow(imgTiny224)
pyplot.axis('on')
pyplot.title('224x224')
# Alright, this was a bit of a stretch for an example of how upscaling can fail. Get it? Stretch? This could be a life-or-death kind of failure though. What if normal cells are circular and diseased cells are elongated and bent? Sickle cell anemia for example:
# 
# In this situation, what do you do? It really depends on the model and how it was trained. In some cases it may be ok to pad the rest of the image with white, or maybe black, or maybe noise, or maybe even use png and transparencies and set a mask for the images so the model ignores transparent areas. See how much fun you can have figuring this out and you get to make medical breakthroughs too!
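# One concrete way to avoid the stretch distortion above is to combine the ideas
# already covered: rescale so the longer side hits the target, then pad the rest.
# `upscale_preserve_aspect` is our own hypothetical helper, and the `fill` value
# is an assumed 'ignore' value -- whether that's safe depends on how your model
# was trained.

```python
import numpy as np
import skimage.transform

def upscale_preserve_aspect(img, target=224, fill=0.0):
    """Scale an HWC image so its longer side equals `target`, then pad
    the shorter side with `fill`. Cells stay round instead of stretched."""
    h, w = img.shape[:2]
    scale = target / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    resized = skimage.transform.resize(img, (new_h, new_w))
    pad_h, pad_w = target - new_h, target - new_w
    return np.pad(resized,
                  ((pad_h // 2, pad_h - pad_h // 2),
                   (pad_w // 2, pad_w - pad_w // 2),
                   (0, 0)),
                  mode='constant', constant_values=fill)

# a 56x128 band, like the microscope slice above, becomes 224x224 unstretched
out = upscale_preserve_aspect(np.ones((56, 128, 3), dtype=np.float32))
print(out.shape)  # (224, 224, 3)
```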
# Let's move on to the last step, which we've already mentioned: adjusting the image input to be in BGR order. There's also another feature Caffe2 uses, a `batch term`. We've already talked about CHW. This is the N, for number of images, in NCHW.
#
# ### Final Preprocessing and the Batch Term
#
# In the last steps below we are going to switch the image's color order to BGR, reorder the axes for GPU processing (HWC --> CHW), and then add a fourth dimension (N) to the image to track the number of images. In theory, you can just keep adding dimensions to your data, but this one is required for Caffe2 as it tells Caffe2 how many images to expect in this batch. We set it to one (1) to indicate there's only one image going into Caffe2 in this batch. Note that in the final output when we check `img.shape` the order is quite different. We've added N for number of images, and changed the order like so: `N, C, H, W`
# In[12]:
# this next line helps with being able to rerun this section
# if you want to try the outputs of the different crop strategies above
# swap out imgScaled with img (original) or img256 (squeezed)
imgCropped = crop_center(imgScaled,224,224)
print("Image shape before HWC --> CHW conversion: ", imgCropped.shape)
# (1) Since Caffe expects CHW order and the current image is HWC,
# we will need to change the order.
imgCropped = imgCropped.swapaxes(1, 2).swapaxes(0, 1)
print("Image shape after HWC --> CHW conversion: ", imgCropped.shape)
pyplot.figure()
for i in range(3):
    # For some reason, pyplot subplot follows Matlab's indexing
    # convention (starting with 1). Well, we'll just follow it...
    pyplot.subplot(1, 3, i+1)
    pyplot.imshow(imgCropped[i])
    pyplot.axis('off')
    pyplot.title('RGB channel %d' % (i+1))
# (2) Caffe uses a BGR order due to legacy OpenCV issues, so we
# will change RGB to BGR.
imgCropped = imgCropped[(2, 1, 0), :, :]
print("Image shape after BGR conversion: ", imgCropped.shape)
# for discussion later - not helpful at this point
# (3) We will subtract the mean image. Note that skimage loads
# image in the [0, 1] range so we multiply the pixel values
# first to get them into [0, 255].
#mean_file = os.path.join(CAFFE_ROOT, 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
#mean = np.load(mean_file).mean(1).mean(1)
#img = img * 255 - mean[:, np.newaxis, np.newaxis]
pyplot.figure()
for i in range(3):
    # For some reason, pyplot subplot follows Matlab's indexing
    # convention (starting with 1). Well, we'll just follow it...
    pyplot.subplot(1, 3, i+1)
    pyplot.imshow(imgCropped[i])
    pyplot.axis('off')
    pyplot.title('BGR channel %d' % (i+1))
# (4) Finally, since Caffe2 expects the input to have a batch term
# so we can feed in multiple images, we will simply prepend a
# batch dimension of size 1. Also, we will make sure the image is
# of type np.float32.
imgCropped = imgCropped[np.newaxis, :, :, :].astype(np.float32)
print('Final input shape is:', imgCropped.shape)
# In the output above you should note these alterations:
# 1. Before and after of the HWC to CHW change. The 3, which is the number of color channels, moved to the beginning.
# 2. In the pictures above you can see that the color order was switched too. RGB became BGR. Blue and Red switched places.
# 3. The final input shape: the last change was to add the batch field to the beginning, so that now you have (1, 3, 224, 224) for:
# - 1 image in the batch,
# - 3 color channels (in BGR),
# - 224 height,
# - 224 width.
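# To tie the whole pipeline together, here is a sketch that condenses the steps
# from this tutorial into one helper. `preprocess` is our own name (not a Caffe2
# API), and it assumes the input is an HWC RGB float image as skimage loads it,
# using the center-crop strategy from above.

```python
import numpy as np
import skimage.transform

def preprocess(img, input_height=224, input_width=224):
    """Rescale, center-crop, HWC -> CHW, RGB -> BGR, then add the batch term."""
    aspect = img.shape[1] / float(img.shape[0])
    if aspect > 1:
        # landscape: fix the height, let the width overshoot
        res = int(aspect * input_height)
        img = skimage.transform.resize(img, (input_height, res))
    elif aspect < 1:
        # portrait: fix the width, let the height overshoot
        res = int(input_width / aspect)
        img = skimage.transform.resize(img, (res, input_width))
    else:
        img = skimage.transform.resize(img, (input_height, input_width))
    # center crop the overshooting side down to the target size
    y, x = img.shape[:2]
    startx = x // 2 - input_width // 2
    starty = y // 2 - input_height // 2
    img = img[starty:starty + input_height, startx:startx + input_width]
    img = img.swapaxes(1, 2).swapaxes(0, 1)   # HWC -> CHW
    img = img[(2, 1, 0), :, :]                # RGB -> BGR
    return img[np.newaxis, :, :, :].astype(np.float32)  # prepend N

batch = preprocess(np.random.rand(360, 480, 3).astype(np.float32))
print(batch.shape)  # (1, 3, 224, 224)
```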