# === tiddlyweb/filters/limit.py (tiddlyweb/tiddlyweb, BSD-3-Clause) ===

"""
A :py:mod:`filter <tiddlyweb.filters>` type to limit a group of entities
using a syntax similar to SQL Limit::

    limit=<index>,<count>
    limit=<count>
"""
import itertools


def limit_parse(count='0'):
    """
    Parse the argument of a ``limit`` :py:mod:`filter <tiddlyweb.filters>`
    for a count and index argument, return a function which does the limiting.

    Exceptions while parsing are passed up the stack.
    """
    index = '0'
    if ',' in count:
        index, count = count.split(',', 1)
    index = int(index)
    count = int(count)

    def limiter(entities, indexable=False, environ=None):
        return limit(entities, index=index, count=count)
    return limiter


def limit(entities, count=0, index=0):
    """
    Make a slice of a list of entities based on a count and index.
    """
    return itertools.islice(entities, index, index + count)
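A quick sketch of how this filter behaves in practice. The parser is reproduced standalone here (inlining `limit` into the closure) so the example runs without tiddlyweb installed:

```python
import itertools

def limit_parse(count='0'):
    # Mirror of the parser above: accepts "index,count" or just "count".
    index = '0'
    if ',' in count:
        index, count = count.split(',', 1)
    index, count = int(index), int(count)

    def limiter(entities, indexable=False, environ=None):
        return itertools.islice(entities, index, index + count)
    return limiter

entities = ['a', 'b', 'c', 'd', 'e', 'f']
# "2,3" means: skip the first 2 entities, then take 3.
print(list(limit_parse('2,3')(entities)))  # ['c', 'd', 'e']
# A bare count takes from the start.
print(list(limit_parse('2')(entities)))    # ['a', 'b']
```

Because `islice` works on any iterable, the limiter can consume generators without materializing them first.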
# === tdx/abc.py (TrainerDex/DiscordBot, Apache-2.0) ===

from abc import ABC
from typing import Dict
from redbot.core import Config
from redbot.core.bot import Red
from trainerdex.client import Client
class MixinMeta(ABC):
"""
Base class for well behaved type hint detection with composite class.
Basically, to keep developers sane when not all attributes are defined in each mixin.
"""
def __init__(self, *_args):
self.bot: Red
self.config: Config
self.client: Client
self.emoji: Dict
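The trick in `MixinMeta` is that `__init__` contains only bare annotations, never assignments, so type checkers see the attributes on every mixin while the concrete class stays responsible for actually setting them. A minimal, dependency-free sketch of the same pattern (the `Database` class is a stand-in for real dependencies such as `Red` or `Config`):

```python
from abc import ABC

class Database:  # stand-in for a real dependency
    def fetch(self) -> str:
        return "data"

class MixinMeta(ABC):
    """Declare attribute types once so every mixin gets accurate hints."""
    def __init__(self, *_args):
        self.db: Database  # annotation only; no value is assigned here

class StorageMixin(MixinMeta):
    def load(self) -> str:
        # Type checkers know self.db is a Database thanks to MixinMeta.
        return self.db.fetch()

class Cog(StorageMixin):
    def __init__(self):
        self.db = Database()

print(Cog().load())  # data
```

At runtime the bare annotation is a no-op, so `MixinMeta.__init__` costs nothing; it exists purely for static analysis.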
# === Python/1238.py (ArikBartzadok/beecrowd-challenges, MIT) ===

def execucoes():
    return int(input())

def entradas():
    return input().split(' ')

def imprimir(v):
    print(v)

def tamanho_a(a):
    return len(a)

def tamanho_b(b):
    return len(b)

def diferenca_tamanhos(a, b):
    return len(a) <= len(b)
def analisar(e, i, s):
    a, b = e
    if diferenca_tamanhos(a, b):
        for i in range(tamanho_a(a)):
            s += a[i]
            s += b[i]
        s += b[tamanho_a(a):]
    else:
        for i in range(tamanho_b(b)):
            s += a[i]
            s += b[i]
        s += a[tamanho_b(b):]
    return s
def combinador():
    n = execucoes()
    for i in range(n):
        imprimir(analisar(entradas(), i, ''))

combinador()
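Stripped of the I/O scaffolding, `analisar` interleaves the two strings character by character and then appends the tail of the longer one. The whole transformation reduces to a single function (the name `combinar` is mine, chosen to match the file's Portuguese identifiers):

```python
def combinar(a: str, b: str) -> str:
    # Interleave pairwise up to the shorter length, then append the leftover
    # tail; only one of a[n:] and b[n:] can be non-empty.
    n = min(len(a), len(b))
    interleaved = ''.join(a[i] + b[i] for i in range(n))
    return interleaved + a[n:] + b[n:]

print(combinar('ab', 'xyz'))  # axbyz
print(combinar('abc', 'xy'))  # axbyc
```

This collapses the two symmetric branches of `analisar` into one expression, since both branches do the same thing once you take `min(len(a), len(b))` as the loop bound.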
# === riddle.py (robertlit/monty-hall-problem, MIT) ===

import random
goat1 = random.randint(1, 3)
goat2 = random.randint(1, 3)
while goat1 == goat2:
    goat2 = random.randint(1, 3)

success = 0
tries = 1_000_000
for _ in range(tries):
    options = [1, 2, 3]
    choice = random.randint(1, 3)
    options.remove(choice)
    if choice == goat1:
        options.remove(goat2)
    else:
        options.remove(goat1)
    choice = options[0]
    if choice != goat1 and choice != goat2:
        success = success + 1

print(success / tries)
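The simulation above always switches doors, so it converges on roughly 2/3. A compact variant (function and parameter names are mine) makes the switch-vs-stay comparison explicit:

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        car = random.randint(1, 3)
        pick = random.randint(1, 3)
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in (1, 2, 3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in (1, 2, 3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(round(monty_hall(switch=True), 2))   # about 0.67
print(round(monty_hall(switch=False), 2))  # about 0.33
```

Staying wins only when the first pick was the car (probability 1/3), so switching wins the complementary 2/3 of the time.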
# === tests/integration/test_interface.py (Synodic-Software/CPPython, MIT) ===

"""
Test the integrations related to the internal interface implementation and the 'Interface' interface itself
"""
import pytest
from cppython_core.schema import InterfaceConfiguration
from pytest_cppython.plugin import InterfaceIntegrationTests
from cppython.console import ConsoleInterface
class TestCLIInterface(InterfaceIntegrationTests):
"""
The tests for our CLI interface
"""
@pytest.fixture(name="interface")
def fixture_interface(self):
"""
Override of the plugin provided interface fixture.
Returns:
ConsoleInterface -- The Interface object to use for the CPPython defined tests
"""
configuration = InterfaceConfiguration()
return ConsoleInterface(configuration)
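The pattern here is that a shared base class carries the test methods while each subclass supplies the `interface` fixture they operate on. It can be sketched without the CPPython dependencies (all class names below are stand-ins, not the real API):

```python
import pytest

class Interface:
    def name(self) -> str:
        raise NotImplementedError

class ConsoleInterface(Interface):
    def name(self) -> str:
        return "console"

class InterfaceIntegrationTests:
    # Shared tests; concrete subclasses provide the "interface" fixture.
    def test_has_name(self, interface):
        assert isinstance(interface.name(), str)

class TestConsole(InterfaceIntegrationTests):
    @pytest.fixture(name="interface")
    def fixture_interface(self):
        return ConsoleInterface()
```

Running pytest collects `test_has_name` on `TestConsole`, injecting the overridden fixture, so each new interface implementation gets the whole shared suite for free.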
# === h/exceptions.py (ssin122/test-h, MIT) ===

# -*- coding: utf-8 -*-
"""Exceptions raised by the h application."""
from __future__ import unicode_literals
from h.i18n import TranslationString as _
# N.B. This class **only** covers exceptions thrown by API code provided by
# the h package. memex code has its own base APIError class.
class APIError(Exception):
"""Base exception for problems handling API requests."""
def __init__(self, message, status_code=500):
self.status_code = status_code
super(APIError, self).__init__(message)
class ClientUnauthorized(APIError):
"""
Exception raised if the client credentials provided for an API request
were missing or invalid.
"""
def __init__(self):
message = _('Client credentials are invalid.')
super(ClientUnauthorized, self).__init__(message, status_code=403)
class OAuthTokenError(APIError):
"""
Exception raised when an OAuth token request failed.
This specifically handles OAuth errors which have a type (``message``) and
a description (``description``).
"""
def __init__(self, message, type_, status_code=400):
self.type = type_
super(OAuthTokenError, self).__init__(message, status_code=status_code)
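The payoff of this hierarchy is that request handlers can catch the single base class and read a status code off any subclass. A self-contained sketch of that usage (mirroring the classes above, minus the i18n helper):

```python
class APIError(Exception):
    """Base exception carrying an HTTP status code."""
    def __init__(self, message, status_code=500):
        self.status_code = status_code
        super().__init__(message)

class OAuthTokenError(APIError):
    def __init__(self, message, type_, status_code=400):
        self.type = type_
        super().__init__(message, status_code=status_code)

try:
    raise OAuthTokenError("token expired", type_="invalid_grant")
except APIError as err:  # subclasses are caught via the shared base
    print(err.status_code, err.type, err)  # 400 invalid_grant token expired
```

A single `except APIError` clause in the request dispatcher is then enough to turn any API failure into the right HTTP response.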
# === set.py (QUDUSKUNLE/Python-Flask, MIT) ===

"""
How to set up a virtual environment:

    pip install virtualenv
    pip install virtualenvwrapper

    # export WORKON_HOME=~/Envs
    source /usr/local/bin/virtualenvwrapper.sh

To activate a virtualenv and set up Flask:

1. mkvirtualenv my-venv
2. workon my-venv
3. pip install Flask
4. pip freeze
5. pip freeze > requirements.txt  (put all dependencies in a file)
6. run.py: entry point of the application
7. Relational database management systems:
   SQLite, MySQL, PostgreSQL.
   SQLAlchemy is an Object Relational Mapper (ORM),
   which means that it connects the objects of an application to tables in a
   relational database management system.
"""
# === test/test_generate_data_coassembly.py (Badboy-16/SemiBin, MIT) ===

from SemiBin.main import generate_data_single
import os
import pytest
import logging
import pandas as pd
def test_generate_data_coassembly():
    logger = logging.getLogger('SemiBin')
    logger.setLevel(logging.INFO)
    sh = logging.StreamHandler()
    sh.setFormatter(logging.Formatter('%(asctime)s - %(message)s'))
    logger.addHandler(sh)

    os.makedirs('output_coassembly', exist_ok=True)
    generate_data_single(bams=['test/coassembly_sample_data/input.sorted1.bam',
                               'test/coassembly_sample_data/input.sorted2.bam',
                               'test/coassembly_sample_data/input.sorted3.bam',
                               'test/coassembly_sample_data/input.sorted4.bam',
                               'test/coassembly_sample_data/input.sorted5.bam'],
                         num_process=1,
                         logger=logger,
                         output='output_coassembly',
                         handle='test/coassembly_sample_data/input.fasta',
                         binned_short=False,
                         must_link_threshold=4000)

    data = pd.read_csv('output_coassembly/data.csv', index_col=0)
    data_split = pd.read_csv('output_coassembly/data_split.csv', index_col=0)
    assert data.shape == (40, 141)
    assert data_split.shape == (80, 141)
# === instahunter.py (Araekiel/instahunter, MIT) ===

'''
instahunter.py
Author: Araekiel
Copyright: Copyright © 2019, Araekiel
License: MIT
Version: 1.6.3
'''
import click
import requests
import json
from datetime import datetime
@click.group()
def cli():
    """Made by Araekiel | v1.6.3"""


headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:55.0) Gecko/20100101 Firefox/55.0"}
@click.command()
@click.option('-tag', prompt="Hashtag", help="The hashtag you want to search the posts with")
@click.option('--post-type', default="latest", help="latest: Get latest posts | top: Get top posts")
@click.option('-create-file', default="false", help="true: Create a file with the data | false: Will not create a file, false is default")
@click.option('--file-type', default="text", help="json: Create a json file | text: Create a text file, text is default")
def getposts(tag, post_type, create_file, file_type):
    """This command will fetch latest or top public posts with a Hashtag"""
    try:
        # Creating file if required, creating array json_data to store data if the file type is json
        if create_file == "true":
            if file_type == "json":
                file = open(tag + "_posts.json", "w+")
                json_data = []
            else:
                file = open(tag + "_posts.txt", "w+", encoding="utf-8")
        counter = 0
        api_url = "https://www.instagram.com/explore/tags/%s/?__a=1" % tag
        req = requests.get(url=api_url, headers=headers)
        data = req.json()
        if post_type == "top":
            edges = data["graphql"]["hashtag"]["edge_hashtag_to_top_posts"]["edges"]
        else:
            edges = data["graphql"]["hashtag"]["edge_hashtag_to_media"]["edges"]
        # Looping through 'edges' in the data acquired
        for edge in edges:
            counter = counter + 1
            # Collecting necessary data from each edge
            try:
                caption = edge["node"]["edge_media_to_caption"]["edges"][0]["node"]["text"]
            except:
                caption = "No Caption"
            scraped_data = {
                "id": counter,
                "post_id": edge["node"]["id"],
                "shortcode": edge["node"]["shortcode"],
                "owner_id": edge["node"]["owner"]["id"],
                "display_url": edge["node"]["display_url"],
                "caption": caption,
                "time": str(datetime.fromtimestamp(
                    edge["node"]["taken_at_timestamp"])),
                "n_likes": edge["node"]["edge_liked_by"]["count"],
                "n_comments": edge["node"]["edge_media_to_comment"]["count"],
                "is_video": edge["node"]["is_video"]
            }
            if create_file == "true":
                # If the file type is json then appending the data to json_data array instead of writing it to the file right away
                if file_type == "json":
                    json_data.append(scraped_data)
                else:
                    file.write("###############################\nID: %s \nPost ID: %s \nShortcode: %s \nOwner ID: %s \nDisplay URL: %s \nCaption: %s \nTime: %s \nNumber of likes: %s \nNumber of comments: %s \nIs Video: %s \n###############################\n\n\n\n\n" % (
                        str(counter), str(scraped_data["post_id"]), str(scraped_data["shortcode"]), str(scraped_data["owner_id"]), str(scraped_data["display_url"]), str(scraped_data["caption"]), str(scraped_data["time"]), str(scraped_data["n_likes"]), str(scraped_data["n_comments"]), str(scraped_data["is_video"])))
            else:
                click.echo("###############################\nID: %s \nPost ID: %s \nShortcode: %s \nOwner ID: %s \nDisplay URL: %s \nCaption: %s \nTime: %s \nNumber of likes: %s \nNumber of comments: %s \nIs Video: %s \n###############################\n\n\n\n\n" % (
                    counter, scraped_data["post_id"], scraped_data["shortcode"], scraped_data["owner_id"], scraped_data["display_url"], scraped_data["caption"], scraped_data["time"], scraped_data["n_likes"], scraped_data["n_comments"], scraped_data["is_video"]))
        if create_file == "true":
            # Closing the file and dumping the data before closing if the file type is json
            if file_type == "json":
                json.dump(json_data, file)
                click.echo("File Created, name: '%s_posts.json'" % tag)
            else:
                click.echo("File Created, name: '%s_posts.txt'" % tag)
            file.close()
        else:
            click.echo("Done!")
    except:
        click.echo(
            "Couldn't retrieve data, One of the following was the issue: \n1. Your query was wrong \n2. Instagram servers did not respond \n3. There is a problem with your internet connection")
@click.command()
@click.option('-username', prompt="Username", help="Username you want to search the user with")
@click.option('-create-file', default="false", help="true: Create a file with the data | false: Will not create a file, false is default")
@click.option('--file-type', default="text", help="json: Create a json file | text: Create a text file, text is default")
def getuser(username, create_file, file_type):
    """This command will fetch user data with a Username"""
    api_url = "https://www.instagram.com/%s/?__a=1" % username
    try:
        req = requests.get(url=api_url, headers=headers)
        data = req.json()
        # Collecting necessary data
        user = data["graphql"]["user"]
        if user["highlight_reel_count"] > 0:
            has_highlights = True
        else:
            has_highlights = False
        scraped_data = {
            "user_id": user["id"],
            "username": user["username"],
            "full_name": user["full_name"],
            "profile_pic_url": user["profile_pic_url_hd"],
            "bio": user["biography"],
            "n_uploads": user["edge_owner_to_timeline_media"]["count"],
            "n_followers": user["edge_followed_by"]["count"],
            "n_following": user["edge_follow"]["count"],
            "is_private": user["is_private"],
            "is_verified": user["is_verified"],
            "external_url": user["external_url"],
            "igtv_videos": user["edge_felix_video_timeline"]["count"],
            "has_highlights": has_highlights
        }
        if create_file == "true":
            if file_type == "json":
                file = open(username + "_user.json", "w+")
                json.dump(scraped_data, file)
                file.close()
                click.echo("File Created, name: '%s_user.json'" % str(username))
            else:
                file = open(username + "_user.txt", "w+", encoding="utf-8")
                file.write("User ID: %s \nUsername: %s \nFull Name: %s \nProfile Pic URL: %s \nBio: %s \nUploads: %s \nFollowers: %s \nFollowing: %s \nPrivate ID: %s \nVerified ID: %s \nExternal URL: %s \nIGTV videos: %s \nHas highlights: %s" % (
                    str(scraped_data["user_id"]), scraped_data["username"], scraped_data["full_name"], scraped_data["profile_pic_url"], scraped_data["bio"], str(scraped_data["n_uploads"]), str(scraped_data["n_followers"]), str(scraped_data["n_following"]), str(scraped_data["is_private"]), str(scraped_data["is_verified"]), scraped_data["external_url"], str(scraped_data["igtv_videos"]), str(scraped_data["has_highlights"])))
                file.close()
                click.echo("File Created, name: '%s_user.txt'" % str(username))
        else:
            click.echo("User ID: %s \nUsername: %s \nFull Name: %s \nProfile Pic URL: %s \nBio: %s \nUploads: %s \nFollowers: %s \nFollowing: %s \nPrivate ID: %s \nVerified ID: %s \nExternal URL: %s \nIGTV videos: %s \nHas highlights: %s" % (
                str(scraped_data["user_id"]), scraped_data["username"], scraped_data["full_name"], scraped_data["profile_pic_url"], scraped_data["bio"], str(scraped_data["n_uploads"]), str(scraped_data["n_followers"]), str(scraped_data["n_following"]), str(scraped_data["is_private"]), str(scraped_data["is_verified"]), scraped_data["external_url"], str(scraped_data["igtv_videos"]), str(scraped_data["has_highlights"])))
            click.echo('Done!')
    except:
        click.echo(
            "Couldn't retrieve data, One of the following was the issue: \n1. Your query was wrong \n2. Instagram servers did not respond \n3. There is a problem with your internet connection")
@click.command()
@click.option('-username', prompt="Username", help='The username of the user you want to search the user id of')
@click.option('-create-file', default="false", help="true: Create a file with the data | false: Will not create a file, false is default")
@click.option('--file-type', default="text", help="json: Create a json file | text: Create a text file, text is default")
def getuserposts(username, create_file, file_type):
    """This command will fetch recent posts of a user with a Username"""
    try:
        # Creating file if required, creating array json_data to store data if the file type is json
        if create_file == "true":
            if file_type == "json":
                file = open(username + "_posts.json", "w+")
                json_data = []
            else:
                file = open(username + "_posts.txt", "w+", encoding="utf-8")
        counter = 0
        api_url = "https://www.instagram.com/%s/?__a=1" % username
        req = requests.get(url=api_url, headers=headers)
        data = req.json()
        posts = data["graphql"]["user"]["edge_owner_to_timeline_media"]["edges"]
        # Looping through posts
        for post in posts:
            counter = counter + 1
            node = post["node"]
            # Collecting necessary data
            try:
                caption = node["edge_media_to_caption"]["edges"][0]["node"]["text"]
            except:
                caption = ""
            try:
                location = node["location"]["name"]
            except:
                location = "No Location"
            scraped_data = {
                "id": counter,
                "post_id": node["id"],
                "shortcode": node["shortcode"],
                "display_url": node["display_url"],
                "height": node["dimensions"]["height"],
                "width": node["dimensions"]["width"],
                "caption": caption,
                "time": str(datetime.fromtimestamp(node["taken_at_timestamp"])),
                "n_likes": node["edge_liked_by"]["count"],
                "comments_disabled": node["comments_disabled"],
                "n_comments": node["edge_media_to_comment"]["count"],
                "location": location,
                "is_video": node["is_video"]
            }
            if create_file == "true":
                if file_type == "json":
                    # If the file type is json then appending the data to json_data array instead of writing it to the file right away
                    json_data.append(scraped_data)
                else:
                    file.write("###############################\nID: %s \nPost ID: %s \nShortcode: %s \nDisplay URL: %s \nImage Height: %s \nImage Width: %s \nCaption: %s \nTime: %s \nNumber of likes: %s \nComments Disabled: %s \nNumber of comments: %s \nLocation: %s \nIs Video: %s \n###############################\n\n\n\n\n" % (
                        str(counter), str(scraped_data["post_id"]), str(scraped_data["shortcode"]), str(scraped_data["display_url"]), str(scraped_data["height"]), str(scraped_data["width"]), str(scraped_data["caption"]), str(scraped_data["time"]), str(scraped_data["n_likes"]), str(scraped_data["comments_disabled"]), str(scraped_data["n_comments"]), str(scraped_data["location"]), str(scraped_data["is_video"])))
            else:
                click.echo("###############################\nID: %s \nPost ID: %s \nShortcode: %s \nDisplay URL: %s \nImage Height: %s \nImage Width: %s \nCaption: %s \nTime: %s \nNumber of likes: %s \nComments Disabled: %s \nNumber of comments: %s \nLocation: %s \nIs Video: %s \n###############################\n\n\n\n\n" % (
                    str(counter), str(scraped_data["post_id"]), str(scraped_data["shortcode"]), str(scraped_data["display_url"]), str(scraped_data["height"]), str(scraped_data["width"]), str(scraped_data["caption"]), str(scraped_data["time"]), str(scraped_data["n_likes"]), str(scraped_data["comments_disabled"]), str(scraped_data["n_comments"]), str(scraped_data["location"]), str(scraped_data["is_video"])))
        if create_file == "true":
            # Closing the file and dumping the data before closing if the file type is json
            if file_type == "json":
                json.dump(json_data, file)
                click.echo("File Created, name: '%s_posts.json'" % username)
            else:
                click.echo("File Created, name: '%s_posts.txt'" % username)
            file.close()
        else:
            click.echo("Done!")
    except:
        click.echo(
            "Couldn't retrieve data, One of the following was the issue: \n1. Your query was wrong \n2. Instagram servers did not respond \n3. There is a problem with your internet connection")
@click.command()
@click.option('-query', prompt="Query", help="The term you want to search users with")
@click.option('-create-file', default="false", help="true: Create a file with the data | false: Will not create a file, false is default")
@click.option('--file-type', default="text", help="json: Create a json file | text: Create a text file, text is default")
def search(query, create_file, file_type):
    """This command searches for users on instagram"""
    try:
        if create_file == "true":
            if file_type == "json":
                file = open(query + "_users.json", "w+")
                json_data = []
            else:
                file = open(query + "_users.txt", "w+", encoding="utf-8")
        counter = 0
        api_url = "https://www.instagram.com/web/search/topsearch/?query=%s" % query
        req = requests.get(api_url, headers=headers)
        data = req.json()
        users = data["users"]
        for user in users:
            counter = counter + 1
            scraped_data = {
                "id": counter,
                "user_id": user["user"]["pk"],
                "username": user["user"]["username"],
                "full_name": user["user"]["full_name"],
                "profile_pic_url": user["user"]["profile_pic_url"],
                "is_private": user["user"]["is_private"],
                "is_verified": user["user"]["is_verified"],
            }
            if create_file == "true":
                # If the file type is json then appending the data to json_data array instead of writing it to the file right away
                if file_type == "json":
                    json_data.append(scraped_data)
                else:
                    file.write("###############################\nID: %s \nUser ID: %s \nUsername: %s \nFull Name: %s \nProfile Pic URL: %s \nPrivate ID: %s \nVerified ID: %s \n###############################\n\n\n\n\n" % (str(counter), str(
                        scraped_data["user_id"]), str(scraped_data["username"]), str(scraped_data["full_name"]), str(scraped_data["profile_pic_url"]), str(scraped_data["is_private"]), str(scraped_data["is_verified"])))
            else:
                click.echo("###############################\nID: %s \nUser ID: %s \nUsername: %s \nFull Name: %s \nProfile Pic URL: %s \nPrivate ID: %s \nVerified ID: %s \n###############################\n\n\n\n\n" % (str(counter), str(
                    scraped_data["user_id"]), str(scraped_data["username"]), str(scraped_data["full_name"]), str(scraped_data["profile_pic_url"]), str(scraped_data["is_private"]), str(scraped_data["is_verified"])))
        if create_file == "true":
            # Closing the file and dumping the data before closing if the file type is json
            if file_type == "json":
                json.dump(json_data, file)
                click.echo("File Created, name: '%s_users.json'" % query)
            else:
                click.echo("File Created, name: '%s_users.txt'" % query)
            file.close()
        else:
            click.echo("Done!")
    except:
        click.echo(
            "Couldn't retrieve data, One of the following was the issue: \n1. Your query was wrong \n2. Instagram servers did not respond \n3. There is a problem with your internet connection")
cli.add_command(getposts)
cli.add_command(getuser)
cli.add_command(getuserposts)
cli.add_command(search)
if __name__ == "__main__":
cli()
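The extraction loop in `search` flattens Instagram's topsearch JSON into one record per user. A minimal offline sketch of that mapping, run against a mocked response instead of a live request (the field names are taken from the code above; the sample values are invented):

```python
# Mocked topsearch-style response; shape mirrors data["users"] above.
mock_response = {
    "users": [
        {"user": {"pk": 1, "username": "nasa", "full_name": "NASA",
                  "profile_pic_url": "https://example.com/p.jpg",
                  "is_private": False, "is_verified": True}},
    ]
}

scraped = []
for counter, user in enumerate(mock_response["users"], start=1):
    scraped.append({
        "id": counter,
        "user_id": user["user"]["pk"],
        "username": user["user"]["username"],
        "full_name": user["user"]["full_name"],
        "profile_pic_url": user["user"]["profile_pic_url"],
        "is_private": user["user"]["is_private"],
        "is_verified": user["user"]["is_verified"],
    })

print(scraped[0]["username"])  # -> nasa
```

Using `enumerate(..., start=1)` replaces the manual `counter = counter + 1` bookkeeping.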
fe60c7c64b76bb62e7927a82cb0d30249ff0793b | 1,840 | py | Python | src/main.py | Lidenbrock-ed/challenge-prework-backend-python | d2f46a5cf9ad649de90d4194d115cd9492eb583d | ["MIT"]
# Resolve the problem!!
import string
import random
SYMBOLS = list('!"#$%&\'()*+,-./:;?@[]^_`{|}~')
def generate_password():
    # Start coding here
    # Build the character pool from the stdlib constants (the hand-written
    # lists in the original were missing the letter 'w')
    letters_min = list(string.ascii_lowercase)
    letters_may = list(string.ascii_uppercase)
    numbers = list(string.digits)
    safe_password = letters_min + letters_may + numbers + SYMBOLS
    final_password = []
    for i in range(15):
        generate_caracter = random.choice(safe_password)
        final_password.append(generate_caracter)
    final_password = "".join(final_password)
    print(final_password)
    return final_password

def validate(password):
    if len(password) >= 8 and len(password) <= 16:
        has_lowercase_letters = False
        has_numbers = False
        has_uppercase_letters = False
        has_symbols = False
        for char in password:
            if char in string.ascii_lowercase:
                has_lowercase_letters = True
                break
        for char in password:
            if char in string.ascii_uppercase:
                has_uppercase_letters = True
                break
        for char in password:
            if char in string.digits:
                has_numbers = True
                break
        for char in password:
            if char in SYMBOLS:
                has_symbols = True
                break
        if has_symbols and has_numbers and has_lowercase_letters and has_uppercase_letters:
            return True
    return False

def run():
    password = generate_password()
    if validate(password):
        print('Secure Password')
    else:
        print('Insecure Password')


if __name__ == '__main__':
    run()
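`random.choice` is fine for this exercise, but for real passwords the standard library recommends the `secrets` module, which draws from the OS CSPRNG instead of the Mersenne Twister. A sketch of the same generator on top of `secrets` (the symbol set is copied from `SYMBOLS` above):

```python
import secrets
import string

def generate_secure_password(length=15):
    # secrets.choice is suitable for security-sensitive randomness
    pool = string.ascii_letters + string.digits + '!"#$%&\'()*+,-./:;?@[]^_`{|}~'
    return "".join(secrets.choice(pool) for _ in range(length))

pw = generate_secure_password()
print(len(pw))  # -> 15
```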
fe6124434f4049e2a32ac1bce2dbe6c619c4fd73 | 222 | py | Python | pythonteste/aula08a.py | genisyskernel/cursoemvideo-python | dec301e33933388c886fe78010f38adfb24dae82 | ["MIT"] | 1 star
from math import sqrt
import emoji
num = int(input("Digite um número: "))  # "Enter a number: "
raiz = sqrt(num)
print("A raiz do número {0} é {1:.2f}.".format(num, raiz))  # "The square root of {0} is {1:.2f}."
print(emoji.emojize("Hello World! :earth_americas:", use_aliases=True))
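The same computation with the strings in English and a fixed input, showing the `:.2f` format spec rounding the root to two decimals (the value 2 is just an example):

```python
from math import sqrt

num = 2
raiz = sqrt(num)
print("The square root of {0} is {1:.2f}.".format(num, raiz))  # -> The square root of 2 is 1.41.
```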
fe6f28fa08fad0c5dcac3f523f0415850eb9e77c | 3,495 | py | Python | dcos_installer/test_cli.py | nkhanal0/dcos | fe0571b6519c86b6c33db4af42c63ab3e9087dcf | ["Apache-2.0"] | 3 stars | 720 issues | 14 forks
import pytest
import gen
from dcos_installer import cli
def test_default_arg_parser():
    parser = cli.get_argument_parser().parse_args([])
    assert parser.verbose is False
    assert parser.port == 9000
    assert parser.action == 'genconf'

def test_set_arg_parser():
    argument_parser = cli.get_argument_parser()

    def parse_args(arg_list):
        return argument_parser.parse_args(arg_list)

    parser = parse_args(['-v', '-p 12345'])
    assert parser.verbose is True
    assert parser.port == 12345

    parser = parse_args(['--web'])
    assert parser.action == 'web'

    parser = parse_args(['--genconf'])
    assert parser.action == 'genconf'

    parser = parse_args(['--preflight'])
    assert parser.action == 'preflight'

    parser = parse_args(['--postflight'])
    assert parser.action == 'postflight'

    parser = parse_args(['--deploy'])
    assert parser.action == 'deploy'

    parser = parse_args(['--validate-config'])
    assert parser.action == 'validate-config'

    parser = parse_args(['--hash-password', 'foo'])
    assert parser.password == 'foo'
    assert parser.action == 'hash-password'

    parser = parse_args(['--hash-password'])
    assert parser.password is None
    assert parser.action == 'hash-password'

    parser = parse_args(['--set-superuser-password', 'foo'])
    assert parser.password == 'foo'
    assert parser.action == 'set-superuser-password'

    parser = parse_args(['--set-superuser-password'])
    assert parser.password is None
    assert parser.action == 'set-superuser-password'

    parser = parse_args(['--generate-node-upgrade-script', 'fake'])
    assert parser.installed_cluster_version == 'fake'
    assert parser.action == 'generate-node-upgrade-script'

    # Can't do two at once
    with pytest.raises(SystemExit):
        parse_args(['--validate', '--hash-password', 'foo'])

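The behaviour these assertions pin down — several flags that all set a single `action`, with `genconf` as the default and an error when two are combined — can be sketched with a mutually exclusive `argparse` group. This is a hypothetical illustration, not the real `dcos_installer.cli` parser:

```python
import argparse

# One shared dest, several store_const flags, one mutually exclusive group.
parser = argparse.ArgumentParser()
parser.add_argument('-v', dest='verbose', action='store_true')
parser.add_argument('-p', dest='port', type=int, default=9000)
group = parser.add_mutually_exclusive_group()
for action in ('web', 'genconf', 'preflight', 'postflight', 'deploy'):
    group.add_argument('--' + action, dest='action', action='store_const',
                       const=action, default='genconf')

print(parser.parse_args(['--web']).action)  # -> web
print(parser.parse_args([]).action)         # -> genconf
```

Passing two of the grouped flags (e.g. `--web --deploy`) makes `argparse` print an error and raise `SystemExit`, which is what the `pytest.raises(SystemExit)` check above expects.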
def test_stringify_config():
    stringify = gen.stringify_configuration

    # Basic cases pass right through
    assert dict() == stringify(dict())
    assert {"foo": "bar"} == stringify({"foo": "bar"})
    assert {"a": "b", "c": "d"} == stringify({"a": "b", "c": "d"})

    # booleans are converted to lower case true / false
    assert {"a": "true"} == stringify({"a": True})
    assert {"a": "false"} == stringify({"a": False})
    assert {"a": "b", "c": "false"} == stringify({"a": "b", "c": False})

    # integers are made into strings
    assert {"a": "1"} == stringify({"a": 1})
    assert {"a": "4123"} == stringify({"a": 4123})
    assert {"a": "b", "c": "9999"} == stringify({"a": "b", "c": 9999})

    # Dict and list are converted to JSON
    assert {"a": '["b"]'} == stringify({"a": ['b']})
    assert {"a": '["b\\"a"]'} == stringify({"a": ['b"a']})
    assert {"a": '[1]'} == stringify({"a": [1]})
    assert {"a": '[1, 2, 3, 4]'} == stringify({"a": [1, 2, 3, 4]})
    assert {"a": '[true, false]'} == stringify({"a": [True, False]})
    assert {"a": '{"b": "c"}'} == stringify({"a": {"b": "c"}})
    assert {"a": '{"b": 1}'} == stringify({"a": {"b": 1}})
    assert {"a": '{"b": true}'} == stringify({"a": {"b": True}})
    assert {"a": '{"b": null}'} == stringify({"a": {"b": None}})

    # Random types produce an error.
    with pytest.raises(Exception):
        stringify({"a": set()})

    # All the handled types at once
    assert {
        "a": "b",
        "c": "true",
        "d": "1",
        "e": "[1]",
        "f": '{"g": "h"}'
    } == stringify({"a": "b", "c": True, "d": 1, "e": [1], "f": {"g": "h"}})
fe74b07194e48e39b48840554a34c0fb3e4605a4 | 13,815 | py | Python | telemetry/telemetry/testing/internal/fake_gpu_info.py | tingshao/catapult | a8fe19e0c492472a8ed5710be9077e24cc517c5c | ["BSD-3-Clause"] | 2,151 stars | 4,640 issues | 698 forks
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# This dictionary of GPU information was captured from a run of
# Telemetry on a Linux workstation with NVIDIA GPU. It helps test
# telemetry.internal.platform's GPUInfo class, and specifically the
# attributes it expects to find in the dictionary; if the code changes
# in an incompatible way, tests using this fake GPU info will begin
# failing, indicating this fake data must be updated.
#
# To regenerate it, import pdb in
# telemetry/internal/platform/gpu_info.py and add a call to
# pdb.set_trace() in GPUInfo.FromDict before the return statement.
# Print the attrs dictionary in the debugger and copy/paste the result
# on the right-hand side of this assignment. Then run:
#
# pyformat [this file name] | sed -e "s/'/'/g"
#
# and put the output into this file.
FAKE_GPU_INFO = {
'feature_status':
{
'flash_stage3d': 'enabled',
'gpu_compositing': 'enabled',
'video_decode': 'unavailable_software',
'flash_3d': 'enabled',
'webgl': 'enabled',
'video_encode': 'enabled',
'multiple_raster_threads': 'enabled_on',
'2d_canvas': 'unavailable_software',
'rasterization': 'disabled_software',
'flash_stage3d_baseline': 'enabled'
},
'aux_attributes':
{
'optimus': False,
'sandboxed': True,
'basic_info_state': 1,
'adapter_luid': 0.0,
'driver_version': '331.79',
'direct_rendering': True,
'amd_switchable': False,
'context_info_state': 1,
'process_crash_count': 0,
'pixel_shader_version': '4.40',
'gl_ws_version': '1.4',
'can_lose_context': False,
'driver_vendor': 'NVIDIA',
'max_msaa_samples': '64',
'software_rendering': False,
'gl_version': '4.4.0 NVIDIA 331.79',
'gl_ws_vendor': 'NVIDIA Corporation',
'vertex_shader_version': '4.40',
'initialization_time': 1.284043,
'gl_reset_notification_strategy': 33362,
'gl_ws_extensions':
'GLX_EXT_visual_info GLX_EXT_visual_rating GLX_SGIX_fbconfig '
'GLX_SGIX_pbuffer GLX_SGI_video_sync GLX_SGI_swap_control '
'GLX_EXT_swap_control GLX_EXT_swap_control_tear '
'GLX_EXT_texture_from_pixmap GLX_EXT_buffer_age '
'GLX_ARB_create_context GLX_ARB_create_context_profile '
'GLX_EXT_create_context_es_profile '
'GLX_EXT_create_context_es2_profile '
'GLX_ARB_create_context_robustness GLX_ARB_multisample '
'GLX_NV_float_buffer GLX_ARB_fbconfig_float GLX_NV_swap_group'
' GLX_EXT_framebuffer_sRGB GLX_NV_multisample_coverage '
'GLX_NV_copy_image GLX_NV_video_capture ',
'gl_renderer': 'Quadro 600/PCIe/SSE2',
'driver_date': '',
'gl_vendor': 'NVIDIA Corporation',
'gl_extensions':
'GL_AMD_multi_draw_indirect GL_ARB_arrays_of_arrays '
'GL_ARB_base_instance GL_ARB_blend_func_extended '
'GL_ARB_buffer_storage GL_ARB_clear_buffer_object '
'GL_ARB_clear_texture GL_ARB_color_buffer_float '
'GL_ARB_compatibility GL_ARB_compressed_texture_pixel_storage'
' GL_ARB_conservative_depth GL_ARB_compute_shader '
'GL_ARB_compute_variable_group_size GL_ARB_copy_buffer '
'GL_ARB_copy_image GL_ARB_debug_output '
'GL_ARB_depth_buffer_float GL_ARB_depth_clamp '
'GL_ARB_depth_texture GL_ARB_draw_buffers '
'GL_ARB_draw_buffers_blend GL_ARB_draw_indirect '
'GL_ARB_draw_elements_base_vertex GL_ARB_draw_instanced '
'GL_ARB_enhanced_layouts GL_ARB_ES2_compatibility '
'GL_ARB_ES3_compatibility GL_ARB_explicit_attrib_location '
'GL_ARB_explicit_uniform_location '
'GL_ARB_fragment_coord_conventions '
'GL_ARB_fragment_layer_viewport GL_ARB_fragment_program '
'GL_ARB_fragment_program_shadow GL_ARB_fragment_shader '
'GL_ARB_framebuffer_no_attachments GL_ARB_framebuffer_object '
'GL_ARB_framebuffer_sRGB GL_ARB_geometry_shader4 '
'GL_ARB_get_program_binary GL_ARB_gpu_shader5 '
'GL_ARB_gpu_shader_fp64 GL_ARB_half_float_pixel '
'GL_ARB_half_float_vertex GL_ARB_imaging '
'GL_ARB_indirect_parameters GL_ARB_instanced_arrays '
'GL_ARB_internalformat_query GL_ARB_internalformat_query2 '
'GL_ARB_invalidate_subdata GL_ARB_map_buffer_alignment '
'GL_ARB_map_buffer_range GL_ARB_multi_bind '
'GL_ARB_multi_draw_indirect GL_ARB_multisample '
'GL_ARB_multitexture GL_ARB_occlusion_query '
'GL_ARB_occlusion_query2 GL_ARB_pixel_buffer_object '
'GL_ARB_point_parameters GL_ARB_point_sprite '
'GL_ARB_program_interface_query GL_ARB_provoking_vertex '
'GL_ARB_robust_buffer_access_behavior GL_ARB_robustness '
'GL_ARB_sample_shading GL_ARB_sampler_objects '
'GL_ARB_seamless_cube_map GL_ARB_separate_shader_objects '
'GL_ARB_shader_atomic_counters GL_ARB_shader_bit_encoding '
'GL_ARB_shader_draw_parameters GL_ARB_shader_group_vote '
'GL_ARB_shader_image_load_store GL_ARB_shader_image_size '
'GL_ARB_shader_objects GL_ARB_shader_precision '
'GL_ARB_query_buffer_object '
'GL_ARB_shader_storage_buffer_object GL_ARB_shader_subroutine'
' GL_ARB_shader_texture_lod GL_ARB_shading_language_100 '
'GL_ARB_shading_language_420pack '
'GL_ARB_shading_language_include '
'GL_ARB_shading_language_packing GL_ARB_shadow '
'GL_ARB_stencil_texturing GL_ARB_sync '
'GL_ARB_tessellation_shader GL_ARB_texture_border_clamp '
'GL_ARB_texture_buffer_object '
'GL_ARB_texture_buffer_object_rgb32 '
'GL_ARB_texture_buffer_range GL_ARB_texture_compression '
'GL_ARB_texture_compression_bptc '
'GL_ARB_texture_compression_rgtc GL_ARB_texture_cube_map '
'GL_ARB_texture_cube_map_array GL_ARB_texture_env_add '
'GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar '
'GL_ARB_texture_env_dot3 GL_ARB_texture_float '
'GL_ARB_texture_gather GL_ARB_texture_mirror_clamp_to_edge '
'GL_ARB_texture_mirrored_repeat GL_ARB_texture_multisample '
'GL_ARB_texture_non_power_of_two GL_ARB_texture_query_levels '
'GL_ARB_texture_query_lod GL_ARB_texture_rectangle '
'GL_ARB_texture_rg GL_ARB_texture_rgb10_a2ui '
'GL_ARB_texture_stencil8 GL_ARB_texture_storage '
'GL_ARB_texture_storage_multisample GL_ARB_texture_swizzle '
'GL_ARB_texture_view GL_ARB_timer_query '
'GL_ARB_transform_feedback2 GL_ARB_transform_feedback3 '
'GL_ARB_transform_feedback_instanced GL_ARB_transpose_matrix '
'GL_ARB_uniform_buffer_object GL_ARB_vertex_array_bgra '
'GL_ARB_vertex_array_object GL_ARB_vertex_attrib_64bit '
'GL_ARB_vertex_attrib_binding GL_ARB_vertex_buffer_object '
'GL_ARB_vertex_program GL_ARB_vertex_shader '
'GL_ARB_vertex_type_10f_11f_11f_rev '
'GL_ARB_vertex_type_2_10_10_10_rev GL_ARB_viewport_array '
'GL_ARB_window_pos GL_ATI_draw_buffers GL_ATI_texture_float '
'GL_ATI_texture_mirror_once GL_S3_s3tc GL_EXT_texture_env_add'
' GL_EXT_abgr GL_EXT_bgra GL_EXT_bindable_uniform '
'GL_EXT_blend_color GL_EXT_blend_equation_separate '
'GL_EXT_blend_func_separate GL_EXT_blend_minmax '
'GL_EXT_blend_subtract GL_EXT_compiled_vertex_array '
'GL_EXT_Cg_shader GL_EXT_depth_bounds_test '
'GL_EXT_direct_state_access GL_EXT_draw_buffers2 '
'GL_EXT_draw_instanced GL_EXT_draw_range_elements '
'GL_EXT_fog_coord GL_EXT_framebuffer_blit '
'GL_EXT_framebuffer_multisample '
'GL_EXTX_framebuffer_mixed_formats '
'GL_EXT_framebuffer_multisample_blit_scaled '
'GL_EXT_framebuffer_object GL_EXT_framebuffer_sRGB '
'GL_EXT_geometry_shader4 GL_EXT_gpu_program_parameters '
'GL_EXT_gpu_shader4 GL_EXT_multi_draw_arrays '
'GL_EXT_packed_depth_stencil GL_EXT_packed_float '
'GL_EXT_packed_pixels GL_EXT_pixel_buffer_object '
'GL_EXT_point_parameters GL_EXT_provoking_vertex '
'GL_EXT_rescale_normal GL_EXT_secondary_color '
'GL_EXT_separate_shader_objects '
'GL_EXT_separate_specular_color '
'GL_EXT_shader_image_load_store GL_EXT_shadow_funcs '
'GL_EXT_stencil_two_side GL_EXT_stencil_wrap GL_EXT_texture3D'
' GL_EXT_texture_array GL_EXT_texture_buffer_object '
'GL_EXT_texture_compression_dxt1 '
'GL_EXT_texture_compression_latc '
'GL_EXT_texture_compression_rgtc '
'GL_EXT_texture_compression_s3tc GL_EXT_texture_cube_map '
'GL_EXT_texture_edge_clamp GL_EXT_texture_env_combine '
'GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic '
'GL_EXT_texture_integer GL_EXT_texture_lod '
'GL_EXT_texture_lod_bias GL_EXT_texture_mirror_clamp '
'GL_EXT_texture_object GL_EXT_texture_shared_exponent '
'GL_EXT_texture_sRGB GL_EXT_texture_sRGB_decode '
'GL_EXT_texture_storage GL_EXT_texture_swizzle '
'GL_EXT_timer_query GL_EXT_transform_feedback2 '
'GL_EXT_vertex_array GL_EXT_vertex_array_bgra '
'GL_EXT_vertex_attrib_64bit GL_EXT_x11_sync_object '
'GL_EXT_import_sync_object GL_IBM_rasterpos_clip '
'GL_IBM_texture_mirrored_repeat GL_KHR_debug '
'GL_KTX_buffer_region GL_NV_bindless_multi_draw_indirect '
'GL_NV_blend_equation_advanced GL_NV_blend_square '
'GL_NV_compute_program5 GL_NV_conditional_render '
'GL_NV_copy_depth_to_color GL_NV_copy_image '
'GL_NV_depth_buffer_float GL_NV_depth_clamp '
'GL_NV_draw_texture GL_NV_ES1_1_compatibility '
'GL_NV_explicit_multisample GL_NV_fence GL_NV_float_buffer '
'GL_NV_fog_distance GL_NV_fragment_program '
'GL_NV_fragment_program_option GL_NV_fragment_program2 '
'GL_NV_framebuffer_multisample_coverage '
'GL_NV_geometry_shader4 GL_NV_gpu_program4 '
'GL_NV_gpu_program4_1 GL_NV_gpu_program5 '
'GL_NV_gpu_program5_mem_extended GL_NV_gpu_program_fp64 '
'GL_NV_gpu_shader5 GL_NV_half_float GL_NV_light_max_exponent '
'GL_NV_multisample_coverage GL_NV_multisample_filter_hint '
'GL_NV_occlusion_query GL_NV_packed_depth_stencil '
'GL_NV_parameter_buffer_object GL_NV_parameter_buffer_object2'
' GL_NV_path_rendering GL_NV_pixel_data_range '
'GL_NV_point_sprite GL_NV_primitive_restart '
'GL_NV_register_combiners GL_NV_register_combiners2 '
'GL_NV_shader_atomic_counters GL_NV_shader_atomic_float '
'GL_NV_shader_buffer_load GL_NV_shader_storage_buffer_object '
'GL_ARB_sparse_texture GL_NV_texgen_reflection '
'GL_NV_texture_barrier GL_NV_texture_compression_vtc '
'GL_NV_texture_env_combine4 GL_NV_texture_expand_normal '
'GL_NV_texture_multisample GL_NV_texture_rectangle '
'GL_NV_texture_shader GL_NV_texture_shader2 '
'GL_NV_texture_shader3 GL_NV_transform_feedback '
'GL_NV_transform_feedback2 GL_NV_vdpau_interop '
'GL_NV_vertex_array_range GL_NV_vertex_array_range2 '
'GL_NV_vertex_attrib_integer_64bit '
'GL_NV_vertex_buffer_unified_memory GL_NV_vertex_program '
'GL_NV_vertex_program1_1 GL_NV_vertex_program2 '
'GL_NV_vertex_program2_option GL_NV_vertex_program3 '
'GL_NVX_conditional_render GL_NVX_gpu_memory_info '
'GL_SGIS_generate_mipmap GL_SGIS_texture_lod '
'GL_SGIX_depth_texture GL_SGIX_shadow GL_SUN_slice_accum '
},
'devices':
[
{
'device_string': '',
'vendor_id': 4318.0,
'device_id': 3576.0,
'vendor_string': ''
}],
'driver_bug_workarounds':
['clear_uniforms_before_first_program_use',
'disable_gl_path_rendering',
'init_gl_position_in_vertex_shader',
'init_vertex_attributes',
'remove_pow_with_constant_exponent',
'scalarize_vec_and_mat_constructor_args',
'use_current_program_after_successful_link',
'use_virtualized_gl_contexts']
}
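Consumers such as telemetry's `GPUInfo.FromDict` read these attributes by key. A minimal sketch against a trimmed-down dictionary of the same shape (`info` is a hypothetical stand-in for `FAKE_GPU_INFO`, not the real `GPUInfo` class):

```python
# Trimmed-down stand-in with the same structure as FAKE_GPU_INFO above.
info = {
    'feature_status': {'webgl': 'enabled'},
    'aux_attributes': {'gl_vendor': 'NVIDIA Corporation', 'driver_version': '331.79'},
    'devices': [{'vendor_id': 4318.0, 'device_id': 3576.0, 'device_string': ''}],
}

primary_gpu = info['devices'][0]
print(hex(int(primary_gpu['vendor_id'])))  # -> 0x10de (NVIDIA's PCI vendor id)
print(info['feature_status']['webgl'])     # -> enabled
```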
fe86745dc1d7b386636a2027dae3d2552bd3e833 | 2,412 | py | Python | test/dict_parameter_test.py | shouldsee/luigi | 54a347361ae1031f06105eaf30ff88f5ef65b00c | ["Apache-2.0"] | 14,755 stars | 2,387 issues | 2,630 forks
# -*- coding: utf-8 -*-
#
# Copyright 2012-2015 Spotify AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from helpers import unittest, in_parse
import luigi
import luigi.interface
import json
import collections
class DictParameterTask(luigi.Task):
    param = luigi.DictParameter()


class DictParameterTest(unittest.TestCase):
    _dict = collections.OrderedDict([('username', 'me'), ('password', 'secret')])

    def test_parse(self):
        d = luigi.DictParameter().parse(json.dumps(DictParameterTest._dict))
        self.assertEqual(d, DictParameterTest._dict)

    def test_serialize(self):
        d = luigi.DictParameter().serialize(DictParameterTest._dict)
        self.assertEqual(d, '{"username": "me", "password": "secret"}')

    def test_parse_and_serialize(self):
        inputs = ['{"username": "me", "password": "secret"}', '{"password": "secret", "username": "me"}']
        for json_input in inputs:
            _dict = luigi.DictParameter().parse(json_input)
            self.assertEqual(json_input, luigi.DictParameter().serialize(_dict))

    def test_parse_interface(self):
        in_parse(["DictParameterTask", "--param", '{"username": "me", "password": "secret"}'],
                 lambda task: self.assertEqual(task.param, DictParameterTest._dict))

    def test_serialize_task(self):
        t = DictParameterTask(DictParameterTest._dict)
        self.assertEqual(str(t), 'DictParameterTask(param={"username": "me", "password": "secret"})')

    def test_parse_invalid_input(self):
        self.assertRaises(ValueError, lambda: luigi.DictParameter().parse('{"invalid"}'))

    def test_hash_normalize(self):
        self.assertRaises(TypeError, lambda: hash(luigi.DictParameter().parse('{"a": {"b": []}}')))
        a = luigi.DictParameter().normalize({"a": [{"b": []}]})
        b = luigi.DictParameter().normalize({"a": [{"b": []}]})
        self.assertEqual(hash(a), hash(b))
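The hash test works because `DictParameter.normalize` converts nested dicts and lists into an immutable, hashable structure (plain dicts raise `TypeError` under `hash`). A standalone sketch of the idea — not luigi's exact implementation, which uses a frozen ordered dict:

```python
def freeze(value):
    # Recursively convert dicts and lists into hashable tuples
    if isinstance(value, dict):
        return tuple(sorted((k, freeze(v)) for k, v in value.items()))
    if isinstance(value, list):
        return tuple(freeze(v) for v in value)
    return value

a = freeze({"a": [{"b": []}]})
b = freeze({"a": [{"b": []}]})
print(hash(a) == hash(b))  # -> True
```

Two structurally equal inputs freeze to equal tuples, so their hashes agree, which is what `test_hash_normalize` relies on.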
feab2f73df218463681f43ce0d3584c476b63adb | 925 | py | Python | src/common/bio/smiles.py | duttaprat/proteinGAN | 92b32192ab959e327e1d713d09fc9b40dc01d757 | ["MIT"] | 8 stars
from common.bio.constants import SMILES_CHARACTER_TO_ID, ID_TO_SMILES_CHARACTER
def from_smiles_to_id(data, column):
    """Converts sequences from smiles to ids

    Args:
        data: data that contains characters that need to be converted to ids
        column: a column of the dataframe that contains characters that need to be converted to ids

    Returns:
        array of ids
    """
    return [[SMILES_CHARACTER_TO_ID[char] for char in val] for index, val in data[column].iteritems()]


def from_id_to_smiles(data, column):
    """Converts sequences from ids to smiles characters

    Args:
        data: data that contains ids that need to be converted to characters
        column: a column of the dataframe that contains ids that need to be converted to characters

    Returns:
        array of characters
    """
    return [[ID_TO_SMILES_CHARACTER[id] for id in val] for index, val in data[column].iteritems()]
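The two functions are inverses of each other, mapping each SMILES character to an integer id and back. With a toy mapping standing in for the real tables in `common.bio.constants` (and plain lists in place of the pandas column), the round trip looks like:

```python
# Toy stand-ins for SMILES_CHARACTER_TO_ID / ID_TO_SMILES_CHARACTER;
# the real tables live in common.bio.constants.
char_to_id = {'C': 0, 'O': 1, '=': 2, '(': 3, ')': 4}
id_to_char = {v: k for k, v in char_to_id.items()}

smiles = "C(=O)"
ids = [char_to_id[ch] for ch in smiles]
back = "".join(id_to_char[i] for i in ids)
print(ids)   # -> [0, 3, 2, 1, 4]
print(back)  # -> C(=O)
```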
feb0e950cc084ec84da234840633db92453d5121 | 16,227 | py | Python | sdk/python/pulumi_aws/cloudformation/stack_set.py | mdop-wh/pulumi-aws | 05bb32e9d694dde1c3b76d440fd2cd0344d23376 | ["ECL-2.0", "Apache-2.0"]
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Dict, List, Mapping, Optional, Tuple, Union
from .. import _utilities, _tables
__all__ = ['StackSet']
class StackSet(pulumi.CustomResource):
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
administration_role_arn: Optional[pulumi.Input[str]] = None,
capabilities: Optional[pulumi.Input[List[pulumi.Input[str]]]] = None,
description: Optional[pulumi.Input[str]] = None,
execution_role_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
template_body: Optional[pulumi.Input[str]] = None,
template_url: Optional[pulumi.Input[str]] = None,
__props__=None,
__name__=None,
__opts__=None):
"""
Manages a CloudFormation StackSet. StackSets allow CloudFormation templates to be easily deployed across multiple accounts and regions via StackSet Instances (`cloudformation.StackSetInstance` resource). Additional information about StackSets can be found in the [AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html).
> **NOTE:** All template parameters, including those with a `Default`, must be configured or ignored with the `lifecycle` configuration block `ignore_changes` argument.
> **NOTE:** All `NoEcho` template parameters must be ignored with the `lifecycle` configuration block `ignore_changes` argument.
## Example Usage
```python
import pulumi
import pulumi_aws as aws
a_ws_cloud_formation_stack_set_administration_role_assume_role_policy = aws.iam.get_policy_document(statements=[aws.iam.GetPolicyDocumentStatementArgs(
actions=["sts:AssumeRole"],
effect="Allow",
principals=[aws.iam.GetPolicyDocumentStatementPrincipalArgs(
identifiers=["cloudformation.amazonaws.com"],
type="Service",
)],
)])
a_ws_cloud_formation_stack_set_administration_role = aws.iam.Role("aWSCloudFormationStackSetAdministrationRole", assume_role_policy=a_ws_cloud_formation_stack_set_administration_role_assume_role_policy.json)
example = aws.cloudformation.StackSet("example",
administration_role_arn=a_ws_cloud_formation_stack_set_administration_role.arn,
parameters={
"VPCCidr": "10.0.0.0/16",
},
template_body=\"\"\"{
"Parameters" : {
"VPCCidr" : {
"Type" : "String",
"Default" : "10.0.0.0/16",
"Description" : "Enter the CIDR block for the VPC. Default is 10.0.0.0/16."
}
},
"Resources" : {
"myVpc": {
"Type" : "AWS::EC2::VPC",
"Properties" : {
"CidrBlock" : { "Ref" : "VPCCidr" },
"Tags" : [
{"Key": "Name", "Value": "Primary_CF_VPC"}
]
}
}
}
}
\"\"\")
a_ws_cloud_formation_stack_set_administration_role_execution_policy_policy_document = example.execution_role_name.apply(lambda execution_role_name: aws.iam.get_policy_document(statements=[aws.iam.GetPolicyDocumentStatementArgs(
actions=["sts:AssumeRole"],
effect="Allow",
resources=[f"arn:aws:iam::*:role/{execution_role_name}"],
)]))
a_ws_cloud_formation_stack_set_administration_role_execution_policy_role_policy = aws.iam.RolePolicy("aWSCloudFormationStackSetAdministrationRoleExecutionPolicyRolePolicy",
policy=a_ws_cloud_formation_stack_set_administration_role_execution_policy_policy_document.json,
role=a_ws_cloud_formation_stack_set_administration_role.name)
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] administration_role_arn: Amazon Resource Number (ARN) of the IAM Role in the administrator account.
:param pulumi.Input[List[pulumi.Input[str]]] capabilities: A list of capabilities. Valid values: `CAPABILITY_IAM`, `CAPABILITY_NAMED_IAM`, `CAPABILITY_AUTO_EXPAND`.
:param pulumi.Input[str] description: Description of the StackSet.
:param pulumi.Input[str] execution_role_name: Name of the IAM Role in all target accounts for StackSet operations. Defaults to `AWSCloudFormationStackSetExecutionRole`.
:param pulumi.Input[str] name: Name of the StackSet. The name must be unique in the region where you create your StackSet. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and cannot be longer than 128 characters.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] parameters: Key-value map of input parameters for the StackSet template. All template parameters, including those with a `Default`, must be configured or ignored with `lifecycle` configuration block `ignore_changes` argument. All `NoEcho` template parameters must be ignored with the `lifecycle` configuration block `ignore_changes` argument.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value map of tags to associate with this StackSet and the Stacks created from it. AWS CloudFormation also propagates these tags to supported resources that are created in the Stacks. A maximum number of 50 tags can be specified.
:param pulumi.Input[str] template_body: String containing the CloudFormation template body. Maximum size: 51,200 bytes. Conflicts with `template_url`.
:param pulumi.Input[str] template_url: String containing the location of a file containing the CloudFormation template body. The URL must point to a template that is located in an Amazon S3 bucket. Maximum location file size: 460,800 bytes. Conflicts with `template_body`.
"""
        if __name__ is not None:
            warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
            resource_name = __name__
        if __opts__ is not None:
            warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
            opts = __opts__
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = _utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = dict()

            if administration_role_arn is None:
                raise TypeError("Missing required property 'administration_role_arn'")
            __props__['administration_role_arn'] = administration_role_arn
            __props__['capabilities'] = capabilities
            __props__['description'] = description
            __props__['execution_role_name'] = execution_role_name
            __props__['name'] = name
            __props__['parameters'] = parameters
            __props__['tags'] = tags
            __props__['template_body'] = template_body
            __props__['template_url'] = template_url
            __props__['arn'] = None
            __props__['stack_set_id'] = None
        super(StackSet, __self__).__init__(
            'aws:cloudformation/stackSet:StackSet',
            resource_name,
            __props__,
            opts)
    @staticmethod
    def get(resource_name: str,
            id: pulumi.Input[str],
            opts: Optional[pulumi.ResourceOptions] = None,
            administration_role_arn: Optional[pulumi.Input[str]] = None,
            arn: Optional[pulumi.Input[str]] = None,
            capabilities: Optional[pulumi.Input[List[pulumi.Input[str]]]] = None,
            description: Optional[pulumi.Input[str]] = None,
            execution_role_name: Optional[pulumi.Input[str]] = None,
            name: Optional[pulumi.Input[str]] = None,
            parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
            stack_set_id: Optional[pulumi.Input[str]] = None,
            tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
            template_body: Optional[pulumi.Input[str]] = None,
            template_url: Optional[pulumi.Input[str]] = None) -> 'StackSet':
        """
        Get an existing StackSet resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[str] administration_role_arn: Amazon Resource Name (ARN) of the IAM Role in the administrator account.
        :param pulumi.Input[str] arn: Amazon Resource Name (ARN) of the StackSet.
        :param pulumi.Input[List[pulumi.Input[str]]] capabilities: A list of capabilities. Valid values: `CAPABILITY_IAM`, `CAPABILITY_NAMED_IAM`, `CAPABILITY_AUTO_EXPAND`.
        :param pulumi.Input[str] description: Description of the StackSet.
        :param pulumi.Input[str] execution_role_name: Name of the IAM Role in all target accounts for StackSet operations. Defaults to `AWSCloudFormationStackSetExecutionRole`.
        :param pulumi.Input[str] name: Name of the StackSet. The name must be unique in the region where you create your StackSet. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and cannot be longer than 128 characters.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] parameters: Key-value map of input parameters for the StackSet template. All template parameters, including those with a `Default`, must be configured or ignored with `lifecycle` configuration block `ignore_changes` argument. All `NoEcho` template parameters must be ignored with the `lifecycle` configuration block `ignore_changes` argument.
        :param pulumi.Input[str] stack_set_id: Unique identifier of the StackSet.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value map of tags to associate with this StackSet and the Stacks created from it. AWS CloudFormation also propagates these tags to supported resources that are created in the Stacks. A maximum number of 50 tags can be specified.
        :param pulumi.Input[str] template_body: String containing the CloudFormation template body. Maximum size: 51,200 bytes. Conflicts with `template_url`.
        :param pulumi.Input[str] template_url: String containing the location of a file containing the CloudFormation template body. The URL must point to a template that is located in an Amazon S3 bucket. Maximum location file size: 460,800 bytes. Conflicts with `template_body`.
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = dict()

        __props__["administration_role_arn"] = administration_role_arn
        __props__["arn"] = arn
        __props__["capabilities"] = capabilities
        __props__["description"] = description
        __props__["execution_role_name"] = execution_role_name
        __props__["name"] = name
        __props__["parameters"] = parameters
        __props__["stack_set_id"] = stack_set_id
        __props__["tags"] = tags
        __props__["template_body"] = template_body
        __props__["template_url"] = template_url
        return StackSet(resource_name, opts=opts, __props__=__props__)
    @property
    @pulumi.getter(name="administrationRoleArn")
    def administration_role_arn(self) -> pulumi.Output[str]:
        """
        Amazon Resource Name (ARN) of the IAM Role in the administrator account.
        """
        return pulumi.get(self, "administration_role_arn")

    @property
    @pulumi.getter
    def arn(self) -> pulumi.Output[str]:
        """
        Amazon Resource Name (ARN) of the StackSet.
        """
        return pulumi.get(self, "arn")

    @property
    @pulumi.getter
    def capabilities(self) -> pulumi.Output[Optional[List[str]]]:
        """
        A list of capabilities. Valid values: `CAPABILITY_IAM`, `CAPABILITY_NAMED_IAM`, `CAPABILITY_AUTO_EXPAND`.
        """
        return pulumi.get(self, "capabilities")

    @property
    @pulumi.getter
    def description(self) -> pulumi.Output[Optional[str]]:
        """
        Description of the StackSet.
        """
        return pulumi.get(self, "description")

    @property
    @pulumi.getter(name="executionRoleName")
    def execution_role_name(self) -> pulumi.Output[Optional[str]]:
        """
        Name of the IAM Role in all target accounts for StackSet operations. Defaults to `AWSCloudFormationStackSetExecutionRole`.
        """
        return pulumi.get(self, "execution_role_name")

    @property
    @pulumi.getter
    def name(self) -> pulumi.Output[str]:
        """
        Name of the StackSet. The name must be unique in the region where you create your StackSet. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and cannot be longer than 128 characters.
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def parameters(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
        """
        Key-value map of input parameters for the StackSet template. All template parameters, including those with a `Default`, must be configured or ignored with `lifecycle` configuration block `ignore_changes` argument. All `NoEcho` template parameters must be ignored with the `lifecycle` configuration block `ignore_changes` argument.
        """
        return pulumi.get(self, "parameters")

    @property
    @pulumi.getter(name="stackSetId")
    def stack_set_id(self) -> pulumi.Output[str]:
        """
        Unique identifier of the StackSet.
        """
        return pulumi.get(self, "stack_set_id")

    @property
    @pulumi.getter
    def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
        """
        Key-value map of tags to associate with this StackSet and the Stacks created from it. AWS CloudFormation also propagates these tags to supported resources that are created in the Stacks. A maximum number of 50 tags can be specified.
        """
        return pulumi.get(self, "tags")

    @property
    @pulumi.getter(name="templateBody")
    def template_body(self) -> pulumi.Output[str]:
        """
        String containing the CloudFormation template body. Maximum size: 51,200 bytes. Conflicts with `template_url`.
        """
        return pulumi.get(self, "template_body")

    @property
    @pulumi.getter(name="templateUrl")
    def template_url(self) -> pulumi.Output[Optional[str]]:
        """
        String containing the location of a file containing the CloudFormation template body. The URL must point to a template that is located in an Amazon S3 bucket. Maximum location file size: 460,800 bytes. Conflicts with `template_body`.
        """
        return pulumi.get(self, "template_url")

    def translate_output_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

    def translate_input_property(self, prop):
        return _tables.SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
| 57.747331 | 403 | 0.680841 | 1,929 | 16,227 | 5.522032 | 0.145671 | 0.055764 | 0.055201 | 0.033796 | 0.708787 | 0.668325 | 0.662974 | 0.644668 | 0.617161 | 0.611247 | 0 | 0.005918 | 0.229371 | 16,227 | 280 | 404 | 57.953571 | 0.845902 | 0.546127 | 0 | 0.278195 | 1 | 0 | 0.123449 | 0.023418 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112782 | false | 0.007519 | 0.037594 | 0.015038 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2283d1768504ac50dd9ea43fb4e940fbaf88eee6 | 649 | py | Python | code/gcd_sequence/sol_443.py | bhavinjawade/project-euler-solutions | 56bf6a282730ed4b9b875fa081cf4509d9939d98 | [
"Apache-2.0"
] | 2 | 2020-07-16T08:16:32.000Z | 2020-10-01T07:16:48.000Z | code/gcd_sequence/sol_443.py | Psingh12354/project-euler-solutions | 56bf6a282730ed4b9b875fa081cf4509d9939d98 | [
"Apache-2.0"
] | null | null | null | code/gcd_sequence/sol_443.py | Psingh12354/project-euler-solutions | 56bf6a282730ed4b9b875fa081cf4509d9939d98 | [
"Apache-2.0"
] | 1 | 2021-05-07T18:06:08.000Z | 2021-05-07T18:06:08.000Z |
# -*- coding: utf-8 -*-
'''
File name: code\gcd_sequence\sol_443.py
Author: Vaidic Joshi
Date created: Oct 20, 2018
Python Version: 3.x
'''
# Solution to Project Euler Problem #443 :: GCD sequence
#
# For more information see:
# https://projecteuler.net/problem=443
# Problem Statement
'''
Let g(n) be a sequence defined as follows:
g(4) = 13,
g(n) = g(n-1) + gcd(n, g(n-1)) for n > 4.
The first few values are:
  n  |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | ...
g(n) | 13 | 14 | 16 | 17 | 18 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 51 | 54 | 55 | 60 | ...
You are given that g(1 000) = 2524 and g(1 000 000) = 2624152.
Find g(10^15).
'''
# Solution
# Solution Approach
'''
'''
| 17.540541 | 62 | 0.644068 | 96 | 649 | 4.333333 | 0.666667 | 0.024038 | 0.014423 | 0.019231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216797 | 0.211094 | 649 | 36 | 63 | 18.027778 | 0.595703 | 0.454545 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
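The Solution section of the file above is empty, so here is at least the direct simulation of the recurrence — a minimal sketch of my own (the function name is not from the original file). It is fast enough to confirm the given value g(1000) = 2524 and the small table, though nowhere near sufficient for g(10^15):

```python
from math import gcd

def g_sequence(n_max, start_n=4, start_val=13):
    """Iterate g(n) = g(n-1) + gcd(n, g(n-1)) from g(4) = 13 up to n_max."""
    values = {start_n: start_val}
    g = start_val
    for n in range(start_n + 1, n_max + 1):
        g += gcd(n, g)
        values[n] = g
    return values

vals = g_sequence(1000)
print(vals[20], vals[1000])  # 60 2524, matching the table and the given value
```

Reaching 10^15 requires exploiting the structure of the sequence (long stretches where g grows by a constant step) rather than iterating term by term.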
228f917fd03d25566ca49e7918c233c48b585119 | 88 | py | Python | fast-ml/main.py | gabrielstork/fast-ml | ce93c1263970ce7b958e1c3e932c70909bcc0e31 | [
"Apache-2.0"
] | 1 | 2021-07-26T15:37:30.000Z | 2021-07-26T15:37:30.000Z | fast-ml/main.py | gabrielstork/fast-ml | ce93c1263970ce7b958e1c3e932c70909bcc0e31 | [
"Apache-2.0"
] | null | null | null | fast-ml/main.py | gabrielstork/fast-ml | ce93c1263970ce7b958e1c3e932c70909bcc0e31 | [
"Apache-2.0"
] | null | null | null | import root
if __name__ == '__main__':
    window = root.Root()
    window.mainloop()
| 12.571429 | 26 | 0.636364 | 10 | 88 | 4.8 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 88 | 6 | 27 | 14.666667 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2298b7f13b630423d0c12d2422ae336ad2ea8774 | 71 | py | Python | damn_vulnerable_python/evil.py | CodyKochmann/damn_vulnerable_python | 8a90ee3b70dddae96f9f0a8500ed9ba5693f3082 | [
"MIT"
] | 1 | 2018-05-22T03:27:54.000Z | 2018-05-22T03:27:54.000Z | damn_vulnerable_python/evil.py | CodyKochmann/damn_vulnerable_python | 8a90ee3b70dddae96f9f0a8500ed9ba5693f3082 | [
"MIT"
] | 2 | 2018-05-22T02:04:39.000Z | 2018-05-22T12:46:31.000Z | damn_vulnerable_python/evil.py | CodyKochmann/damn_vulnerable_python | 8a90ee3b70dddae96f9f0a8500ed9ba5693f3082 | [
"MIT"
] | null | null | null | ''' static analyzers are annoying so lets rename eval '''
evil = eval
| 17.75 | 57 | 0.704225 | 10 | 71 | 5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197183 | 71 | 3 | 58 | 23.666667 | 0.877193 | 0.690141 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
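The point of the one-liner above is that a checker which greps for the name `eval` misses an alias bound to the same object. A small standalone demonstration (my own, not part of the file):

```python
evil = eval  # the same built-in object, under a name no eval-grep will flag

result = evil("1 + 1")
print(result)  # 2
```

Name-based static analysis cannot distinguish the alias from any other callable, which is exactly why the file calls itself vulnerable.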
22ad01968a4a3e4e8168ccbc68b9c73d312ea977 | 709 | py | Python | development/simple_email.py | gerold-penz/python-simplemail | 9cfae298743af2b771d6d779717b602de559689b | [
"MIT"
] | 16 | 2015-04-21T19:12:26.000Z | 2021-06-04T04:38:12.000Z | development/simple_email.py | gerold-penz/python-simplemail | 9cfae298743af2b771d6d779717b602de559689b | [
"MIT"
] | 3 | 2015-04-21T22:09:55.000Z | 2021-04-27T07:04:05.000Z | development/simple_email.py | gerold-penz/python-simplemail | 9cfae298743af2b771d6d779717b602de559689b | [
"MIT"
] | 4 | 2015-07-22T11:33:28.000Z | 2019-08-06T07:27:20.000Z | #!/usr/bin/env python
# coding: utf-8
# BEGIN --- required only for testing, remove in real world code --- BEGIN
import os
import sys
THISDIR = os.path.dirname(os.path.abspath(__file__))
APPDIR = os.path.abspath(os.path.join(THISDIR, os.path.pardir, os.path.pardir))
sys.path.insert(0, APPDIR)
# END --- required only for testing, remove in real world code --- END
import simplemail
simplemail.Email(
    smtp_server = "smtp.a1.net:25",
    smtp_user = "xxx",
    smtp_password = "xxx",
    use_tls = False,
    from_address = "xxx",
    to_address = "xxx",
    subject = u"Really simple test with umlauts (öäüß)",
    message = u"This is the message with umlauts (öäüß)",
).send()
print "Sent"
print
| 22.870968 | 79 | 0.679831 | 106 | 709 | 4.45283 | 0.584906 | 0.076271 | 0.063559 | 0.09322 | 0.182203 | 0.182203 | 0.182203 | 0.182203 | 0.182203 | 0 | 0 | 0.008621 | 0.181946 | 709 | 30 | 80 | 23.633333 | 0.805172 | 0.248237 | 0 | 0 | 0 | 0 | 0.202268 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.055556 | 0.166667 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
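For reference, the message that the `simplemail.Email(...)` call assembles can be approximated with the standard library's `email` package alone. This is a sketch, not the library's actual implementation; the addresses below are placeholders, not values from the original script:

```python
from email.mime.text import MIMEText
from email.header import Header

# Build a UTF-8 text/plain message like the one above (no SMTP send here).
msg = MIMEText(u"This is the message with umlauts (öäüß)", _charset="utf-8")
msg["Subject"] = Header(u"Really simple test with umlauts (öäüß)", "utf-8")
msg["From"] = "sender@example.com"      # placeholder address
msg["To"] = "recipient@example.com"     # placeholder address

# as_string() yields the encoded RFC 2822 message that smtplib would transmit
wire = msg.as_string()
```

Actually sending it would take an `smtplib.SMTP` connection, which is what the `use_tls`/`smtp_user` parameters above configure on the simplemail side.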
22ad976fe4002a0a8ca1f3ab36292229eb143691 | 2,040 | py | Python | common/irma/common/exceptions.py | vaginessa/irma | 02285080b67b25ef983a99a765044683bd43296c | [
"Apache-2.0"
] | null | null | null | common/irma/common/exceptions.py | vaginessa/irma | 02285080b67b25ef983a99a765044683bd43296c | [
"Apache-2.0"
] | null | null | null | common/irma/common/exceptions.py | vaginessa/irma | 02285080b67b25ef983a99a765044683bd43296c | [
"Apache-2.0"
] | null | null | null | #
# Copyright (c) 2013-2018 Quarkslab.
# This file is part of IRMA project.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the top-level directory
# of this distribution and at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# No part of the project, including this file, may be copied,
# modified, propagated, or distributed except according to the
# terms contained in the LICENSE file.
class IrmaDependencyError(Exception):
    """Error caused by a missing dependency."""
    pass


class IrmaMachineManagerError(Exception):
    """Error on a machine manager."""
    pass


class IrmaMachineError(Exception):
    """Error on a machine."""
    pass


class IrmaAdminError(Exception):
    """Error in admin part."""
    pass


class IrmaDatabaseError(Exception):
    """Error on a database manager."""
    pass


class IrmaCoreError(Exception):
    """Error in core parts (Db, Ftp, Celery..)"""
    pass


class IrmaDatabaseResultNotFound(IrmaDatabaseError):
    """A database result was required but none was found."""
    pass


class IrmaFileSystemError(IrmaDatabaseError):
    """Nothing corresponding to the request has been found in the database."""
    pass


class IrmaConfigurationError(IrmaCoreError):
    """Error wrong configuration."""
    pass


class IrmaFtpError(IrmaCoreError):
    """Error on ftp manager."""
    pass


class IrmaFTPSError(IrmaFtpError):
    """Error on ftp/tls manager."""
    pass


class IrmaSFTPError(IrmaFtpError):
    """Error on sftp manager."""
    pass


class IrmaTaskError(IrmaCoreError):
    """Error while processing celery tasks."""
    pass


class IrmaLockError(Exception):
    """Error for the locks on db content (already taken)"""
    pass


class IrmaLockModeError(Exception):
    """Error for the mode of the locks (doesn't exist)"""
    pass


class IrmaValueError(Exception):
    """Error for the parameters passed to the functions"""
    pass
| 21.473684 | 78 | 0.701471 | 248 | 2,040 | 5.770161 | 0.471774 | 0.09434 | 0.055905 | 0.035639 | 0.033543 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007304 | 0.194608 | 2,040 | 94 | 79 | 21.702128 | 0.863664 | 0.526471 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
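One property of the hierarchy worth seeing in action: because the FTP errors subclass `IrmaCoreError`, a handler written against the base class catches all of them. In this sketch the relevant classes are re-declared locally so it runs standalone, mirroring the inheritance in the module above:

```python
class IrmaCoreError(Exception):
    """Error in core parts (Db, Ftp, Celery..)"""

class IrmaFtpError(IrmaCoreError):
    """Error on ftp manager."""

class IrmaSFTPError(IrmaFtpError):
    """Error on sftp manager."""

def run_guarded(op):
    """Run op; report which core-level error it raised, if any."""
    try:
        op()
    except IrmaCoreError as exc:
        return type(exc).__name__
    return "ok"

def failing_sftp():
    raise IrmaSFTPError("host unreachable")

print(run_guarded(failing_sftp))  # IrmaSFTPError
```

Callers that only care about "something in the core failed" need a single `except IrmaCoreError` clause rather than one per concrete subclass.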
22b80f5d2e66e370817465d9b5b278c1f1dcbe4e | 282 | py | Python | Ejercicio/Ejercicio7.py | tavo1599/F.P2021 | a592804fb5ae30da55551d9e29819887919db041 | [
"Apache-2.0"
] | 1 | 2021-05-05T19:39:37.000Z | 2021-05-05T19:39:37.000Z | Ejercicio/Ejercicio7.py | tavo1599/F.P2021 | a592804fb5ae30da55551d9e29819887919db041 | [
"Apache-2.0"
] | null | null | null | Ejercicio/Ejercicio7.py | tavo1599/F.P2021 | a592804fb5ae30da55551d9e29819887919db041 | [
"Apache-2.0"
] | null | null | null | # Input data
num = int(input("Enter a number: "))
# Process
if num == 10:
    print("Grade: A")
elif num == 9:
    print("Grade: B")
elif num == 8:
    print("Grade: C")
elif num == 7 or num == 6:
    print("Grade: D")
elif num <= 5 and num >= 0:
    print("Grade: F")
| 20.142857 | 37 | 0.673759 | 46 | 282 | 4.130435 | 0.586957 | 0.447368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032787 | 0.134752 | 282 | 13 | 38 | 21.692308 | 0.745902 | 0.085106 | 0 | 0 | 0 | 0 | 0.367188 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.454545 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
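The chain of `elif`s maps one-to-one onto a lookup table; this dictionary-based variant is my own rewrite, not part of the original exercise, and is easier to extend:

```python
def grade(score):
    """Return the letter grade for an integer score, or None if out of range."""
    table = {10: "A", 9: "B", 8: "C", 7: "D", 6: "D"}
    if score in table:
        return table[score]
    if 0 <= score <= 5:
        return "F"
    return None  # scores outside 0..10 fall through the original elif chain too

print(grade(10), grade(6), grade(3))  # A D F
```

Note that a score of 7 or 6 must map to "D" — a condition like `score == 7 and score == 6` can never be true, which is the bug the original `elif` contained.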
22b828cde8bc59acbcf210743592fd1c629c4095 | 417 | py | Python | 2015/day-2/part2.py | nairraghav/advent-of-code-2019 | 274a2a4a59a8be39afb323356c592af5e1921e54 | [
"MIT"
] | null | null | null | 2015/day-2/part2.py | nairraghav/advent-of-code-2019 | 274a2a4a59a8be39afb323356c592af5e1921e54 | [
"MIT"
] | null | null | null | 2015/day-2/part2.py | nairraghav/advent-of-code-2019 | 274a2a4a59a8be39afb323356c592af5e1921e54 | [
"MIT"
] | null | null | null | ribbon_needed = 0
with open("input.txt", "r") as puzzle_input:
    for line in puzzle_input:
        length, width, height = [int(item) for item in line.split("x")]
        dimensions = [length, width, height]
        smallest_side = min(dimensions)
        dimensions.remove(smallest_side)
        second_smallest_side = min(dimensions)
        ribbon_needed += 2*smallest_side + 2*second_smallest_side + length*width*height
print(ribbon_needed)
| 26.0625 | 81 | 0.736211 | 59 | 417 | 5 | 0.474576 | 0.20339 | 0.172881 | 0.169492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008475 | 0.151079 | 417 | 15 | 82 | 27.8 | 0.824859 | 0 | 0 | 0 | 0 | 0 | 0.026379 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
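The accumulation above encodes the puzzle's formula: the two smallest sides give the wrap (smallest perimeter of any face), and the volume gives the bow. Factored into a function — my refactor, not in the original file — it can be checked against the examples published in the 2015 Day 2 statement:

```python
def ribbon_for(length, width, height):
    """Ribbon for one present: smallest face perimeter plus a volume-length bow."""
    sides = sorted((length, width, height))
    wrap = 2 * sides[0] + 2 * sides[1]   # perimeter of the smallest face
    bow = length * width * height        # bow length equals the volume
    return wrap + bow

print(ribbon_for(2, 3, 4), ribbon_for(1, 1, 10))  # 34 14
```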
22c0b1d42f5e6f6bbd43886632ceb253dedae7b6 | 4,243 | py | Python | h1st/tests/core/test_schemas_inferrer.py | Mou-Ikkai/h1st | da47a8f1ad6af532c549e075fba19e3b3692de89 | [
"Apache-2.0"
] | 2 | 2020-08-21T07:49:08.000Z | 2020-08-21T07:49:13.000Z | h1st/tests/core/test_schemas_inferrer.py | Mou-Ikkai/h1st | da47a8f1ad6af532c549e075fba19e3b3692de89 | [
"Apache-2.0"
] | 3 | 2020-11-13T19:06:07.000Z | 2022-02-10T02:06:03.000Z | h1st/tests/core/test_schemas_inferrer.py | Mou-Ikkai/h1st | da47a8f1ad6af532c549e075fba19e3b3692de89 | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase
from datetime import datetime
import pyarrow as pa
import numpy as np
import pandas as pd
from h1st.schema import SchemaInferrer
class SchemaInferrerTestCase(TestCase):
    def test_infer_python(self):
        inferrer = SchemaInferrer()
        self.assertEqual(inferrer.infer_schema(1), pa.int64())
        self.assertEqual(inferrer.infer_schema(1.1), pa.float64())
        self.assertEqual(inferrer.infer_schema({
            'test1': 1,
            'test2': "hello",
            'test3': b"hello",
            'today': datetime.now(),
        }), {
            'type': dict,
            'fields': {
                'test1': pa.int64(),
                'test2': pa.string(),
                'test3': pa.binary(),
                'today': pa.date64(),
            }
        })
        self.assertEqual(inferrer.infer_schema((
            1, 2, 3
        )), pa.list_(pa.int64()))
        self.assertEqual(inferrer.infer_schema((
            1.2, 1.3, 1.4
        )), pa.list_(pa.float64()))

        table = pa.Table.from_arrays(
            [pa.array([1, 2, 3]), pa.array(["a", "b", "c"])],
            ['c1', 'c2']
        )
        self.assertEqual(inferrer.infer_schema(table), table.schema)

    def test_infer_numpy(self):
        inferrer = SchemaInferrer()
        self.assertEqual(inferrer.infer_schema(np.random.random((100, 28, 28))), {
            'type': np.ndarray,
            'item': pa.float64(),
            'shape': (None, 28, 28)
        })
        self.assertEqual(inferrer.infer_schema(np.array(["1", "2", "3"])), {
            'type': np.ndarray,
            'item': pa.string()
        })

    def test_infer_dataframe(self):
        inferrer = SchemaInferrer()
        df = pd.DataFrame({
            'f1': [1, 2, 3],
            'f2': ['a', 'b', 'c'],
            'f3': [0.1, 0.2, 0.9]
        })
        self.assertEqual(inferrer.infer_schema(df), {
            'type': pd.DataFrame,
            'fields': {
                'f1': pa.int64(),
                'f2': pa.string(),
                'f3': pa.float64()
            }
        })

        df = pd.DataFrame({
            'Timestamp': [1.1, 2.2, 3.1],
            'CarSpeed': [0.1, 0.2, 0.9],
            'Gx': [0.1, 0.2, 0.9],
            'Gy': [0.1, 0.2, 0.9],
            'Label': ['1', '0', '1']
        })
        self.assertEqual(inferrer.infer_schema(df), {
            'type': pd.DataFrame,
            'fields': {
                'Timestamp': pa.float64(),
                'CarSpeed': pa.float64(),
                'Gx': pa.float64(),
                'Gy': pa.float64(),
                'Label': pa.string(),
            }
        })

        self.assertEqual(inferrer.infer_schema(pd.Series([1, 2, 3])), {
            'type': pd.Series,
            'item': pa.int64()
        })

    def test_infer_dict(self):
        inferrer = SchemaInferrer()
        self.assertEqual(inferrer.infer_schema({
            'test': 123,
        }), {
            'type': dict,
            'fields': {
                'test': pa.int64(),
            }
        })
        self.assertEqual(inferrer.infer_schema({
            'test': 123,
            'indices': [1, 2, 3]
        }), {
            'type': dict,
            'fields': {
                'test': pa.int64(),
                'indices': pa.list_(pa.int64())
            }
        })
        self.assertEqual(inferrer.infer_schema({
            'results': pd.DataFrame({
                'CarSpeed': [0, 1, 2],
                'Label': ['a', 'b', 'c']
            })
        }), {
            'type': dict,
            'fields': {
                'results': {
                    'type': pd.DataFrame,
                    'fields': {
                        'CarSpeed': pa.int64(),
                        'Label': pa.string(),
                    }
                }
            }
        })

    def test_infer_list(self):
        inferrer = SchemaInferrer()
        self.assertEqual(inferrer.infer_schema([
            {'test': 123},
            {'test': 345},
        ]), {
            'type': list,
            'item': {
                'type': dict,
                'fields': {
                    'test': pa.int64()
                }
            }
        })
| 27.732026 | 82 | 0.418572 | 395 | 4,243 | 4.422785 | 0.189873 | 0.128792 | 0.197481 | 0.240412 | 0.514596 | 0.412708 | 0.337722 | 0.289067 | 0.195764 | 0.141958 | 0 | 0.056543 | 0.416451 | 4,243 | 152 | 83 | 27.914474 | 0.649031 | 0 | 0 | 0.41791 | 0 | 0 | 0.078247 | 0 | 0 | 0 | 0 | 0 | 0.11194 | 1 | 0.037313 | false | 0 | 0.044776 | 0 | 0.089552 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
22c0ccfce68cfbaf9d19c13daf2d7c341cf47746 | 373 | py | Python | c_core_librairies/exercise_a.py | nicolasessisbreton/pyzehe | 7497a0095d974ac912ce9826a27e21fd9d513942 | [
"Apache-2.0"
] | 1 | 2018-05-31T19:36:36.000Z | 2018-05-31T19:36:36.000Z | c_core_librairies/exercise_a.py | nicolasessisbreton/pyzehe | 7497a0095d974ac912ce9826a27e21fd9d513942 | [
"Apache-2.0"
] | 1 | 2018-05-31T01:10:51.000Z | 2018-05-31T01:10:51.000Z | c_core_librairies/exercise_a.py | nicolasessisbreton/pyzehe | 7497a0095d974ac912ce9826a27e21fd9d513942 | [
"Apache-2.0"
] | null | null | null | """
# refactoring
Refactoring is the key to successful projects.
Refactor:
1) annuity_factor such that:
    conversion to integer is handled,
    no extra printing
2) policy_book into a class such that:
    a function generates the book and the premium
    stats and visualizations functions are available
3) book_report such that:
    it uses all the previous improvements
""" | 21.941176 | 50 | 0.772118 | 55 | 373 | 5.181818 | 0.745455 | 0.084211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009868 | 0.184987 | 373 | 17 | 51 | 21.941176 | 0.927632 | 1.16622 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
22c52f6029df65fcd8fa5837d73e5ae4e6fb61e1 | 1,087 | py | Python | test/functional/test_device.py | Jagadambass/Graph-Neural-Networks | c8f1d87f8cd67d645c2f05f370be039acf05ca52 | [
"MIT"
] | null | null | null | test/functional/test_device.py | Jagadambass/Graph-Neural-Networks | c8f1d87f8cd67d645c2f05f370be039acf05ca52 | [
"MIT"
] | null | null | null | test/functional/test_device.py | Jagadambass/Graph-Neural-Networks | c8f1d87f8cd67d645c2f05f370be039acf05ca52 | [
"MIT"
] | null | null | null | from graphgallery.functional import device
import tensorflow as tf
import torch
def test_device():
    # how about other backend?
    # tf
    assert isinstance(device("cpu", "tf"), str)
    assert device() == 'cpu'
    assert device("cpu", "tf") == 'cpu'
    assert device("device/cpu", "tf") == 'cpu'
    try:
        assert device("gpu", "tf") == 'GPU'
        assert device("cuda", "tf") == 'GPU'
    except RuntimeError:
        pass
    tf_device = tf.device("cpu")
    assert device(tf_device, "tf") == tf_device._device_name

    # torch
    torch_device = device("cpu", "torch")
    assert isinstance(torch_device, torch.device) and 'cpu' in str(torch_device)
    torch_device = device(backend="torch")
    assert isinstance(torch_device, torch.device) and 'cpu' in str(torch_device)
    try:
        assert 'cuda' in str(device("gpu", "torch"))
        assert 'cuda' in str(device("cuda", "torch"))
    except RuntimeError:
        pass
    torch_device = torch.device("cpu")
    assert device(torch_device, "torch") == torch_device


if __name__ == "__main__":
    test_device()
| 26.512195 | 68 | 0.596136 | 130 | 1,087 | 4.892308 | 0.230769 | 0.113208 | 0.117925 | 0.099057 | 0.410377 | 0.259434 | 0.259434 | 0.259434 | 0.172956 | 0.172956 | 0 | 0 | 0.24563 | 1,087 | 40 | 69 | 27.175 | 0.77561 | 0.033119 | 0 | 0.275862 | 0 | 0 | 0.115568 | 0 | 0 | 0 | 0 | 0 | 0.448276 | 1 | 0.034483 | false | 0.068966 | 0.103448 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
22c7a70f2a69982c24184228f6ed64f2bdc7679e | 1,948 | py | Python | credentials_test.py | tinatasha/passwordgenerator | ad161e14779e975e98ad989c5df976ac3662f8d8 | [
"MIT"
] | null | null | null | credentials_test.py | tinatasha/passwordgenerator | ad161e14779e975e98ad989c5df976ac3662f8d8 | [
"MIT"
] | null | null | null | credentials_test.py | tinatasha/passwordgenerator | ad161e14779e975e98ad989c5df976ac3662f8d8 | [
"MIT"
] | null | null | null | import unittest
from password import Credentials
class TestCredentials(unittest.TestCase):
    """
    Class to test behaviour of the credentials class
    """

    def setUp(self):
        """
        Setup method that defines instructions
        """
        self.new_credentials = Credentials("Github", "Tina", "blackfaffp1")

    def tearDown(self):
        """
        Method that cleans up after each test
        """
        Credentials.credentials_list = []

    def test_init(self):
        """
        Test for correct initialization
        """
        self.assertEqual(self.new_credentials.account_name, "Github")
        self.assertEqual(self.new_credentials.username, "Tina")
        self.assertEqual(self.new_credentials.password, "blackfaffp1")

    def test_save_credentials(self):
        """
        Test to check whether app saves account credentials
        """
        self.new_credentials.save_credentials()
        self.assertEqual(len(Credentials.credentials_list), 1)

    def test_save_multiple_credentials(self):
        """
        Test for saving multiple credentials
        """
        self.new_credentials.save_credentials()
        test_credentials = Credentials("AllFootball", "Kibet", "messithegoat")
        test_credentials.save_credentials()
        self.assertEqual(len(Credentials.credentials_list), 2)

    def test_view_credentials(self):
        """
        Test to view an account credential
        """
        self.assertEqual(Credentials.display_credentials(), Credentials.credentials_list)

    def test_delete_credentials(self):
        """
        Test to delete account credentials
        """
        self.new_credentials.save_credentials()
        test_credentials = Credentials("i", "love", "cats")
        test_credentials.save_credentials()
        self.new_credentials.delete_credentials()
        self.assertEqual(len(Credentials.credentials_list), 1)
if __name__ == '__main__':
unittest.main() | 31.419355 | 88 | 0.658624 | 195 | 1,948 | 6.358974 | 0.323077 | 0.133065 | 0.116129 | 0.093548 | 0.46129 | 0.297581 | 0.297581 | 0.271774 | 0.225806 | 0 | 0 | 0.002705 | 0.24076 | 1,948 | 62 | 89 | 31.419355 | 0.8357 | 0 | 0 | 0.241379 | 0 | 0 | 0.063187 | 0 | 0 | 0 | 0 | 0 | 0.241379 | 0 | null | null | 0.068966 | 0.068966 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
22cecf207eb3150281c5d9ddc72a0ab1531e7bdb | 5,341 | py | Python | visual_genome/models.py | hayyubi/visual-genome-driver | 412223bf1552b1927fb1219cfcf90dcd2599bf34 | [
"MIT"
] | null | null | null | visual_genome/models.py | hayyubi/visual-genome-driver | 412223bf1552b1927fb1219cfcf90dcd2599bf34 | [
"MIT"
] | null | null | null | visual_genome/models.py | hayyubi/visual-genome-driver | 412223bf1552b1927fb1219cfcf90dcd2599bf34 | [
"MIT"
] | null | null | null | """
Visual Genome Python API wrapper, models
"""
class Image:
    """
    Image.
      ID         int
      url        hyperlink string
      width      int
      height     int
    """

    def __init__(self, id, url, width, height, coco_id, flickr_id):
        self.id = id
        self.url = url
        self.width = width
        self.height = height
        self.coco_id = coco_id
        self.flickr_id = flickr_id

    def __str__(self):
        return 'id: %d, coco_id: %d, flickr_id: %d, width: %d, url: %s' \
            % (self.id, -1
               if self.coco_id is None
               else self.coco_id, -1
               if self.flickr_id is None
               else self.flickr_id, self.width, self.url)

    def __repr__(self):
        return str(self)


class Region:
    """
    Region.
      image   int
      phrase  string
      x       int
      y       int
      width   int
      height  int
    """

    def __init__(self, id, image, phrase, x, y, width, height):
        self.id = id
        self.image = image
        self.phrase = phrase
        self.x = x
        self.y = y
        self.width = width
        self.height = height

    def __str__(self):
        stat_str = 'id: {0}, x: {1}, y: {2}, width: {3},' \
                   'height: {4}, phrase: {5}, image: {6}'
        return stat_str.format(self.id, self.x, self.y,
                               self.width, self.height, self.phrase,
                               self.image.id)

    def __repr__(self):
        return str(self)


class Graph:
    """
    Graphs contain objects, relationships and attributes
      image          Image
      bboxes         Object array
      relationships  Relationship array
      attributes     Attribute array
    """

    def __init__(self, image, objects, relationships, attributes):
        self.image = image
        self.objects = objects
        self.relationships = relationships
        self.attributes = attributes


class Object:
    """
    Objects.
      id       int
      x        int
      y        int
      width    int
      height   int
      names    string array
      synsets  Synset array
    """

    def __init__(self, id, x, y, width, height, names, synsets):
        self.id = id
        self.x = x
        self.y = y
        self.width = width
        self.height = height
        self.names = names[0]
        self.synsets = synsets
        self.bbox = [x, y, width, height]

    def __str__(self):
        name = self.names[0] if len(self.names) != 0 else 'None'
        return '%s' % (name)

    def __repr__(self):
        return str(self)


class Relationship:
    """
    Relationships. Ex, 'man - jumping over - fire hydrant'.
      subject    int
      predicate  string
      object     int
      rel_canon  Synset
    """

    def __init__(self, id, subject, predicate, object, synset):
        self.id = id
        self.subject = subject
        self.predicate = predicate
        self.object = object
        self.synset = synset

    def __str__(self):
        return "{0}: {1} {2} {3}".format(self.id, self.subject,
                                         self.predicate, self.object)

    def __repr__(self):
        return str(self)


class Attribute:
    """
    Attributes. Ex, 'man - old'.
      subject    Object
      attribute  string
      synset     Synset
    """

    def __init__(self, id, subject, attribute, synset):
        self.id = id
        self.subject = subject
        self.attribute = attribute
        self.synset = synset

    def __str__(self):
        return "%d: %s is %s" % (self.id, self.subject, self.attribute)

    def __repr__(self):
        return str(self)


class QA:
    """
    Question Answer Pairs.
      ID         int
      image      int
      question   string
      answer     string
      q_objects  QAObject array
      a_objects  QAObject array
    """

    def __init__(self, id, image, question, answer,
                 question_objects, answer_objects):
        self.id = id
        self.image = image
        self.question = question
        self.answer = answer
        self.q_objects = question_objects
        self.a_objects = answer_objects

    def __str__(self):
        return 'id: %d, image: %d, question: %s, answer: %s' \
            % (self.id, self.image.id, self.question, self.answer)

    def __repr__(self):
        return str(self)


class QAObject:
    """
    Question Answer Objects are localized in the image and refer to a part
    of the question text or the answer text.
      start_idx          int
      end_idx            int
      name               string
      synset_name        string
      synset_definition  string
    """

    def __init__(self, start_idx, end_idx, name, synset):
        self.start_idx = start_idx
        self.end_idx = end_idx
        self.name = name
        self.synset = synset

    def __repr__(self):
        return str(self)


class Synset:
    """
    Wordnet Synsets.
      name        string
      definition  string
    """

    def __init__(self, name, definition):
        self.name = name
        self.definition = definition
def __str__(self):
return '{} - {}'.format(self.name, self.definition)
def __repr__(self):
return str(self)
| 24.058559 | 74 | 0.52874 | 607 | 5,341 | 4.439868 | 0.153213 | 0.037848 | 0.036735 | 0.050464 | 0.331725 | 0.267161 | 0.224861 | 0.092393 | 0.031169 | 0.031169 | 0 | 0.004821 | 0.378581 | 5,341 | 221 | 75 | 24.167421 | 0.807171 | 0.24209 | 0 | 0.457944 | 0 | 0.009346 | 0.056045 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.224299 | false | 0 | 0 | 0.121495 | 0.448598 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
22d061bf4dd94ca94a7f507ce7fe9f9a517f47a3 | 274 | py | Python | global_info.py | AkagiYui/AzurLaneTool | f00fa6e5c6371db72ee399d7bd178a81f39afd8b | [
"Apache-2.0"
] | null | null | null | global_info.py | AkagiYui/AzurLaneTool | f00fa6e5c6371db72ee399d7bd178a81f39afd8b | [
"Apache-2.0"
] | null | null | null | global_info.py | AkagiYui/AzurLaneTool | f00fa6e5c6371db72ee399d7bd178a81f39afd8b | [
"Apache-2.0"
] | null | null | null | from time import sleep
debug_mode = False
time_to_exit = False
exiting = False
exit_code = 0
def get_debug_mode():
return debug_mode
def trigger_exit(_exit_code):
global time_to_exit, exit_code
exit_code = _exit_code
time_to_exit = True
sleep(0.1)
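A standalone sketch of how `trigger_exit` flips the module-level flags (same logic as above, with the sleep shortened so the demo returns quickly):

```python
from time import sleep

time_to_exit = False
exit_code = 0

def trigger_exit(_exit_code):
    # mirror of the module above: record the code, then raise the exit flag
    global time_to_exit, exit_code
    exit_code = _exit_code
    time_to_exit = True
    sleep(0.01)

trigger_exit(2)
print(time_to_exit, exit_code)  # True 2
```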
| 14.421053 | 34 | 0.729927 | 45 | 274 | 4.044444 | 0.422222 | 0.21978 | 0.164835 | 0.175824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013889 | 0.211679 | 274 | 18 | 35 | 15.222222 | 0.828704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.083333 | 0.083333 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
22d53110de1903196c37bd847b098f2456b54f16 | 1,441 | py | Python | windows_packages_gpu/torch/nn/intrinsic/qat/modules/linear_relu.py | codeproject/DeepStack | d96368a3db1bc0266cb500ba3701d130834da0e6 | [
"Apache-2.0"
] | 353 | 2020-12-10T10:47:17.000Z | 2022-03-31T23:08:29.000Z | windows_packages_gpu/torch/nn/intrinsic/qat/modules/linear_relu.py | codeproject/DeepStack | d96368a3db1bc0266cb500ba3701d130834da0e6 | [
"Apache-2.0"
] | 80 | 2020-12-10T09:54:22.000Z | 2022-03-30T22:08:45.000Z | windows_packages_gpu/torch/nn/intrinsic/qat/modules/linear_relu.py | codeproject/DeepStack | d96368a3db1bc0266cb500ba3701d130834da0e6 | [
"Apache-2.0"
] | 63 | 2020-12-10T17:10:34.000Z | 2022-03-28T16:27:07.000Z | from __future__ import absolute_import, division, print_function, unicode_literals
import torch.nn.qat as nnqat
import torch.nn.intrinsic
import torch.nn.functional as F
class LinearReLU(nnqat.Linear):
r"""
A LinearReLU module fused from Linear and ReLU modules, attached with
FakeQuantize modules for output activation and weight, used in
quantization aware training.
We adopt the same interface as :class:`torch.nn.Linear`.
Similar to `torch.nn.intrinsic.LinearReLU`, with FakeQuantize modules initialized to
default.
Attributes:
activation_post_process: fake quant module for output activation
weight: fake quant module for weight
Examples::
>>> m = nn.qat.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
"""
_FLOAT_MODULE = torch.nn.intrinsic.LinearReLU
def __init__(self, in_features, out_features, bias=True,
qconfig=None):
super(LinearReLU, self).__init__(in_features, out_features, bias, qconfig)
def forward(self, input):
return self.activation_post_process(F.relu(
F.linear(input, self.weight_fake_quant(self.weight), self.bias)))
@classmethod
def from_float(cls, mod, qconfig=None):
return super(LinearReLU, cls).from_float(mod, qconfig)
| 34.309524 | 89 | 0.66898 | 179 | 1,441 | 5.223464 | 0.430168 | 0.04492 | 0.041711 | 0.055615 | 0.053476 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01275 | 0.238029 | 1,441 | 41 | 90 | 35.146341 | 0.838798 | 0.420541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.25 | 0.125 | 0.6875 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
22da304d7553bb5adf64e1d52f39170a3b5aca59 | 249 | py | Python | jumbo_api/objects/profile.py | rolfberkenbosch/python-jumbo-api | 9ca35cbea6225dcc6108093539e76f110b1840b0 | [
"MIT"
] | 3 | 2020-07-24T08:44:13.000Z | 2021-09-05T06:24:01.000Z | jumbo_api/objects/profile.py | rolfberkenbosch/python-jumbo-api | 9ca35cbea6225dcc6108093539e76f110b1840b0 | [
"MIT"
] | 6 | 2020-04-30T19:12:24.000Z | 2021-03-23T19:21:19.000Z | jumbo_api/objects/profile.py | rolfberkenbosch/python-jumbo-api | 9ca35cbea6225dcc6108093539e76f110b1840b0 | [
"MIT"
] | 2 | 2020-04-30T14:59:12.000Z | 2020-08-30T19:15:57.000Z | from jumbo_api.objects.store import Store
class Profile(object):
def __init__(self, data):
self.id = data.get("identifier")
self.store = Store(data.get("store"))
def __str__(self):
return f"{self.id} {self.store}"
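A self-contained sketch of the `Profile` pattern, with a stub `Store` standing in for `jumbo_api.objects.store.Store` (the stub's shape is an assumption; the real class carries more fields):

```python
class Store:
    # hypothetical stub with only the field the demo needs
    def __init__(self, data):
        self.id = data.get("id")

    def __str__(self):
        return str(self.id)

class Profile:
    def __init__(self, data):
        self.id = data.get("identifier")
        self.store = Store(data.get("store"))

    def __str__(self):
        return f"{self.id} {self.store}"

p = Profile({"identifier": "u1", "store": {"id": "s9"}})
print(p)  # u1 s9
```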
| 22.636364 | 45 | 0.634538 | 34 | 249 | 4.382353 | 0.558824 | 0.080537 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220884 | 249 | 10 | 46 | 24.9 | 0.768041 | 0 | 0 | 0 | 0 | 0 | 0.148594 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.142857 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
22db31f9f12a464c13a70cead5b1a18013bd0add | 365 | py | Python | lazyblacksmith/views/ajax/__init__.py | jonathonfletcher/LazyBlacksmith | f244f0a15c795707b64e7cc53f82c6d6270691b5 | [
"BSD-3-Clause"
] | 49 | 2016-10-24T13:51:56.000Z | 2022-02-18T06:07:47.000Z | lazyblacksmith/views/ajax/__init__.py | jonathonfletcher/LazyBlacksmith | f244f0a15c795707b64e7cc53f82c6d6270691b5 | [
"BSD-3-Clause"
] | 84 | 2015-04-29T10:24:51.000Z | 2022-02-17T19:18:01.000Z | lazyblacksmith/views/ajax/__init__.py | jonathonfletcher/LazyBlacksmith | f244f0a15c795707b64e7cc53f82c6d6270691b5 | [
"BSD-3-Clause"
] | 34 | 2017-01-23T13:19:17.000Z | 2022-02-02T17:32:08.000Z | # -*- encoding: utf-8 -*-
from flask import request
from lazyblacksmith.utils.request import is_xhr
import logging
logger = logging.getLogger('lb.ajax')
def is_not_ajax():
"""
Return True if request is not ajax
This function is used in @cache annotation
to not cache direct call (http 403)
"""
return not is_xhr(request)
| 21.470588 | 48 | 0.665753 | 51 | 365 | 4.686275 | 0.627451 | 0.041841 | 0.075314 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014545 | 0.246575 | 365 | 16 | 49 | 22.8125 | 0.854545 | 0.378082 | 0 | 0 | 0 | 0 | 0.037433 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.5 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
22de394896bd7be748b49ef5d7072349cfcc8ff2 | 1,770 | py | Python | 09_multiprocessing/prime_validation/primes_factor_test.py | jumploop/high_performance_python | da5b11735601b51f141975f9d59f14293cab16bb | [
"MIT"
] | null | null | null | 09_multiprocessing/prime_validation/primes_factor_test.py | jumploop/high_performance_python | da5b11735601b51f141975f9d59f14293cab16bb | [
"MIT"
] | null | null | null | 09_multiprocessing/prime_validation/primes_factor_test.py | jumploop/high_performance_python | da5b11735601b51f141975f9d59f14293cab16bb | [
"MIT"
] | null | null | null | import math
import time
def check_prime(n):
    # trial division; n < 2 and n == 2 handled explicitly so 2 reports prime
    if n < 2:
        return False, None
    if n == 2:
        return True, None
    if n % 2 == 0:
        return False, 2
    for i in range(3, int(math.sqrt(n)) + 1):
        if n % i == 0:
            return False, i
    return True, None
if __name__ == "__main__":
primes = []
t1 = time.time()
# 100109100129100151 big prime
# http://primes.utm.edu/curios/page.php/100109100129100151.html
# number_range = xrange(100109100129100153, 100109100129101238, 2)
number_range = range(100109100129101237, 100109100129201238, 2)
# new expensive near-primes
# [(95362951, (100109100129100369, 7.254560947418213))
# (171656941, (100109100129101027, 13.052711009979248))
# (121344023, (100109100129101291, 8.994053840637207)
# note these two lines of timings look really wrong, they're about 4sec
# each really
# [(265687139, (100109100129102047, 19.642582178115845)), (219609683, (100109100129102277, 16.178056001663208)), (121344023, (100109100129101291, 8.994053840637207))]
# [(316096873, (100109100129126653, 23.480671882629395)), (313994287, (100109100129111617, 23.262380123138428)), (307151363, (100109100129140177, 22.80288815498352))]
# primes
# 100109100129162907
# 100109100129162947
highest_factors = {}
for possible_prime in number_range:
t2 = time.time()
is_prime, factor = check_prime(possible_prime)
if is_prime:
primes.append(possible_prime)
print("GOT NEW PRIME", possible_prime)
else:
highest_factors[factor] = (possible_prime, time.time() - t2)
hf = highest_factors.items()
hf = sorted(hf, reverse=True)
print(hf[:3])
print("Took:", time.time() - t1)
print(len(primes), primes[:10], primes[-10:])
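A standalone check of the trial-division contract (`check_prime` returns `(is_prime, smallest_factor_found)`), with the n < 2 and n == 2 edges handled explicitly; the driver above only ever feeds odd candidates:

```python
import math

def check_prime(n):
    # trial division up to sqrt(n); returns (is_prime, smallest_factor_found)
    if n < 2:
        return False, None
    if n == 2:
        return True, None
    if n % 2 == 0:
        return False, 2
    for i in range(3, int(math.sqrt(n)) + 1):
        if n % i == 0:
            return False, i
    return True, None

print(check_prime(2))   # (True, None)
print(check_prime(15))  # (False, 3)
print(check_prime(17))  # (True, None)
```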
| 36.122449 | 170 | 0.654802 | 188 | 1,770 | 6.042553 | 0.574468 | 0.057218 | 0.021127 | 0.075704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.403061 | 0.224859 | 1,770 | 48 | 171 | 36.875 | 0.424927 | 0.450282 | 0 | 0 | 0 | 0 | 0.027168 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.074074 | 0 | 0.222222 | 0.148148 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
22ecf4bdf03fca4f671513bb4a4ebe6ea6f1152b | 225 | py | Python | cocotb_test/run.py | canerbulduk/cocotb-test | ece092446a1e5de932db12dfb60441d6f322d5f1 | [
"BSD-2-Clause"
] | null | null | null | cocotb_test/run.py | canerbulduk/cocotb-test | ece092446a1e5de932db12dfb60441d6f322d5f1 | [
"BSD-2-Clause"
] | null | null | null | cocotb_test/run.py | canerbulduk/cocotb-test | ece092446a1e5de932db12dfb60441d6f322d5f1 | [
"BSD-2-Clause"
] | null | null | null |
import cocotb_test.simulator
# For partial backward compatibility
def run(simulator=None, **kwargs):
if simulator:
sim = simulator(**kwargs)
sim.run()
else:
cocotb_test.simulator.run(**kwargs)
| 17.307692 | 43 | 0.648889 | 26 | 225 | 5.538462 | 0.576923 | 0.138889 | 0.263889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24 | 225 | 12 | 44 | 18.75 | 0.842105 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
22f45d29bee95fa69837bee8b207676703b1cb59 | 844 | py | Python | examples/hello_world/src/Algorithm.py | algorithmiaio/algorithmia-adk-python | 1e5c6b9de08fe34260f3b4c03eb4596cccb4d070 | [
"MIT"
] | 4 | 2021-03-15T16:51:27.000Z | 2021-07-25T16:47:00.000Z | examples/hello_world/src/Algorithm.py | algorithmiaio/algorithmia-adk-python | 1e5c6b9de08fe34260f3b4c03eb4596cccb4d070 | [
"MIT"
] | 2 | 2021-02-25T21:13:30.000Z | 2021-05-03T14:49:41.000Z | examples/hello_world/src/Algorithm.py | algorithmiaio/algorithmia-adk-python | 1e5c6b9de08fe34260f3b4c03eb4596cccb4d070 | [
"MIT"
] | 1 | 2021-03-02T00:06:55.000Z | 2021-03-02T00:06:55.000Z | from Algorithmia import ADK
# API calls will begin at the apply() method, with the request body passed as 'input'
# For more details, see algorithmia.com/developers/algorithm-development/languages
def apply(input):
# If your apply function uses state that's loaded into memory via load, you can pass that loaded state to your apply
# function by defining an additional "globals" parameter in your apply function; but it's optional!
return "hello {}".format(str(input))
# This turns your library code into an algorithm that can run on the platform.
# If you intend to use loading operations, remember to pass a `load` function as a second variable.
algorithm = ADK(apply)
# The 'init()' function actually starts the algorithm, you can follow along in the source code
# to see how everything works.
algorithm.init("Algorithmia")
| 44.421053 | 120 | 0.761848 | 131 | 844 | 4.908397 | 0.625954 | 0.041991 | 0.079316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170616 | 844 | 18 | 121 | 46.888889 | 0.918571 | 0.798578 | 0 | 0 | 0 | 0 | 0.118012 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
22fadcf738c9cad6b1e0cd6d9126f92326318681 | 1,088 | py | Python | main.py | vu-telab/DAKOTA-moga-post-processing-tool | 2f41561bd8ca44c693e5994f7f68a1edc1a82361 | [
"MIT"
] | null | null | null | main.py | vu-telab/DAKOTA-moga-post-processing-tool | 2f41561bd8ca44c693e5994f7f68a1edc1a82361 | [
"MIT"
] | 4 | 2017-02-06T18:20:25.000Z | 2017-02-06T20:50:34.000Z | main.py | caseynbrock/DAKOTA-moga-post-processing-tool | 2f41561bd8ca44c693e5994f7f68a1edc1a82361 | [
"MIT"
] | null | null | null | # main.py
#
# currently just an example script I use to test my optimization_results module
#
# WARNING: design point numbers 0-indexed in pandas database, but
# eval_id column is the original 1-indexed value given by DAKOTA
import optimization_results as optr
def main():
a4 = optr.MogaOptimizationResults()
    print(a4.gen_size_list)
    print(a4.pareto_front)
assert a4.gen_size_list == [100, 94, 48, 45, 45, 46, 62, 85, 102, 108, 131, 130, 134, 119,
127, 128, 155, 124, 124, 130, 128, 123, 137, 135, 149, 165, 154,
164, 169, 177, 205, 196, 215, 185, 205, 190, 162, 158, 154, 159,
163, 183, 175, 183, 186, 188, 188, 186, 201, 213, 222]
### OLD MATLAB CODE I NEED TO REWORK ###
# # read force and atan accuracy objectives from
# # all_accuracy_objectives.dat
# A3 = load('all_accuracy_objectives.dat');
# completed_points = A3(:,1);
# force_objs = A3(:,2);
# atan_objs = A3(:,3);
# n3 = length(A3(:,1));
if __name__ == '__main__':
main()
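The commented-out MATLAB block in `main()` can be sketched in plain Python; the 1-based MATLAB column slices become 0-based indices, and the rows below are made-up stand-ins for the contents of `all_accuracy_objectives.dat`:

```python
# stand-in rows: (completed_point, force_obj, atan_obj)
A3 = [(1, 0.5, 0.1), (2, 0.6, 0.2)]

completed_points = [row[0] for row in A3]  # MATLAB A3(:,1)
force_objs = [row[1] for row in A3]        # MATLAB A3(:,2)
atan_objs = [row[2] for row in A3]         # MATLAB A3(:,3)
n3 = len(A3)                               # MATLAB length(A3(:,1))

print(n3, force_objs)  # 2 [0.5, 0.6]
```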
| 35.096774 | 97 | 0.596507 | 156 | 1,088 | 4.012821 | 0.74359 | 0.086262 | 0.028754 | 0.041534 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.207692 | 0.283088 | 1,088 | 30 | 98 | 36.266667 | 0.594872 | 0.421875 | 0 | 0 | 0 | 0 | 0.01318 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0 | null | null | 0 | 0.090909 | null | null | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fe099e17f120425cb619611e6ff40d2da802127d | 3,572 | py | Python | src/zope/app/content/__init__.py | zopefoundation/zope.app.content | d4c0276ff90bceed2156d808ab6b42b85d7b3810 | [
"ZPL-2.1"
] | null | null | null | src/zope/app/content/__init__.py | zopefoundation/zope.app.content | d4c0276ff90bceed2156d808ab6b42b85d7b3810 | [
"ZPL-2.1"
] | 1 | 2017-04-22T19:53:21.000Z | 2017-04-23T16:44:58.000Z | src/zope/app/content/__init__.py | zopefoundation/zope.app.content | d4c0276ff90bceed2156d808ab6b42b85d7b3810 | [
"ZPL-2.1"
] | 1 | 2015-04-03T07:35:01.000Z | 2015-04-03T07:35:01.000Z | ##############################################################################
#
# Copyright (c) 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Content Type convenience lookup functions."""
from zope.interface import provider
from zope.interface import providedBy
from zope.schema.interfaces import IVocabularyFactory
from zope.app.content.interfaces import IContentType
from zope.componentvocabulary.vocabulary import UtilityVocabulary
from zope.security.proxy import removeSecurityProxy
def queryType(object, interface):
"""Returns the object's interface which implements interface.
>>> from zope.interface import Interface
>>> class IContentType(Interface):
... pass
>>> from zope.interface import Interface, implementer, directlyProvides
>>> class I(Interface):
... pass
>>> class J(Interface):
... pass
>>> directlyProvides(I, IContentType)
>>> @implementer(I)
... class C(object):
... pass
>>> @implementer(J, I)
... class D(object):
... pass
>>> obj = C()
>>> c1_ctype = queryType(obj, IContentType)
>>> c1_ctype.__name__
'I'
>>> class I1(I):
... pass
>>> class I2(I1):
... pass
>>> class I3(Interface):
... pass
>>> @implementer(I1)
... class C1(object):
... pass
>>> obj1 = C1()
>>> c1_ctype = queryType(obj1, IContentType)
>>> c1_ctype.__name__
'I'
>>> @implementer(I2)
... class C2(object):
... pass
>>> obj2 = C2()
>>> c2_ctype = queryType(obj2, IContentType)
>>> c2_ctype.__name__
'I'
>>> @implementer(I3)
... class C3(object):
... pass
>>> obj3 = C3()
If Interface doesn't provide `IContentType`, `queryType` returns ``None``.
>>> c3_ctype = queryType(obj3, IContentType)
>>> c3_ctype
>>> c3_ctype is None
True
>>> class I4(I):
... pass
>>> directlyProvides(I4, IContentType)
>>> @implementer(I4)
... class C4(object):
... pass
>>> obj4 = C4()
>>> c4_ctype = queryType(obj4, IContentType)
>>> c4_ctype.__name__
'I4'
"""
# Remove the security proxy, so that we can introspect the type of the
# object's interfaces.
naked = removeSecurityProxy(object)
object_iro = providedBy(naked).__iro__
for iface in object_iro:
if interface.providedBy(iface):
return iface
return None
def queryContentType(object):
"""Returns the interface implemented by object which implements
:class:`zope.app.content.interfaces.IContentType`.
>>> from zope.interface import Interface, implementer, directlyProvides
>>> class I(Interface):
... pass
>>> directlyProvides(I, IContentType)
>>> @implementer(I)
... class C(object):
... pass
>>> obj = C()
>>> c1_ctype = queryContentType(obj)
>>> c1_ctype.__name__
'I'
"""
return queryType(object, IContentType)
@provider(IVocabularyFactory)
class ContentTypesVocabulary(UtilityVocabulary):
interface = IContentType
| 27.060606 | 78 | 0.606663 | 367 | 3,572 | 5.798365 | 0.3297 | 0.033835 | 0.039944 | 0.054041 | 0.18562 | 0.148026 | 0.132989 | 0.132989 | 0.132989 | 0.132989 | 0 | 0.016643 | 0.226204 | 3,572 | 131 | 79 | 27.267176 | 0.753256 | 0.657335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.722222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
fe10f333391851cb33d5c6c2715480481922b0d0 | 2,993 | py | Python | heat/tests/test_rpc_listener_client.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 1 | 2015-12-18T21:46:55.000Z | 2015-12-18T21:46:55.000Z | heat/tests/test_rpc_listener_client.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 5 | 2019-08-14T06:46:03.000Z | 2021-12-13T20:01:25.000Z | heat/tests/test_rpc_listener_client.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 3 | 2018-07-19T17:43:37.000Z | 2019-11-15T22:13:30.000Z | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
import oslo_messaging as messaging
from heat.rpc import api as rpc_api
from heat.rpc import listener_client as rpc_client
from heat.tests import common
class ListenerClientTest(common.HeatTestCase):
@mock.patch('heat.common.messaging.get_rpc_client',
return_value=mock.Mock())
def test_engine_alive_ok(self, rpc_client_method):
mock_rpc_client = rpc_client_method.return_value
mock_prepare_method = mock_rpc_client.prepare
mock_prepare_client = mock_prepare_method.return_value
mock_cnxt = mock.Mock()
listener_client = rpc_client.EngineListenerClient('engine-007')
rpc_client_method.assert_called_once_with(
version=rpc_client.EngineListenerClient.BASE_RPC_API_VERSION,
topic=rpc_api.LISTENER_TOPIC, server='engine-007',
)
mock_prepare_method.assert_called_once_with(timeout=2)
self.assertEqual(mock_prepare_client,
listener_client._client,
"Failed to create RPC client")
ret = listener_client.is_alive(mock_cnxt)
self.assertTrue(ret)
mock_prepare_client.call.assert_called_once_with(mock_cnxt,
'listening')
@mock.patch('heat.common.messaging.get_rpc_client',
return_value=mock.Mock())
def test_engine_alive_timeout(self, rpc_client_method):
mock_rpc_client = rpc_client_method.return_value
mock_prepare_method = mock_rpc_client.prepare
mock_prepare_client = mock_prepare_method.return_value
mock_cnxt = mock.Mock()
listener_client = rpc_client.EngineListenerClient('engine-007')
rpc_client_method.assert_called_once_with(
version=rpc_client.EngineListenerClient.BASE_RPC_API_VERSION,
topic=rpc_api.LISTENER_TOPIC, server='engine-007',
)
mock_prepare_method.assert_called_once_with(timeout=2)
self.assertEqual(mock_prepare_client,
listener_client._client,
"Failed to create RPC client")
mock_prepare_client.call.side_effect = messaging.MessagingTimeout(
'too slow')
ret = listener_client.is_alive(mock_cnxt)
self.assertFalse(ret)
mock_prepare_client.call.assert_called_once_with(mock_cnxt,
'listening')
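The test pattern above (patch a client factory, then assert against the returned mock) in miniature; the `Messaging` namespace and function names below are hypothetical stand-ins, not heat's API:

```python
from unittest import mock

class Messaging:
    # stand-in namespace for the patched factory
    @staticmethod
    def get_rpc_client():
        raise RuntimeError("should be patched in tests")

def connect():
    return Messaging.get_rpc_client()

with mock.patch.object(Messaging, "get_rpc_client", return_value=mock.Mock()) as factory:
    client = connect()
    factory.assert_called_once_with()
    result = client is factory.return_value

print(result)  # True
```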
| 42.15493 | 74 | 0.687604 | 371 | 2,993 | 5.237197 | 0.293801 | 0.088008 | 0.061246 | 0.06176 | 0.647452 | 0.647452 | 0.647452 | 0.647452 | 0.610396 | 0.610396 | 0 | 0.007961 | 0.244571 | 2,993 | 70 | 75 | 42.757143 | 0.851393 | 0.173739 | 0 | 0.708333 | 0 | 0 | 0.078049 | 0.029268 | 0 | 0 | 0 | 0 | 0.208333 | 1 | 0.041667 | false | 0 | 0.104167 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fe116c1174a46647c502098395333cc909588b1c | 684 | py | Python | amadeus/travel/trip_parser_jobs/_status.py | akshitsingla/amadeus-python | d8f3595e556b674998156f98d8a318045bb4c21c | [
"MIT"
] | 125 | 2018-04-09T07:27:24.000Z | 2022-02-22T11:45:20.000Z | amadeus/travel/trip_parser_jobs/_status.py | akshitsingla/amadeus-python | d8f3595e556b674998156f98d8a318045bb4c21c | [
"MIT"
] | 58 | 2018-03-29T14:58:01.000Z | 2022-03-17T10:18:07.000Z | amadeus/travel/trip_parser_jobs/_status.py | akshitsingla/amadeus-python | d8f3595e556b674998156f98d8a318045bb4c21c | [
"MIT"
] | 58 | 2018-04-06T10:56:20.000Z | 2022-03-04T01:23:24.000Z | from amadeus.client.decorator import Decorator
class TripParserStatus(Decorator, object):
def __init__(self, client, job_id):
Decorator.__init__(self, client)
self.job_id = job_id
def get(self, **params):
'''
Returns the parsing status and the link to the result
in case of successful parsing.
.. code-block:: python
            amadeus.travel.trip_parser_jobs.status('XXX').get()
:rtype: amadeus.Response
:raises amadeus.ResponseError: if the request could not be completed
'''
return self.client.get(
'/v2/travel/trip-parser-jobs/{0}'.format(self.job_id),
**params)
| 28.5 | 76 | 0.627193 | 83 | 684 | 5 | 0.590361 | 0.048193 | 0.06747 | 0.096386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004008 | 0.270468 | 684 | 23 | 77 | 29.73913 | 0.827655 | 0.377193 | 0 | 0 | 0 | 0 | 0.085635 | 0.085635 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
a3b55358fffe0e7cc61738673a1b1895170d48c3 | 9,891 | py | Python | mbta_python/__init__.py | dougzor/mbta_python | f277f48f8bf8048cb5c9c6307e672c37292e57f7 | [
"MIT"
] | null | null | null | mbta_python/__init__.py | dougzor/mbta_python | f277f48f8bf8048cb5c9c6307e672c37292e57f7 | [
"MIT"
] | null | null | null | mbta_python/__init__.py | dougzor/mbta_python | f277f48f8bf8048cb5c9c6307e672c37292e57f7 | [
"MIT"
] | null | null | null | import datetime
import requests
from mbta_python.models import Stop, Direction, Schedule, Mode, \
TripSchedule, Alert, StopWithMode, Prediction
HOST = "http://realtime.mbta.com/developer/api/v2"
def datetime_to_epoch(dt):
epoch = datetime.datetime.utcfromtimestamp(0)
return int((dt - epoch).total_seconds())
class MBTASDK(object):
"""Wrapper around calls to the MBTA Realtime API
"""
def __init__(self, api_key):
self.api_key = api_key
def _make_request(self, path, params):
url = "{}/{}".format(HOST, path)
response = requests.get(url, params=params)
data = response.json()
error = data.get("error")
if error:
raise Exception(error["message"])
return response.json()
def get_stops_by_location(self, latitude, longitude):
"""Get a List of Stops sorted by proximity to the given
latitude and longitude
"""
params = {
"lat": latitude,
"lon": longitude,
"api_key": self.api_key,
"format": "json"
}
data = self._make_request("stopsbylocation", params)
stops = [Stop(stop_data) for stop_data in data["stop"]]
return stops
def get_stops_by_route(self, route_id):
"""Return a List of Directions for the route_id
that contain a list of Stops that Direction and Route serve
"""
params = {
"route": route_id,
"api_key": self.api_key,
"format": "json"
}
data = self._make_request("stopsbyroute", params)
return [Direction(d) for d in data["direction"]]
def get_routes_by_stop(self, stop_id):
"""Return a list of routes that serve a particular stop
"""
params = {
"stop": stop_id,
"api_key": self.api_key,
"format": "json"
}
data = self._make_request("routesbystop", params)
return StopWithMode(data)
def get_schedules_by_stop(self, stop_id, route_id=None, direction_id=None,
date=None, max_time=None, max_trips=None):
"""Return scheduled arrivals and departures for a direction and route for a
particular stop.
stop_id - Stop ID
route_id - Route ID, If not included then schedule for all routes
serving the stop will be returned,
direction_id - Direction ID, If included then route must also be
included if not included then schedule for all
directions of the route serving the stop will be
returned
date - Time after which schedule should be returned. If included
then must be within the next seven (7) days
If not included then schedule starting from the current
datetime will be returned
max_time - Defines maximum range of time (in minutes) within which
trips will be returned. If not included defaults to 60.
max_trips - Defines number of trips to return. Integer between 1 and
100. If not included defaults to 5.
"""
params = {
"stop": stop_id,
"api_key": self.api_key,
"format": "json",
"route": route_id,
"direction": direction_id,
"datetime": datetime_to_epoch(date) if date else None,
"max_time": max_time,
"max_trips": max_trips
}
data = self._make_request("schedulebystop", params)
return Schedule(data)
def get_schedules_by_routes(self, route_ids, date=None,
max_time=None, max_trips=None):
"""Return the scheduled arrivals and departures in a direction
for a particular route or routes.
route_ids - List of Route IDs, or single Route ID
date - Time after which schedule should be returned. If included
then must be within the next seven (7) days If not included
then schedule starting from the current datetime will
be returned
max_time - Defines maximum range of time (in minutes) within which
trips will be returned. If not included defaults to 60.
max_trips - Defines number of trips to return. Integer between 1
and 100. If not included defaults to 5.
"""
if not isinstance(route_ids, list):
route_ids = [route_ids]
params = {
"routes": ",".join(route_ids),
"api_key": self.api_key,
"format": "json",
"datetime": datetime_to_epoch(date) if date else None,
"max_time": max_time,
"max_trips": max_trips
}
data = self._make_request("schedulebyroutes", params)
return [Mode(m) for m in data["mode"]]
def get_schedules_by_trip(self, trip_id, date=None):
        """Return the scheduled arrivals and departures for a particular
        trip.

        trip_id - Trip ID
        date - Time after which schedule should be returned. If included
               then must be within the next seven (7) days. If not
               included then schedule starting from the current datetime
               will be returned
        """
params = {
"trip": trip_id,
"api_key": self.api_key,
"format": "json",
"datetime": datetime_to_epoch(date) if date else None,
}
data = self._make_request("schedulebytrip", params)
return TripSchedule(data)
def get_predictions_by_stop(self, stop_id, include_access_alerts=False,
include_service_alerts=True):
"""Return predicted arrivals and departures in the next hour for a
direction and route for a particular stop.
stop_id - Stop ID
include_access_alerts - Whether or not alerts pertaining to
accessibility (elevators, escalators) should be
returned
include_service_alerts - Whether or not service alerts should be
returned
"""
params = {
"stop": stop_id,
"api_key": self.api_key,
"format": "json",
"include_access_alerts": include_access_alerts,
"include_service_alerts": include_service_alerts
}
data = self._make_request("predictionsbystop", params)
return Prediction(data)
def get_predictions_by_routes(self, route_ids, include_access_alerts=False,
include_service_alerts=True):
"""Return predictions for upcoming trips (including trips already underway)
in a direction for a particular route or routes.
route_ids - List of Route IDs, or single Route ID
include_access_alerts - Whether or not alerts pertaining to
accessibility (elevators, escalators) should be
returned
include_service_alerts - Whether or not service alerts should be
returned
"""
if not isinstance(route_ids, list):
route_ids = [route_ids]
params = {
"routes": ",".join(route_ids),
"api_key": self.api_key,
"format": "json",
"include_access_alerts": include_access_alerts,
"include_service_alerts": include_service_alerts
}
data = self._make_request("predictionsbyroutes", params)
return Prediction(data)
def get_vehicles_by_routes(self, route_ids, include_access_alerts=False,
include_service_alerts=True):
"""Return vehicle positions for upcoming trips (including trips already
underway) in a direction for a particular route or routes.
route_ids - List of Route IDs, or single Route ID
include_access_alerts - Whether or not alerts pertaining to
accessibility (elevators, escalators) should be
returned
include_service_alerts - Whether or not service alerts should be
returned
"""
if not isinstance(route_ids, list):
route_ids = [route_ids]
params = {
"routes": ",".join(route_ids),
"api_key": self.api_key,
"format": "json",
"include_access_alerts": include_access_alerts,
"include_service_alerts": include_service_alerts
}
data = self._make_request("vehiclesbyroutes", params)
return [Mode(m) for m in data]
def get_predictions_by_trip(self, trip_id):
"""Return the predicted arrivals and departures for a particular trip.
trip_id - TripID
"""
params = {
"trip": trip_id,
"api_key": self.api_key,
"format": "json"
}
data = self._make_request("predictionsbytrip", params)
return TripSchedule(data)
def get_vehicles_by_trip(self, trip_id):
"""Return the predicted vehicle positions for a particular trip.
trip_id - TripID
"""
params = {
"trip": trip_id,
"api_key": self.api_key,
"format": "json"
}
data = self._make_request("vehiclesbytrip", params)
return TripSchedule(data)
# --- diskcatalog/core/views.py (repo: rywjhzd/Cataloging-and-Visualizing-Cradles-of-Planet-Formation, license: MIT) ---
from django.shortcuts import render
from .models import Disk
import os
def index(request):
context = {}
disk_list = Disk.objects.all()
context['disk_list'] = disk_list
return render(request, 'index.html', context)
#def index(request):
# module_dir = os.path.dirname(__file__)
# file_path = os.path.join(module_dir, 'data.txt')
#    disk_list = open(file_path, 'r')
#    data = disk_list.read()
# context = {'disk_list': data}
# return render(request, 'index.html', context)
# --- dragontail/content/models/basicpage.py (repo: tracon/dragontail, license: MIT) ---
# encoding: utf-8
from django.db import models
from wagtail.wagtailcore.models import Page
from wagtail.wagtailcore.fields import StreamField
from wagtail.wagtailcore import blocks
from wagtail.wagtailadmin.edit_handlers import FieldPanel, StreamFieldPanel
from wagtail.wagtailimages.blocks import ImageChooserBlock
class BasicPage(Page):
body = StreamField([
('paragraph', blocks.RichTextBlock()),
('image', ImageChooserBlock()),
])
content_panels = Page.content_panels + [
StreamFieldPanel('body'),
]
def get_template(self, request, *args, **kwargs):
from .templatesettings import TemplateSettings
template_settings = TemplateSettings.for_site(request.site)
        return template_settings.basic_page_template
# --- src/mpass/mpass/migrations/0001_initial.py (repo: haltu/velmu-mpass-demo, license: MIT) ---
# -*- coding: utf-8 -*-
# Generated by Django 1.11.10 on 2018-03-20 08:34
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import parler.models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='AuthenticationSource',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('modified_at', models.DateTimeField(auto_now=True)),
('auth_id', models.CharField(max_length=128)),
('icon_url', models.CharField(blank=True, max_length=2048, null=True)),
],
options={
'abstract': False,
},
bases=(parler.models.TranslatableModelMixin, models.Model),
),
migrations.CreateModel(
name='AuthenticationSourceTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language_code', models.CharField(db_index=True, max_length=15, verbose_name='Language')),
('title', models.CharField(max_length=2048)),
('master', models.ForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='translations', to='mpass.AuthenticationSource')),
],
options={
'managed': True,
'db_table': 'mpass_authenticationsource_translation',
'db_tablespace': '',
'default_permissions': (),
'verbose_name': 'authentication source Translation',
},
),
migrations.CreateModel(
name='AuthenticationTag',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('modified_at', models.DateTimeField(auto_now=True)),
('tag_id', models.CharField(max_length=128)),
],
options={
'abstract': False,
},
bases=(parler.models.TranslatableModelMixin, models.Model),
),
migrations.CreateModel(
name='AuthenticationTagTranslation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('language_code', models.CharField(db_index=True, max_length=15, verbose_name='Language')),
('title', models.CharField(max_length=2048)),
('master', models.ForeignKey(editable=False, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='translations', to='mpass.AuthenticationTag')),
],
options={
'managed': True,
'db_table': 'mpass_authenticationtag_translation',
'db_tablespace': '',
'default_permissions': (),
'verbose_name': 'authentication tag Translation',
},
),
migrations.AddField(
model_name='authenticationsource',
name='tags',
field=models.ManyToManyField(blank=True, to='mpass.AuthenticationTag'),
),
migrations.AlterUniqueTogether(
name='authenticationtagtranslation',
unique_together=set([('language_code', 'master')]),
),
migrations.AlterUniqueTogether(
name='authenticationsourcetranslation',
unique_together=set([('language_code', 'master')]),
),
]
# --- tests/test_comment.py (repo: uwase-diane/min_pitch, license: Unlicense) ---
import unittest
from app.models import Comment, Pitch
from app import db
class TestPitchComment(unittest.TestCase):
def setUp(self):
self.new_pitch = Pitch(post = "doit", category='Quotes')
self.new_comment = Comment(comment = "good comment", pitch=self.new_pitch)
def test_instance(self):
self.assertTrue(isinstance(self.new_comment,Comment))
def test_check_instance_variables(self):
        self.assertEqual(self.new_comment.comment, "good comment")
        self.assertEqual(self.new_comment.pitch, self.new_pitch)
# --- 07/c/3 - Square Census.py (repo: Surferlul/csc-python-solutions, license: MIT) ---
n = int(input())
c = 1
while c**2 < n:
print(c**2)
c += 1
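The same census loop can be packaged as a small reusable helper (an illustrative sketch; `squares_below` is not part of the original file):

```python
def squares_below(n):
    """Collect the positive perfect squares strictly less than n."""
    squares = []
    c = 1
    while c**2 < n:
        squares.append(c**2)
        c += 1
    return squares

print(squares_below(20))  # -> [1, 4, 9, 16]
```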
# --- examples/py/async-basic.py (repo: voBits/ccxt, license: MIT) ---
# -*- coding: utf-8 -*-
import asyncio
import os
import sys
root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.append(root + '/python')
import ccxt.async_support as ccxt  # noqa: E402  (module renamed from ccxt.async; "async" is a keyword since Python 3.7)
async def test_gdax():
gdax = ccxt.gdax()
markets = await gdax.load_markets()
await gdax.close()
return markets
if __name__ == '__main__':
print(asyncio.get_event_loop().run_until_complete(test_gdax()))
# --- xview/datasets/wrapper.py (repo: ethz-asl/modular_semantic_segmentation, license: BSD-3-Clause) ---
from abc import ABCMeta, abstractmethod
class DataWrapper:
"""Interface for access to datasets."""
__metaclass__ = ABCMeta
@abstractmethod
def next(self):
"""Returns next minibatch for training."""
        raise NotImplementedError
# --- partd/core.py (repo: jrbourbeau/partd, license: BSD-3-Clause) ---
from __future__ import absolute_import
import os
import shutil
import locket
import string
from toolz import memoize
from contextlib import contextmanager
from .utils import nested_get, flatten
# http://stackoverflow.com/questions/295135/turn-a-string-into-a-valid-filename-in-python
valid_chars = "-_.() " + string.ascii_letters + string.digits + os.path.sep
def escape_filename(fn):
""" Escape text so that it is a valid filename
>>> escape_filename('Foo!bar?')
'Foobar'
"""
return ''.join(filter(valid_chars.__contains__, fn))
def filename(path, key):
return os.path.join(path, escape_filename(token(key)))
def token(key):
"""
>>> token('hello')
'hello'
>>> token(('hello', 'world')) # doctest: +SKIP
'hello/world'
"""
if isinstance(key, str):
return key
elif isinstance(key, tuple):
return os.path.join(*map(token, key))
else:
return str(key)
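# A quick check of the two path helpers above, re-stated here so the snippet is
# self-contained and runnable on its own:

```python
import os
import string

valid_chars = "-_.() " + string.ascii_letters + string.digits + os.path.sep

def escape_filename(fn):
    # keep only characters that are safe in a filename
    return ''.join(filter(valid_chars.__contains__, fn))

def token(key):
    # strings pass through; tuples become nested paths; anything else is str()'d
    if isinstance(key, str):
        return key
    elif isinstance(key, tuple):
        return os.path.join(*map(token, key))
    else:
        return str(key)

print(escape_filename('Foo!bar?'))  # -> Foobar
print(token(('hello', 'world')))    # -> 'hello/world' on POSIX
```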
class Interface(object):
def __init__(self):
self._iset_seen = set()
def __setstate__(self, state):
self.__dict__.update(state)
self._iset_seen = set()
def iset(self, key, value, **kwargs):
if key in self._iset_seen:
return
else:
self._iset(key, value, **kwargs)
self._iset_seen.add(key)
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
self.drop()
def iget(self, key):
return self._get([key], lock=False)[0]
def get(self, keys, **kwargs):
if not isinstance(keys, list):
return self.get([keys], **kwargs)[0]
elif any(isinstance(key, list) for key in keys): # nested case
flatkeys = list(flatten(keys))
result = self.get(flatkeys, **kwargs)
return nested_get(keys, dict(zip(flatkeys, result)))
else:
return self._get(keys, **kwargs)
def delete(self, keys, **kwargs):
if not isinstance(keys, list):
return self._delete([keys], **kwargs)
else:
return self._delete(keys, **kwargs)
def pop(self, keys, **kwargs):
with self.partd.lock:
result = self.partd.get(keys, lock=False)
self.partd.delete(keys, lock=False)
return result
# --- packages/starcheck/post_regress.py (repo: sot/ska_testr, license: MIT) ---
import os
from testr.packages import make_regress_files
regress_files = ['starcheck.txt',
'starcheck/pcad_att_check.txt']
clean = {'starcheck.txt': [(r'\s*Run on.*[\n\r]*', ''),
(os.environ['SKA'], '')],
'starcheck/pcad_att_check.txt': [(os.environ['SKA'], '')]}
make_regress_files(regress_files, clean=clean)
# --- product_details/utils.py (repo: gene1wood/django-product-details, license: BSD-3-Clause) ---
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from product_details import settings_defaults
def settings_fallback(key):
"""Grab user-defined settings, or fall back to default."""
try:
return getattr(settings, key)
except (AttributeError, ImportError, ImproperlyConfigured):
return getattr(settings_defaults, key)
def get_django_cache(cache_name):
try:
from django.core.cache import caches # django 1.7+
return caches[cache_name]
except ImportError:
from django.core.cache import get_cache
return get_cache(cache_name)
except ImproperlyConfigured:
# dance to get around not-setup-django at import time
return {}
| 29.92 | 63 | 0.720588 | 91 | 748 | 5.802198 | 0.43956 | 0.075758 | 0.079545 | 0.07197 | 0.094697 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003401 | 0.213904 | 748 | 24 | 64 | 31.166667 | 0.894558 | 0.156417 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.411765 | 0 | 0.823529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
# --- src/gauss_n.py (repo: Konstantysz/InterGen, license: MIT) ---
from numba import jit
import numpy as np
@jit(nopython=True, parallel=True)
def gauss_n(X, Y, mu_x = 0.0, mu_y = 0.0, amp = 1.0, sigma = 3.0):
'''
Function that generates 2D discrete gaussian distribution.
Boosted with Numba: works in C and with parallel computing.
Parameters
----------
X : numpy.ndarray
meshgrided values in X axis
Y : numpy.ndarray
meshgrided values in Y axis
mu_x : float
Displacement in X axis
mu_y : float
Displacement in Y axis
amp : float
Amplitude of gaussian distribution
sigma : float
Std dev of gaussian distribution
Returns:
----------
val : numpy.ndarray
matrix of 2D gaussian distribution
'''
    # parenthesise the denominator: r^2 / (2*sigma^2); the unbracketed
    # "/ 2*sigma" would divide by 2 and then multiply by sigma
    exponent = ((X - mu_x)**2 + (Y - mu_y)**2) / (2 * sigma**2)
val = (amp*np.exp(-exponent))
    return val
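For reference, a scalar, dependency-free sketch of the isotropic Gaussian that
`gauss_n` vectorises — note the parenthesised `(2 * sigma**2)` denominator,
which an unbracketed `/ 2*sigma` spelling would get wrong due to operator
precedence (`gauss_scalar` is an illustrative name, not part of the original
module):

```python
import math

def gauss_scalar(x, y, mu_x=0.0, mu_y=0.0, amp=1.0, sigma=3.0):
    """Value of an isotropic 2D Gaussian at a single point."""
    exponent = ((x - mu_x)**2 + (y - mu_y)**2) / (2 * sigma**2)
    return amp * math.exp(-exponent)

print(gauss_scalar(0.0, 0.0))  # peak at the mean -> 1.0
```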
# --- run.py (repo: seanzhangJM/torch_model_demo, license: MIT) ---
#!/usr/bin/env python
# _*_ coding: utf-8 _*_
# @Time : 2021/12/27 14:04
# @Author : zhangjianming
# @Email : YYDSPanda@163.com
# @File : run_task.py
# @Software: PyCharm
import sys
sys.path.extend(["."])
from torch_model_demo.task.run_task import train_fashion_demo
if __name__ == '__main__':
train_fashion_demo()
# --- ASR_TransV1/Load_sp_model.py (repo: HariKrishna-Vydana/ASR_Transformer, license: MIT) ---
#!/usr/bin/python
import sys
import os
from os.path import join, isdir
import sentencepiece as spm
#--------------------------
def Load_sp_models(PATH):
PATH_model = spm.SentencePieceProcessor()
    PATH_model.Load(PATH)  # join() with a single argument is a no-op, so pass the path directly
return PATH_model
#--------------------------
# --- fiepipedesktoplib/gitlabserver/shell/manager.py (repo: leith-bartrich/fiepipe_desktop, license: MIT) ---
import typing
from fiepipelib.gitlabserver.data.gitlab_server import GitLabServer
from fiepipelib.gitlabserver.routines.manager import GitLabServerManagerInteractiveRoutines
from fiepipedesktoplib.gitlabserver.shell.gitlab_hostname_input_ui import GitLabHostnameInputDefaultShellUI
from fiepipedesktoplib.gitlabserver.shell.gitlab_username_input_ui import GitLabUsernameInputDefaultShellUI
from fiepipedesktoplib.gitlabserver.shell.gitlab_private_token_input_ui import GitLabPrivateTokenInputDefaultShellUI
from fiepipedesktoplib.gitlabserver.shell.gitlabserver import GitLabServerShell
from fiepipedesktoplib.gitlabserver.shell.server_name_var_command import GitLabServerNameVar
from fiepipedesktoplib.locallymanagedtypes.shells.AbstractLocalManagedTypeCommand import LocalManagedTypeCommand
from fiepipedesktoplib.shells.AbstractShell import AbstractShell
from fiepipedesktoplib.shells.variables.fqdn_var_command import FQDNVarCommand
class GitLabServerManagerShell(LocalManagedTypeCommand[GitLabServer]):
def get_routines(self) -> GitLabServerManagerInteractiveRoutines:
return GitLabServerManagerInteractiveRoutines(feedback_ui=self.get_feedback_ui(),
hostname_input_default_ui=GitLabHostnameInputDefaultShellUI(self),
username_input_default_ui=GitLabUsernameInputDefaultShellUI(self),
private_token_input_default_ui=GitLabPrivateTokenInputDefaultShellUI(self))
def get_shell(self, item: GitLabServer) -> AbstractShell:
        # Build a GitLabServerShell configured with the selected server's name.
server_name = GitLabServerNameVar()
server_name.set_value(item.get_name())
return GitLabServerShell(server_name)
def get_plugin_names_v1(self) -> typing.List[str]:
ret = super(GitLabServerManagerShell, self).get_plugin_names_v1()
ret.append("gitlabserver.manager")
return ret
def get_prompt_text(self) -> str:
return self.prompt_separator.join(['GitLabServer', 'Manager'])
def main():
shell = GitLabServerManagerShell()
shell.cmdloop()
if __name__ == '__main__':
main()
# --- fairseq/models/wav2vec/eteh_model/transformer/repeat.py (repo: gaochangfeng/fairseq, license: MIT) ---
import torch
class MultiSequential(torch.nn.Sequential):
"""Multi-input multi-output torch.nn.Sequential"""
def forward(self, *args):
for m in self:
args = m(*args)
return args
def repeat(N, fn):
"""repeat module N times
:param int N: repeat time
:param function fn: function to generate module
:return: repeated modules
:rtype: MultiSequential
"""
return MultiSequential(*[fn(n) for n in range(N)])
| 21.363636 | 54 | 0.634043 | 61 | 470 | 4.885246 | 0.52459 | 0.04698 | 0.114094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255319 | 470 | 21 | 55 | 22.380952 | 0.851429 | 0.406383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
431afd38b43ccf5ad48d645a4d0327a638eb0852 | 441 | py | Python | dbestclient/ml/density.py | horeapinca/DBEstClient | 6ccbb24853c31f2a8cc567e03c09ca7aa31e2d26 | [
"BSD-2-Clause"
] | null | null | null | dbestclient/ml/density.py | horeapinca/DBEstClient | 6ccbb24853c31f2a8cc567e03c09ca7aa31e2d26 | [
"BSD-2-Clause"
] | null | null | null | dbestclient/ml/density.py | horeapinca/DBEstClient | 6ccbb24853c31f2a8cc567e03c09ca7aa31e2d26 | [
"BSD-2-Clause"
] | 1 | 2020-09-28T14:22:54.000Z | 2020-09-28T14:22:54.000Z | # Created by Qingzhi Ma at 2019-07-23
# All right reserved
# Department of Computer Science
# the University of Warwick
# Q.Ma.2@warwick.ac.uk
from sklearn.neighbors import KernelDensity
class DBEstDensity:
def __init__(self, kernel=None):
        # fall back to a Gaussian kernel, but keep an explicitly passed one
        self.kernel = kernel if kernel is not None else 'gaussian'
        self.kde = None
def fit(self, x):
self.kde = KernelDensity(kernel=self.kernel).fit(x)
        return self.kde
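What `KernelDensity(kernel='gaussian')` fits can be illustrated with a
hand-rolled 1-D Gaussian KDE (a sketch only; scikit-learn's estimator also
handles multiple dimensions, tree-based evaluation, and log-densities):

```python
import math

def gaussian_kde(samples, bandwidth=1.0):
    """Return p(x): the average of Gaussian bumps centred on the samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))

    def p(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)

    return p

p = gaussian_kde([0.0, 1.0, 2.0])
print(p(1.0) > p(-2.0))  # density is highest near the centre of the data -> True
```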
# --- api/urls.py (repo: nf1s/covid-backend, license: MIT) ---
from sanic import Blueprint
from sanic_transmute import add_route
from .views import (
get_all,
get_status_by_country_id,
get_status_by_country_name,
get_deaths,
get_active_cases,
get_recovered_cases,
get_confirmed_cases,
list_countries,
)
cases = Blueprint("cases", url_prefix="/cases")
add_route(cases, get_all)
add_route(cases, get_status_by_country_id)
add_route(cases, get_status_by_country_name)
add_route(cases, get_deaths)
add_route(cases, get_active_cases)
add_route(cases, get_recovered_cases)
add_route(cases, get_confirmed_cases)
add_route(cases, list_countries)
| 26.434783 | 47 | 0.804276 | 93 | 608 | 4.774194 | 0.258065 | 0.162162 | 0.234234 | 0.252252 | 0.38964 | 0.13964 | 0.13964 | 0 | 0 | 0 | 0 | 0 | 0.121711 | 608 | 22 | 48 | 27.636364 | 0.831461 | 0 | 0 | 0 | 0 | 0 | 0.018092 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.095238 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
# --- app/migrations/0005_auto_20210619_2310.py (repo: hungitptit/boecdjango, license: MIT) ---
# Generated by Django 3.2.4 on 2021-06-19 16:10
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('app', '0004_auto_20210619_1802'),
]
operations = [
migrations.AddField(
model_name='comment',
name='create_at',
field=models.DateTimeField(auto_now_add=True, db_column='create_at', default=django.utils.timezone.now),
preserve_default=False,
),
migrations.AddField(
model_name='comment',
name='subject',
field=models.CharField(blank=True, max_length=255),
),
migrations.AddField(
model_name='comment',
name='update_at',
field=models.DateTimeField(auto_now=True, db_column='update_at'),
),
]
| 27.83871 | 116 | 0.602549 | 93 | 863 | 5.408602 | 0.537634 | 0.107356 | 0.137177 | 0.161034 | 0.357853 | 0.357853 | 0 | 0 | 0 | 0 | 0 | 0.054927 | 0.282735 | 863 | 30 | 117 | 28.766667 | 0.757674 | 0.052144 | 0 | 0.375 | 1 | 0 | 0.110294 | 0.028186 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
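The migration adds create_at with auto_now_add=True and update_at with auto_now=True. In Django, auto_now_add stamps the field once at row creation, while auto_now refreshes it on every save. A plain-Python sketch of that behaviour (the Comment class here only mimics the semantics; it is not Django code):

```python
import datetime


class Comment:
    """Mimics auto_now_add (create_at) and auto_now (update_at)."""

    def __init__(self):
        now = datetime.datetime.now()
        self.create_at = now   # auto_now_add: set once, at creation
        self.update_at = now   # auto_now: refreshed on every save

    def save(self):
        # Only the auto_now field moves; create_at stays fixed.
        self.update_at = datetime.datetime.now()


c = Comment()
created = c.create_at
c.save()
print(c.create_at == created, c.update_at >= created)
```

preserve_default=False in the migration just means the default supplied for create_at was a one-off value used to backfill existing rows, not a permanent model default.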
432e74ae233189ec17dd1f03b1127352c4327439 | 1,518 | py | Python | courses/models.py | Biswa5812/CaramelIT-Django-Backend | 1f896cb75295d17345a862b99837f0bdf60868b4 | [
"MIT"
] | 1 | 2021-08-06T08:36:40.000Z | 2021-08-06T08:36:40.000Z | courses/models.py | Biswa5812/CaramelIT-Django-Backend | 1f896cb75295d17345a862b99837f0bdf60868b4 | [
"MIT"
] | 7 | 2021-04-08T21:58:03.000Z | 2022-01-13T03:09:17.000Z | courses/models.py | Biswa5812/CaramelIT-Django-Backend | 1f896cb75295d17345a862b99837f0bdf60868b4 | [
"MIT"
] | 3 | 2020-07-21T07:01:31.000Z | 2021-01-16T10:47:30.000Z | from django.db import models
from django.utils import timezone
# Course Category
class Course_category(models.Model):
category_id = models.AutoField(primary_key=True)
category_name = models.CharField(max_length=100)
date_of_creation = models.DateTimeField(default=timezone.now)
# Course Subcategory
class Course_subcategory(models.Model):
subcategory_id = models.AutoField(primary_key=True)
category = models.ForeignKey(Course_category, on_delete=models.CASCADE)
subcategory_name = models.CharField(max_length=100)
date_of_creation = models.DateTimeField(default=timezone.now)
# Course
class Course(models.Model):
course_id = models.AutoField(primary_key=True)
subcategory = models.ForeignKey(Course_subcategory, on_delete=models.CASCADE)
subcategory_name = models.CharField(max_length=100)
category_name = models.CharField(max_length=100)
course_name = models.CharField(max_length=100)
date_of_creation = models.DateTimeField(default=timezone.now)
course_description = models.TextField(default="")
course_difficulty = models.CharField(max_length=30)
# Course resources
class Course_resource(models.Model):
course = models.ForeignKey(Course, on_delete=models.CASCADE)
resourse_content = models.TextField(default="NIL")
resourse_name = models.CharField(max_length=100)
resourse_link = models.CharField(max_length=200)
resourse_length = models.CharField(max_length=10)
date_of_creation = models.DateTimeField(default=timezone.now)
| 42.166667 | 81 | 0.78722 | 189 | 1,518 | 6.100529 | 0.232804 | 0.117086 | 0.140503 | 0.187337 | 0.510841 | 0.510841 | 0.457069 | 0.355594 | 0.311362 | 0.311362 | 0 | 0.018713 | 0.119895 | 1,518 | 35 | 82 | 43.371429 | 0.844311 | 0.038208 | 0 | 0.296296 | 0 | 0 | 0.002062 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.074074 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
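Every ForeignKey above uses on_delete=models.CASCADE, so deleting a Course_category removes its subcategories, and deleting a Course removes its resources. A toy illustration of cascade deletion with plain dicts standing in for the tables (not Django code):

```python
# Plain dicts stand in for the category/subcategory tables.
categories = {1: "Programming"}
subcategories = {
    10: {"subcategory_name": "Python", "category_id": 1},
    11: {"subcategory_name": "Rust", "category_id": 1},
}


def delete_category(cat_id):
    """Delete a category and cascade to its subcategories."""
    categories.pop(cat_id)
    doomed = [sid for sid, row in subcategories.items()
              if row["category_id"] == cat_id]
    for sid in doomed:
        subcategories.pop(sid)


delete_category(1)
print(categories, subcategories)  # -> {} {}
```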
43335b3cc2cb4c21d4856a039a41d9b440f02982 | 951 | py | Python | Dominant_cell.py | xi6th/Python_Algorithm | 05852b6fe133df2d83ae464b779b0818b173919d | [
"MIT"
] | null | null | null | Dominant_cell.py | xi6th/Python_Algorithm | 05852b6fe133df2d83ae464b779b0818b173919d | [
"MIT"
] | null | null | null | Dominant_cell.py | xi6th/Python_Algorithm | 05852b6fe133df2d83ae464b779b0818b173919d | [
"MIT"
] | null | null | null | #!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'numCells' function below.
#
# The function is expected to return an INTEGER.
# The function accepts 2D_INTEGER_ARRAY grid as parameter.
#
def numCells(grid):
    # Count dominant cells: cells strictly greater than all of their
    # neighbours, including diagonal neighbours. (The original version
    # returned the number of rows, not the dominant-cell count.)
    rows, cols = len(grid), len(grid[0])
    count = 0
    for i in range(rows):
        for j in range(cols):
            neighbours = [
                grid[x][y]
                for x in range(max(0, i - 1), min(rows, i + 2))
                for y in range(max(0, j - 1), min(cols, j + 2))
                if (x, y) != (i, j)
            ]
            if all(grid[i][j] > value for value in neighbours):
                count += 1
    return count
grid = [[1, 2, 7], [4, 5, 6], [8, 8, 9]]
print(numCells(grid))
# if __name__ == '__main__':
# fptr = open(os.environ['OUTPUT_PATH'], 'w')
# grid_rows = int(input().strip())
# grid_columns = int(input().strip())
# grid = []
# for _ in range(grid_rows):
# grid.append(list(map(int, input().rstrip().split())))
# result = numCells(grid)
# fptr.write(str(result) + '\n')
# fptr.close()
| 19.8125 | 63 | 0.602524 | 125 | 951 | 4.464 | 0.592 | 0.064516 | 0.046595 | 0.060932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0.24816 | 951 | 47 | 64 | 20.234043 | 0.765035 | 0.599369 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0 | 1 | 0.0625 | false | 0 | 0.375 | 0 | 0.4375 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
433a593c55202319269a697379cad0ea0390e623 | 555 | py | Python | applications/serializers.py | junlegend/back-landing-career | cfc01b439629e48ff058fa1693af8d5a3a37949a | [
"MIT"
] | null | null | null | applications/serializers.py | junlegend/back-landing-career | cfc01b439629e48ff058fa1693af8d5a3a37949a | [
"MIT"
] | null | null | null | applications/serializers.py | junlegend/back-landing-career | cfc01b439629e48ff058fa1693af8d5a3a37949a | [
"MIT"
] | null | null | null | from rest_framework import serializers
from applications.models import Application
class ApplicationSerializer(serializers.Serializer):
content = serializers.JSONField()
portfolio = serializers.FileField()
class ApplicationAdminSerializer(serializers.ModelSerializer):
class Meta:
model = Application
fields = ['content', 'user', 'status', 'created_at', 'updated_at', 'recruits']
class ApplicationAdminPatchSerializer(serializers.ModelSerializer):
class Meta:
model = Application
fields = ['status'] | 32.647059 | 86 | 0.736937 | 47 | 555 | 8.638298 | 0.574468 | 0.128079 | 0.152709 | 0.172414 | 0.280788 | 0.280788 | 0.280788 | 0 | 0 | 0 | 0 | 0 | 0.172973 | 555 | 17 | 87 | 32.647059 | 0.884532 | 0 | 0 | 0.307692 | 0 | 0 | 0.091727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4a36242f8ee5ebc5d59f9cbb0e67fddbadbb4a7c | 729 | py | Python | questionanswering/models/pooling.py | lvying1991/KBQA-System | 55e69c8320df3f7b199860afc76e8a0ab66f540e | [
"Apache-2.0"
] | 2 | 2019-09-10T13:20:27.000Z | 2019-11-14T12:58:40.000Z | questionanswering/models/pooling.py | lvying1991/KBQA-System | 55e69c8320df3f7b199860afc76e8a0ab66f540e | [
"Apache-2.0"
] | null | null | null | questionanswering/models/pooling.py | lvying1991/KBQA-System | 55e69c8320df3f7b199860afc76e8a0ab66f540e | [
"Apache-2.0"
] | null | null | null | import torch
from torch import nn as nn
from torch import autograd
class LogSumExpPooling1d(nn.Module):
"""Applies a 1D LogSumExp pooling over an input signal composed of several input planes.
LogSumExp is a smooth approximation of the max function.
Examples:
>>> m = LogSumExpPooling1d()
>>> input = autograd.Variable(torch.randn(4, 5, 10))
>>> m(input).squeeze()
"""
def __init__(self):
super(LogSumExpPooling1d, self).__init__()
    def forward(self, x):
        # Out-of-place computation: the original exp_()/log_() calls
        # mutated the caller's tensor in place. torch.logsumexp is also
        # numerically stable for large inputs.
        return torch.logsumexp(x, dim=-1, keepdim=True)
def __repr__(self):
return self.__class__.__name__ + '()'
| 25.137931 | 92 | 0.650206 | 87 | 729 | 5.195402 | 0.632184 | 0.039823 | 0.066372 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018116 | 0.242798 | 729 | 28 | 93 | 26.035714 | 0.800725 | 0.430727 | 0 | 0 | 0 | 0 | 0.005319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.230769 | 0.076923 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
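A note on the pooling above: computing exp → sum → log directly overflows once values exceed roughly 710 (float64). The standard remedy — and what torch.logsumexp does internally — is the max-shift identity logsumexp(x) = m + log(Σ exp(x_i − m)) with m = max(x). A dependency-free sketch:

```python
import math


def logsumexp(values):
    # Shift by the maximum so every exponent is <= 0; this avoids
    # overflow while leaving the result unchanged algebraically.
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))


# Naive exp-sum-log would overflow here; the shifted form is fine.
print(logsumexp([1000.0, 1000.0]))   # 1000 + log(2)
print(logsumexp([0.0, 0.0]))         # log(2)
```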
4a548d3916f1d9f7cfe21d9195722cae0fa08812 | 5,094 | py | Python | sympy/series/tests/test_demidovich.py | msgoff/sympy | 1e7daef7514902f5e89718fa957b7b36c6669a10 | [
"BSD-3-Clause"
] | null | null | null | sympy/series/tests/test_demidovich.py | msgoff/sympy | 1e7daef7514902f5e89718fa957b7b36c6669a10 | [
"BSD-3-Clause"
] | null | null | null | sympy/series/tests/test_demidovich.py | msgoff/sympy | 1e7daef7514902f5e89718fa957b7b36c6669a10 | [
"BSD-3-Clause"
] | null | null | null | from sympy import (
limit,
Symbol,
oo,
sqrt,
Rational,
log,
exp,
cos,
sin,
tan,
pi,
asin,
together,
root,
S,
)
# Numbers listed with the tests refer to problem numbers in the book
# "Anti-demidovich, problemas resueltos, Ed. URSS"
x = Symbol("x")
def test_leadterm():
assert (3 + 2 * x ** (log(3) / log(2) - 1)).leadterm(x) == (3, 0)
def root3(x):
return root(x, 3)
def root4(x):
return root(x, 4)
def test_Limits_simple_0():
assert limit((2 ** (x + 1) + 3 ** (x + 1)) / (2 ** x + 3 ** x), x, oo) == 3 # 175
def test_Limits_simple_1():
assert limit((x + 1) * (x + 2) * (x + 3) / x ** 3, x, oo) == 1 # 172
assert limit(sqrt(x + 1) - sqrt(x), x, oo) == 0 # 179
assert (
limit((2 * x - 3) * (3 * x + 5) * (4 * x - 6) / (3 * x ** 3 + x - 1), x, oo)
== 8
) # Primjer 1
assert limit(x / root3(x ** 3 + 10), x, oo) == 1 # Primjer 2
assert limit((x + 1) ** 2 / (x ** 2 + 1), x, oo) == 1 # 181
def test_Limits_simple_2():
assert limit(1000 * x / (x ** 2 - 1), x, oo) == 0 # 182
assert limit((x ** 2 - 5 * x + 1) / (3 * x + 7), x, oo) is oo # 183
assert limit((2 * x ** 2 - x + 3) / (x ** 3 - 8 * x + 5), x, oo) == 0 # 184
assert limit((2 * x ** 2 - 3 * x - 4) / sqrt(x ** 4 + 1), x, oo) == 2 # 186
assert limit((2 * x + 3) / (x + root3(x)), x, oo) == 2 # 187
assert limit(x ** 2 / (10 + x * sqrt(x)), x, oo) is oo # 188
assert limit(root3(x ** 2 + 1) / (x + 1), x, oo) == 0 # 189
assert limit(sqrt(x) / sqrt(x + sqrt(x + sqrt(x))), x, oo) == 1 # 190
def test_Limits_simple_3a():
a = Symbol("a")
# issue 3513
assert together(limit((x ** 2 - (a + 1) * x + a) / (x ** 3 - a ** 3), x, a)) == (
a - 1
) / (
3 * a ** 2
) # 196
def test_Limits_simple_3b():
h = Symbol("h")
assert limit(((x + h) ** 3 - x ** 3) / h, h, 0) == 3 * x ** 2 # 197
assert limit((1 / (1 - x) - 3 / (1 - x ** 3)), x, 1) == -1 # 198
assert (
limit((sqrt(1 + x) - 1) / (root3(1 + x) - 1), x, 0) == Rational(3) / 2
) # Primer 4
assert limit((sqrt(x) - 1) / (x - 1), x, 1) == Rational(1) / 2 # 199
assert limit((sqrt(x) - 8) / (root3(x) - 4), x, 64) == 3 # 200
assert limit((root3(x) - 1) / (root4(x) - 1), x, 1) == Rational(4) / 3 # 201
assert (
limit((root3(x ** 2) - 2 * root3(x) + 1) / (x - 1) ** 2, x, 1)
== Rational(1) / 9
) # 202
def test_Limits_simple_4a():
a = Symbol("a")
assert limit((sqrt(x) - sqrt(a)) / (x - a), x, a) == 1 / (2 * sqrt(a)) # Primer 5
assert limit((sqrt(x) - 1) / (root3(x) - 1), x, 1) == Rational(3, 2) # 205
assert limit((sqrt(1 + x) - sqrt(1 - x)) / x, x, 0) == 1 # 207
assert limit(sqrt(x ** 2 - 5 * x + 6) - x, x, oo) == Rational(-5, 2) # 213
def test_limits_simple_4aa():
assert limit(x * (sqrt(x ** 2 + 1) - x), x, oo) == Rational(1) / 2 # 214
def test_Limits_simple_4b():
# issue 3511
assert limit(x - root3(x ** 3 - 1), x, oo) == 0 # 215
def test_Limits_simple_4c():
assert limit(log(1 + exp(x)) / x, x, -oo) == 0 # 267a
assert limit(log(1 + exp(x)) / x, x, oo) == 1 # 267b
def test_bounded():
assert limit(sin(x) / x, x, oo) == 0 # 216b
assert limit(x * sin(1 / x), x, 0) == 0 # 227a
def test_f1a():
# issue 3508:
assert limit((sin(2 * x) / x) ** (1 + x), x, 0) == 2 # Primer 7
def test_f1a2():
# issue 3509:
assert limit(((x - 1) / (x + 1)) ** x, x, oo) == exp(-2) # Primer 9
def test_f1b():
m = Symbol("m")
n = Symbol("n")
h = Symbol("h")
a = Symbol("a")
assert limit(sin(x) / x, x, 2) == sin(2) / 2 # 216a
assert limit(sin(3 * x) / x, x, 0) == 3 # 217
assert limit(sin(5 * x) / sin(2 * x), x, 0) == Rational(5, 2) # 218
assert limit(sin(pi * x) / sin(3 * pi * x), x, 0) == Rational(1, 3) # 219
assert limit(x * sin(pi / x), x, oo) == pi # 220
assert limit((1 - cos(x)) / x ** 2, x, 0) == S.Half # 221
assert limit(x * sin(1 / x), x, oo) == 1 # 227b
assert limit((cos(m * x) - cos(n * x)) / x ** 2, x, 0) == (
(n ** 2 - m ** 2) / 2
) # 232
assert limit((tan(x) - sin(x)) / x ** 3, x, 0) == S.Half # 233
assert limit((x - sin(2 * x)) / (x + sin(3 * x)), x, 0) == -Rational(1, 4) # 237
assert limit((1 - sqrt(cos(x))) / x ** 2, x, 0) == Rational(1, 4) # 239
assert limit((sqrt(1 + sin(x)) - sqrt(1 - sin(x))) / x, x, 0) == 1 # 240
assert limit((1 + h / x) ** x, x, oo) == exp(h) # Primer 9
assert limit((sin(x) - sin(a)) / (x - a), x, a) == cos(a) # 222, *176
assert limit((cos(x) - cos(a)) / (x - a), x, a) == -sin(a) # 223
assert limit((sin(x + h) - sin(x)) / h, h, 0) == cos(x) # 225
def test_f2a():
assert limit(((x + 1) / (2 * x + 1)) ** (x ** 2), x, oo) == 0 # Primer 8
def test_f2():
assert limit((sqrt(cos(x)) - root3(cos(x))) / (sin(x) ** 2), x, 0) == -Rational(
1, 12
) # *184
def test_f3():
a = Symbol("a")
# issue 3504
assert limit(asin(a * x) / x, x, 0) == a
| 30.686747 | 86 | 0.458186 | 886 | 5,094 | 2.594808 | 0.145598 | 0.248804 | 0.024358 | 0.07438 | 0.286646 | 0.110483 | 0.035668 | 0.020009 | 0.020009 | 0 | 0 | 0.118101 | 0.313506 | 5,094 | 165 | 87 | 30.872727 | 0.539319 | 0.085395 | 0 | 0.076923 | 0 | 0 | 0.001957 | 0 | 0 | 0 | 0 | 0 | 0.461538 | 1 | 0.162393 | false | 0 | 0.008547 | 0.017094 | 0.188034 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
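Limits like the one in test_Limits_simple_0 are easy to sanity-check numerically: in (2**(x+1) + 3**(x+1)) / (2**x + 3**x) the 3**x terms dominate as x grows, so the ratio tends to 3. A quick check without sympy:

```python
def f(x):
    # Problem 175: the 3**x terms dominate for large x, so f(x) -> 3.
    return (2 ** (x + 1) + 3 ** (x + 1)) / (2 ** x + 3 ** x)


for x in (1, 10, 100):
    print(x, f(x))
# f(1) = 13/5 = 2.6; by x = 100 the value is 3 to machine precision,
# since f(x) = 3 - 2**x / (2**x + 3**x) and (2/3)**100 is ~1e-18.
```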
4a618ed57cbfdde42c612f538425cdaf22f7923a | 20,082 | py | Python | yandex/cloud/access/access_pb2.py | IIKovalenko/python-sdk | 980e2c5d848eadb42799132b35a9f58ab7b27157 | [
"MIT"
] | 1 | 2019-06-07T10:45:58.000Z | 2019-06-07T10:45:58.000Z | yandex/cloud/access/access_pb2.py | IIKovalenko/python-sdk | 980e2c5d848eadb42799132b35a9f58ab7b27157 | [
"MIT"
] | null | null | null | yandex/cloud/access/access_pb2.py | IIKovalenko/python-sdk | 980e2c5d848eadb42799132b35a9f58ab7b27157 | [
"MIT"
] | null | null | null | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: yandex/cloud/access/access.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from yandex.cloud import validation_pb2 as yandex_dot_cloud_dot_validation__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='yandex/cloud/access/access.proto',
package='yandex.cloud.access',
syntax='proto3',
serialized_options=_b('Z>github.com/yandex-cloud/go-genproto/yandex/cloud/access;access'),
serialized_pb=_b('\n yandex/cloud/access/access.proto\x12\x13yandex.cloud.access\x1a\x1dyandex/cloud/validation.proto\"-\n\x07Subject\x12\x14\n\x02id\x18\x01 \x01(\tB\x08\x8a\xc8\x31\x04<=50\x12\x0c\n\x04type\x18\x02 \x01(\t\"_\n\rAccessBinding\x12\x19\n\x07role_id\x18\x01 \x01(\tB\x08\x8a\xc8\x31\x04<=50\x12\x33\n\x07subject\x18\x02 \x01(\x0b\x32\x1c.yandex.cloud.access.SubjectB\x04\xe8\xc7\x31\x01\"|\n\x19ListAccessBindingsRequest\x12!\n\x0bresource_id\x18\x01 \x01(\tB\x0c\xe8\xc7\x31\x01\x8a\xc8\x31\x04<=50\x12\x1d\n\tpage_size\x18\x02 \x01(\x03\x42\n\xfa\xc7\x31\x06<=1000\x12\x1d\n\npage_token\x18\x03 \x01(\tB\t\x8a\xc8\x31\x05<=100\"r\n\x1aListAccessBindingsResponse\x12;\n\x0f\x61\x63\x63\x65ss_bindings\x18\x01 \x03(\x0b\x32\".yandex.cloud.access.AccessBinding\x12\x17\n\x0fnext_page_token\x18\x02 \x01(\t\"\x80\x01\n\x18SetAccessBindingsRequest\x12!\n\x0bresource_id\x18\x01 \x01(\tB\x0c\xe8\xc7\x31\x01\x8a\xc8\x31\x04<=50\x12\x41\n\x0f\x61\x63\x63\x65ss_bindings\x18\x02 \x03(\x0b\x32\".yandex.cloud.access.AccessBindingB\x04\xe8\xc7\x31\x01\"0\n\x19SetAccessBindingsMetadata\x12\x13\n\x0bresource_id\x18\x01 \x01(\t\"\x8e\x01\n\x1bUpdateAccessBindingsRequest\x12!\n\x0bresource_id\x18\x01 \x01(\tB\x0c\xe8\xc7\x31\x01\x8a\xc8\x31\x04<=50\x12L\n\x15\x61\x63\x63\x65ss_binding_deltas\x18\x02 \x03(\x0b\x32\'.yandex.cloud.access.AccessBindingDeltaB\x04\xe8\xc7\x31\x01\"3\n\x1cUpdateAccessBindingsMetadata\x12\x13\n\x0bresource_id\x18\x01 \x01(\t\"\x96\x01\n\x12\x41\x63\x63\x65ssBindingDelta\x12>\n\x06\x61\x63tion\x18\x01 \x01(\x0e\x32(.yandex.cloud.access.AccessBindingActionB\x04\xe8\xc7\x31\x01\x12@\n\x0e\x61\x63\x63\x65ss_binding\x18\x02 \x01(\x0b\x32\".yandex.cloud.access.AccessBindingB\x04\xe8\xc7\x31\x01*Q\n\x13\x41\x63\x63\x65ssBindingAction\x12%\n!ACCESS_BINDING_ACTION_UNSPECIFIED\x10\x00\x12\x07\n\x03\x41\x44\x44\x10\x01\x12\n\n\x06REMOVE\x10\x02\x42@Z>github.com/yandex-cloud/go-genproto/yandex/cloud/access;accessb\x06proto3')
,
dependencies=[yandex_dot_cloud_dot_validation__pb2.DESCRIPTOR,])
_ACCESSBINDINGACTION = _descriptor.EnumDescriptor(
name='AccessBindingAction',
full_name='yandex.cloud.access.AccessBindingAction',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='ACCESS_BINDING_ACTION_UNSPECIFIED', index=0, number=0,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='ADD', index=1, number=1,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='REMOVE', index=2, number=2,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=1006,
serialized_end=1087,
)
_sym_db.RegisterEnumDescriptor(_ACCESSBINDINGACTION)
AccessBindingAction = enum_type_wrapper.EnumTypeWrapper(_ACCESSBINDINGACTION)
ACCESS_BINDING_ACTION_UNSPECIFIED = 0
ADD = 1
REMOVE = 2
_SUBJECT = _descriptor.Descriptor(
name='Subject',
full_name='yandex.cloud.access.Subject',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='yandex.cloud.access.Subject.id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\212\3101\004<=50'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='type', full_name='yandex.cloud.access.Subject.type', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=88,
serialized_end=133,
)
_ACCESSBINDING = _descriptor.Descriptor(
name='AccessBinding',
full_name='yandex.cloud.access.AccessBinding',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='role_id', full_name='yandex.cloud.access.AccessBinding.role_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\212\3101\004<=50'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='subject', full_name='yandex.cloud.access.AccessBinding.subject', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=135,
serialized_end=230,
)
_LISTACCESSBINDINGSREQUEST = _descriptor.Descriptor(
name='ListAccessBindingsRequest',
full_name='yandex.cloud.access.ListAccessBindingsRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='resource_id', full_name='yandex.cloud.access.ListAccessBindingsRequest.resource_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001\212\3101\004<=50'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='page_size', full_name='yandex.cloud.access.ListAccessBindingsRequest.page_size', index=1,
number=2, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\372\3071\006<=1000'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='page_token', full_name='yandex.cloud.access.ListAccessBindingsRequest.page_token', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\212\3101\005<=100'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=232,
serialized_end=356,
)
_LISTACCESSBINDINGSRESPONSE = _descriptor.Descriptor(
name='ListAccessBindingsResponse',
full_name='yandex.cloud.access.ListAccessBindingsResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='access_bindings', full_name='yandex.cloud.access.ListAccessBindingsResponse.access_bindings', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='next_page_token', full_name='yandex.cloud.access.ListAccessBindingsResponse.next_page_token', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=358,
serialized_end=472,
)
_SETACCESSBINDINGSREQUEST = _descriptor.Descriptor(
name='SetAccessBindingsRequest',
full_name='yandex.cloud.access.SetAccessBindingsRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='resource_id', full_name='yandex.cloud.access.SetAccessBindingsRequest.resource_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001\212\3101\004<=50'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='access_bindings', full_name='yandex.cloud.access.SetAccessBindingsRequest.access_bindings', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=475,
serialized_end=603,
)
_SETACCESSBINDINGSMETADATA = _descriptor.Descriptor(
name='SetAccessBindingsMetadata',
full_name='yandex.cloud.access.SetAccessBindingsMetadata',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='resource_id', full_name='yandex.cloud.access.SetAccessBindingsMetadata.resource_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=605,
serialized_end=653,
)
_UPDATEACCESSBINDINGSREQUEST = _descriptor.Descriptor(
name='UpdateAccessBindingsRequest',
full_name='yandex.cloud.access.UpdateAccessBindingsRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='resource_id', full_name='yandex.cloud.access.UpdateAccessBindingsRequest.resource_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001\212\3101\004<=50'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='access_binding_deltas', full_name='yandex.cloud.access.UpdateAccessBindingsRequest.access_binding_deltas', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=656,
serialized_end=798,
)
_UPDATEACCESSBINDINGSMETADATA = _descriptor.Descriptor(
name='UpdateAccessBindingsMetadata',
full_name='yandex.cloud.access.UpdateAccessBindingsMetadata',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='resource_id', full_name='yandex.cloud.access.UpdateAccessBindingsMetadata.resource_id', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=800,
serialized_end=851,
)
_ACCESSBINDINGDELTA = _descriptor.Descriptor(
name='AccessBindingDelta',
full_name='yandex.cloud.access.AccessBindingDelta',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='action', full_name='yandex.cloud.access.AccessBindingDelta.action', index=0,
number=1, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='access_binding', full_name='yandex.cloud.access.AccessBindingDelta.access_binding', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\350\3071\001'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=854,
serialized_end=1004,
)
_ACCESSBINDING.fields_by_name['subject'].message_type = _SUBJECT
_LISTACCESSBINDINGSRESPONSE.fields_by_name['access_bindings'].message_type = _ACCESSBINDING
_SETACCESSBINDINGSREQUEST.fields_by_name['access_bindings'].message_type = _ACCESSBINDING
_UPDATEACCESSBINDINGSREQUEST.fields_by_name['access_binding_deltas'].message_type = _ACCESSBINDINGDELTA
_ACCESSBINDINGDELTA.fields_by_name['action'].enum_type = _ACCESSBINDINGACTION
_ACCESSBINDINGDELTA.fields_by_name['access_binding'].message_type = _ACCESSBINDING
DESCRIPTOR.message_types_by_name['Subject'] = _SUBJECT
DESCRIPTOR.message_types_by_name['AccessBinding'] = _ACCESSBINDING
DESCRIPTOR.message_types_by_name['ListAccessBindingsRequest'] = _LISTACCESSBINDINGSREQUEST
DESCRIPTOR.message_types_by_name['ListAccessBindingsResponse'] = _LISTACCESSBINDINGSRESPONSE
DESCRIPTOR.message_types_by_name['SetAccessBindingsRequest'] = _SETACCESSBINDINGSREQUEST
DESCRIPTOR.message_types_by_name['SetAccessBindingsMetadata'] = _SETACCESSBINDINGSMETADATA
DESCRIPTOR.message_types_by_name['UpdateAccessBindingsRequest'] = _UPDATEACCESSBINDINGSREQUEST
DESCRIPTOR.message_types_by_name['UpdateAccessBindingsMetadata'] = _UPDATEACCESSBINDINGSMETADATA
DESCRIPTOR.message_types_by_name['AccessBindingDelta'] = _ACCESSBINDINGDELTA
DESCRIPTOR.enum_types_by_name['AccessBindingAction'] = _ACCESSBINDINGACTION
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Subject = _reflection.GeneratedProtocolMessageType('Subject', (_message.Message,), dict(
DESCRIPTOR = _SUBJECT,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.Subject)
))
_sym_db.RegisterMessage(Subject)
AccessBinding = _reflection.GeneratedProtocolMessageType('AccessBinding', (_message.Message,), dict(
DESCRIPTOR = _ACCESSBINDING,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.AccessBinding)
))
_sym_db.RegisterMessage(AccessBinding)
ListAccessBindingsRequest = _reflection.GeneratedProtocolMessageType('ListAccessBindingsRequest', (_message.Message,), dict(
DESCRIPTOR = _LISTACCESSBINDINGSREQUEST,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.ListAccessBindingsRequest)
))
_sym_db.RegisterMessage(ListAccessBindingsRequest)
ListAccessBindingsResponse = _reflection.GeneratedProtocolMessageType('ListAccessBindingsResponse', (_message.Message,), dict(
DESCRIPTOR = _LISTACCESSBINDINGSRESPONSE,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.ListAccessBindingsResponse)
))
_sym_db.RegisterMessage(ListAccessBindingsResponse)
SetAccessBindingsRequest = _reflection.GeneratedProtocolMessageType('SetAccessBindingsRequest', (_message.Message,), dict(
DESCRIPTOR = _SETACCESSBINDINGSREQUEST,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.SetAccessBindingsRequest)
))
_sym_db.RegisterMessage(SetAccessBindingsRequest)
SetAccessBindingsMetadata = _reflection.GeneratedProtocolMessageType('SetAccessBindingsMetadata', (_message.Message,), dict(
DESCRIPTOR = _SETACCESSBINDINGSMETADATA,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.SetAccessBindingsMetadata)
))
_sym_db.RegisterMessage(SetAccessBindingsMetadata)
UpdateAccessBindingsRequest = _reflection.GeneratedProtocolMessageType('UpdateAccessBindingsRequest', (_message.Message,), dict(
DESCRIPTOR = _UPDATEACCESSBINDINGSREQUEST,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.UpdateAccessBindingsRequest)
))
_sym_db.RegisterMessage(UpdateAccessBindingsRequest)
UpdateAccessBindingsMetadata = _reflection.GeneratedProtocolMessageType('UpdateAccessBindingsMetadata', (_message.Message,), dict(
DESCRIPTOR = _UPDATEACCESSBINDINGSMETADATA,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.UpdateAccessBindingsMetadata)
))
_sym_db.RegisterMessage(UpdateAccessBindingsMetadata)
AccessBindingDelta = _reflection.GeneratedProtocolMessageType('AccessBindingDelta', (_message.Message,), dict(
DESCRIPTOR = _ACCESSBINDINGDELTA,
__module__ = 'yandex.cloud.access.access_pb2'
# @@protoc_insertion_point(class_scope:yandex.cloud.access.AccessBindingDelta)
))
_sym_db.RegisterMessage(AccessBindingDelta)
DESCRIPTOR._options = None
_SUBJECT.fields_by_name['id']._options = None
_ACCESSBINDING.fields_by_name['role_id']._options = None
_ACCESSBINDING.fields_by_name['subject']._options = None
_LISTACCESSBINDINGSREQUEST.fields_by_name['resource_id']._options = None
_LISTACCESSBINDINGSREQUEST.fields_by_name['page_size']._options = None
_LISTACCESSBINDINGSREQUEST.fields_by_name['page_token']._options = None
_SETACCESSBINDINGSREQUEST.fields_by_name['resource_id']._options = None
_SETACCESSBINDINGSREQUEST.fields_by_name['access_bindings']._options = None
_UPDATEACCESSBINDINGSREQUEST.fields_by_name['resource_id']._options = None
_UPDATEACCESSBINDINGSREQUEST.fields_by_name['access_binding_deltas']._options = None
_ACCESSBINDINGDELTA.fields_by_name['action']._options = None
_ACCESSBINDINGDELTA.fields_by_name['access_binding']._options = None
# @@protoc_insertion_point(module_scope)
| 40.900204 | 1,963 | 0.7684 | 2,376 | 20,082 | 6.204545 | 0.100589 | 0.034731 | 0.065731 | 0.039886 | 0.651404 | 0.609212 | 0.518926 | 0.469271 | 0.447565 | 0.443495 | 0 | 0.045196 | 0.105368 | 20,082 | 490 | 1,964 | 40.983673 | 0.775353 | 0.044517 | 0 | 0.623288 | 1 | 0.015982 | 0.23245 | 0.191196 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.015982 | 0 | 0.015982 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4aa67ef1976bb462a8e4797f9376dea3623f23b3 | 4,432 | py | Python | test/test_random.py | kevinintel/neural-compressor | b57645566aeff8d3c18dc49d2739a583c072f940 | [
"Apache-2.0"
] | 100 | 2020-12-01T02:40:12.000Z | 2021-09-09T08:14:22.000Z | test/test_random.py | kevinintel/neural-compressor | b57645566aeff8d3c18dc49d2739a583c072f940 | [
"Apache-2.0"
] | 25 | 2021-01-05T00:16:17.000Z | 2021-09-10T03:24:01.000Z | test/test_random.py | kevinintel/neural-compressor | b57645566aeff8d3c18dc49d2739a583c072f940 | [
"Apache-2.0"
] | 25 | 2020-12-01T19:07:08.000Z | 2021-08-30T14:20:07.000Z | """Tests for quantization"""
import numpy as np
import unittest
import os
import shutil
import yaml
import tensorflow as tf
def build_fake_yaml():
fake_yaml = '''
model:
name: fake_yaml
framework: tensorflow
inputs: x
outputs: op_to_store
device: cpu
evaluation:
accuracy:
metric:
topk: 1
tuning:
strategy:
name: random
accuracy_criterion:
relative: 0.01
workspace:
path: saved
'''
y = yaml.load(fake_yaml, Loader=yaml.SafeLoader)
with open('fake_yaml.yaml', "w", encoding="utf-8") as f:
yaml.dump(y, f)
def build_fake_yaml2():
fake_yaml = '''
model:
name: fake_yaml
framework: tensorflow
inputs: x
outputs: op_to_store
device: cpu
evaluation:
accuracy:
metric:
topk: 1
tuning:
strategy:
name: random
exit_policy:
max_trials: 5
accuracy_criterion:
relative: -0.01
workspace:
path: saved
'''
y = yaml.load(fake_yaml, Loader=yaml.SafeLoader)
with open('fake_yaml2.yaml', "w", encoding="utf-8") as f:
yaml.dump(y, f)
def build_fake_model():
try:
graph = tf.Graph()
graph_def = tf.GraphDef()
with tf.Session() as sess:
x = tf.placeholder(tf.float64, shape=(1, 3, 3, 1), name='x')
y = tf.constant(np.random.random((2, 2, 1, 1)), name='y')
op = tf.nn.conv2d(input=x, filter=y, strides=[
1, 1, 1, 1], padding='VALID', name='op_to_store')
sess.run(tf.global_variables_initializer())
constant_graph = tf.graph_util.convert_variables_to_constants(
sess, sess.graph_def, ['op_to_store'])
graph_def.ParseFromString(constant_graph.SerializeToString())
with graph.as_default():
tf.import_graph_def(graph_def, name='')
except Exception:  # TF1-style graph APIs are absent on TF2; fall back to tf.compat.v1
graph = tf.Graph()
graph_def = tf.compat.v1.GraphDef()
with tf.compat.v1.Session() as sess:
x = tf.compat.v1.placeholder(tf.float64, shape=(1, 3, 3, 1), name='x')
y = tf.compat.v1.constant(np.random.random((2, 2, 1, 1)), name='y')
op = tf.nn.conv2d(input=x, filters=y, strides=[
1, 1, 1, 1], padding='VALID', name='op_to_store')
sess.run(tf.compat.v1.global_variables_initializer())
constant_graph = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, [
'op_to_store'])
graph_def.ParseFromString(constant_graph.SerializeToString())
with graph.as_default():
tf.import_graph_def(graph_def, name='')
return graph
class TestQuantization(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.constant_graph = build_fake_model()
build_fake_yaml()
build_fake_yaml2()
@classmethod
def tearDownClass(cls):
os.remove('fake_yaml.yaml')
os.remove('fake_yaml2.yaml')
shutil.rmtree("saved", ignore_errors=True)
def test_ru_random_one_trial(self):
from neural_compressor.experimental import Quantization, common
quantizer = Quantization('fake_yaml.yaml')
dataset = quantizer.dataset('dummy', (100, 3, 3, 1), label=True)
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.eval_dataloader = common.DataLoader(dataset)
quantizer.model = self.constant_graph
quantizer()
def test_ru_random_max_trials(self):
from neural_compressor.experimental import Quantization, common
quantizer = Quantization('fake_yaml2.yaml')
dataset = quantizer.dataset('dummy', (100, 3, 3, 1), label=True)
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.eval_dataloader = common.DataLoader(dataset)
quantizer.model = self.constant_graph
quantizer()
if __name__ == "__main__":
unittest.main()
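Both tests drive the `random` tuning strategy configured in the YAML, optionally capped by `exit_policy: max_trials`. Conceptually that is random search over a quantization config space until an accuracy criterion or trial budget is hit; a toy stdlib sketch (the search space, evaluate function, and all names are hypothetical, not neural-compressor's API):

```python
import random

def random_tune(space, evaluate, baseline, relative_tol=0.01, max_trials=5):
    """Randomly sample configs; stop once accuracy is within tolerance of baseline."""
    best = None
    for _ in range(max_trials):
        # Draw one value per tunable knob, uniformly at random.
        cfg = {k: random.choice(v) for k, v in space.items()}
        acc = evaluate(cfg)
        if best is None or acc > best[1]:
            best = (cfg, acc)
        if acc >= baseline * (1 - relative_tol):
            break  # accuracy criterion met, stop early
    return best

random.seed(0)
space = {"dtype": ["int8", "fp32"], "calib_batches": [10, 50]}
best_cfg, best_acc = random_tune(
    space,
    evaluate=lambda cfg: 0.90 if cfg["dtype"] == "fp32" else 0.88,
    baseline=0.90,
)
```

A negative `relative` criterion, as in `fake_yaml2.yaml`, makes the accuracy bar easier than the baseline, so the loop is effectively bounded only by `max_trials`.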
| 32.115942 | 108 | 0.559792 | 495 | 4,432 | 4.818182 | 0.252525 | 0.036897 | 0.022642 | 0.055346 | 0.761426 | 0.748008 | 0.695178 | 0.695178 | 0.695178 | 0.695178 | 0 | 0.02168 | 0.333935 | 4,432 | 137 | 109 | 32.350365 | 0.786247 | 0.004964 | 0 | 0.573913 | 0 | 0 | 0.243262 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06087 | false | 0 | 0.086957 | 0 | 0.165217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4aa9bbcefe6db481163c6d0a501873756cbebc17 | 565 | py | Python | src/sentry/receivers/experiments.py | FelixSchwarz/sentry | 7c92c4fa2b6b9f214764f48c82594acae1549e52 | [
"BSD-3-Clause"
] | null | null | null | src/sentry/receivers/experiments.py | FelixSchwarz/sentry | 7c92c4fa2b6b9f214764f48c82594acae1549e52 | [
"BSD-3-Clause"
] | null | null | null | src/sentry/receivers/experiments.py | FelixSchwarz/sentry | 7c92c4fa2b6b9f214764f48c82594acae1549e52 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import print_function, absolute_import
from sentry import analytics
from sentry.signals import join_request_created, join_request_link_viewed
@join_request_created.connect(weak=False)
def record_join_request_created(member, **kwargs):
analytics.record(
"join_request.created", member_id=member.id, organization_id=member.organization_id
)
@join_request_link_viewed.connect(weak=False)
def record_join_request_link_viewed(organization, **kwargs):
analytics.record("join_request.link_viewed", organization_id=organization.id)
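The receivers above follow Django's signal pattern: a module-level signal object, receivers registered with the `@signal.connect(weak=False)` decorator, and `signal.send(...)` fanning keyword arguments out to every receiver. A toy stand-in that shows the flow (not Django's implementation; the `Signal` class here is a minimal sketch):

```python
class Signal:
    """Toy stand-in for django.dispatch.Signal, just to show the pattern."""
    def __init__(self):
        self._receivers = []

    def connect(self, weak=True):
        # Used as a decorator, mirroring @join_request_created.connect(weak=False)
        def register(func):
            self._receivers.append(func)
            return func
        return register

    def send(self, **kwargs):
        for func in self._receivers:
            func(**kwargs)

join_request_created = Signal()
recorded = []

@join_request_created.connect(weak=False)
def record_join_request_created(member, **kwargs):
    recorded.append(("join_request.created", member))

join_request_created.send(member="member-1")
```

`weak=False` matters in the real Django API: it keeps a strong reference to the receiver so a module-level function is not garbage-collected out of the dispatch table.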
| 33.235294 | 91 | 0.823009 | 74 | 565 | 5.878378 | 0.310811 | 0.202299 | 0.165517 | 0.193103 | 0.473563 | 0.305747 | 0.165517 | 0 | 0 | 0 | 0 | 0 | 0.095575 | 565 | 16 | 92 | 35.3125 | 0.851272 | 0 | 0 | 0 | 0 | 0 | 0.077876 | 0.042478 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0 | 0.454545 | 0.090909 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4abeb59415a08109665cd4a0b2b19c7296f2ab4d | 6,316 | py | Python | src/abaqus/Material/Elastic/Linear/Elastic.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | 7 | 2022-01-21T09:15:45.000Z | 2022-02-15T09:31:58.000Z | src/abaqus/Material/Elastic/Linear/Elastic.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | null | null | null | src/abaqus/Material/Elastic/Linear/Elastic.py | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | null | null | null | from abaqusConstants import *
from .FailStrain import FailStrain
from .FailStress import FailStress
class Elastic:
"""The Elastic object specifies elastic material properties.
Notes
-----
This object can be accessed by:
.. code-block:: python
import material
mdb.models[name].materials[name].elastic
import odbMaterial
session.odbs[name].materials[name].elastic
The table data for this object are:
- If *type*=ISOTROPIC, the table data specify the following:
- The Young's modulus, E.
- The Poisson's ratio, v.
- Temperature, if the data depend on temperature.
- Value of the first field variable, if the data depend on field variables.
- Value of the second field variable.
- Etc.
- If *type*=SHEAR, the table data specify the following:
- The shear modulus, G.
- Temperature, if the data depend on temperature.
- Value of the first field variable, if the data depend on field variables.
- Value of the second field variable.
- Etc.
- If *type*=ENGINEERING_CONSTANTS, the table data specify the following:
- E1.
- E2.
- E3.
- v12.
- v13.
- v23.
- G12.
- G13.
- G23.
- Temperature, if the data depend on temperature.
- Value of the first field variable, if the data depend on field variables.
- Value of the second field variable.
- Etc.
- If *type*=LAMINA, the table data specify the following:
- E1.
- E2.
- v12.
- G12.
- G13. This shear modulus is needed to define transverse shear behavior in shells.
- G23. This shear modulus is needed to define transverse shear behavior in shells.
- Temperature, if the data depend on temperature.
- Value of the first field variable, if the data depend on field variables.
- Value of the second field variable.
- Etc.
- If *type*=ORTHOTROPIC, the table data specify the following:
- D1111.
- D1122.
- D2222.
- D1133.
- D2233.
- D3333.
- D1212.
- D1313.
- D2323.
- Temperature, if the data depend on temperature.
- Value of the first field variable, if the data depend on field variables.
- Value of the second field variable.
- Etc.
- If *type*=ANISOTROPIC, the table data specify the following:
- D1111.
- D1122.
- D2222.
- D1133.
- D2233.
- D3333.
- D1112.
- D2212.
- D3312.
- D1212.
- D1113.
- D2213.
- D3313.
- D1213.
- D1313.
- D1123.
- D2223.
- D3323.
- D1223.
- D1323.
- D2323.
- Temperature, if the data depend on temperature.
- Value of the first field variable, if the data depend on field variables.
- Value of the second field variable.
- Etc.
- If *type*=TRACTION, the table data specify the following:
- EE for warping elements; Enn for cohesive elements.
- G1 for warping elements; Ess for cohesive elements.
- G2 for warping elements; Ett for cohesive elements.
- Temperature, if the data depend on temperature.
- Value of the first field variable, if the data depend on field variables.
- Value of the second field variable.
- Etc.
- If *type*=BILAMINA, the table data specify the following:
- E1+.
- E2+.
- v12+.
- G12.
- E1-.
- E2-.
- v12-.
- Temperature, if the data depend on temperature.
- Value of the first field variable, if the data depend on field variables.
- Value of the second field variable.
- Etc.
- If *type*=SHORT_FIBER, there is no table data.
The corresponding analysis keywords are:
- ELASTIC
"""
# A FailStress object.
failStress: FailStress = FailStress(((),))
# A FailStrain object.
failStrain: FailStrain = FailStrain(((),))
def __init__(self, table: tuple, type: SymbolicConstant = ISOTROPIC, noCompression: Boolean = OFF,
noTension: Boolean = OFF, temperatureDependency: Boolean = OFF, dependencies: int = 0,
moduli: SymbolicConstant = LONG_TERM):
"""This method creates an Elastic object.
Notes
-----
This function can be accessed by:
.. code-block:: python
mdb.models[name].materials[name].Elastic
session.odbs[name].materials[name].Elastic
Parameters
----------
table
A sequence of sequences of Floats specifying the items described below.
type
A SymbolicConstant specifying the type of elasticity data provided. Possible values are:
- ISOTROPIC
- ORTHOTROPIC
- ANISOTROPIC
- ENGINEERING_CONSTANTS
- LAMINA
- TRACTION
- COUPLED_TRACTION
- SHORT_FIBER
- SHEAR
- BILAMINA
The default value is ISOTROPIC.
noCompression
A Boolean specifying whether compressive stress is allowed. The default value is OFF.
noTension
A Boolean specifying whether tensile stress is allowed. The default value is OFF.
temperatureDependency
A Boolean specifying whether the data depend on temperature. The default value is OFF.
dependencies
An Int specifying the number of field variable dependencies. The default value is 0.
moduli
A SymbolicConstant specifying the time-dependence of the elastic material constants.
Possible values are INSTANTANEOUS and LONG_TERM. The default value is LONG_TERM.
Returns
-------
An Elastic object.
Raises
------
RangeError
"""
pass
def setValues(self):
"""This method modifies the Elastic object.
Raises
------
RangeError
"""
pass
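Per the class docstring, a *type*=ISOTROPIC table row is (E, v[, temperature, field variables]). A minimal sketch of such a table and the standard isotropic relation tying E and v to the shear modulus G (the material values are hypothetical, and constructing a real Elastic object requires an Abaqus session):

```python
# Hypothetical ISOTROPIC table for a steel-like material:
# one row of (Young's modulus E in Pa, Poisson's ratio v).
table = ((210.0e9, 0.3),)

E, v = table[0]

# For an isotropic material, only two of (E, v, G) are independent:
# G = E / (2 * (1 + v)), so a SHEAR-type table could be derived from this one.
G = E / (2.0 * (1.0 + v))
```

This is why the ISOTROPIC table needs only two constants per row, while ORTHOTROPIC and ANISOTROPIC tables list the stiffness terms Dijkl explicitly.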
| 32.22449 | 103 | 0.572198 | 681 | 6,316 | 5.28928 | 0.249633 | 0.033037 | 0.061355 | 0.070794 | 0.53859 | 0.507496 | 0.461133 | 0.425597 | 0.396446 | 0.396446 | 0 | 0.039389 | 0.356871 | 6,316 | 195 | 104 | 32.389744 | 0.847366 | 0.773908 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.25 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
4ac2c45edfed557313913a01b6d6e982c2b62143 | 858 | py | Python | setup.py | methane/pymemcache | 0ff5430cdcef7ed52fb3edc2a90c1c7d208ad77f | [
"Apache-2.0"
] | null | null | null | setup.py | methane/pymemcache | 0ff5430cdcef7ed52fb3edc2a90c1c7d208ad77f | [
"Apache-2.0"
] | null | null | null | setup.py | methane/pymemcache | 0ff5430cdcef7ed52fb3edc2a90c1c7d208ad77f | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from setuptools import setup, find_packages
from pymemcache import __version__
setup(
name = 'pymemcache',
version = __version__,
author = 'Charles Gordon',
author_email = 'charles@pinterest.com',
packages = find_packages(),
tests_require = ['nose>=1.0'],
install_requires = ['six'],
description = 'A comprehensive, fast, pure Python memcached client',
long_description = open('README.md').read(),
license = 'Apache License 2.0',
url = 'https://github.com/Pinterest/pymemcache',
classifiers = [
'Programming Language :: Python',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.3',
'License :: OSI Approved :: Apache Software License',
'Topic :: Database',
],
)
| 29.586207 | 72 | 0.632867 | 90 | 858 | 5.877778 | 0.622222 | 0.143667 | 0.189036 | 0.098299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015106 | 0.228438 | 858 | 28 | 73 | 30.642857 | 0.783988 | 0.02331 | 0 | 0 | 0 | 0 | 0.456938 | 0.02512 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.086957 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4ac74e03723bd148ef8b0804cbefc4d25af183f4 | 2,577 | py | Python | orders/views.py | DobromirZlatkov/anteya | 9c66c64643350ad1710bcf60e2e38169e389a66b | [
"MIT"
] | null | null | null | orders/views.py | DobromirZlatkov/anteya | 9c66c64643350ad1710bcf60e2e38169e389a66b | [
"MIT"
] | null | null | null | orders/views.py | DobromirZlatkov/anteya | 9c66c64643350ad1710bcf60e2e38169e389a66b | [
"MIT"
] | null | null | null | from django.core.urlresolvers import reverse, reverse_lazy
from django.views import generic
from django.shortcuts import redirect, render
from django.http import HttpResponseRedirect
from . import forms
from . import models
from custommixins import mixins
class OrderView(generic.View):
template_name = 'orders/order_create.html'
def get(self, request):
qs = models.Product.objects.none()
formset = forms.ProductFormSet(queryset=qs, prefix='formset')
order_form = forms.OrderForm(prefix='order_form')
return render(request, self.template_name, {'formset': formset, 'order_form': order_form})
def post(self, request):
formset = forms.ProductFormSet(request.POST, prefix='formset')
order_form = forms.OrderForm(request.POST, prefix='order_form')
if formset.is_valid():
order = order_form.save()
for form in formset.forms:
product = form.save(commit=False)
order.products.add(product)
order.save()
return HttpResponseRedirect(reverse('order_details', args=(order.id,)))
else:
return render(request, self.template_name, {'formset': formset, 'order_form': order_form})
class OrderDetails(generic.DetailView):
model = models.Order
template_name_suffix = '_details'
class OrderList(mixins.LoginRequiredMixin, mixins.AdminRequiredMixin, generic.ListView):
model = models.Order
class OrderEdit(generic.View):
template_name = 'orders/order_edit.html'
def get(self, request, pk):
order = models.Order.objects.get(pk=pk)
formset = forms.ProductFormSet(queryset=order.products.all(), prefix='formset')
order_form = forms.OrderForm(prefix='order_form', instance=order)
return render(request, self.template_name, {'formset': formset, 'order_form': order_form})
def post(self, request, pk):
order = models.Order.objects.get(pk=pk)
formset = forms.ProductFormSet(request.POST, prefix='formset')
order_form = forms.OrderForm(request.POST, prefix='order_form')
if formset.is_valid():
order = order_form.save()
for form in formset.forms:
product = form.save(commit=False)
order.products.add(product)
order.save()
return HttpResponseRedirect(reverse('order_details', args=(order.id,)))
else:
return render(request, self.template_name, {'formset': formset, 'order_form': order_form})
| 37.897059 | 102 | 0.67404 | 296 | 2,577 | 5.753378 | 0.236486 | 0.095126 | 0.075161 | 0.051674 | 0.72108 | 0.702877 | 0.617146 | 0.617146 | 0.617146 | 0.557252 | 0 | 0 | 0.215367 | 2,577 | 67 | 103 | 38.462687 | 0.842235 | 0 | 0 | 0.538462 | 0 | 0 | 0.083818 | 0.01785 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.519231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
434a5580172de0ca0736b7166ab6de48eed316fe | 121 | py | Python | output/models/ms_data/regex/re_g22_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/ms_data/regex/re_g22_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/ms_data/regex/re_g22_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from output.models.ms_data.regex.re_g22_xsd.re_g22 import (
Regex,
Doc,
)
__all__ = [
"Regex",
"Doc",
]
| 12.1 | 59 | 0.603306 | 17 | 121 | 3.823529 | 0.705882 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043956 | 0.247934 | 121 | 9 | 60 | 13.444444 | 0.67033 | 0 | 0 | 0 | 0 | 0 | 0.066116 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
434c335f1ca44ae4f15f8789642b629548cea61b | 659 | py | Python | students/K33402/Akhmetzhanov Alisher/lr2/main/forms.py | AlishKZ/ITMO_ICT_WebDevelopment_2020-2021 | b3ce82e17392d26d815e64343f5103f1bd46cd81 | [
"MIT"
] | null | null | null | students/K33402/Akhmetzhanov Alisher/lr2/main/forms.py | AlishKZ/ITMO_ICT_WebDevelopment_2020-2021 | b3ce82e17392d26d815e64343f5103f1bd46cd81 | [
"MIT"
] | null | null | null | students/K33402/Akhmetzhanov Alisher/lr2/main/forms.py | AlishKZ/ITMO_ICT_WebDevelopment_2020-2021 | b3ce82e17392d26d815e64343f5103f1bd46cd81 | [
"MIT"
] | null | null | null | from django.db.models import fields
from main.models import RoomReservation, UserRoom
from django import forms
from django.core.exceptions import ValidationError
from django.contrib.auth import authenticate, login
from django.contrib.auth import get_user_model
class ReservateRoomForm(forms.Form):
begin_date = forms.DateField()
end_date = forms.DateField()
class AddCommentForm(forms.Form):
text = forms.CharField(max_length=410)
accommodation = forms.ModelChoiceField(queryset=UserRoom.objects.all())
class EditReservationForm(forms.ModelForm):
class Meta:
model = RoomReservation
fields = ['begin_date', 'end_date']
| 31.380952 | 75 | 0.775417 | 79 | 659 | 6.379747 | 0.518987 | 0.099206 | 0.06746 | 0.083333 | 0.107143 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0053 | 0.141123 | 659 | 20 | 76 | 32.95 | 0.885159 | 0 | 0 | 0 | 0 | 0 | 0.027314 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
43580621cfd0f7e6c205651bbcde02772c3c846a | 628 | py | Python | subs2srs/gui/state.py | TFarla/subs2srs-cross-platform | 79158a313ca4099adb20df97207b19d7bc948697 | [
"MIT"
] | 3 | 2020-07-04T22:34:50.000Z | 2020-08-10T18:18:51.000Z | subs2srs/gui/state.py | TFarla/subs2srs-cross-platform | 79158a313ca4099adb20df97207b19d7bc948697 | [
"MIT"
] | 5 | 2020-07-04T08:34:36.000Z | 2021-05-19T01:27:04.000Z | subs2srs/gui/state.py | TFarla/subs2srs-cross-platform | 79158a313ca4099adb20df97207b19d7bc948697 | [
"MIT"
] | null | null | null | from typing import List
from subs2srs.core.preview_item import PreviewItem
class StatePreview:
items: List[PreviewItem] = []
inactive_items = set()
def __init__(self):
super().__init__()
self.items = []
self.inactive_items = set()
self.audio = None
class State:
deck_name = None
sub1_file = "/Users/thomasfarla/Documents/subs2srs-cross-platform/tests/fixtures/in.srt"
sub2_file = None
video_file = "/Users/thomasfarla/Documents/subs2srs-cross-platform/tests/fixtures/in.mkv"
output_file = "/Users/thomasfarla/Documents/test-subs"
preview = StatePreview()
| 27.304348 | 93 | 0.694268 | 74 | 628 | 5.675676 | 0.540541 | 0.064286 | 0.142857 | 0.207143 | 0.309524 | 0.309524 | 0.309524 | 0.309524 | 0.309524 | 0.309524 | 0 | 0.009881 | 0.194268 | 628 | 22 | 94 | 28.545455 | 0.820158 | 0 | 0 | 0 | 0 | 0 | 0.296178 | 0.296178 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
4366c0b4bfc3d82921cf8860654a9fdf8156bfc0 | 893 | py | Python | src/states.py | amancevice/terraform-aws-slack-interactive-components | 819a9b6a408b36cd1a0100859801bc47c437fdc8 | [
"MIT"
] | 24 | 2018-10-17T04:42:56.000Z | 2022-03-03T10:27:56.000Z | src/states.py | amancevice/terraform-aws-slack-interactive-components | 819a9b6a408b36cd1a0100859801bc47c437fdc8 | [
"MIT"
] | 5 | 2019-03-01T17:14:48.000Z | 2022-01-21T23:11:39.000Z | src/states.py | amancevice/terraform-aws-slack-interactive-components | 819a9b6a408b36cd1a0100859801bc47c437fdc8 | [
"MIT"
] | 11 | 2019-03-01T15:16:24.000Z | 2022-03-03T10:27:59.000Z | import boto3
from logger import logger
class States:
def __init__(self, boto3_session=None):
self.boto3_session = boto3_session or boto3.Session()
self.client = self.boto3_session.client('stepfunctions')
def fail(self, task_token, error, cause):
params = dict(taskToken=task_token, error=error, cause=cause)
logger.info('SEND TASK FAILURE %s', logger.json(params))
return self.client.send_task_failure(**params)
def heartbeat(self, task_token):
params = dict(taskToken=task_token)
logger.info('SEND TASK HEARTBEAT %s', logger.json(params))
return self.client.send_task_heartbeat(**params)
def succeed(self, task_token, output):
params = dict(taskToken=task_token, output=output)
logger.info('SEND TASK SUCCESS %s', logger.json(params))
return self.client.send_task_success(**params)
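The wrapper maps each method onto the Step Functions callback API (`send_task_success` / `send_task_failure` / `send_task_heartbeat`, keyed by `taskToken`). A sketch of the call shape using `unittest.mock` in place of a live boto3 session, with the class trimmed down (no logging) for the demo:

```python
from unittest import mock

class DemoStates:
    """Trimmed copy of the wrapper above, minus logging, to show the call shape."""
    def __init__(self, boto3_session):
        self.client = boto3_session.client("stepfunctions")

    def succeed(self, task_token, output):
        params = dict(taskToken=task_token, output=output)
        return self.client.send_task_success(**params)

session = mock.Mock()  # stands in for a real boto3.Session
DemoStates(session).succeed("tok-123", '{"ok": true}')

# Inspect what would have been sent to the Step Functions API.
sent = session.client.return_value.send_task_success.call_args
```

Against the real service, `output` must be a JSON string and the token comes from a `.waitForTaskToken` task in the state machine; here both values are made up.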
| 35.72 | 69 | 0.693169 | 116 | 893 | 5.163793 | 0.275862 | 0.09015 | 0.080134 | 0.115192 | 0.345576 | 0.205342 | 0.205342 | 0.205342 | 0.205342 | 0 | 0 | 0.008345 | 0.194849 | 893 | 24 | 70 | 37.208333 | 0.824757 | 0 | 0 | 0 | 0 | 0 | 0.083987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
436a24c432c8bd3a3066c5adcc757a189d209bf5 | 332 | py | Python | utils/path_utils.py | kuyu12/pygame_fight_game | 3bbc286b9f33c6d6d9db9bea21f9b7af15247df5 | [
"MIT"
] | 1 | 2020-08-03T07:54:59.000Z | 2020-08-03T07:54:59.000Z | utils/path_utils.py | kuyu12/pygame_fight_game | 3bbc286b9f33c6d6d9db9bea21f9b7af15247df5 | [
"MIT"
] | null | null | null | utils/path_utils.py | kuyu12/pygame_fight_game | 3bbc286b9f33c6d6d9db9bea21f9b7af15247df5 | [
"MIT"
] | null | null | null | import sys
IMAGES_PATH = sys.path[1] + "/Images"
BACKGROUND_IMAGES_PATH = IMAGES_PATH + '/background'
USER_INFO_BACKGROUND_PATH = BACKGROUND_IMAGES_PATH + "/blue_background.jpg"
SPRINT_IMAGE_PATH = IMAGES_PATH + '/sprite'
PROFILE_IMAGES_PATH = IMAGES_PATH + '/profile'
CONFIGURATION_FILES_PATH = sys.path[1] + "/configuration_files" | 36.888889 | 73 | 0.795181 | 44 | 332 | 5.568182 | 0.363636 | 0.285714 | 0.171429 | 0.097959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006645 | 0.093373 | 332 | 9 | 74 | 36.888889 | 0.807309 | 0 | 0 | 0 | 0 | 0 | 0.219219 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
437984a8785d9b1726c62d66ab94644c9b6578d8 | 5,275 | py | Python | CAutomation/settings.py | Rich9rd/CAutomation | d1c1b963e806a216d4c825243c1c405336414413 | [
"MIT"
] | null | null | null | CAutomation/settings.py | Rich9rd/CAutomation | d1c1b963e806a216d4c825243c1c405336414413 | [
"MIT"
] | null | null | null | CAutomation/settings.py | Rich9rd/CAutomation | d1c1b963e806a216d4c825243c1c405336414413 | [
"MIT"
] | null | null | null | """
Django settings for CAutomation project.
Generated by 'django-admin startproject' using Django 3.2.4.
For more information on this file, see
https://docs.djangoproject.com/en/3.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.2/ref/settings/
"""
from pathlib import Path
import os
import dj_database_url
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
STATIC_ROOT = os.path.join(PROJECT_ROOT, 'staticfiles')
STATICFILES_DIRS = (
os.path.join(PROJECT_ROOT, 'static'),
)
ACCOUNT_AUTHENTICATION_METHOD = 'username_email'
ACCOUNT_LOGOUT_ON_GET = False
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_EMAIL_VERIFICATION = "none"
AUTH_USER_MODEL = 'cleaning.User'
AUTHENTICATION_BACKENDS = (
# Needed to login by username in Django admin, regardless of `allauth`
'django.contrib.auth.backends.ModelBackend',
# `allauth` specific authentication methods, such as login by e-mail
'allauth.account.auth_backends.AuthenticationBackend',
)
ACCOUNT_CONFIRM_EMAIL_ON_GET = False
SWAGGER_SETTINGS = {
'SECURITY_DEFINITIONS': {
'api_key': {
'type': 'apiKey',
'in': 'header',
'name': 'Authorization'
}
},
'USE_SESSION_AUTH': False,
'JSON_EDITOR': True,
}
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'django-insecure-=(#vt!5x^l3-j(e*%@p0)d_p&qd2x_#&n*^i=j38@b(26zz^mr'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['*']
REST_FRAMEWORK = {
'DEFAULT_SCHEMA_CLASS': 'rest_framework.schemas.coreapi.AutoSchema',
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly'
],
'DEFAULT_AUTHENTICATION_CLASSES': [
'rest_framework.authentication.TokenAuthentication',
],
}
# Application definition
SITE_ID = 1
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'corsheaders',
'allauth',
'allauth.account',
'allauth.socialaccount',
'drf_yasg',
'rest_framework',
'rest_framework.authtoken',
'rest_auth.registration',
'rest_auth',
'common.apps.CommonConfig',
'cleaning.apps.CleaningConfig',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.common.CommonMiddleware',
'corsheaders.middleware.CorsMiddleware',
]
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
ROOT_URLCONF = 'CAutomation.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'CAutomation.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.2/ref/settings/#databases
DATABASES = {
'default': dj_database_url.config(
default='postgres://mzqgdpoeqiolgg:270514539442574d87e9f9c742314e58d57ff59139679e5c6e46eff5482b5b6e@ec2-52-208-221-89.eu-west-1.compute.amazonaws.com:5432/d96ohaomhouuat'
),
}
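`dj_database_url.config` turns a database URL like the one above into a Django `DATABASES` entry. A rough stdlib-only equivalent of that parsing, using a made-up URL (and note that real credentials should come from the environment, not be hardcoded in settings):

```python
from urllib.parse import urlparse

def parse_db_url(url):
    """Split a database URL into Django-style connection settings."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),   # path component is the database name
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port or 5432,       # default Postgres port if omitted
    }

cfg = parse_db_url("postgres://user:secret@db.example.com:5432/mydb")
```

The real library additionally handles other engines, query-string options, and `conn_max_age`; this sketch covers only the URL-splitting idea.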
# Password validation
# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
CORS_ALLOW_ALL_ORIGINS = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.2/howto/static-files/
STATIC_URL = '/static/'
# Default primary key field type
# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
| 27.473958 | 178 | 0.714123 | 578 | 5,275 | 6.351211 | 0.435986 | 0.060202 | 0.047943 | 0.054481 | 0.137565 | 0.126124 | 0.083628 | 0.083628 | 0.043585 | 0 | 0 | 0.022322 | 0.159242 | 5,275 | 191 | 179 | 27.617801 | 0.805412 | 0.250427 | 0 | 0.02521 | 1 | 0.016807 | 0.539383 | 0.441244 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.042017 | 0.02521 | 0 | 0.02521 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
4389b795742ce4092fa55a8e1be92e8c6adf1239 | 2,945 | py | Python | neutron/plugins/ofagent/agent/ports.py | armando-migliaccio/neutron-1 | e31861c15bc73e65a7c22212df2a56f9e45aa0e4 | ["Apache-2.0"]
# Copyright (C) 2014 VA Linux Systems Japan K.K.
# Copyright (C) 2014 YAMAMOTO Takashi <yamamoto at valinux co jp>
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class OFPort(object):
def __init__(self, port_name, ofport):
self.port_name = port_name
self.ofport = ofport
@classmethod
def from_ofp_port(cls, ofp_port):
"""Convert from ryu OFPPort."""
return cls(port_name=ofp_port.name, ofport=ofp_port.port_no)
PORT_NAME_LEN = 14
PORT_NAME_PREFIXES = [
"tap", # common cases, including ovs_use_veth=True
"qvo", # nova hybrid interface driver
"qr-", # l3-agent INTERNAL_DEV_PREFIX (ovs_use_veth=False)
"qg-", # l3-agent EXTERNAL_DEV_PREFIX (ovs_use_veth=False)
]
def _is_neutron_port(name):
"""Return True if the port name looks like a neutron port."""
if len(name) != PORT_NAME_LEN:
return False
for pref in PORT_NAME_PREFIXES:
if name.startswith(pref):
return True
return False
def get_normalized_port_name(interface_id):
"""Convert from neutron device id (uuid) to "normalized" port name.
This needs to be synced with ML2 plugin's _device_to_port_id().
An assumption: The switch uses an OS's interface name as the
corresponding OpenFlow port name.
NOTE(yamamoto): While it's true for Open vSwitch, it isn't
necessarily true everywhere. For example, LINC uses something
like "LogicalSwitch0-Port2".
NOTE(yamamoto): The actual prefix might be different. For example,
with the hybrid interface driver, it's "qvo". However, we always
use "tap" prefix throughout the agent and plugin for simplicity.
Some care should be taken when talking to the switch.
"""
return ("tap" + interface_id)[0:PORT_NAME_LEN]
def _normalize_port_name(name):
"""Normalize port name.
See comments in _get_ofport_name.
"""
for pref in PORT_NAME_PREFIXES:
if name.startswith(pref):
return "tap" + name[len(pref):]
return name
class Port(OFPort):
def __init__(self, *args, **kwargs):
super(Port, self).__init__(*args, **kwargs)
self.vif_mac = None
def is_neutron_port(self):
"""Return True if the port looks like a neutron port."""
return _is_neutron_port(self.port_name)
def normalized_port_name(self):
return _normalize_port_name(self.port_name)
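The docstring of `get_normalized_port_name` explains the convention: prefix the device UUID with "tap", then truncate to the OS interface-name length. A minimal sketch of that behavior (logic copied from the function above; the UUID is made up):

```python
PORT_NAME_LEN = 14  # same constant as in the module above

def get_normalized_port_name(interface_id):
    # Prefix with "tap", then truncate to the OS interface-name length.
    return ("tap" + interface_id)[0:PORT_NAME_LEN]

# Made-up neutron device id (uuid)
name = get_normalized_port_name("3674c241-e08c-4a70-a011-0d72b8b4d50b")
# name == "tap3674c241-e0": only the first 11 uuid characters survive
```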
438cea957a4d584b046abd2a8ee5c64fd504407c | 1,168 | py | Python | pipeline/validators/handlers.py | ZhuoZhuoCrayon/bk-nodeman | 76cb71fcc971c2a0c2be161fcbd6b019d4a7a8ab | ["MIT"] | max_stars_count: 31 | max_issues_count: 483 | max_forks_count: 29
# -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS Community
Edition) available.
Copyright (C) 2017-2019 THL A29 Limited, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
from django.dispatch import receiver
from pipeline.core.flow.event import EndEvent
from pipeline.core.flow.signals import post_new_end_event_register
from pipeline.validators import rules
@receiver(post_new_end_event_register, sender=EndEvent)
def post_new_end_event_register_handler(sender, node_type, node_cls, **kwargs):
rules.NODE_RULES[node_type] = rules.SINK_RULE
rules.FLOW_NODES_WITHOUT_STARTEVENT.append(node_type)
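The handler above is wired up with Django's `@receiver` decorator: registering a function against a signal so it runs when that signal fires. A dependency-free stand-in for that registration pattern (names here are made up, not Django's API):

```python
# Minimal signal registry: a signal is just a key mapping to handler functions.
_registry = {}

def receiver(signal, sender=None):
    def decorate(func):
        _registry.setdefault(signal, []).append(func)
        return func
    return decorate

@receiver("post_new_end_event_register")
def handler(sender, node_type, **kwargs):
    return "registered:" + node_type

# "Send" the signal by calling every registered handler.
fired = [h(None, node_type="EndEvent")
         for h in _registry["post_new_end_event_register"]]
```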
43a01f33e82c9b00675c1f842c3ac9effea08533 | 7,335 | py | Python | api/config.py | sumesh-aot/namex | 53e11aed5ea550b71b7b983f1b57b65db5a06766 | ["Apache-2.0"] | max_stars_count: 1
"""Config for initializing the namex-api."""
import os
from dotenv import find_dotenv, load_dotenv
# this will load all the envars from a .env file located in the project root (api)
load_dotenv(find_dotenv())
CONFIGURATION = {
'development': 'config.DevConfig',
'testing': 'config.TestConfig',
'production': 'config.Config',
'default': 'config.Config'
}
class Config(object):
"""Base config (also production config)."""
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
SECRET_KEY = 'a secret'
SQLALCHEMY_TRACK_MODIFICATIONS = False
NRO_SERVICE_ACCOUNT = os.getenv('NRO_SERVICE_ACCOUNT', 'nro_service_account')
SOLR_BASE_URL = os.getenv('SOLR_BASE_URL', None)
SOLR_SYNONYMS_API_URL = os.getenv('SOLR_SYNONYMS_API_URL', None)
NRO_EXTRACTOR_URI = os.getenv('NRO_EXTRACTOR_URI', None)
AUTO_ANALYZE_URL = os.getenv('AUTO_ANALYZE_URL', None)
AUTO_ANALYZE_CONFIG = os.getenv('AUTO_ANALYZE_CONFIG', None)
REPORT_SVC_URL = os.getenv('REPORT_SVC_URL', None)
REPORT_TEMPLATE_PATH = os.getenv('REPORT_PATH', 'report-templates')
ALEMBIC_INI = 'migrations/alembic.ini'
# POSTGRESQL
DB_USER = os.getenv('DATABASE_USERNAME', '')
DB_PASSWORD = os.getenv('DATABASE_PASSWORD', '')
DB_NAME = os.getenv('DATABASE_NAME', '')
DB_HOST = os.getenv('DATABASE_HOST', '')
DB_PORT = os.getenv('DATABASE_PORT', '5432')
SQLALCHEMY_DATABASE_URI = 'postgresql://{user}:{password}@{host}:{port}/{name}'.format(
user=DB_USER,
password=DB_PASSWORD,
host=DB_HOST,
port=int(DB_PORT),
name=DB_NAME
)
# ORACLE - LEGACY NRO NAMESDB
NRO_USER = os.getenv('NRO_USER', '')
NRO_SCHEMA = os.getenv('NRO_SCHEMA', None)
NRO_PASSWORD = os.getenv('NRO_PASSWORD', '')
NRO_DB_NAME = os.getenv('NRO_DB_NAME', '')
NRO_HOST = os.getenv('NRO_HOST', '')
NRO_PORT = int(os.getenv('NRO_PORT', '1521'))
# JWT_OIDC Settings
JWT_OIDC_WELL_KNOWN_CONFIG = os.getenv('JWT_OIDC_WELL_KNOWN_CONFIG')
JWT_OIDC_ALGORITHMS = os.getenv('JWT_OIDC_ALGORITHMS')
JWT_OIDC_JWKS_URI = os.getenv('JWT_OIDC_JWKS_URI')
JWT_OIDC_ISSUER = os.getenv('JWT_OIDC_ISSUER')
JWT_OIDC_AUDIENCE = os.getenv('JWT_OIDC_AUDIENCE')
JWT_OIDC_CLIENT_SECRET = os.getenv('JWT_OIDC_CLIENT_SECRET')
JWT_OIDC_CACHING_ENABLED = os.getenv('JWT_OIDC_CACHING_ENABLED')
JWT_OIDC_JWKS_CACHE_TIMEOUT = int(os.getenv('JWT_OIDC_JWKS_CACHE_TIMEOUT', '300'))
    TESTING = False
DEBUG = False
# You can disable NRO updates for Name Requests by setting the variable in your .env / OpenShift configuration
DISABLE_NAMEREQUEST_NRO_UPDATES = int(os.getenv('DISABLE_NAMEREQUEST_NRO_UPDATES', 0))
DISABLE_NAMEREQUEST_SOLR_UPDATES = int(os.getenv('DISABLE_NAMEREQUEST_SOLR_UPDATES', 0))
class DevConfig(Config):
"""Dev config used for development."""
    TESTING = False
DEBUG = True
# We can't run NRO locally unless you're provisioned, you can disable NRO updates for Name Requests by setting the variable in your .env
DISABLE_NAMEREQUEST_NRO_UPDATES = int(os.getenv('DISABLE_NAMEREQUEST_NRO_UPDATES', 0))
DISABLE_NAMEREQUEST_SOLR_UPDATES = int(os.getenv('DISABLE_NAMEREQUEST_SOLR_UPDATES', 0))
class TestConfig(Config):
"""Test config used for pytests."""
DEBUG = True
TESTING = True
# POSTGRESQL
DB_USER = os.getenv('DATABASE_TEST_USERNAME', '')
DB_PASSWORD = os.getenv('DATABASE_TEST_PASSWORD', '')
DB_NAME = os.getenv('DATABASE_TEST_NAME', '')
DB_HOST = os.getenv('DATABASE_TEST_HOST', '')
DB_PORT = os.getenv('DATABASE_TEST_PORT', '5432')
# Allows for NRO add / update bypass if necessary (for local development)
LOCAL_DEV_MODE = os.getenv('LOCAL_DEV_MODE', False)
# Set this in your .env to debug SQL Alchemy queries (for local development)
SQLALCHEMY_ECHO = 'debug' if os.getenv('DEBUG_SQL_QUERIES', False) else False
SQLALCHEMY_DATABASE_URI = 'postgresql://{user}:{password}@{host}:{port}/{name}'.format(
user=DB_USER,
password=DB_PASSWORD,
host=DB_HOST,
port=int(DB_PORT),
name=DB_NAME
)
# We can't run NRO locally for running our tests
DISABLE_NAMEREQUEST_NRO_UPDATES = int(os.getenv('DISABLE_NAMEREQUEST_NRO_UPDATES', 1))
DISABLE_NAMEREQUEST_SOLR_UPDATES = int(os.getenv('DISABLE_NAMEREQUEST_SOLR_UPDATES', 0))
# JWT OIDC settings
# JWT_OIDC_TEST_MODE will set jwt_manager to use
JWT_OIDC_TEST_MODE = True
JWT_OIDC_TEST_AUDIENCE = 'example'
JWT_OIDC_TEST_ISSUER = 'https://example.localdomain/auth/realms/example'
JWT_OIDC_TEST_KEYS = {
'keys': [
{
'kid': 'flask-jwt-oidc-test-client',
'kty': 'RSA',
'alg': 'RS256',
'use': 'sig',
'n': 'AN-fWcpCyE5KPzHDjigLaSUVZI0uYrcGcc40InVtl-rQRDmAh-C2W8H4_Hxhr5VLc6crsJ2LiJTV_E72S03pzpOOaaYV6-TzAjCou2GYJIXev7f6Hh512PuG5wyxda_TlBSsI-gvphRTPsKCnPutrbiukCYrnPuWxX5_cES9eStR', # noqa: E501
'e': 'AQAB'
}
]
}
JWT_OIDC_TEST_PRIVATE_KEY_JWKS = {
'keys': [
{
'kid': 'flask-jwt-oidc-test-client',
'kty': 'RSA',
'alg': 'RS256',
'use': 'sig',
'n': 'AN-fWcpCyE5KPzHDjigLaSUVZI0uYrcGcc40InVtl-rQRDmAh-C2W8H4_Hxhr5VLc6crsJ2LiJTV_E72S03pzpOOaaYV6-TzAjCou2GYJIXev7f6Hh512PuG5wyxda_TlBSsI-gvphRTPsKCnPutrbiukCYrnPuWxX5_cES9eStR', # noqa: E501
'e': 'AQAB',
'd': 'C0G3QGI6OQ6tvbCNYGCqq043YI_8MiBl7C5dqbGZmx1ewdJBhMNJPStuckhskURaDwk4-8VBW9SlvcfSJJrnZhgFMjOYSSsBtPGBIMIdM5eSKbenCCjO8Tg0BUh_xa3CHST1W4RQ5rFXadZ9AeNtaGcWj2acmXNO3DVETXAX3x0', # noqa: E501
'p': 'APXcusFMQNHjh6KVD_hOUIw87lvK13WkDEeeuqAydai9Ig9JKEAAfV94W6Aftka7tGgE7ulg1vo3eJoLWJ1zvKM',
'q': 'AOjX3OnPJnk0ZFUQBwhduCweRi37I6DAdLTnhDvcPTrrNWuKPg9uGwHjzFCJgKd8KBaDQ0X1rZTZLTqi3peT43s',
'dp': 'AN9kBoA5o6_Rl9zeqdsIdWFmv4DB5lEqlEnC7HlAP-3oo3jWFO9KQqArQL1V8w2D4aCd0uJULiC9pCP7aTHvBhc',
'dq': 'ANtbSY6njfpPploQsF9sU26U0s7MsuLljM1E8uml8bVJE1mNsiu9MgpUvg39jEu9BtM2tDD7Y51AAIEmIQex1nM',
'qi': 'XLE5O360x-MhsdFXx8Vwz4304-MJg-oGSJXCK_ZWYOB_FGXFRTfebxCsSYi0YwJo-oNu96bvZCuMplzRI1liZw'
}
]
}
JWT_OIDC_TEST_PRIVATE_KEY_PEM = """
-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQDfn1nKQshOSj8xw44oC2klFWSNLmK3BnHONCJ1bZfq0EQ5gIfg
tlvB+Px8Ya+VS3OnK7Cdi4iU1fxO9ktN6c6TjmmmFevk8wIwqLthmCSF3r+3+h4e
ddj7hucMsXWv05QUrCPoL6YUUz7Cgpz7ra24rpAmK5z7lsV+f3BEvXkrUQIDAQAB
AoGAC0G3QGI6OQ6tvbCNYGCqq043YI/8MiBl7C5dqbGZmx1ewdJBhMNJPStuckhs
kURaDwk4+8VBW9SlvcfSJJrnZhgFMjOYSSsBtPGBIMIdM5eSKbenCCjO8Tg0BUh/
xa3CHST1W4RQ5rFXadZ9AeNtaGcWj2acmXNO3DVETXAX3x0CQQD13LrBTEDR44ei
lQ/4TlCMPO5bytd1pAxHnrqgMnWovSIPSShAAH1feFugH7ZGu7RoBO7pYNb6N3ia
C1idc7yjAkEA6Nfc6c8meTRkVRAHCF24LB5GLfsjoMB0tOeEO9w9Ous1a4o+D24b
AePMUImAp3woFoNDRfWtlNktOqLel5PjewJBAN9kBoA5o6/Rl9zeqdsIdWFmv4DB
5lEqlEnC7HlAP+3oo3jWFO9KQqArQL1V8w2D4aCd0uJULiC9pCP7aTHvBhcCQQDb
W0mOp436T6ZaELBfbFNulNLOzLLi5YzNRPLppfG1SRNZjbIrvTIKVL4N/YxLvQbT
NrQw+2OdQACBJiEHsdZzAkBcsTk7frTH4yGx0VfHxXDPjfTj4wmD6gZIlcIr9lZg
4H8UZcVFN95vEKxJiLRjAmj6g273pu9kK4ymXNEjWWJn
-----END RSA PRIVATE KEY-----"""
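Both `Config` and `TestConfig` assemble `SQLALCHEMY_DATABASE_URI` with the same `str.format` template. A sketch of that assembly with made-up values (not the project's real settings):

```python
# Made-up credentials; same .format() pattern as the Config classes above.
DB_USER, DB_PASSWORD = "namex", "secret"
DB_HOST, DB_PORT, DB_NAME = "localhost", "5432", "namex_db"

SQLALCHEMY_DATABASE_URI = 'postgresql://{user}:{password}@{host}:{port}/{name}'.format(
    user=DB_USER,
    password=DB_PASSWORD,
    host=DB_HOST,
    port=int(DB_PORT),  # env vars are strings, so the port is cast to int
    name=DB_NAME,
)
```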
43a26f9573c5f714eb41be0b40f5f0e94681fe54 | 1,013 | py | Python | gfworkflow/core.py | andersonbrands/gfworkflow | 81c646fd53b8227691bcd3e236f538fee0d9d93c | ["MIT"]
import re
import subprocess as sp
from typing import Union, List
from gfworkflow.exceptions import RunCommandException
def run(command: Union[str, List[str]]):
completed_process = sp.run(command, stdout=sp.PIPE, stderr=sp.PIPE, universal_newlines=True)
if completed_process.returncode:
raise RunCommandException(completed_process)
return completed_process
def init():
run('git flow init -d -f')
run('git config gitflow.prefix.versiontag v')
def bump_version(part: str):
run(f'bumpversion {part}')
def start_release(new_version: str):
run(f'git flow release start {new_version}')
def get_new_version(part: str):
output = run(f'bumpversion {part} --list -n --allow-dirty --no-configured-files').stdout
return re.compile(r'new_version=(\S+)').search(output).group(1)
def get_current_branch_name():
return run('git rev-parse --abbrev-ref HEAD').stdout.strip()
def finish_release(release_name):
run(f'git flow release finish -m " - " {release_name}')
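`get_new_version` above extracts the version from bumpversion's `--list` output with a regex. A sketch of that extraction on hypothetical output (the output string is made up, not captured from a real run):

```python
import re

# Hypothetical `bumpversion --list` output; get_new_version() scans it
# with this same regex.
output = "current_version=1.2.3\nnew_version=1.3.0\n"
new_version = re.compile(r'new_version=(\S+)').search(output).group(1)
```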
43a4f6e31b5eece16d50c0585d3ecac08d080d46 | 5,919 | py | Python | orio/module/loop/cfg.py | zhjp0/Orio | 7dfb80527053c5697d1bce1bd8ed996b1ea192c8 | ["MIT"]
'''
Created on April 26, 2015
@author: norris
'''
import ast, sys, os, traceback
from orio.main.util.globals import *
from orio.tool.graphlib import graph
from orio.module.loop import astvisitors
class CFGVertex(graph.Vertex):
'''A CFG vertex is a basic block.'''
def __init__(self, name, node=None):
        try: graph.Vertex.__init__(self, name)
        except Exception as e: err("CFGVertex.__init__:" + str(e))
self.stmts = [node] # basic block, starting with leader node
pass
def append(self, node):
self.stmts.append(node)
def copy(self):
v = CFGVertex(self.name)
v.e = self.e
v.data = self.data
return v
def succ(self):
return self.out_v()
def pred(self):
return self.in_v()
def __str__(self):
return "<%s> " % self.name + str(self.stmts)
pass # End of CFG vertex class
class CFGEdge(graph.DirEdge):
def __init__(self, v1, v2, name=''):
if not name: name = Globals().incrementCounter()
graph.DirEdge.__init__(self, name, v1, v2)
pass
pass # End of CFGEdge class
class CFGGraph(graph.Graph):
def __init__(self, nodes, name='CFG'):
graph.Graph.__init__(self, name)
self.cfgVisitor = CFGVisitor(self)
self.cfgVisitor.visit(nodes)
if True:
self.display()
pass
def nodes(self):
return self.v
def pred(self, bb):
return self.v[bb.name].in_v()
def succ(self, bb):
return self.v[bb.name].out_v()
def display(self):
#sys.stdout.write(str(self))
self.genDOT()
def genDOT(self, fname=''):
buf = 'digraph CFG {\n'
for n,vertex in self.v.items():
label = '[label="%s%s...",shape=box]' % (n,str(vertex.stmts[0]).split('\n')[0])
buf += '\t%s %s;\n' % (n, label)
for edge in vertex.out_e:
for dv in edge.dest_v:
buf += '\t%s -> %s;\n' % (n, dv.name)
buf += '\n}\n'
if fname == '': fname = Globals().tempfilename + '.dot'
f=open(fname,'w')
f.write(buf)
f.close()
# print buf
return buf
pass # End of CFG Graph class
class CFGVisitor(astvisitors.ASTVisitor):
def __init__(self, graph):
astvisitors.ASTVisitor.__init__(self)
self.cfg = graph
v = CFGVertex('_TOP_')
self.cfg.add_v(v)
self.stack = [v]
self.lead = True
self.verbose = False
self.last = None
def display(self, node, msg=''):
if self.verbose:
sys.stdout.write("[%s] " % self.__class__.__name__ + node.__class__.__name__ + ': ' + msg+'\n')
def visit(self, nodes, params={}):
'''Invoke accept method for specified AST node'''
if not isinstance(nodes, (list, tuple)):
nodes = [nodes]
try:
for node in nodes:
if not node: continue
v = CFGVertex(node.id, node)
if isinstance(node, ast.ForStmt):
self.display(node)
# Children: header: node.init, node.test, node.iter; body: node.stmt
v = CFGVertex('ForLoop' + str(node.id), node)
self.cfg.add_v(v)
self.cfg.add_e(CFGEdge(self.stack.pop(),v))
self.stack.append(v)
self.lead = True
self.stack.append(v)
self.visit(node.stmt)
vbottom = CFGVertex('_JOIN_' + str(node.id))
self.cfg.add_v(vbottom)
self.cfg.add_e(CFGEdge(v,vbottom))
self.cfg.add_e(CFGEdge(self.stack.pop(),vbottom))
self.stack.append(vbottom)
self.lead = True
elif isinstance(node, ast.IfStmt):
self.display(node)
v = CFGVertex('IfStmt' + str(node.id) , node)
self.cfg.add_v(v)
self.cfg.add_e(CFGEdge(self.stack.pop(),v))
self.stack.append(v)
self.lead = True
self.visit(node.true_stmt)
truelast = self.stack.pop()
self.stack.append(v)
self.lead = True
self.visit(node.false_stmt)
falselast = self.stack.pop()
self.lead = True
vbottom = CFGVertex('_JOIN_' + str(node.id))
self.cfg.add_v(vbottom)
self.cfg.add_e(CFGEdge(truelast,vbottom))
self.cfg.add_e(CFGEdge(falselast,vbottom))
self.stack.append(vbottom)
elif isinstance(node, ast.CompStmt):
self.display(node)
self.visit(node.stmts)
# TODO: handle gotos
else:
# Add to previous basic block
if self.lead:
v = CFGVertex(node.id, node)
self.cfg.add_v(v)
self.cfg.add_e(CFGEdge(self.stack.pop(),v))
self.stack.append(v)
self.lead = False
else:
self.stack.pop()
self.stack.append(v)
self.stack[-1].append(node)
except Exception as ex:
err("[orio.module.loop.cfg.CFGVisitor.visit()] %s" % str(ex))
return
def getCFG(self):
return self.cfg
pass # end of class CFGVisitor
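`CFGGraph.genDOT` above emits the graph in Graphviz DOT syntax by walking vertices and their outgoing edges. The core of that loop, reduced to a toy edge list (node names are made up):

```python
# Boiled-down version of the genDOT() edge loop above.
edges = [("_TOP_", "ForLoop1"), ("ForLoop1", "_JOIN_1")]
buf = 'digraph CFG {\n'
for src, dst in edges:
    buf += '\t%s -> %s;\n' % (src, dst)
buf += '}\n'
# buf now holds DOT text renderable with `dot -Tpng`
```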
43a848be2ab70fca075a6b29e18609d29a8a5a7d | 1,109 | py | Python | newsapp/migrations/0003_news.py | adi112100/newsapp | 7cdf6070299b4a8dcc950e7fcdfb82cf1a1d98cb | ["MIT"]
# Generated by Django 3.0.8 on 2020-07-11 08:10
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('newsapp', '0002_auto_20200711_1124'),
]
operations = [
migrations.CreateModel(
name='News',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date', models.DateTimeField()),
('indian_news', models.TextField()),
('national_news', models.TextField()),
('international_news', models.TextField()),
('bollywood_news', models.TextField()),
('lifestyle_news', models.TextField()),
('sport_news', models.TextField()),
('business_news', models.TextField()),
('sharemarket_news', models.TextField()),
('corona_news', models.TextField()),
('space_news', models.TextField()),
('motivation_news', models.TextField()),
],
),
]
43ad3e59d1619acb8d9309d2b2e5ad3161003839 | 2,664 | py | Python | tests/selenium/test_about/test_about_page.py | technolotrix/tests | ae5b9741e80a1fd735c66de93cc014f672c5afb2 | ["Apache-2.0"]
import unittest
from selenium import webdriver
import page
class AboutPage(unittest.TestCase):
def setUp(self):
self.driver = webdriver.Firefox()
self.driver.get("http://nicolesmith.nyc")
#self.driver.get("http://127.0.0.1:4747/about")
self.about_page = page.AboutPage(self.driver)
######## HEADER STUFF ########
def test_title_on_about_page(self):
assert self.about_page.is_title_matches(), "about page title doesn't match"
def test_click_get_quote(self):
assert self.about_page.click_quote_button(), "link to contact page is broken"
def test_click_home_button(self):
assert self.about_page.click_home_button(), "home button does not go to homepage"
@unittest.skip("Needs fixing.")
def test_click_about_link(self):
assert self.about_page.click_projects_link(), "about link does not go to about page"
@unittest.skip("Needs fixing.")
def test_click_projects_link(self):
assert self.about_page.click_projects_link(), "projects link does not go to projects page"
@unittest.skip("Needs fixing.")
def test_click_services_link(self):
assert self.about_page.click_projects_link(), "services link does not go to services page"
######## PAGE SPECIFIC STUFF ########
    def test_click_resume(self):
        assert self.about_page.click_resume(), "link to resume is broken"
    def test_click_resumator(self):
        assert self.about_page.click_resumator(), "link to resumator is broken"
    def test_click_contact_me(self):
        assert self.about_page.click_contact_me(), "link to contact me page is broken in FAQ"
    def test_click_html5up_backlink(self):
        assert self.about_page.click_html5up_backlink(), "backlink to html5up in FAQ is broken"
######## FOOTER STUFF ########
def test_click_github(self):
assert self.about_page.click_github_button(), "link to github is broken"
def test_click_linkedin(self):
assert self.about_page.click_linkedin_button(), "link to linkedin is broken"
def test_click_gplus(self):
assert self.about_page.click_gplus_button(), "link to google plus is broken"
def test_click_twitter(self):
assert self.about_page.click_twitter_button(), "link to twitter is broken"
def test_click_html5up(self):
assert self.about_page.click_html5up_link(), "link to html5up template owner is broken"
def test_copyright_on_about_page(self):
assert self.about_page.is_copyright_matches(), "about page has wrong copyright"
def tearDown(self):
self.driver.close()
if __name__ == "__main__":
    unittest.main()
43b1df830b2abdb7a53300c3467f70be764c0f6f | 1,235 | py | Python | k_values_graph.py | leobouts/Skyline_top_k_queries | 5f5e8ab8f5e521dc20f33a69dd042917ff5d42f0 | ["MIT"]
from a_top_k import *
from b_top_k import *
import time
def main():
# test the generator for the top-k input
# starting time
values_k = [1, 2, 5, 10, 20, 50, 100]
times_topk_join_a = []
times_topk_join_b = []
number_of_valid_lines_a = []
number_of_valid_lines_b = []
for k in values_k:
number_of_valid_lines = []
top_k_a_generator = generate_top_join_a(number_of_valid_lines)
start_time_a = time.time()
for i in range(k):
next(top_k_a_generator)
number_of_valid_lines_a.append(len(number_of_valid_lines))
top_k_time_a = time.time() - start_time_a
times_topk_join_a.append(top_k_time_a)
number_of_valid_lines = []
top_k_b_generator = generate_top_join_b(number_of_valid_lines)
start_time_b = time.time()
for i in range(k):
next(top_k_b_generator)
number_of_valid_lines_b.append(len(number_of_valid_lines))
top_k_time_b = time.time() - start_time_b
times_topk_join_b.append(top_k_time_b)
print(times_topk_join_a)
print(times_topk_join_b)
print(number_of_valid_lines_a)
print(number_of_valid_lines_b)
if __name__ == "__main__":
main()
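The benchmark above times how long it takes to pull the first k results out of a lazy generator with repeated `next()` calls. The same pattern with a stand-in generator (the real `generate_top_join_*` generators are not needed to show the mechanics):

```python
import time

def squares():
    # Stand-in for a top-k join generator: yields results lazily, one at a time.
    n = 0
    while True:
        yield n * n
        n += 1

gen = squares()
start = time.time()
first_k = [next(gen) for _ in range(5)]   # pull only the first k results
elapsed = time.time() - start             # time spent producing those k results
```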
43c14b71a9e55a3f072d7e8094c999b91490df88 | 507 | py | Python | python_clean_architecture/use_cases/orderdata_use_case.py | jfsolarte/python_clean_architecture | 56b0c0eff50bc98774a0caee12e3030789476687 | ["MIT"]
from python_clean_architecture.shared import use_case as uc
from python_clean_architecture.shared import response_object as res
class OrderDataGetUseCase(uc.UseCase):
def __init__(self, repo):
self.repo = repo
def execute(self, request_object):
#if not request_object:
#return res.ResponseFailure.build_from_invalid_request_object(request_object)
storage_rooms = self.repo.order(items=request_object.items)
return res.ResponseSuccess(storage_rooms)
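The use case above delegates to an injected repository and wraps the result in a response object, which makes it trivial to exercise with a fake repo. A self-contained sketch of that pattern (the stub classes below are made up and only mirror the shape of the shared module):

```python
# Made-up stand-ins for the shared response/use-case classes.
class ResponseSuccess:
    def __init__(self, value):
        self.value = value

class FakeRepo:
    # Stub repository: "ordering" is just sorting here.
    def order(self, items):
        return sorted(items)

class OrderDataGetUseCase:
    def __init__(self, repo):
        self.repo = repo  # repository injected, as in the real use case
    def execute(self, items):
        return ResponseSuccess(self.repo.order(items=items))

response = OrderDataGetUseCase(FakeRepo()).execute([3, 1, 2])
```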
43c4a0c547cce9ae68639184c6cd8640efc21e50 | 857 | py | Python | tests/metarl/tf/baselines/test_baselines.py | neurips2020submission11699/metarl | ae4825d21478fa1fd0aa6b116941ea40caa152a5 | ["MIT"] | max_stars_count: 2
"""
This script creates a test that fails when
metarl.tf.baselines failed to initialize.
"""
import tensorflow as tf
from metarl.envs import MetaRLEnv
from metarl.tf.baselines import ContinuousMLPBaseline
from metarl.tf.baselines import GaussianMLPBaseline
from tests.fixtures import TfGraphTestCase
from tests.fixtures.envs.dummy import DummyBoxEnv
class TestTfBaselines(TfGraphTestCase):
def test_baseline(self):
"""Test the baseline initialization."""
box_env = MetaRLEnv(DummyBoxEnv())
deterministic_mlp_baseline = ContinuousMLPBaseline(env_spec=box_env)
gaussian_mlp_baseline = GaussianMLPBaseline(env_spec=box_env)
self.sess.run(tf.compat.v1.global_variables_initializer())
deterministic_mlp_baseline.get_param_values()
gaussian_mlp_baseline.get_param_values()
box_env.close()
43d418c8d833bba41481c7b2cbeab0fbbe8f44c5 | 548 | py | Python | example/example.py | saravanabalagi/imshowtools | ea81af888c69223ff8b42b5c4b8c034483eebe21 | ["MIT"] | max_stars_count: 4 | max_issues_count: 1
from imshowtools import imshow
import cv2
if __name__ == '__main__':
image_lenna = cv2.imread("lenna.png")
imshow(image_lenna, mode='BGR', window_title="LennaWindow", title="Lenna")
image_lenna_bgr = cv2.imread("lenna_bgr.png")
imshow(image_lenna, image_lenna_bgr, mode=['BGR', 'RGB'], title=['lenna_rgb', 'lenna_bgr'])
imshow(*[image_lenna for _ in range(12)], title=["Lenna" for _ in range(12)], window_title="LennaWindow")
imshow(*[image_lenna for _ in range(30)], title="Lenna", padding=(1, 1, 0, (0, 0, 0.8, 0.8)))
78db1f0ed3fd45150eca94cbff8fdb625dd1d917 | 156 | py | Python | testData/completion/classMethodCls.py | seandstewart/typical-pycharm-plugin | 4f6ec99766239421201faae9d75c32fa0ee3565a | [
"MIT"
] | null | null | null | testData/completion/classMethodCls.py | seandstewart/typical-pycharm-plugin | 4f6ec99766239421201faae9d75c32fa0ee3565a | [
"MIT"
] | null | null | null | testData/completion/classMethodCls.py | seandstewart/typical-pycharm-plugin | 4f6ec99766239421201faae9d75c32fa0ee3565a | [
"MIT"
] | null | null | null | from builtins import *
from pydantic import BaseModel
class A(BaseModel):
abc: str
@classmethod
def test(cls):
return cls.<caret>
| 11.142857 | 30 | 0.647436 | 19 | 156 | 5.315789 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275641 | 156 | 13 | 31 | 12 | 0.893805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.285714 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
78ddef69c8c618801719da4ee218c45f1df458b0 | 25,941 | py | Python | mars/tensor/execution/tests/test_base_execute.py | lmatz/mars | 45f9166b54eb91b21e66cef8b590a41aa8ac9569 | [
"Apache-2.0"
] | 1 | 2018-12-26T08:37:04.000Z | 2018-12-26T08:37:04.000Z | mars/tensor/execution/tests/test_base_execute.py | lmatz/mars | 45f9166b54eb91b21e66cef8b590a41aa8ac9569 | [
"Apache-2.0"
] | null | null | null | mars/tensor/execution/tests/test_base_execute.py | lmatz/mars | 45f9166b54eb91b21e66cef8b590a41aa8ac9569 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 1999-2018 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import numpy as np
import scipy.sparse as sps
from mars.tensor.execution.core import Executor
from mars import tensor as mt
from mars.tensor.expressions.datasource import tensor, ones, zeros, arange
from mars.tensor.expressions.base import copyto, transpose, moveaxis, broadcast_to, broadcast_arrays, where, \
expand_dims, rollaxis, atleast_1d, atleast_2d, atleast_3d, argwhere, array_split, split, \
hsplit, vsplit, dsplit, roll, squeeze, ptp, diff, ediff1d, digitize, average, cov, corrcoef, \
flip, flipud, fliplr, repeat, tile, isin
from mars.tensor.expressions.merge import stack
from mars.tensor.expressions.reduction import all as tall


class Test(unittest.TestCase):
def setUp(self):
self.executor = Executor('numpy')
def testRechunkExecution(self):
raw = np.random.random((11, 8))
arr = tensor(raw, chunks=3)
arr2 = arr.rechunk(4)
res = self.executor.execute_tensor(arr2)
self.assertTrue(np.array_equal(res[0], raw[:4, :4]))
self.assertTrue(np.array_equal(res[1], raw[:4, 4:]))
self.assertTrue(np.array_equal(res[2], raw[4:8, :4]))
self.assertTrue(np.array_equal(res[3], raw[4:8, 4:]))
self.assertTrue(np.array_equal(res[4], raw[8:, :4]))
self.assertTrue(np.array_equal(res[5], raw[8:, 4:]))
def testCopytoExecution(self):
a = ones((2, 3), chunks=1)
b = tensor([3, -1, 3], chunks=2)
copyto(a, b, where=b > 1)
res = self.executor.execute_tensor(a, concat=True)[0]
expected = np.array([[3, 1, 3], [3, 1, 3]])
np.testing.assert_equal(res, expected)
def testAstypeExecution(self):
raw = np.random.random((10, 5))
arr = tensor(raw, chunks=3)
arr2 = arr.astype('i8')
res = self.executor.execute_tensor(arr2, concat=True)
self.assertTrue(np.array_equal(res[0], raw.astype('i8')))
raw = sps.random(10, 5, density=.2)
arr = tensor(raw, chunks=3)
arr2 = arr.astype('i8')
res = self.executor.execute_tensor(arr2, concat=True)
self.assertTrue(np.array_equal(res[0].toarray(), raw.astype('i8').toarray()))
def testTransposeExecution(self):
raw = np.random.random((11, 8, 5))
arr = tensor(raw, chunks=3)
arr2 = transpose(arr)
res = self.executor.execute_tensor(arr2, concat=True)
self.assertTrue(np.array_equal(res[0], raw.T))
arr3 = transpose(arr, axes=(-2, -1, -3))
res = self.executor.execute_tensor(arr3, concat=True)
self.assertTrue(np.array_equal(res[0], raw.transpose(1, 2, 0)))
raw = sps.random(11, 8)
arr = tensor(raw, chunks=3)
arr2 = transpose(arr)
self.assertTrue(arr2.issparse())
res = self.executor.execute_tensor(arr2, concat=True)
self.assertTrue(np.array_equal(res[0].toarray(), raw.T.toarray()))
def testSwapaxesExecution(self):
raw = np.random.random((11, 8, 5))
arr = tensor(raw, chunks=3)
arr2 = arr.swapaxes(2, 0)
res = self.executor.execute_tensor(arr2, concat=True)
self.assertTrue(np.array_equal(res[0], raw.swapaxes(2, 0)))
raw = sps.random(11, 8, density=.2)
arr = tensor(raw, chunks=3)
arr2 = arr.swapaxes(1, 0)
res = self.executor.execute_tensor(arr2, concat=True)
self.assertTrue(np.array_equal(res[0].toarray(), raw.toarray().swapaxes(1, 0)))
def testMoveaxisExecution(self):
x = zeros((3, 4, 5), chunks=2)
t = moveaxis(x, 0, -1)
res = self.executor.execute_tensor(t, concat=True)[0]
self.assertEqual(res.shape, (4, 5, 3))
t = moveaxis(x, -1, 0)
res = self.executor.execute_tensor(t, concat=True)[0]
self.assertEqual(res.shape, (5, 3, 4))
t = moveaxis(x, [0, 1], [-1, -2])
res = self.executor.execute_tensor(t, concat=True)[0]
self.assertEqual(res.shape, (5, 4, 3))
t = moveaxis(x, [0, 1, 2], [-1, -2, -3])
res = self.executor.execute_tensor(t, concat=True)[0]
self.assertEqual(res.shape, (5, 4, 3))
def testBroadcastToExecution(self):
raw = np.random.random((10, 5, 1))
arr = tensor(raw, chunks=2)
arr2 = broadcast_to(arr, (5, 10, 5, 6))
res = self.executor.execute_tensor(arr2, concat=True)
self.assertTrue(np.array_equal(res[0], np.broadcast_to(raw, (5, 10, 5, 6))))
def testBroadcastArraysExecutions(self):
x_data = [[1, 2, 3]]
x = tensor(x_data, chunks=1)
y_data = [[1], [2], [3]]
y = tensor(y_data, chunks=2)
a = broadcast_arrays(x, y)
res = [self.executor.execute_tensor(arr, concat=True)[0] for arr in a]
expected = np.broadcast_arrays(x_data, y_data)
for r, e in zip(res, expected):
np.testing.assert_equal(r, e)
def testWhereExecution(self):
raw_cond = np.random.randint(0, 2, size=(4, 4), dtype='?')
raw_x = np.random.rand(4, 1)
raw_y = np.random.rand(4, 4)
cond, x, y = tensor(raw_cond, chunks=2), tensor(raw_x, chunks=2), tensor(raw_y, chunks=2)
arr = where(cond, x, y)
res = self.executor.execute_tensor(arr, concat=True)
self.assertTrue(np.array_equal(res[0], np.where(raw_cond, raw_x, raw_y)))
raw_cond = sps.csr_matrix(np.random.randint(0, 2, size=(4, 4), dtype='?'))
raw_x = sps.random(4, 1, density=.1)
raw_y = sps.random(4, 4, density=.1)
cond, x, y = tensor(raw_cond, chunks=2), tensor(raw_x, chunks=2), tensor(raw_y, chunks=2)
arr = where(cond, x, y)
res = self.executor.execute_tensor(arr, concat=True)[0]
self.assertTrue(np.array_equal(res.toarray(),
np.where(raw_cond.toarray(), raw_x.toarray(), raw_y.toarray())))
def testReshapeExecution(self):
raw_data = np.random.rand(10, 20, 30)
x = tensor(raw_data, chunks=6)
y = x.reshape(-1, 30)
res = self.executor.execute_tensor(y, concat=True)
self.assertTrue(np.array_equal(res[0], raw_data.reshape(-1, 30)))
y2 = x.reshape(10, -1)
res = self.executor.execute_tensor(y2, concat=True)
self.assertTrue(np.array_equal(res[0], raw_data.reshape(10, -1)))
y3 = x.reshape(-1)
res = self.executor.execute_tensor(y3, concat=True)
self.assertTrue(np.array_equal(res[0], raw_data.reshape(-1)))
y4 = x.ravel()
res = self.executor.execute_tensor(y4, concat=True)
self.assertTrue(np.array_equal(res[0], raw_data.ravel()))
raw_data = np.random.rand(30, 100, 20)
x = tensor(raw_data, chunks=6)
y = x.reshape(-1, 20, 5, 5, 4)
res = self.executor.execute_tensor(y, concat=True)
self.assertTrue(np.array_equal(res[0], raw_data.reshape(-1, 20, 5, 5, 4)))
y2 = x.reshape(3000, 10, 2)
res = self.executor.execute_tensor(y2, concat=True)
self.assertTrue(np.array_equal(res[0], raw_data.reshape(3000, 10, 2)))
y3 = x.reshape(60, 25, 40)
res = self.executor.execute_tensor(y3, concat=True)
self.assertTrue(np.array_equal(res[0], raw_data.reshape(60, 25, 40)))
def testExpandDimsExecution(self):
raw_data = np.random.rand(10, 20, 30)
x = tensor(raw_data, chunks=6)
y = expand_dims(x, 1)
res = self.executor.execute_tensor(y, concat=True)
self.assertTrue(np.array_equal(res[0], np.expand_dims(raw_data, 1)))
y = expand_dims(x, 0)
res = self.executor.execute_tensor(y, concat=True)
self.assertTrue(np.array_equal(res[0], np.expand_dims(raw_data, 0)))
y = expand_dims(x, 3)
res = self.executor.execute_tensor(y, concat=True)
self.assertTrue(np.array_equal(res[0], np.expand_dims(raw_data, 3)))
y = expand_dims(x, -1)
res = self.executor.execute_tensor(y, concat=True)
self.assertTrue(np.array_equal(res[0], np.expand_dims(raw_data, -1)))
y = expand_dims(x, -4)
res = self.executor.execute_tensor(y, concat=True)
self.assertTrue(np.array_equal(res[0], np.expand_dims(raw_data, -4)))
with self.assertRaises(np.AxisError):
expand_dims(x, -5)
with self.assertRaises(np.AxisError):
expand_dims(x, 4)
def testRollAxisExecution(self):
x = ones((3, 4, 5, 6), chunks=1)
y = rollaxis(x, 3, 1)
res = self.executor.execute_tensor(y, concat=True)
self.assertTrue(np.array_equal(res[0], np.rollaxis(np.ones((3, 4, 5, 6)), 3, 1)))
def testAtleast1dExecution(self):
x = 1
y = ones(3, chunks=2)
z = ones((3, 4), chunks=2)
t = atleast_1d(x, y, z)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in t]
self.assertTrue(np.array_equal(res[0], np.array([1])))
self.assertTrue(np.array_equal(res[1], np.ones(3)))
self.assertTrue(np.array_equal(res[2], np.ones((3, 4))))
def testAtleast2dExecution(self):
x = 1
y = ones(3, chunks=2)
z = ones((3, 4), chunks=2)
t = atleast_2d(x, y, z)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in t]
self.assertTrue(np.array_equal(res[0], np.array([[1]])))
self.assertTrue(np.array_equal(res[1], np.atleast_2d(np.ones(3))))
self.assertTrue(np.array_equal(res[2], np.ones((3, 4))))
def testAtleast3dExecution(self):
x = 1
y = ones(3, chunks=2)
z = ones((3, 4), chunks=2)
t = atleast_3d(x, y, z)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in t]
self.assertTrue(np.array_equal(res[0], np.atleast_3d(x)))
self.assertTrue(np.array_equal(res[1], np.atleast_3d(np.ones(3))))
self.assertTrue(np.array_equal(res[2], np.atleast_3d(np.ones((3, 4)))))
def testArgwhereExecution(self):
x = arange(6, chunks=2).reshape(2, 3)
t = argwhere(x > 1)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.argwhere(np.arange(6).reshape(2, 3) > 1)
self.assertTrue(np.array_equal(res, expected))
def testArraySplitExecution(self):
x = arange(48, chunks=3).reshape(2, 3, 8)
ss = array_split(x, 3, axis=2)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in ss]
expected = np.array_split(np.arange(48).reshape(2, 3, 8), 3, axis=2)
self.assertEqual(len(res), len(expected))
[np.testing.assert_equal(r, e) for r, e in zip(res, expected)]
ss = array_split(x, [3, 5, 6, 10], axis=2)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in ss]
expected = np.array_split(np.arange(48).reshape(2, 3, 8), [3, 5, 6, 10], axis=2)
self.assertEqual(len(res), len(expected))
[np.testing.assert_equal(r, e) for r, e in zip(res, expected)]
def testSplitExecution(self):
x = arange(48, chunks=3).reshape(2, 3, 8)
ss = split(x, 4, axis=2)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in ss]
expected = np.split(np.arange(48).reshape(2, 3, 8), 4, axis=2)
self.assertEqual(len(res), len(expected))
[np.testing.assert_equal(r, e) for r, e in zip(res, expected)]
ss = split(x, [3, 5, 6, 10], axis=2)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in ss]
expected = np.split(np.arange(48).reshape(2, 3, 8), [3, 5, 6, 10], axis=2)
self.assertEqual(len(res), len(expected))
[np.testing.assert_equal(r, e) for r, e in zip(res, expected)]
# hsplit
x = arange(120, chunks=3).reshape(2, 12, 5)
ss = hsplit(x, 4)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in ss]
expected = np.hsplit(np.arange(120).reshape(2, 12, 5), 4)
self.assertEqual(len(res), len(expected))
[np.testing.assert_equal(r, e) for r, e in zip(res, expected)]
# vsplit
x = arange(48, chunks=3).reshape(8, 3, 2)
ss = vsplit(x, 4)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in ss]
expected = np.vsplit(np.arange(48).reshape(8, 3, 2), 4)
self.assertEqual(len(res), len(expected))
[np.testing.assert_equal(r, e) for r, e in zip(res, expected)]
# dsplit
x = arange(48, chunks=3).reshape(2, 3, 8)
ss = dsplit(x, 4)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in ss]
expected = np.dsplit(np.arange(48).reshape(2, 3, 8), 4)
self.assertEqual(len(res), len(expected))
[np.testing.assert_equal(r, e) for r, e in zip(res, expected)]
x_data = sps.random(12, 8, density=.1)
x = tensor(x_data, chunks=3)
ss = split(x, 4, axis=0)
res = [self.executor.execute_tensor(i, concat=True)[0] for i in ss]
expected = np.split(x_data.toarray(), 4, axis=0)
self.assertEqual(len(res), len(expected))
[np.testing.assert_equal(r.toarray(), e) for r, e in zip(res, expected)]
def testRollExecution(self):
x = arange(10, chunks=2)
t = roll(x, 2)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.roll(np.arange(10), 2)
np.testing.assert_equal(res, expected)
x2 = x.reshape(2, 5)
t = roll(x2, 1)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.roll(np.arange(10).reshape(2, 5), 1)
np.testing.assert_equal(res, expected)
t = roll(x2, 1, axis=0)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.roll(np.arange(10).reshape(2, 5), 1, axis=0)
np.testing.assert_equal(res, expected)
t = roll(x2, 1, axis=1)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.roll(np.arange(10).reshape(2, 5), 1, axis=1)
np.testing.assert_equal(res, expected)
def testSqueezeExecution(self):
data = np.array([[[0], [1], [2]]])
x = tensor(data, chunks=1)
t = squeeze(x)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.squeeze(data)
np.testing.assert_equal(res, expected)
t = squeeze(x, axis=2)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.squeeze(data, axis=2)
np.testing.assert_equal(res, expected)
def testPtpExecution(self):
x = arange(4, chunks=1).reshape(2, 2)
t = ptp(x, axis=0)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.ptp(np.arange(4).reshape(2, 2), axis=0)
np.testing.assert_equal(res, expected)
t = ptp(x, axis=1)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.ptp(np.arange(4).reshape(2, 2), axis=1)
np.testing.assert_equal(res, expected)
t = ptp(x)
res = self.executor.execute_tensor(t)[0]
expected = np.ptp(np.arange(4).reshape(2, 2))
np.testing.assert_equal(res, expected)
def testDiffExecution(self):
data = np.array([1, 2, 4, 7, 0])
x = tensor(data, chunks=2)
t = diff(x)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.diff(data)
np.testing.assert_equal(res, expected)
t = diff(x, n=2)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.diff(data, n=2)
np.testing.assert_equal(res, expected)
data = np.array([[1, 3, 6, 10], [0, 5, 6, 8]])
x = tensor(data, chunks=2)
t = diff(x)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.diff(data)
np.testing.assert_equal(res, expected)
t = diff(x, axis=0)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.diff(data, axis=0)
np.testing.assert_equal(res, expected)
x = mt.arange('1066-10-13', '1066-10-16', dtype=mt.datetime64)
t = diff(x)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.diff(np.arange('1066-10-13', '1066-10-16', dtype=np.datetime64))
np.testing.assert_equal(res, expected)
def testEdiff1d(self):
data = np.array([1, 2, 4, 7, 0])
x = tensor(data, chunks=2)
t = ediff1d(x)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.ediff1d(data)
np.testing.assert_equal(res, expected)
to_begin = tensor(-99, chunks=2)
to_end = tensor([88, 99], chunks=2)
t = ediff1d(x, to_begin=to_begin, to_end=to_end)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.ediff1d(data, to_begin=-99, to_end=np.array([88, 99]))
np.testing.assert_equal(res, expected)
data = [[1, 2, 4], [1, 6, 24]]
t = ediff1d(tensor(data, chunks=2))
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.ediff1d(data)
np.testing.assert_equal(res, expected)
def testDigitizeExecution(self):
data = np.array([0.2, 6.4, 3.0, 1.6])
x = tensor(data, chunks=2)
bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0])
inds = digitize(x, bins)
res = self.executor.execute_tensor(inds, concat=True)[0]
expected = np.digitize(data, bins)
np.testing.assert_equal(res, expected)
b = tensor(bins, chunks=2)
inds = digitize(x, b)
res = self.executor.execute_tensor(inds, concat=True)[0]
expected = np.digitize(data, bins)
np.testing.assert_equal(res, expected)
data = np.array([1.2, 10.0, 12.4, 15.5, 20.])
x = tensor(data, chunks=2)
bins = np.array([0, 5, 10, 15, 20])
inds = digitize(x, bins, right=True)
res = self.executor.execute_tensor(inds, concat=True)[0]
expected = np.digitize(data, bins, right=True)
np.testing.assert_equal(res, expected)
inds = digitize(x, bins, right=False)
res = self.executor.execute_tensor(inds, concat=True)[0]
expected = np.digitize(data, bins, right=False)
np.testing.assert_equal(res, expected)
data = sps.random(10, 1, density=.1) * 12
x = tensor(data, chunks=2)
bins = np.array([1.0, 2.0, 2.5, 4.0, 10.0])
inds = digitize(x, bins)
res = self.executor.execute_tensor(inds, concat=True)[0]
expected = np.digitize(data.toarray(), bins, right=False)
np.testing.assert_equal(res.toarray(), expected)
def testAverageExecution(self):
data = arange(1, 5, chunks=1)
t = average(data)
res = self.executor.execute_tensor(t)[0]
expected = np.average(np.arange(1, 5))
self.assertEqual(res, expected)
t = average(arange(1, 11, chunks=2), weights=arange(10, 0, -1, chunks=2))
res = self.executor.execute_tensor(t)[0]
expected = np.average(range(1, 11), weights=range(10, 0, -1))
self.assertEqual(res, expected)
data = arange(6, chunks=2).reshape((3, 2))
t = average(data, axis=1, weights=tensor([1./4, 3./4], chunks=2))
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.average(np.arange(6).reshape(3, 2), axis=1, weights=(1./4, 3./4))
np.testing.assert_equal(res, expected)
with self.assertRaises(TypeError):
average(data, weights=tensor([1./4, 3./4], chunks=2))
def testCovExecution(self):
data = np.array([[0, 2], [1, 1], [2, 0]]).T
x = tensor(data, chunks=1)
t = cov(x)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.cov(data)
np.testing.assert_equal(res, expected)
data_x = [-2.1, -1, 4.3]
data_y = [3, 1.1, 0.12]
x = tensor(data_x, chunks=1)
y = tensor(data_y, chunks=1)
X = stack((x, y), axis=0)
t = cov(x, y)
r = tall(t == cov(X))
self.assertTrue(self.executor.execute_tensor(r)[0])
def testCorrcoefExecution(self):
data_x = [-2.1, -1, 4.3]
data_y = [3, 1.1, 0.12]
x = tensor(data_x, chunks=1)
y = tensor(data_y, chunks=1)
t = corrcoef(x, y)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.corrcoef(data_x, data_y)
np.testing.assert_equal(res, expected)
def testFlipExecution(self):
a = arange(8, chunks=2).reshape((2, 2, 2))
t = flip(a, 0)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.flip(np.arange(8).reshape(2, 2, 2), 0)
np.testing.assert_equal(res, expected)
t = flip(a, 1)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.flip(np.arange(8).reshape(2, 2, 2), 1)
np.testing.assert_equal(res, expected)
t = flipud(a)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.flipud(np.arange(8).reshape(2, 2, 2))
np.testing.assert_equal(res, expected)
t = fliplr(a)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.fliplr(np.arange(8).reshape(2, 2, 2))
np.testing.assert_equal(res, expected)
def testRepeatExecution(self):
a = repeat(3, 4)
res = self.executor.execute_tensor(a)[0]
expected = np.repeat(3, 4)
np.testing.assert_equal(res, expected)
x_data = np.random.randn(20, 30)
x = tensor(x_data, chunks=(3, 4))
t = repeat(x, 2)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.repeat(x_data, 2)
np.testing.assert_equal(res, expected)
t = repeat(x, 3, axis=1)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.repeat(x_data, 3, axis=1)
np.testing.assert_equal(res, expected)
t = repeat(x, np.arange(20), axis=0)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.repeat(x_data, np.arange(20), axis=0)
np.testing.assert_equal(res, expected)
t = repeat(x, arange(20, chunks=5), axis=0)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.repeat(x_data, np.arange(20), axis=0)
np.testing.assert_equal(res, expected)
x_data = sps.random(20, 30, density=.1)
x = tensor(x_data, chunks=(3, 4))
t = repeat(x, 2, axis=1)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.repeat(x_data.toarray(), 2, axis=1)
np.testing.assert_equal(res.toarray(), expected)
def testTileExecution(self):
a_data = np.array([0, 1, 2])
a = tensor(a_data, chunks=2)
t = tile(a, 2)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.tile(a_data, 2)
np.testing.assert_equal(res, expected)
t = tile(a, (2, 2))
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.tile(a_data, (2, 2))
np.testing.assert_equal(res, expected)
t = tile(a, (2, 1, 2))
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.tile(a_data, (2, 1, 2))
np.testing.assert_equal(res, expected)
b_data = np.array([[1, 2], [3, 4]])
b = tensor(b_data, chunks=1)
t = tile(b, 2)
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.tile(b_data, 2)
np.testing.assert_equal(res, expected)
t = tile(b, (2, 1))
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.tile(b_data, (2, 1))
np.testing.assert_equal(res, expected)
c_data = np.array([1, 2, 3, 4])
c = tensor(c_data, chunks=3)
t = tile(c, (4, 1))
res = self.executor.execute_tensor(t, concat=True)[0]
expected = np.tile(c_data, (4, 1))
np.testing.assert_equal(res, expected)
def testIsInExecution(self):
element = 2 * arange(4, chunks=1).reshape((2, 2))
test_elements = [1, 2, 4, 8]
mask = isin(element, test_elements)
res = self.executor.execute_tensor(mask, concat=True)[0]
expected = np.isin(2 * np.arange(4).reshape((2, 2)), test_elements)
np.testing.assert_equal(res, expected)
res = self.executor.execute_tensor(element[mask], concat=True)[0]
expected = np.array([2, 4])
np.testing.assert_equal(res, expected)
mask = isin(element, test_elements, invert=True)
res = self.executor.execute_tensor(mask, concat=True)[0]
expected = np.isin(2 * np.arange(4).reshape((2, 2)), test_elements, invert=True)
np.testing.assert_equal(res, expected)
res = self.executor.execute_tensor(element[mask], concat=True)[0]
expected = np.array([0, 6])
np.testing.assert_equal(res, expected)
test_set = {1, 2, 4, 8}
mask = isin(element, test_set)
res = self.executor.execute_tensor(mask, concat=True)[0]
expected = np.isin(2 * np.arange(4).reshape((2, 2)), test_set)
np.testing.assert_equal(res, expected)
| 34.132895 | 110 | 0.596623 | 3,879 | 25,941 | 3.909255 | 0.068317 | 0.072804 | 0.11402 | 0.150026 | 0.761343 | 0.739976 | 0.710367 | 0.672118 | 0.615273 | 0.577025 | 0 | 0.050625 | 0.247677 | 25,941 | 759 | 111 | 34.177866 | 0.726378 | 0.024402 | 0 | 0.428571 | 0 | 0 | 0.002175 | 0 | 0 | 0 | 0 | 0 | 0.223092 | 1 | 0.062622 | false | 0 | 0.017613 | 0 | 0.082192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
78df11b8ab67a00fef993f03b911ed0dd7fc3180 | 707 | py | Python | src/python_minifier/transforms/remove_pass.py | donno2048/python-minifier | 9a9ff4dd5d2bb8dc666cae5939c125d420c2ffd5 | [
"MIT"
] | null | null | null | src/python_minifier/transforms/remove_pass.py | donno2048/python-minifier | 9a9ff4dd5d2bb8dc666cae5939c125d420c2ffd5 | [
"MIT"
] | null | null | null | src/python_minifier/transforms/remove_pass.py | donno2048/python-minifier | 9a9ff4dd5d2bb8dc666cae5939c125d420c2ffd5 | [
"MIT"
] | null | null | null | import ast
from python_minifier.transforms.suite_transformer import SuiteTransformer


class RemovePass(SuiteTransformer):
"""
Remove Pass keywords from source
If a statement is syntactically necessary, use an empty expression instead
"""
def __call__(self, node):
return self.visit(node)
def suite(self, node_list, parent):
without_pass = [self.visit(a) for a in filter(lambda n: not self.is_node(n, ast.Pass), node_list)]
if len(without_pass) == 0:
if isinstance(parent, ast.Module):
return []
else:
return [self.add_child(ast.Expr(value=ast.Num(0)), parent=parent)]
return without_pass
| 27.192308 | 106 | 0.649222 | 90 | 707 | 4.955556 | 0.566667 | 0.073991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00381 | 0.257426 | 707 | 25 | 107 | 28.28 | 0.845714 | 0.152758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0.307692 | 0.153846 | 0.076923 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
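The RemovePass transform above replaces a syntactically required `pass` with an empty expression. A minimal stdlib-only sketch of the same idea, using `ast` directly rather than the real python_minifier helpers (an assumed simplification: the real transform visits every suite in the tree):

```python
import ast

# Toy input: a function whose body is nothing but `pass`.
source = "def f():\n    pass\n"
tree = ast.parse(source)

func = tree.body[0]
# Drop `pass` statements from the function body.
func.body = [stmt for stmt in func.body if not isinstance(stmt, ast.Pass)]
# A suite cannot be empty, so substitute a harmless constant expression.
if not func.body:
    func.body = [ast.Expr(value=ast.Constant(0))]

ast.fix_missing_locations(tree)
print(ast.unparse(tree))
```

`ast.unparse` (Python 3.9+) emits `def f():` followed by the bare `0` expression, which parses back to an equivalent, shorter module.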
78e0a22b8b4b6603603bcdb8feefa51265cf9c14 | 345 | py | Python | src/backend/common/models/favorite.py | ofekashery/the-blue-alliance | df0e47d054161fe742ac6198a6684247d0713279 | [
"MIT"
] | 266 | 2015-01-04T00:10:48.000Z | 2022-03-28T18:42:05.000Z | src/backend/common/models/favorite.py | ofekashery/the-blue-alliance | df0e47d054161fe742ac6198a6684247d0713279 | [
"MIT"
] | 2,673 | 2015-01-01T20:14:33.000Z | 2022-03-31T18:17:16.000Z | src/backend/common/models/favorite.py | ofekashery/the-blue-alliance | df0e47d054161fe742ac6198a6684247d0713279 | [
"MIT"
] | 230 | 2015-01-04T00:10:48.000Z | 2022-03-26T18:12:04.000Z | from backend.common.models.mytba import MyTBAModel


class Favorite(MyTBAModel):
"""
In order to make strongly consistent DB requests, instances of this class
should be created with a parent that is the associated Account key.
"""
def __init__(self, *args, **kwargs):
super(Favorite, self).__init__(*args, **kwargs)
| 28.75 | 77 | 0.704348 | 45 | 345 | 5.222222 | 0.844444 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202899 | 345 | 11 | 78 | 31.363636 | 0.854545 | 0.408696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
78e3d8480adc030df86059c4a34f7c8aad96d287 | 306 | py | Python | day1/loops.py | alqmy/The-Garage-Summer-Of-Code | af310d5e5194a62962db2fc1e601099468251efa | [
"MIT"
] | null | null | null | day1/loops.py | alqmy/The-Garage-Summer-Of-Code | af310d5e5194a62962db2fc1e601099468251efa | [
"MIT"
] | null | null | null | day1/loops.py | alqmy/The-Garage-Summer-Of-Code | af310d5e5194a62962db2fc1e601099468251efa | [
"MIT"
] | null | null | null | # while True:
# # execute this
# print("Hola")
real = 7
print("Entre un numero entre el 1 y el 10")
guess = int(input())
# != means "not equal to"
while guess != real:
print("Ese no es el numero")
print("Entre un numero entre el 1 y el 10")
guess = int(input())
# the rest
print("Yay! Lo sacastes!")
| 16.105263 | 47 | 0.591503 | 49 | 306 | 3.693878 | 0.510204 | 0.110497 | 0.132597 | 0.198895 | 0.486188 | 0.486188 | 0.486188 | 0.486188 | 0.486188 | 0.486188 | 0 | 0.030837 | 0.25817 | 306 | 18 | 48 | 17 | 0.76652 | 0.199346 | 0 | 0.5 | 0 | 0 | 0.436975 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
78f362e6e499abd6ba76d1b520e7369bf25061c9 | 257 | py | Python | retrieval/urls.py | aipassio/visual_retrieval | ce8dae2ad517a9edb5e278163dd6d0f7ffc1b5f4 | [
"MIT"
] | null | null | null | retrieval/urls.py | aipassio/visual_retrieval | ce8dae2ad517a9edb5e278163dd6d0f7ffc1b5f4 | [
"MIT"
] | null | null | null | retrieval/urls.py | aipassio/visual_retrieval | ce8dae2ad517a9edb5e278163dd6d0f7ffc1b5f4 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
path('retrieval_insert', views.retrieval_insert, name='retrieval_insert'),
path('retrieval_get', views.retrieval_get, name='retrieval_get')
] | 28.555556 | 78 | 0.723735 | 32 | 257 | 5.625 | 0.375 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132296 | 257 | 9 | 79 | 28.555556 | 0.807175 | 0 | 0 | 0 | 0 | 0 | 0.244186 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
78f63355867462f1a454c939b07a72f40e12bd55 | 955 | py | Python | src/net/pluto_ftp.py | WardenAllen/Uranus | 0d20cac631320b558254992c17678ddd1658587b | [
"MIT"
] | null | null | null | src/net/pluto_ftp.py | WardenAllen/Uranus | 0d20cac631320b558254992c17678ddd1658587b | [
"MIT"
] | null | null | null | src/net/pluto_ftp.py | WardenAllen/Uranus | 0d20cac631320b558254992c17678ddd1658587b | [
"MIT"
] | null | null | null | # !/usr/bin/python
# -*- coding: utf-8 -*-
# @Time : 2020/9/18 12:02
# @Author : WardenAllen
# @File : pluto_ftp.py
# @Brief :
import paramiko


class PlutoFtp:
    # Holds paramiko's SFTPClient object once connected.
    __sftp = object

    def connect_by_pass(self, host, port, uname, pwd):
        transport = paramiko.Transport((host, port))
        transport.connect(username=uname, password=pwd)
        self.__sftp = paramiko.SFTPClient.from_transport(transport)

    def connect_by_key(self, host, port, uname, key_path, key_pass=''):
        key = paramiko.RSAKey.from_private_key_file(key_path, key_pass)
        transport = paramiko.Transport((host, port))
        transport.connect(username=uname, pkey=key)
        self.__sftp = paramiko.SFTPClient.from_transport(transport)

    def get(self, remote, local, cb=None):
        self.__sftp.get(remote, local, cb)

    def put(self, local, remote, cb=None):
self.__sftp.put(local, remote, cb) | 31.833333 | 73 | 0.655497 | 123 | 955 | 4.894309 | 0.414634 | 0.053156 | 0.039867 | 0.056478 | 0.378738 | 0.378738 | 0.378738 | 0.378738 | 0.209302 | 0 | 0 | 0.015979 | 0.213613 | 955 | 30 | 74 | 31.833333 | 0.785619 | 0.157068 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.0625 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
78ff50d0ef3b81ac606726766e87dc4af67964c3 | 480 | py | Python | test.py | KipCrossing/Micropython-AD9833 | c684f5a9543bc5b67dcbf357c50f4d8f4057b2bf | [
"MIT"
] | 11 | 2018-12-13T23:39:18.000Z | 2022-02-24T11:59:36.000Z | test.py | KipCrossing/Micropython-AD9833 | c684f5a9543bc5b67dcbf357c50f4d8f4057b2bf | [
"MIT"
] | 1 | 2019-12-02T20:54:05.000Z | 2019-12-04T00:34:25.000Z | test.py | KipCrossing/Micropython-AD9833 | c684f5a9543bc5b67dcbf357c50f4d8f4057b2bf | [
"MIT"
] | 2 | 2019-05-03T10:58:36.000Z | 2020-02-20T10:21:43.000Z | from ad9833 import AD9833
# Dummy classes for testing without the board
class SBI(object):
    def __init__(self):
        pass

    def send(self, data):
        print(data)


class Pin(object):
    def __init__(self):
        pass

    def low(self):
        print(" 0")

    def high(self):
        print(" 1")


# Code
SBI1 = SBI()
PIN3 = Pin()

wave = AD9833(SBI1, PIN3)
wave.set_freq(14500)
wave.set_type(2)
wave.send()
print(wave.shape_type)
# --- util/infoclient/test_infoclient.py (cdla/murfi2, Apache-2.0) ---
from infoclientLib import InfoClient
ic = InfoClient('localhost', 15002, 'localhost', 15003)
ic.add('roi-weightedave', 'active')
ic.start()
# --- python/10.Authentication-&-API-Keys.py (17nikhil/codecademy, Apache-2.0) ---
# Authentication & API Keys
# Many APIs require an API key. Just as a real-world key allows you to access something, an API key grants you access to a particular API. Moreover, an API key identifies you to the API, which helps the API provider keep track of how their service is used and prevent unauthorized or malicious activity.
#
# Some APIs require authentication using a protocol called OAuth. We won't get into the details, but if you've ever been redirected to a page asking for permission to link an application with your account, you've probably used OAuth.
#
# API keys are often long alphanumeric strings. We've made one up in the editor to the right! (It won't actually work on anything, but when you receive your own API keys in future projects, they'll look a lot like this.)
api_key = "string"
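A sketch of how a key like this is typically supplied with a request, here as an `Authorization` header on a stdlib `Request` object. The endpoint URL is made up, no request is actually sent, and real APIs document whether the key goes in a header, a query parameter, or elsewhere:

```python
from urllib.request import Request

api_key = "string"  # placeholder key, as above

# Attach the key to an (unsent) request; many APIs accept a
# bearer-token style Authorization header like this.
req = Request("https://api.example.com/v1/resource")
req.add_header("Authorization", "Bearer " + api_key)

print(req.get_header("Authorization"))  # Bearer string
```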
# --- quadpy/triangle/cools_haegemans.py (melvyniandrag/quadpy, MIT) ---
# -*- coding: utf-8 -*-
#
from mpmath import mp
from .helpers import untangle2
class CoolsHaegemans(object):
"""
R. Cools, A. Haegemans,
Construction of minimal cubature formulae for the square and the triangle
using invariant theory,
Department of Computer Science, K.U.Leuven,
TW Reports vol:TW96, Sept. 1987,
<https://lirias.kuleuven.be/handle/123456789/131869>.
"""
def __init__(self, index, mpmath=False):
self.name = "CoolsHaegemans({})".format(index)
assert index == 1
self.degree = 8
flt = mp.mpf if mpmath else float
mp.dps = 20
data = {
"rot": [
[
flt("0.16058343856681218798E-09"),
flt("0.34579201116826902882E+00"),
flt("0.36231682215692616667E+01"),
],
[
flt("0.26530624434780379347E-01"),
flt("0.65101993458939166328E-01"),
flt("0.87016510156356306078E+00"),
],
[
flt("0.29285717640155892159E-01"),
flt("0.65177530364879570754E+00"),
flt("0.31347788752373300717E+00"),
],
[
flt("0.43909556791220782402E-01"),
flt("0.31325121067172530696E+00"),
flt("0.63062143431895614010E+00"),
],
[
flt("0.66940767639916174192E-01"),
flt("0.51334692063945414949E+00"),
flt("0.28104124731511039057E+00"),
],
]
}
        # elif index == 2:
        #     self.degree = 10
        #     data = [
        #         (0.15319130036758557631E-06, _r3(+0.58469201683584513031E-01, -0.54887778772527519316E+00)),
        #         (0.13260526227928785221E-01, _r3(0.50849285064031410705E-01, 0.90799059794957813439E+00)),
        #         (0.15646439344539042136E-01, _r3(0.51586732419949574487E+00, 0.46312452842927062902E+00)),
        #         (0.21704258224807323311E-01, _r3(0.24311033191739048230E+00, 0.72180595182371959467E-00)),
        #         (0.21797613600129922367E-01, _r3(0.75397765920922660134E-00, 0.20647569839132397633E+00)),
        #         (0.38587913508193459468E-01, _r3(0.42209207910846960294E-00, 0.12689533413411127327E+00)),
        #         (0.39699584282594413022E-01, _r3(0.19823878346663354068E+00, 0.62124412566393319745E+00)),
        #         (0.47910534861520060665E-01, numpy.array([[1.0/3.0, 1.0/3.0, 1.0/3.0]])),
        #     ]
self.bary, self.weights = untangle2(data)
self.points = self.bary[:, 1:]
self.weights *= 2
return
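A rule like the one built above is consumed as barycentric points plus weights. A minimal sketch of how such a cubature rule is applied — using a hypothetical one-point centroid rule rather than the Cools-Haegemans data, and with the weights normalized to sum to 1 (the class above scales its weights to sum to 2 instead):

```python
# Hypothetical degree-1 centroid rule: integral ~ area * sum(w_i * f(p_i)).
bary = [(1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0)]  # barycentric coordinates
weights = [1.0]

corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # unit right triangle

def to_cartesian(b):
    # Barycentric coordinates are convex weights on the triangle corners.
    x = sum(bi * ci[0] for bi, ci in zip(b, corners))
    y = sum(bi * ci[1] for bi, ci in zip(b, corners))
    return (x, y)

def f(p):
    return p[0] + p[1]  # linear integrand, integrated exactly by a degree-1 rule

area = 0.5
approx = area * sum(w * f(to_cartesian(b)) for w, b in zip(weights, bary))
# The exact value of the integral of (x + y) over this triangle is 1/3.
```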
# --- test/test_airfoil.py (chabotsi/pygmsh, MIT) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
import numpy
import pygmsh
from helpers import compute_volume
def test():
# Airfoil coordinates
airfoil_coordinates = numpy.array([
[1.000000, 0.000000, 0.0],
[0.999023, 0.000209, 0.0],
[0.996095, 0.000832, 0.0],
[0.991228, 0.001863, 0.0],
[0.984438, 0.003289, 0.0],
[0.975752, 0.005092, 0.0],
[0.965201, 0.007252, 0.0],
[0.952825, 0.009744, 0.0],
[0.938669, 0.012538, 0.0],
[0.922788, 0.015605, 0.0],
[0.905240, 0.018910, 0.0],
[0.886092, 0.022419, 0.0],
[0.865417, 0.026096, 0.0],
[0.843294, 0.029903, 0.0],
[0.819807, 0.033804, 0.0],
[0.795047, 0.037760, 0.0],
[0.769109, 0.041734, 0.0],
[0.742094, 0.045689, 0.0],
[0.714107, 0.049588, 0.0],
[0.685258, 0.053394, 0.0],
[0.655659, 0.057071, 0.0],
[0.625426, 0.060584, 0.0],
[0.594680, 0.063897, 0.0],
[0.563542, 0.066977, 0.0],
[0.532136, 0.069789, 0.0],
[0.500587, 0.072303, 0.0],
[0.469022, 0.074486, 0.0],
[0.437567, 0.076312, 0.0],
[0.406350, 0.077752, 0.0],
[0.375297, 0.078743, 0.0],
[0.344680, 0.079180, 0.0],
[0.314678, 0.079051, 0.0],
[0.285418, 0.078355, 0.0],
[0.257025, 0.077096, 0.0],
[0.229618, 0.075287, 0.0],
[0.203313, 0.072945, 0.0],
[0.178222, 0.070096, 0.0],
[0.154449, 0.066770, 0.0],
[0.132094, 0.063005, 0.0],
[0.111248, 0.058842, 0.0],
[0.091996, 0.054325, 0.0],
[0.074415, 0.049504, 0.0],
[0.058573, 0.044427, 0.0],
[0.044532, 0.039144, 0.0],
[0.032343, 0.033704, 0.0],
[0.022051, 0.028152, 0.0],
[0.013692, 0.022531, 0.0],
[0.007292, 0.016878, 0.0],
[0.002870, 0.011224, 0.0],
[0.000439, 0.005592, 0.0],
[0.000000, 0.000000, 0.0],
[0.001535, -0.005395, 0.0],
[0.005015, -0.010439, 0.0],
[0.010421, -0.015126, 0.0],
[0.017725, -0.019451, 0.0],
[0.026892, -0.023408, 0.0],
[0.037880, -0.026990, 0.0],
[0.050641, -0.030193, 0.0],
[0.065120, -0.033014, 0.0],
[0.081257, -0.035451, 0.0],
[0.098987, -0.037507, 0.0],
[0.118239, -0.039185, 0.0],
[0.138937, -0.040493, 0.0],
[0.161004, -0.041444, 0.0],
[0.184354, -0.042054, 0.0],
[0.208902, -0.042343, 0.0],
[0.234555, -0.042335, 0.0],
[0.261221, -0.042058, 0.0],
[0.288802, -0.041541, 0.0],
[0.317197, -0.040817, 0.0],
[0.346303, -0.039923, 0.0],
[0.376013, -0.038892, 0.0],
[0.406269, -0.037757, 0.0],
[0.437099, -0.036467, 0.0],
[0.468187, -0.035009, 0.0],
[0.499413, -0.033414, 0.0],
[0.530654, -0.031708, 0.0],
[0.561791, -0.029917, 0.0],
[0.592701, -0.028066, 0.0],
[0.623264, -0.026176, 0.0],
[0.653358, -0.024269, 0.0],
[0.682867, -0.022360, 0.0],
[0.711672, -0.020466, 0.0],
[0.739659, -0.018600, 0.0],
[0.766718, -0.016774, 0.0],
[0.792738, -0.014999, 0.0],
[0.817617, -0.013284, 0.0],
[0.841253, -0.011637, 0.0],
[0.863551, -0.010068, 0.0],
[0.884421, -0.008583, 0.0],
[0.903777, -0.007191, 0.0],
[0.921540, -0.005900, 0.0],
[0.937637, -0.004717, 0.0],
[0.952002, -0.003650, 0.0],
[0.964576, -0.002708, 0.0],
[0.975305, -0.001896, 0.0],
[0.984145, -0.001222, 0.0],
[0.991060, -0.000691, 0.0],
[0.996020, -0.000308, 0.0],
[0.999004, -0.000077, 0.0]
])
# Scale airfoil to input coord
coord = 1.0
airfoil_coordinates *= coord
# Instantiate geometry object
geom = pygmsh.built_in.Geometry()
# Create polygon for airfoil
char_length = 1.0e-1
airfoil = geom.add_polygon(
airfoil_coordinates,
char_length,
make_surface=False
)
# Create surface for numerical domain with an airfoil-shaped hole
left_dist = 1.0
right_dist = 3.0
top_dist = 1.0
bottom_dist = 1.0
xmin = airfoil_coordinates[:, 0].min() - left_dist*coord
xmax = airfoil_coordinates[:, 0].max() + right_dist*coord
ymin = airfoil_coordinates[:, 1].min() - bottom_dist*coord
ymax = airfoil_coordinates[:, 1].max() + top_dist*coord
domainCoordinates = numpy.array([
[xmin, ymin, 0.0],
[xmax, ymin, 0.0],
[xmax, ymax, 0.0],
[xmin, ymax, 0.0],
])
polygon = geom.add_polygon(
domainCoordinates,
char_length,
holes=[airfoil]
)
geom.add_raw_code('Recombine Surface {%s};' % polygon.surface.id)
ref = 10.525891646546
points, cells, _, _, _ = pygmsh.generate_mesh(geom)
assert abs(compute_volume(points, cells) - ref) < 1.0e-2 * ref
return points, cells
if __name__ == '__main__':
import meshio
meshio.write('airfoil.vtu', *test())
# --- sppas/sppas/src/anndata/aio/__init__.py (mirfan899/MTTS, MIT) ---
# -*- coding: UTF-8 -*-
"""
..
---------------------------------------------------------------------
___ __ __ __ ___
/ | \ | \ | \ / the automatic
\__ |__/ |__/ |___| \__ annotation and
\ | | | | \ analysis
___/ | | | | ___/ of speech
http://www.sppas.org/
Use of this software is governed by the GNU Public License, version 3.
SPPAS is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
SPPAS is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with SPPAS. If not, see <http://www.gnu.org/licenses/>.
This banner notice must not be removed.
---------------------------------------------------------------------
anndata.aio
~~~~~~~~~~~
Readers and writers of annotated data.
:author: Brigitte Bigi
:organization: Laboratoire Parole et Langage, Aix-en-Provence, France
:contact: develop@sppas.org
:license: GPL, v3
:copyright: Copyright (C) 2011-2018 Brigitte Bigi
"""
from .annotationpro import sppasANT
from .annotationpro import sppasANTX
from .anvil import sppasAnvil
from .audacity import sppasAudacity
from .elan import sppasEAF
from .htk import sppasLab
from .phonedit import sppasMRK
from .phonedit import sppasSignaix
from .praat import sppasTextGrid
from .praat import sppasIntensityTier
from .praat import sppasPitchTier
from .sclite import sppasCTM
from .sclite import sppasSTM
from .subtitle import sppasSubRip
from .subtitle import sppasSubViewer
from .text import sppasRawText
from .text import sppasCSV
from .weka import sppasARFF
from .weka import sppasXRFF
from .xtrans import sppasTDF
from .xra import sppasXRA
# ----------------------------------------------------------------------------
# Variables
# ----------------------------------------------------------------------------
# TODO: get extension from the "default_extension" member of each class
ext_sppas = ['.xra', '.[Xx][Rr][Aa]']
ext_praat = ['.TextGrid', '.PitchTier', '.[Tt][eE][xX][tT][Gg][Rr][Ii][dD]', '.[Pp][Ii][tT][cC][hH][Tt][Ii][Ee][rR]']
ext_transcriber = ['.trs', '.[tT][rR][sS]']
ext_elan = ['.eaf', '.[eE][aA][fF]']
ext_ascii = ['.txt', '.csv', '.[cC][sS][vV]', '.[tT][xX][Tt]', '.info']
ext_phonedit = ['.mrk', '.[mM][rR][kK]']
ext_signaix = ['.hz', '.[Hh][zZ]']
ext_sclite = ['.stm', '.ctm', '.[sScC][tT][mM]']
ext_htk = ['.lab', '.mlf']
ext_subtitles = ['.sub', '.srt', '.[sS][uU][bB]', '.[sS][rR][tT]']
ext_anvil = ['.anvil', '.[aA][nN][vV][iI][lL]']
ext_annotationpro = ['.antx', '.[aA][nN][tT][xX]']
ext_xtrans = ['.tdf', '.[tT][dD][fF]']
ext_audacity = ['.aup']
ext_weka = ['.arff', '.xrff']
primary_in = ['.hz', '.PitchTier']
annotations_in = ['.xra', '.TextGrid', '.eaf', '.csv', '.mrk', '.txt', '.stm', '.ctm', '.lab', '.mlf', '.sub', '.srt', '.antx', '.anvil', '.aup', '.trs', '.tdf']
extensions = ['.xra', '.textgrid', '.pitchtier', '.hz', '.eaf', '.trs', '.csv', '.mrk', '.txt', '.stm', '.ctm', '.lab', '.mlf', '.sub', '.srt', '.anvil', '.antx', '.tdf', '.arff', '.xrff']
extensionsul = ext_sppas + ext_praat + ext_transcriber + ext_elan + ext_ascii + ext_phonedit + ext_signaix + ext_sclite + ext_htk + ext_subtitles + ext_anvil + ext_annotationpro + ext_xtrans + ext_audacity + ext_weka
extensions_in = primary_in + annotations_in
extensions_out = ['.xra', '.TextGrid', '.eaf', '.csv', '.mrk', '.txt', '.stm', '.ctm', '.lab', '.mlf', '.sub', '.srt', '.antx', '.arff', '.xrff']
extensions_out_multitiers = ['.xra', '.TextGrid', '.eaf', '.csv', '.mrk', '.antx', '.arff', '.xrff']
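The `[Xx][Rr][Aa]`-style entries are shell-glob character classes that make extension matching case-insensitive. A sketch of how such a suffix becomes a filename pattern with the stdlib's `fnmatch` — prepending `'*'` is this sketch's assumption about how the lists are consumed:

```python
import fnmatch

# Turn a suffix from the lists above into a full-filename glob pattern.
pattern = '*' + '.[Xx][Rr][Aa]'

print(fnmatch.fnmatch('sample.XRA', pattern))  # True
print(fnmatch.fnmatch('sample.xra', pattern))  # True
print(fnmatch.fnmatch('sample.eaf', pattern))  # False
```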
# ----------------------------------------------------------------------------
__all__ = (
"sppasANT",
"sppasANTX",
"sppasAnvil",
"sppasAudacity",
"sppasEAF",
"sppasLab",
"sppasMRK",
"sppasSignaix",
"sppasTextGrid",
"sppasIntensityTier",
"sppasPitchTier",
"sppasCTM",
"sppasSTM",
"sppasSubRip",
"sppasSubViewer",
"sppasRawText",
"sppasCSV",
"sppasARFF",
"sppasXRFF",
"sppasTDF",
"sppasXRA",
"extensions",
"extensions_in",
"extensions_out"
)
# --- models/__init__.py (dapengchen123/hfsoftmax, MIT) ---
from .resnet import *
from .hynet import *
from .classifier import Classifier, HFClassifier, HNSWClassifier
from .ext_layers import ParameterClient
samplerClassifier = {
'hf': HFClassifier,
'hnsw': HNSWClassifier,
}
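`samplerClassifier` is a lookup-table dispatch: pick a classifier class by sampler name, then instantiate it. A sketch with a made-up stand-in class so it runs without the repo — `FakeHF` and its `num_classes` argument are hypothetical; the real `HFClassifier`/`HNSWClassifier` signatures live in `.classifier`:

```python
class FakeHF(object):
    """Stand-in for HFClassifier so the dispatch sketch is self-contained."""
    def __init__(self, num_classes):
        self.num_classes = num_classes

# Same shape as samplerClassifier: string key -> class, instantiated on use.
table = {'hf': FakeHF}
clf = table['hf'](num_classes=10)

print(type(clf).__name__, clf.num_classes)  # FakeHF 10
```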