hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dc27e9112b2e7f6835e04b3442e472d51ccba89e | 685 | py | Python | Sprint1Lecture/Module2/demo1_retrievesElement.py | marianvinas/CS_Notes | b43010dda5617336d7295d08f66fa24dbf786144 | [
"MIT"
] | null | null | null | Sprint1Lecture/Module2/demo1_retrievesElement.py | marianvinas/CS_Notes | b43010dda5617336d7295d08f66fa24dbf786144 | [
"MIT"
] | null | null | null | Sprint1Lecture/Module2/demo1_retrievesElement.py | marianvinas/CS_Notes | b43010dda5617336d7295d08f66fa24dbf786144 | [
"MIT"
] | null | null | null | """
Challenge #1:
Write a function that retrieves the last n elements from a list.
Examples:
- last([1, 2, 3, 4, 5], 1) ➞ [5]
- last([4, 3, 9, 9, 7, 6], 3) ➞ [9, 7, 6]
- last([1, 2, 3, 4, 5], 7) ➞ "invalid"
- last([1, 2, 3, 4, 5], 0) ➞ []
Notes:
- Return "invalid" if n exceeds the length of the list.
- Return an empty list if n == 0.
"""
def last(arr, n):
    # Your code here
    if n > len(arr):
        return 'invalid'
    elif n == 0:
        return []
    # main solution
    return arr[len(arr) - n:]
print(last([1, 2, 3, 4, 5], 1)) #[5]
print(last([4, 3, 9, 9, 7, 6], 3)) #[9, 7, 6]
print(last([1, 2, 3, 4, 5], 7)) #invalid
print(last([1, 2, 3, 4, 5], 0)) #empty [] | 22.833333 | 64 | 0.512409 | 134 | 685 | 2.649254 | 0.313433 | 0.084507 | 0.101408 | 0.11831 | 0.273239 | 0.273239 | 0.273239 | 0.061972 | 0 | 0 | 0 | 0.121272 | 0.265693 | 685 | 30 | 65 | 22.833333 | 0.576541 | 0.567883 | 0 | 0 | 0 | 0 | 0.024648 | 0 | 0 | 0 | 0 | 0.033333 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.4 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dc3087b1cd6e043ab247cf9a5e6b80511711cc17 | 1,297 | py | Python | cwa_qr/poster.py | MaZderMind/cwa-qr | 60799315f1508483025e1e57000a5116dea78449 | [
"MIT"
] | 16 | 2021-04-22T07:12:18.000Z | 2022-02-07T04:54:54.000Z | cwa_qr/poster.py | MaZderMind/cwa-qr | 60799315f1508483025e1e57000a5116dea78449 | [
"MIT"
] | 10 | 2021-04-22T15:33:23.000Z | 2022-03-06T10:54:07.000Z | cwa_qr/poster.py | MaZderMind/cwa-qr | 60799315f1508483025e1e57000a5116dea78449 | [
"MIT"
] | 7 | 2021-04-22T12:37:20.000Z | 2021-08-09T05:47:54.000Z | import io
import os
from svgutils import transform as svg_utils
import qrcode.image.svg
from cwa_qr import generate_qr_code, CwaEventDescription
class CwaPoster(object):
    POSTER_PORTRAIT = 'portrait'
    POSTER_LANDSCAPE = 'landscape'
    TRANSLATIONS = {
        POSTER_PORTRAIT: {
            'file': 'poster/portrait.svg',
            'x': 80,
            'y': 60,
            'scale': 6
        },
        POSTER_LANDSCAPE: {
            'file': 'poster/landscape.svg',
            'x': 42,
            'y': 120,
            'scale': 4.8
        }
    }
def generate_poster(event_description: CwaEventDescription, template: CwaPoster) -> svg_utils.SVGFigure:
    qr = generate_qr_code(event_description)
    svg = qr.make_image(image_factory=qrcode.image.svg.SvgPathImage)
    svg_bytes = io.BytesIO()
    svg.save(svg_bytes)
    poster = svg_utils.fromfile('{}/{}'.format(
        os.path.dirname(os.path.abspath(__file__)),
        CwaPoster.TRANSLATIONS[template]['file']
    ))
    overlay = svg_utils.fromstring(svg_bytes.getvalue().decode('UTF-8')).getroot()
    overlay.moveto(
        CwaPoster.TRANSLATIONS[template]['x'],
        CwaPoster.TRANSLATIONS[template]['y'],
        CwaPoster.TRANSLATIONS[template]['scale']
    )
    poster.append([overlay])
    return poster
| 27.020833 | 104 | 0.625289 | 139 | 1,297 | 5.654676 | 0.42446 | 0.040712 | 0.147583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01332 | 0.247494 | 1,297 | 47 | 105 | 27.595745 | 0.792008 | 0 | 0 | 0 | 1 | 0 | 0.07633 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0 | 0.128205 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dc353a0a9a2f3afa4d5df4e0a5dd29cb203037fa | 803 | py | Python | pageviews.py | priyankamandikal/arowf | 0b0226da6c1f0d06360c9243334977f885b70a4c | [
"Apache-2.0"
] | 7 | 2017-10-09T05:39:14.000Z | 2019-06-26T18:26:40.000Z | pageviews.py | priyankamandikal/minireview | 0b0226da6c1f0d06360c9243334977f885b70a4c | [
"Apache-2.0"
] | null | null | null | pageviews.py | priyankamandikal/minireview | 0b0226da6c1f0d06360c9243334977f885b70a4c | [
"Apache-2.0"
] | 3 | 2017-03-20T05:56:05.000Z | 2018-12-19T03:07:09.000Z | from datetime import date, datetime, timedelta
from traceback import format_exc
from requests import get
pageviews_url = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article'
def format_date(d):
    return datetime.strftime(d, '%Y%m%d%H')
def article_views(article, project='en.wikipedia', access='all-access', agent='all-agents', granularity='daily', start=None, end=None):
    endDate = date.today()
    startDate = endDate - timedelta(30)
    # use the granularity parameter instead of a hardcoded 'daily'
    url = '/'.join([pageviews_url, project, access, agent, article, granularity, format_date(startDate), format_date(endDate)])
    try:
        result = get(url).json()
        last30dayscount = 0
        for item in result['items']:
            last30dayscount += item['views']
        return last30dayscount
    except Exception:
        print('Error while fetching page views of ' + article)
        print(format_exc()) | 34.913043 | 135 | 0.737235 | 109 | 803 | 5.348624 | 0.568807 | 0.051458 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014265 | 0.127024 | 803 | 23 | 136 | 34.913043 | 0.817404 | 0 | 0 | 0 | 0 | 0 | 0.197761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.157895 | null | null | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dc37303245aa0b25a23b011f2a90851c1f3dd75f | 2,619 | py | Python | conftest.py | juju-solutions/kubeflow | b23fe95b8d239fd979f47784b51a8cb9284ccea4 | [
"Apache-2.0"
] | null | null | null | conftest.py | juju-solutions/kubeflow | b23fe95b8d239fd979f47784b51a8cb9284ccea4 | [
"Apache-2.0"
] | null | null | null | conftest.py | juju-solutions/kubeflow | b23fe95b8d239fd979f47784b51a8cb9284ccea4 | [
"Apache-2.0"
] | null | null | null | import argparse
import os
# Use a custom parser that lets us require a variable from one of CLI or environment variable,
# this way we can pass creds through CLI for local testing but via environment variables in CI
class EnvDefault(argparse.Action):
    """Argument parser that accepts input from CLI (preferred) or an environment variable
    If a value is not specified in the CLI argument, the content of the environment variable
    named `envvar` is used. If this environment variable is also empty, the parser will fail
    citing a missing required argument
    Note this Action does not accept the `required` and `default` kwargs and will set them itself
    as appropriate.
    Modified from https://stackoverflow.com/a/10551190/5394584
    """
    def __init__(self, option_strings, dest, envvar, **kwargs):
        # Determine the values for `required` and `default` based on whether defaults are available
        # from an environment variable
        if envvar:
            if envvar in os.environ:
                # An environment variable of this name exists, use that as a default
                default = os.environ[envvar]
                required = False
            else:
                # We have no default, require a value from the CLI
                required = True
                default = None
        else:
            raise ValueError(f"EnvDefault requires non-null envvar, got '{envvar}'")
        self.envvar = envvar
        super(EnvDefault, self).__init__(option_strings, dest, default=default, required=required, **kwargs)
    def __call__(self, parser, namespace, values, option_string):
        # Actually set values to the destination arg in the namespace
        setattr(namespace, self.dest, values)
def pytest_addoption(parser):
    parser.addoption("--proxy", action="store", help="Proxy to use")
    parser.addoption("--url", action="store", help="Kubeflow dashboard URL")
    parser.addoption("--headful", action="store_true", help="Juju model")
    username_envvar = "KUBEFLOW_AUTH_USERNAME"
    parser.addoption(
        "--username",
        action=EnvDefault,
        envvar=username_envvar,
        help=f"Dex username (email address). Required, but can be passed either through CLI or "
             f"via environment variable '{username_envvar}'",
    )
    password_envvar = "KUBEFLOW_AUTH_PASSWORD"
    parser.addoption(
        "--password",
        action=EnvDefault,
        envvar=password_envvar,
        help=f"Dex password. Required, but can be passed either through CLI or "
             f"via environment variable '{password_envvar}'"
    )
| 40.921875 | 108 | 0.670867 | 327 | 2,619 | 5.29052 | 0.409786 | 0.087861 | 0.036416 | 0.02659 | 0.072832 | 0.072832 | 0.072832 | 0.072832 | 0.072832 | 0.072832 | 0 | 0.007653 | 0.251623 | 2,619 | 63 | 109 | 41.571429 | 0.875 | 0.362734 | 0 | 0.162162 | 0 | 0 | 0.265356 | 0.027027 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0.162162 | 0.054054 | 0 | 0.162162 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
dc3da5a24d4fd4a5555347785b65914a5905c48f | 728 | py | Python | common/__init__.py | timmartin19/pycon-ripozo-tutorial | d6f68d0b7c8c8aacb090014c5ff1f34b21ded017 | [
"MIT"
] | null | null | null | common/__init__.py | timmartin19/pycon-ripozo-tutorial | d6f68d0b7c8c8aacb090014c5ff1f34b21ded017 | [
"MIT"
] | null | null | null | common/__init__.py | timmartin19/pycon-ripozo-tutorial | d6f68d0b7c8c8aacb090014c5ff1f34b21ded017 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from logging import config
config.dictConfig({
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s| %(name)s/%(process)d: %(message)s @%(funcName)s:%(lineno)d #%(levelname)s',
        }
    },
    'handlers': {
        'console': {
            'formatter': 'standard',
            'class': 'logging.StreamHandler',
        }
    },
    'root': {
        'handlers': ['console'],
        'level': 'INFO',
    },
    'loggers': {
        'ripozo': {
            'level': 'INFO',
        }
    }
})
| 22.75 | 111 | 0.541209 | 64 | 728 | 5.828125 | 0.609375 | 0.107239 | 0.171582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001957 | 0.298077 | 728 | 31 | 112 | 23.483871 | 0.727984 | 0 | 0 | 0.068966 | 0 | 0.034483 | 0.342033 | 0.123626 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.172414 | 0 | 0.172414 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dc3dd9ba91b522b9cfe61eccaaba1f3a5d171d62 | 612 | py | Python | 2019_618_PickMaomao/2019_618_PickMaomao.py | yanaizhen/PythonApps | 21c554980df00795e1af6a8a17224358222d28e5 | [
"MIT"
] | 1 | 2021-07-06T11:12:54.000Z | 2021-07-06T11:12:54.000Z | 2019_618_PickMaomao/2019_618_PickMaomao.py | yanaizhen/PythonApps | 21c554980df00795e1af6a8a17224358222d28e5 | [
"MIT"
] | null | null | null | 2019_618_PickMaomao/2019_618_PickMaomao.py | yanaizhen/PythonApps | 21c554980df00795e1af6a8a17224358222d28e5 | [
"MIT"
] | 2 | 2019-12-09T16:31:26.000Z | 2021-08-15T08:09:37.000Z | # @Time : 2019/06/14 7:55AM
# @Author : HGzhao
# @File : 2019_618_PickMaomao.py
import os, time
def pick_maomao():
    print("Tap the 'combine cards' button")
    os.system('adb shell input tap 145 1625')
    time.sleep(1)
    print("Tap the 'find cards in store' button")
    os.system('adb shell input tap 841 1660')
    time.sleep(13)
    print("The cat appeared - tap to collect meow coins")
    os.system('adb shell input tap 967 1134')
    time.sleep(1)
    print("Tap 'accept gladly'")
    os.system('adb shell input tap 569 1380')
    time.sleep(1)
    print("Use the full-screen gesture to exit the store")
    os.system('adb shell input swipe 0 1500 500 1500')
    time.sleep(1)
for i in range(40):
    pick_maomao() | 23.538462 | 54 | 0.630719 | 104 | 612 | 3.673077 | 0.5 | 0.078534 | 0.143979 | 0.209424 | 0.447644 | 0.350785 | 0.136126 | 0 | 0 | 0 | 0 | 0.140725 | 0.23366 | 612 | 26 | 55 | 23.538462 | 0.673774 | 0.130719 | 0 | 0.210526 | 0 | 0 | 0.36673 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.105263 | 0.263158 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dc3e7021531cfd65338ffc0247d5d19af8ab45a7 | 676 | py | Python | examples/helix-example/helix_example/components/python.py | HELIX-Datasets/helix | 7b89b4139e580518b58e109a96ef70f2a71bb780 | [
"MIT"
] | 7 | 2021-12-15T03:22:29.000Z | 2022-03-09T16:11:08.000Z | examples/helix-example/helix_example/components/python.py | HELIX-Datasets/helix | 7b89b4139e580518b58e109a96ef70f2a71bb780 | [
"MIT"
] | 10 | 2021-09-14T16:39:31.000Z | 2021-09-14T21:41:49.000Z | examples/helix-example/helix_example/components/python.py | HELIX-Datasets/helix | 7b89b4139e580518b58e109a96ef70f2a71bb780 | [
"MIT"
] | 1 | 2022-01-31T00:01:58.000Z | 2022-01-31T00:01:58.000Z | from helix import component
class ExamplePythonComponent(component.Component):
    """An example Python component."""
    name = "example-python-component"
    verbose_name = "Example Python Component"
    type = "example"
    version = "1.0.0"
    description = "An example Python component"
    date = "2020-10-20 12:00:00.000000"
    tags = (("group", "example"),)
    blueprints = ["example-python"]
    functions = [
        """def ${example}():
    print("hello world")
""",
        """from datetime import datetime
def ${now}():
    print(datetime.now())
""",
    ]
    calls = {"startup": ["${example}()"], "loop": ["${now}()"]}
    globals = ["example", "now"]
| 23.310345 | 63 | 0.587278 | 69 | 676 | 5.73913 | 0.536232 | 0.164141 | 0.222222 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043726 | 0.221893 | 676 | 28 | 64 | 24.142857 | 0.709125 | 0.04142 | 0 | 0 | 0 | 0 | 0.348837 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.8125 | 0.0625 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dc40fe59f59a392bcc67c51aae722672e0bfc90e | 8,263 | py | Python | functions/email_habit_survey.py | jamesshapiro/aws-habit-tracker | bbae9866dc4ce744832e42d02a997fc9bebda517 | [
"MIT"
] | null | null | null | functions/email_habit_survey.py | jamesshapiro/aws-habit-tracker | bbae9866dc4ce744832e42d02a997fc9bebda517 | [
"MIT"
] | null | null | null | functions/email_habit_survey.py | jamesshapiro/aws-habit-tracker | bbae9866dc4ce744832e42d02a997fc9bebda517 | [
"MIT"
] | null | null | null | import os
import json
import boto3
import datetime
import hashlib
import secrets
import time
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
table_name = os.environ['DDB_TABLE']
ses_client = boto3.client('ses')
ddb_client = boto3.client('dynamodb')
unsubscribe_url = os.environ['UNSUBSCRIBE_URL']
config_set_name = os.environ['CONFIG_SET_NAME']
months = {
    '01': 'January',
    '02': 'February',
    '03': 'March',
    '04': 'April',
    '05': 'May',
    '06': 'June',
    '07': 'July',
    '08': 'August',
    '09': 'September',
    '10': 'October',
    '11': 'November',
    '12': 'December'
}
days = {
    '01': '1st', '02': '2nd',
    '03': '3rd', '04': '4th',
    '05': '5th', '06': '6th',
    '07': '7th', '08': '8th',
    '09': '9th', '10': '10th',
    '11': '11th', '12': '12th',
    '13': '13th', '14': '14th',
    '15': '15th', '16': '16th',
    '17': '17th', '18': '18th',
    '19': '19th', '20': '20th',
    '21': '21st', '22': '22nd',
    '23': '23rd', '24': '24th',
    '25': '25th', '26': '26th',
    '27': '27th', '28': '28th',
    '29': '29th', '30': '30th',
    '31': '31st'
}
paginator = ddb_client.get_paginator('query')
sender = os.environ['SENDER']
def get_subscribers(event):
    if 'user' in event:
        return [event['user']]
    response_iterator = paginator.paginate(
        TableName=table_name,
        KeyConditionExpression='#pk1=:pk1',
        ExpressionAttributeNames={'#pk1': 'PK1'},
        ExpressionAttributeValues={':pk1': {'S': 'SUBSCRIBED'}}
    )
    subscribers = []
    for items in response_iterator:
        subscriber_page = [item['SK1']['S'] for item in items['Items']]
        subscribers.extend(subscriber_page)
    return subscribers
def get_token():
    m = hashlib.sha256()
    m.update(secrets.token_bytes(4096))
    return m.hexdigest()
def lambda_handler(event, context):
    three_days_from_now = int(time.time()) + 259200
    est_time_delta = datetime.timedelta(hours=5)
    subscribers = get_subscribers(event)
    print(f'{subscribers=}')
    for subscriber in subscribers:
        print(f'{subscriber=}')
        print(f'Habit Survey <{sender}>')
        now = datetime.datetime.now()
        now -= est_time_delta
        sha256 = get_token()
        year = str(now.year)
        day = str(now.day).zfill(2)
        month = str(now.month).zfill(2)
        survey_link = f'https://survey.githabit.com/?token={sha256}&date_string={year}-{month}-{day}'
        ddb_client.put_item(
            TableName=table_name,
            Item={
                'PK1': {'S': 'TOKEN'},
                'SK1': {'S': f'TOKEN#{sha256}'},
                'USER': {'S': f'USER#{subscriber}'},
                'DATE_STRING': {'S': f'{year}-{month}-{day}'},
                'TTL_EXPIRATION': {'N': str(three_days_from_now)}
            }
        )
        unsubscribe_token = ddb_client.get_item(TableName=table_name, Key={'PK1': {'S': 'USER#USER'}, 'SK1': {'S': f'USER#{subscriber}'}})['Item']['UNSUBSCRIBE_TOKEN']['S']
        unsubscribe_link = f'{unsubscribe_url}?token={unsubscribe_token}'
        email_day = days[day]
        email_month = months[month]
        message = f"""
<!DOCTYPE html>
<html lang="en" xmlns="http://www.w3.org/1999/xhtml" xmlns:o="urn:schemas-microsoft-com:office:office">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width,initial-scale=1">
<meta name="x-apple-disable-message-reformatting">
<title></title>
<!--[if mso]>
<noscript>
<xml>
<o:OfficeDocumentSettings>
<o:PixelsPerInch>96</o:PixelsPerInch>
</o:OfficeDocumentSettings>
</xml>
</noscript>
<![endif]-->
<style>
table, td, div, h1, p {{font-family: Arial, sans-serif;}}
</style>
</head>
<body style="margin:0;padding:0;">
<table role="presentation" style="width:100%;border-collapse:collapse;border:0;border-spacing:0;background:#ffffff;">
<tr>
<td align="center" style="padding:0;">
<table role="presentation" style="width:602px;border-collapse:collapse;border:1px solid #cccccc;border-spacing:0;text-align:left;">
<tr>
<td align="center" style="padding:40px 0 30px 0;background:#70bbd9;">
<img src="https://cdkhabits-surveygithabitcombucket4f6ffd5a-1mwnd3a635op9.s3.amazonaws.com/cropped.png" alt="" width="300" style="height:auto;display:block;" />
</td>
</tr>
<tr>
<td style="padding:36px 30px 42px 30px;">
<table role="presentation" style="width:100%;border-collapse:collapse;border:0;border-spacing:0;">
<tr>
<td style="padding:0 0 36px 0;color:#153643;">
<h1 style="font-size:40px;margin:0 0 20px 0;font-family:Arial,sans-serif;">Today's Habit Survey!</h1>
<p style="margin:40px 0 0 0;font-size:36px;line-height:30px;font-family:Arial,sans-serif;">Click <a href="{survey_link}" style="font-weight:bold;color:#ee4c50;text-decoration:underline;">HERE</a> to fill it out</p>
</td>
</tr>
<tr>
<td style="padding:0 0 0 0;color:#153643;">
<li style="margin:0 0 12px 0;font-size:30px;line-height:24px;font-family:Arial,sans-serif;">The survey expires 💣</li>
<li style="margin:0 0 12px 0;font-size:30px;line-height:24px;font-family:Arial,sans-serif;">Complete it before time runs out! ⏰</li>
<li style="margin:0 0 12px 0;font-size:30px;line-height:24px;font-family:Arial,sans-serif;">Depending on your timezone <span style="font-size:20px;">🌐</span>, it may take up to 24 hours for daily results to appear on the grid.</li>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td style="padding:30px;background:#ee4c50;">
<table role="presentation" style="width:100%;border-collapse:collapse;border:0;border-spacing:0;font-size:9px;font-family:Arial,sans-serif;">
<tr>
<td style="padding:0;width:50%;" align="left">
<p style="margin:0;font-size:14px;line-height:16px;font-family:Arial,sans-serif;color:#ffffff;">
GitHabit {year}<br/><a href="{unsubscribe_link}" style="color:#ffffff;text-decoration:underline;">Unsubscribe</a>
</p>
</td>
<td style="padding:0;width:50%;" align="right">
<table role="presentation" style="border-collapse:collapse;border:0;border-spacing:0;">
<tr>
<td style="padding:0 0 0 10px;width:38px;">
<a href="https://githabit.com/" style="color:#ffffff;"><img src="https://cdkhabits-surveygithabitcombucket4f6ffd5a-1mwnd3a635op9.s3.amazonaws.com/blue-rabbit.png" alt="GitHabit" width="38" style="height:auto;display:block;border:0;" /></a>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
</body>
</html>
"""
        msg = MIMEMultipart()
        msg["Subject"] = f'📆🐇 Habits Survey: {email_month} {email_day}, {year}'
        #msg["From"] = sender
        msg["From"] = f'GitHabit.com <{sender}>'
        msg["To"] = subscriber
        body_txt = MIMEText(message, "html")
        msg.attach(body_txt)
        msg['Reply-To'] = 'GitHabit <yes-reply@mail.githabit.com>'
        mail_unsubscribe_link = 'mailto: unsubscribe@mail.githabit.com?subject=unsubscribe'
        #msg['list-unsubscribe'] = f'<{mail_unsubscribe_link}>, <{unsubscribe_link}>'
        msg.add_header('List-Unsubscribe', f'<{mail_unsubscribe_link}>, <{unsubscribe_link}>')
        msg.add_header('List-Unsubscribe-Post', 'List-Unsubscribe=One-Click')
        response = ses_client.send_raw_email(
            Source=f'GitHabit.com <{sender}>',
            Destinations=[subscriber],
            RawMessage={"Data": msg.as_string()},
            ConfigurationSetName=config_set_name
        )
        print(f'{response=}')
    return {
        'statusCode': 200,
        'body': 'shalom haverim!'
    }
| 40.11165 | 265 | 0.572673 | 998 | 8,263 | 4.674349 | 0.340681 | 0.005145 | 0.025723 | 0.032583 | 0.26881 | 0.229153 | 0.209646 | 0.183708 | 0.183708 | 0.142551 | 0 | 0.056973 | 0.245916 | 8,263 | 205 | 266 | 40.307317 | 0.6909 | 0.011618 | 0 | 0.164103 | 0 | 0.076923 | 0.627557 | 0.214452 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0 | 0.046154 | 0 | 0.082051 | 0.020513 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dc61c240ace363029e65db44548f6e19544dc644 | 5,077 | py | Python | NaiveNeurals/MLP/activation_functions.py | stovorov/NaiveNeurals | 88d91f3d4d39859eef372285f093643a447571a4 | [
"MIT"
] | 1 | 2019-01-16T13:45:47.000Z | 2019-01-16T13:45:47.000Z | NaiveNeurals/MLP/activation_functions.py | stovorov/NaiveNeurals | 88d91f3d4d39859eef372285f093643a447571a4 | [
"MIT"
] | 2 | 2020-03-24T16:17:06.000Z | 2020-03-30T23:53:16.000Z | NaiveNeurals/MLP/activation_functions.py | stovorov/NaiveNeurals | 88d91f3d4d39859eef372285f093643a447571a4 | [
"MIT"
] | null | null | null | """Module containing definitions of arithmetic functions used by perceptrons"""
from abc import ABC, abstractmethod
import numpy as np
from NaiveNeurals.utils import ErrorAlgorithm
class ActivationFunction(ABC):
    """Abstract base class for defining activation functions"""
    label = ''
    @staticmethod
    @abstractmethod
    def function(arg: np.array) -> np.array:
        """Implementation of function
        :param arg: float
        :return: float
        """
        raise NotImplementedError()
    @classmethod
    @abstractmethod
    def prime(cls, arg: np.array) -> np.array:
        """First derivative of implemented function
        :param arg: float
        :return: float
        """
        raise NotImplementedError()
class Sigmoid(ActivationFunction):
    """Represents sigmoid function and its derivative"""
    label = 'sigmoid'
    @staticmethod
    def function(arg: np.array) -> np.array:
        """Calculate sigmoid(arg)
        :param arg: float input value
        :return: float sig(arg) value
        """
        return 1 / (1 + np.exp(-arg))
    @classmethod
    def prime(cls, arg: np.array) -> np.array:
        """Calculate value of sigmoid's prime derivative for given arg
        :param arg: float input value
        :return: float value
        """
        return cls.function(arg) * (1 - cls.function(arg))
class Tanh(ActivationFunction):
    """Represents hyperbolic tangent"""
    label = 'tanh'
    @staticmethod
    def function(arg: np.array) -> np.array:
        """Calculate tanh(arg)
        :param arg: float input value
        :return: float tanh(arg) value
        """
        return np.tanh(arg)
    @classmethod
    def prime(cls, arg: np.array) -> np.array:
        """Calculate value of tanh's prime derivative for given arg
        :param arg: float input value
        :return: float value
        """
        return 1 - np.tanh(arg)**2
class Linear(ActivationFunction):
    """Represents linear function"""
    label = 'lin'
    @staticmethod
    def function(arg: np.array) -> np.array:
        """Calculate lin(arg)
        :param arg: float input value
        :return: float lin(arg) value
        """
        return arg
    @classmethod
    def prime(cls, arg: np.array) -> np.array:
        """Calculate value of lin's prime derivative for given arg
        :param arg: float input value
        :return: float value
        """
        ones = np.array(arg)
        ones[::] = 1.0
        return ones
class SoftMax(ActivationFunction):
    """Represents SoftMax function
    The ``softmax`` function takes an N-dimensional vector of arbitrary real values and produces
    another N-dimensional vector with real values in the range (0, 1) that add up to 1.0.
    source: https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/
    """
    label = 'softmax'
    @staticmethod
    def function(arg: np.array, beta: int = 20) -> np.array:  # pylint: disable=arguments-differ
        """Calculate softmax(arg)
        :param arg: float input value
        :param beta: scaling parameter
        :return: float softmax(arg) value
        """
        exps = np.exp(beta * arg - beta * arg.max())
        return exps / np.sum(exps)
    @classmethod
    def prime(cls, arg: np.array) -> np.array:
        """Calculate value of softmax's prime derivative for given arg
        :param arg: float input value
        :return: float value
        """
        return cls.function(arg) * (1 - cls.function(arg))
class SoftPlus(ActivationFunction):
    """Represents softplus function"""
    label = 'softplus'
    @staticmethod
    def function(arg: np.array) -> np.array:
        """Calculate softplus(arg)
        :param arg: float input value
        :return: float softplus(arg) value
        """
        return np.log(1 + np.exp(arg))
    @classmethod
    def prime(cls, arg: np.array) -> np.array:
        """Calculate value of softplus's prime derivative for given arg
        :param arg: float input value
        :return: float value
        """
        return 1 / (1 + np.exp(-arg))
def get_activation_function(label: str) -> ActivationFunction:
    """Get activation function by label
    :param label: string denoting function
    :return: callable function
    """
    if label == 'lin':
        return Linear()
    if label == 'sigmoid':
        return Sigmoid()
    if label == 'tanh':
        return Tanh()
    return Sigmoid()
def calculate_error(target: np.array, actual: np.array,
                    func_type: ErrorAlgorithm = ErrorAlgorithm.SQR) -> np.array:
    """Calculates error for provided actual and targeted data.
    :param target: target data
    :param actual: actual training data
    :param func_type: denotes type of used function for error
    :return: calculated error
    """
    if func_type == ErrorAlgorithm.SQR:
        return np.sum(0.5 * np.power(actual - target, 2), axis=1)
    elif func_type == ErrorAlgorithm.CE:
        return -1 * np.sum(target * np.log(abs(actual)), axis=1)
    raise NotImplementedError()
| 26.035897 | 102 | 0.614142 | 598 | 5,077 | 5.202341 | 0.212375 | 0.063002 | 0.038573 | 0.04243 | 0.454516 | 0.422694 | 0.403729 | 0.387978 | 0.295403 | 0.232401 | 0 | 0.007343 | 0.275753 | 5,077 | 194 | 103 | 26.170103 | 0.838727 | 0.398267 | 0 | 0.478873 | 0 | 0 | 0.016602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.197183 | false | 0 | 0.042254 | 0 | 0.633803 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
# File: postmark_incoming/models.py (repo: hkhanna/django-postmark-incoming, MIT license)
import logging
from django.db import models
logger = logging.getLogger(__name__)
class PostmarkWebhook(models.Model):
received_at = models.DateTimeField(auto_now_add=True)
body = models.JSONField()
headers = models.JSONField()
note = models.TextField(blank=True)
class Status(models.TextChoices):
NEW = "new"
PROCESSED = "processed"
ERROR = "error"
status = models.CharField(
max_length=127, choices=Status.choices, default=Status.NEW
)
# File: email_api/api/migrations/0001_initial.py (repo: PawlikMateusz/DjangoEmailRestApi, MIT license)
# Generated by Django 2.0.13 on 2019-02-28 19:18
import django.contrib.postgres.fields
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Email',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('to', django.contrib.postgres.fields.ArrayField(base_field=models.EmailField(max_length=254), size=None)),
('cc', django.contrib.postgres.fields.ArrayField(base_field=models.EmailField(blank=True, max_length=254, null=True), size=None)),
('bcc', django.contrib.postgres.fields.ArrayField(base_field=models.EmailField(blank=True, max_length=254, null=True), size=None)),
('reply_to', models.EmailField(blank=True, default=None, max_length=254, null=True)),
('send_date', models.DateTimeField()),
('date', models.DateTimeField(auto_now_add=True)),
],
),
migrations.CreateModel(
name='Mailbox',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('host', models.CharField(max_length=100)),
('port', models.IntegerField(default=465)),
('login', models.CharField(max_length=30)),
('password', models.CharField(max_length=30)),
('email_from', models.CharField(max_length=50)),
('use_ssl', models.BooleanField(default=True)),
('is_active', models.BooleanField(default=False)),
('date', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
],
),
migrations.CreateModel(
name='Template',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('subject', models.CharField(max_length=200)),
('text', models.TextField()),
('attachment', models.FileField(blank=True, null=True, upload_to='')),
('date', models.DateTimeField(auto_now_add=True)),
('last_update', models.DateTimeField(auto_now=True)),
],
),
migrations.AddField(
model_name='email',
name='mailbox',
field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='emails', to='api.Mailbox'),
),
migrations.AddField(
model_name='email',
name='template',
field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='emails', to='api.Template'),
),
]
# File: api/needley/models.py (repo: kino-ma/needley, MIT license)
from django.db import models
from django.utils import timezone
from django.core.validators import MinLengthValidator
from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
email = models.EmailField(unique=True)
    # Nickname is the user's display name
nickname = models.CharField(
validators=[MinLengthValidator(1)], max_length=20)
    # Avatar is the URL of the user's icon image
avatar = models.URLField(
validators=[MinLengthValidator(1)], max_length=200, null=True)
def __str__(self):
return "@%s" % self.username
class Article(models.Model):
# The author of this article. This field can be referenced by `article.author`
author = models.ForeignKey(
User,
related_name="author",
on_delete=models.CASCADE
)
# The title of this article
title = models.CharField(
validators=[MinLengthValidator(1)], max_length=100)
# Actual content of this article
content = models.TextField()
# Date when data were created/updated
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return "\"%s\" by %s" % (self.title, self.author.profile)
# File: Modules/secreat-message.py (repo: cclauss/pythonCodes, MIT license)
# This file contains examples for the os module.
# What is the os module?
# It is a module for listing the files in a folder, getting the name of the
# current working directory, renaming files, and writing to files.
import os

def rename_files():
    # (1) get file names from a folder
    path = r"C:\Users\user\Desktop\python\pythonCodes\Modules\images"
    # r (raw string) means the backslashes are taken literally, not interpreted
    file_list = os.listdir(path)
    # remember the current working directory so it can be restored later
    saved_path = os.getcwd()
    print(saved_path)
    os.chdir(path)  # os.chdir returns None, so don't assign its result
    # (2) strip the digits from each file name
    for file_name in file_list:
        os.rename(file_name,
                  file_name.translate(str.maketrans('', '', "0123456789")))
    os.chdir(saved_path)  # restore the original working directory
    print(saved_path)
rename_files()
# File: meiduo_mall/meiduo_mall/apps/contents/views.py (repo: hgztlmb/meiduo_project, MIT license)
from django.shortcuts import render
from django.views import View
from goods.models import GoodsChannel
from contents.models import ContentCategory
from .utils import get_categories
class IndexView(View):
def get(self, request):
        # Build a dict `categories` to hold all the category data
        # categories = {}
        # # Query all goods channels ordered by group_id (group) and sequence (order within the group)
        # good_channel_qs = GoodsChannel.objects.order_by('group_id', 'sequence')
        # # Iterate over the channel queryset
        # for channel in good_channel_qs:
        #     # Get the current channel's group_id
        #     group_id = channel.group_id
        #     # Check whether this group_id is already in the dict
        #     if channel.group_id not in categories:
        #         # If not, add a new entry of the form {group_id: {"channels": [], "sub_cats": []}}
        #         categories[group_id] = {"channels": [], "sub_cats": []}
        #     # Get the first-level category model from the channel
        #     cat1 = channel.category
        #     # Attach the channel's url to cat1
        #     cat1.url = channel.url
        #     # Append the first-level category to the "channels" list
        #     categories[group_id]["channels"].append(cat1)
        #     # Get the queryset of second-level categories under this first-level one
        #     cat2_qs = cat1.subs.all()
        #     # Iterate over the second-level queryset
        #     for cat2 in cat2_qs:
        #         # Get the queryset of third-level categories
        #         cat3_qs = cat2.subs.all()
        #         # Save the third-level queryset on the second-level category's sub_cats attribute
        #         cat2.sub_cats = cat3_qs
        #         # Append the second-level category to the "sub_cats" list
        #         categories[group_id]["sub_cats"].append(cat2)
        # Home page advertisements
        # Build a dict to hold the advertisement data
        contents = {}
        # Get the queryset of advertisement categories
        contents_qs = ContentCategory.objects.all()
        # Iterate over the advertisement categories
        for cat in contents_qs:
            # Collect the active ads for each category, ordered by sequence
            contents[cat.key] = cat.content_set.filter(status=True).order_by('sequence')
context = {
"categories": get_categories(),
"contents": contents
}
return render(request, 'index.html', context)
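The grouping routine preserved in the comments above (now moved into `get_categories`) can be sketched with plain objects. A stdlib-only sketch using `SimpleNamespace` stand-ins for the Django model rows (field names mirror the models; the data values are illustrative):

```python
from types import SimpleNamespace

# stand-ins for GoodsChannel rows and their first-level categories
cat_phones = SimpleNamespace(name='phones', subs=[])
cat_pcs = SimpleNamespace(name='pcs', subs=[])
channels = [
    SimpleNamespace(group_id=1, sequence=1, url='/phones/', category=cat_phones),
    SimpleNamespace(group_id=1, sequence=2, url='/pcs/', category=cat_pcs),
]

categories = {}
for channel in sorted(channels, key=lambda c: (c.group_id, c.sequence)):
    # one {"channels": [...], "sub_cats": [...]} bucket per group
    group = categories.setdefault(channel.group_id, {"channels": [], "sub_cats": []})
    cat1 = channel.category
    cat1.url = channel.url  # carry the channel url on the category
    group["channels"].append(cat1)

print([c.name for c in categories[1]["channels"]])  # ['phones', 'pcs']
```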
# File: dist/Basilisk/fswAlgorithms/rwMotorVoltage/rwMotorVoltage.py (repo: ian-cooke/basilisk_mag, 0BSD license)
# This file was automatically generated by SWIG (http://www.swig.org).
# Version 3.0.12
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info as _swig_python_version_info
if _swig_python_version_info >= (2, 7, 0):
def swig_import_helper():
import importlib
pkg = __name__.rpartition('.')[0]
mname = '.'.join((pkg, '_rwMotorVoltage')).lstrip('.')
try:
return importlib.import_module(mname)
except ImportError:
return importlib.import_module('_rwMotorVoltage')
_rwMotorVoltage = swig_import_helper()
del swig_import_helper
elif _swig_python_version_info >= (2, 6, 0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_rwMotorVoltage', [dirname(__file__)])
except ImportError:
import _rwMotorVoltage
return _rwMotorVoltage
try:
_mod = imp.load_module('_rwMotorVoltage', fp, pathname, description)
finally:
if fp is not None:
fp.close()
return _mod
_rwMotorVoltage = swig_import_helper()
del swig_import_helper
else:
import _rwMotorVoltage
del _swig_python_version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
try:
import builtins as __builtin__
except ImportError:
import __builtin__
def _swig_setattr_nondynamic(self, class_type, name, value, static=1):
if (name == "thisown"):
return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name, None)
if method:
return method(self, value)
if (not static):
if _newclass:
object.__setattr__(self, name, value)
else:
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self, class_type, name, value):
return _swig_setattr_nondynamic(self, class_type, name, value, 0)
def _swig_getattr(self, class_type, name):
if (name == "thisown"):
return self.this.own()
method = class_type.__swig_getmethods__.get(name, None)
if method:
return method(self)
raise AttributeError("'%s' object has no attribute '%s'" % (class_type.__name__, name))
def _swig_repr(self):
try:
strthis = "proxy of " + self.this.__repr__()
except __builtin__.Exception:
strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except __builtin__.Exception:
class _object:
pass
_newclass = 0
def new_doubleArray(nelements):
return _rwMotorVoltage.new_doubleArray(nelements)
new_doubleArray = _rwMotorVoltage.new_doubleArray
def delete_doubleArray(ary):
return _rwMotorVoltage.delete_doubleArray(ary)
delete_doubleArray = _rwMotorVoltage.delete_doubleArray
def doubleArray_getitem(ary, index):
return _rwMotorVoltage.doubleArray_getitem(ary, index)
doubleArray_getitem = _rwMotorVoltage.doubleArray_getitem
def doubleArray_setitem(ary, index, value):
return _rwMotorVoltage.doubleArray_setitem(ary, index, value)
doubleArray_setitem = _rwMotorVoltage.doubleArray_setitem
def new_longArray(nelements):
return _rwMotorVoltage.new_longArray(nelements)
new_longArray = _rwMotorVoltage.new_longArray
def delete_longArray(ary):
return _rwMotorVoltage.delete_longArray(ary)
delete_longArray = _rwMotorVoltage.delete_longArray
def longArray_getitem(ary, index):
return _rwMotorVoltage.longArray_getitem(ary, index)
longArray_getitem = _rwMotorVoltage.longArray_getitem
def longArray_setitem(ary, index, value):
return _rwMotorVoltage.longArray_setitem(ary, index, value)
longArray_setitem = _rwMotorVoltage.longArray_setitem
def new_intArray(nelements):
return _rwMotorVoltage.new_intArray(nelements)
new_intArray = _rwMotorVoltage.new_intArray
def delete_intArray(ary):
return _rwMotorVoltage.delete_intArray(ary)
delete_intArray = _rwMotorVoltage.delete_intArray
def intArray_getitem(ary, index):
return _rwMotorVoltage.intArray_getitem(ary, index)
intArray_getitem = _rwMotorVoltage.intArray_getitem
def intArray_setitem(ary, index, value):
return _rwMotorVoltage.intArray_setitem(ary, index, value)
intArray_setitem = _rwMotorVoltage.intArray_setitem
def new_shortArray(nelements):
return _rwMotorVoltage.new_shortArray(nelements)
new_shortArray = _rwMotorVoltage.new_shortArray
def delete_shortArray(ary):
return _rwMotorVoltage.delete_shortArray(ary)
delete_shortArray = _rwMotorVoltage.delete_shortArray
def shortArray_getitem(ary, index):
return _rwMotorVoltage.shortArray_getitem(ary, index)
shortArray_getitem = _rwMotorVoltage.shortArray_getitem
def shortArray_setitem(ary, index, value):
return _rwMotorVoltage.shortArray_setitem(ary, index, value)
shortArray_setitem = _rwMotorVoltage.shortArray_setitem
def getStructSize(self):
try:
return eval('sizeof_' + repr(self).split(';')[0].split('.')[-1])
except (NameError) as e:
typeString = 'sizeof_' + repr(self).split(';')[0].split('.')[-1]
raise NameError(e.message + '\nYou tried to get this size macro: ' + typeString +
'\n It appears to be undefined. \nYou need to run the SWIG GEN_SIZEOF' +
' SWIG macro against the class/struct in your SWIG file if you want to ' +
' make this call.\n')
def protectSetAttr(self, name, value):
if(hasattr(self, name) or name == 'this'):
object.__setattr__(self, name, value)
else:
raise ValueError('You tried to add this variable: ' + name + '\n' +
'To this class: ' + str(self))
def protectAllClasses(moduleType):
import inspect
clsmembers = inspect.getmembers(sys.modules[__name__], inspect.isclass)
for member in clsmembers:
try:
exec(str(member[0]) + '.__setattr__ = protectSetAttr')
exec(str(member[0]) + '.getStructSize = getStructSize')
except (AttributeError, TypeError) as e:
pass
Update_rwMotorVoltage = _rwMotorVoltage.Update_rwMotorVoltage
SelfInit_rwMotorVoltage = _rwMotorVoltage.SelfInit_rwMotorVoltage
CrossInit_rwMotorVoltage = _rwMotorVoltage.CrossInit_rwMotorVoltage
Reset_rwMotorVoltage = _rwMotorVoltage.Reset_rwMotorVoltage
sizeof_rwMotorVoltageConfig = _rwMotorVoltage.sizeof_rwMotorVoltageConfig
sizeof_RWArrayTorqueIntMsg = _rwMotorVoltage.sizeof_RWArrayTorqueIntMsg
sizeof_RWAvailabilityFswMsg = _rwMotorVoltage.sizeof_RWAvailabilityFswMsg
sizeof_RWSpeedIntMsg = _rwMotorVoltage.sizeof_RWSpeedIntMsg
sizeof_RWArrayConfigFswMsg = _rwMotorVoltage.sizeof_RWArrayConfigFswMsg
class rwMotorVoltageConfig(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, rwMotorVoltageConfig, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, rwMotorVoltageConfig, name)
__repr__ = _swig_repr
__swig_setmethods__["VMin"] = _rwMotorVoltage.rwMotorVoltageConfig_VMin_set
__swig_getmethods__["VMin"] = _rwMotorVoltage.rwMotorVoltageConfig_VMin_get
if _newclass:
VMin = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_VMin_get, _rwMotorVoltage.rwMotorVoltageConfig_VMin_set)
__swig_setmethods__["VMax"] = _rwMotorVoltage.rwMotorVoltageConfig_VMax_set
__swig_getmethods__["VMax"] = _rwMotorVoltage.rwMotorVoltageConfig_VMax_get
if _newclass:
VMax = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_VMax_get, _rwMotorVoltage.rwMotorVoltageConfig_VMax_set)
__swig_setmethods__["K"] = _rwMotorVoltage.rwMotorVoltageConfig_K_set
__swig_getmethods__["K"] = _rwMotorVoltage.rwMotorVoltageConfig_K_get
if _newclass:
K = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_K_get, _rwMotorVoltage.rwMotorVoltageConfig_K_set)
__swig_setmethods__["rwSpeedOld"] = _rwMotorVoltage.rwMotorVoltageConfig_rwSpeedOld_set
__swig_getmethods__["rwSpeedOld"] = _rwMotorVoltage.rwMotorVoltageConfig_rwSpeedOld_get
if _newclass:
rwSpeedOld = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_rwSpeedOld_get, _rwMotorVoltage.rwMotorVoltageConfig_rwSpeedOld_set)
__swig_setmethods__["priorTime"] = _rwMotorVoltage.rwMotorVoltageConfig_priorTime_set
__swig_getmethods__["priorTime"] = _rwMotorVoltage.rwMotorVoltageConfig_priorTime_get
if _newclass:
priorTime = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_priorTime_get, _rwMotorVoltage.rwMotorVoltageConfig_priorTime_set)
__swig_setmethods__["resetFlag"] = _rwMotorVoltage.rwMotorVoltageConfig_resetFlag_set
__swig_getmethods__["resetFlag"] = _rwMotorVoltage.rwMotorVoltageConfig_resetFlag_get
if _newclass:
resetFlag = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_resetFlag_get, _rwMotorVoltage.rwMotorVoltageConfig_resetFlag_set)
__swig_setmethods__["voltageOutMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_voltageOutMsgName_set
__swig_getmethods__["voltageOutMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_voltageOutMsgName_get
if _newclass:
voltageOutMsgName = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_voltageOutMsgName_get, _rwMotorVoltage.rwMotorVoltageConfig_voltageOutMsgName_set)
__swig_setmethods__["voltageOutMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_voltageOutMsgID_set
__swig_getmethods__["voltageOutMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_voltageOutMsgID_get
if _newclass:
voltageOutMsgID = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_voltageOutMsgID_get, _rwMotorVoltage.rwMotorVoltageConfig_voltageOutMsgID_set)
__swig_setmethods__["torqueInMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_torqueInMsgName_set
__swig_getmethods__["torqueInMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_torqueInMsgName_get
if _newclass:
torqueInMsgName = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_torqueInMsgName_get, _rwMotorVoltage.rwMotorVoltageConfig_torqueInMsgName_set)
__swig_setmethods__["torqueInMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_torqueInMsgID_set
__swig_getmethods__["torqueInMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_torqueInMsgID_get
if _newclass:
torqueInMsgID = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_torqueInMsgID_get, _rwMotorVoltage.rwMotorVoltageConfig_torqueInMsgID_set)
__swig_setmethods__["rwParamsInMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_rwParamsInMsgName_set
__swig_getmethods__["rwParamsInMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_rwParamsInMsgName_get
if _newclass:
rwParamsInMsgName = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_rwParamsInMsgName_get, _rwMotorVoltage.rwMotorVoltageConfig_rwParamsInMsgName_set)
__swig_setmethods__["rwParamsInMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_rwParamsInMsgID_set
__swig_getmethods__["rwParamsInMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_rwParamsInMsgID_get
if _newclass:
rwParamsInMsgID = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_rwParamsInMsgID_get, _rwMotorVoltage.rwMotorVoltageConfig_rwParamsInMsgID_set)
__swig_setmethods__["inputRWSpeedsInMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_inputRWSpeedsInMsgName_set
__swig_getmethods__["inputRWSpeedsInMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_inputRWSpeedsInMsgName_get
if _newclass:
inputRWSpeedsInMsgName = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_inputRWSpeedsInMsgName_get, _rwMotorVoltage.rwMotorVoltageConfig_inputRWSpeedsInMsgName_set)
__swig_setmethods__["inputRWSpeedsInMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_inputRWSpeedsInMsgID_set
__swig_getmethods__["inputRWSpeedsInMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_inputRWSpeedsInMsgID_get
if _newclass:
inputRWSpeedsInMsgID = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_inputRWSpeedsInMsgID_get, _rwMotorVoltage.rwMotorVoltageConfig_inputRWSpeedsInMsgID_set)
__swig_setmethods__["rwAvailInMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_rwAvailInMsgName_set
__swig_getmethods__["rwAvailInMsgName"] = _rwMotorVoltage.rwMotorVoltageConfig_rwAvailInMsgName_get
if _newclass:
rwAvailInMsgName = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_rwAvailInMsgName_get, _rwMotorVoltage.rwMotorVoltageConfig_rwAvailInMsgName_set)
__swig_setmethods__["rwAvailInMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_rwAvailInMsgID_set
__swig_getmethods__["rwAvailInMsgID"] = _rwMotorVoltage.rwMotorVoltageConfig_rwAvailInMsgID_get
if _newclass:
rwAvailInMsgID = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_rwAvailInMsgID_get, _rwMotorVoltage.rwMotorVoltageConfig_rwAvailInMsgID_set)
__swig_setmethods__["rwConfigParams"] = _rwMotorVoltage.rwMotorVoltageConfig_rwConfigParams_set
__swig_getmethods__["rwConfigParams"] = _rwMotorVoltage.rwMotorVoltageConfig_rwConfigParams_get
if _newclass:
rwConfigParams = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_rwConfigParams_get, _rwMotorVoltage.rwMotorVoltageConfig_rwConfigParams_set)
__swig_setmethods__["voltageOut"] = _rwMotorVoltage.rwMotorVoltageConfig_voltageOut_set
__swig_getmethods__["voltageOut"] = _rwMotorVoltage.rwMotorVoltageConfig_voltageOut_get
if _newclass:
voltageOut = _swig_property(_rwMotorVoltage.rwMotorVoltageConfig_voltageOut_get, _rwMotorVoltage.rwMotorVoltageConfig_voltageOut_set)
def __init__(self):
this = _rwMotorVoltage.new_rwMotorVoltageConfig()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _rwMotorVoltage.delete_rwMotorVoltageConfig
__del__ = lambda self: None
rwMotorVoltageConfig_swigregister = _rwMotorVoltage.rwMotorVoltageConfig_swigregister
rwMotorVoltageConfig_swigregister(rwMotorVoltageConfig)
class RWSpeedIntMsg(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, RWSpeedIntMsg, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, RWSpeedIntMsg, name)
__repr__ = _swig_repr
__swig_setmethods__["wheelSpeeds"] = _rwMotorVoltage.RWSpeedIntMsg_wheelSpeeds_set
__swig_getmethods__["wheelSpeeds"] = _rwMotorVoltage.RWSpeedIntMsg_wheelSpeeds_get
if _newclass:
wheelSpeeds = _swig_property(_rwMotorVoltage.RWSpeedIntMsg_wheelSpeeds_get, _rwMotorVoltage.RWSpeedIntMsg_wheelSpeeds_set)
__swig_setmethods__["wheelThetas"] = _rwMotorVoltage.RWSpeedIntMsg_wheelThetas_set
__swig_getmethods__["wheelThetas"] = _rwMotorVoltage.RWSpeedIntMsg_wheelThetas_get
if _newclass:
wheelThetas = _swig_property(_rwMotorVoltage.RWSpeedIntMsg_wheelThetas_get, _rwMotorVoltage.RWSpeedIntMsg_wheelThetas_set)
def __init__(self):
this = _rwMotorVoltage.new_RWSpeedIntMsg()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _rwMotorVoltage.delete_RWSpeedIntMsg
__del__ = lambda self: None
RWSpeedIntMsg_swigregister = _rwMotorVoltage.RWSpeedIntMsg_swigregister
RWSpeedIntMsg_swigregister(RWSpeedIntMsg)
MAX_EFF_CNT = _rwMotorVoltage.MAX_EFF_CNT
MAX_NUM_CSS_SENSORS = _rwMotorVoltage.MAX_NUM_CSS_SENSORS
MAX_ST_VEH_COUNT = _rwMotorVoltage.MAX_ST_VEH_COUNT
NANO2SEC = _rwMotorVoltage.NANO2SEC
class RWArrayTorqueIntMsg(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, RWArrayTorqueIntMsg, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, RWArrayTorqueIntMsg, name)
__repr__ = _swig_repr
__swig_setmethods__["motorTorque"] = _rwMotorVoltage.RWArrayTorqueIntMsg_motorTorque_set
__swig_getmethods__["motorTorque"] = _rwMotorVoltage.RWArrayTorqueIntMsg_motorTorque_get
if _newclass:
motorTorque = _swig_property(_rwMotorVoltage.RWArrayTorqueIntMsg_motorTorque_get, _rwMotorVoltage.RWArrayTorqueIntMsg_motorTorque_set)
def __init__(self):
this = _rwMotorVoltage.new_RWArrayTorqueIntMsg()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _rwMotorVoltage.delete_RWArrayTorqueIntMsg
__del__ = lambda self: None
RWArrayTorqueIntMsg_swigregister = _rwMotorVoltage.RWArrayTorqueIntMsg_swigregister
RWArrayTorqueIntMsg_swigregister(RWArrayTorqueIntMsg)
class RWArrayVoltageIntMsg(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, RWArrayVoltageIntMsg, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, RWArrayVoltageIntMsg, name)
__repr__ = _swig_repr
__swig_setmethods__["voltage"] = _rwMotorVoltage.RWArrayVoltageIntMsg_voltage_set
__swig_getmethods__["voltage"] = _rwMotorVoltage.RWArrayVoltageIntMsg_voltage_get
if _newclass:
voltage = _swig_property(_rwMotorVoltage.RWArrayVoltageIntMsg_voltage_get, _rwMotorVoltage.RWArrayVoltageIntMsg_voltage_set)
def __init__(self):
this = _rwMotorVoltage.new_RWArrayVoltageIntMsg()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _rwMotorVoltage.delete_RWArrayVoltageIntMsg
__del__ = lambda self: None
RWArrayVoltageIntMsg_swigregister = _rwMotorVoltage.RWArrayVoltageIntMsg_swigregister
RWArrayVoltageIntMsg_swigregister(RWArrayVoltageIntMsg)
class RWAvailabilityFswMsg(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, RWAvailabilityFswMsg, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, RWAvailabilityFswMsg, name)
__repr__ = _swig_repr
__swig_setmethods__["wheelAvailability"] = _rwMotorVoltage.RWAvailabilityFswMsg_wheelAvailability_set
__swig_getmethods__["wheelAvailability"] = _rwMotorVoltage.RWAvailabilityFswMsg_wheelAvailability_get
if _newclass:
wheelAvailability = _swig_property(_rwMotorVoltage.RWAvailabilityFswMsg_wheelAvailability_get, _rwMotorVoltage.RWAvailabilityFswMsg_wheelAvailability_set)
def __init__(self):
this = _rwMotorVoltage.new_RWAvailabilityFswMsg()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _rwMotorVoltage.delete_RWAvailabilityFswMsg
__del__ = lambda self: None
RWAvailabilityFswMsg_swigregister = _rwMotorVoltage.RWAvailabilityFswMsg_swigregister
RWAvailabilityFswMsg_swigregister(RWAvailabilityFswMsg)
BOOL_FALSE = _rwMotorVoltage.BOOL_FALSE
BOOL_TRUE = _rwMotorVoltage.BOOL_TRUE
AVAILABLE = _rwMotorVoltage.AVAILABLE
UNAVAILABLE = _rwMotorVoltage.UNAVAILABLE
class RWArrayConfigFswMsg(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, RWArrayConfigFswMsg, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, RWArrayConfigFswMsg, name)
__repr__ = _swig_repr
__swig_setmethods__["GsMatrix_B"] = _rwMotorVoltage.RWArrayConfigFswMsg_GsMatrix_B_set
__swig_getmethods__["GsMatrix_B"] = _rwMotorVoltage.RWArrayConfigFswMsg_GsMatrix_B_get
if _newclass:
GsMatrix_B = _swig_property(_rwMotorVoltage.RWArrayConfigFswMsg_GsMatrix_B_get, _rwMotorVoltage.RWArrayConfigFswMsg_GsMatrix_B_set)
__swig_setmethods__["JsList"] = _rwMotorVoltage.RWArrayConfigFswMsg_JsList_set
__swig_getmethods__["JsList"] = _rwMotorVoltage.RWArrayConfigFswMsg_JsList_get
if _newclass:
JsList = _swig_property(_rwMotorVoltage.RWArrayConfigFswMsg_JsList_get, _rwMotorVoltage.RWArrayConfigFswMsg_JsList_set)
__swig_setmethods__["numRW"] = _rwMotorVoltage.RWArrayConfigFswMsg_numRW_set
__swig_getmethods__["numRW"] = _rwMotorVoltage.RWArrayConfigFswMsg_numRW_get
if _newclass:
numRW = _swig_property(_rwMotorVoltage.RWArrayConfigFswMsg_numRW_get, _rwMotorVoltage.RWArrayConfigFswMsg_numRW_set)
__swig_setmethods__["uMax"] = _rwMotorVoltage.RWArrayConfigFswMsg_uMax_set
__swig_getmethods__["uMax"] = _rwMotorVoltage.RWArrayConfigFswMsg_uMax_get
if _newclass:
uMax = _swig_property(_rwMotorVoltage.RWArrayConfigFswMsg_uMax_get, _rwMotorVoltage.RWArrayConfigFswMsg_uMax_set)
def __init__(self):
this = _rwMotorVoltage.new_RWArrayConfigFswMsg()
try:
self.this.append(this)
except __builtin__.Exception:
self.this = this
__swig_destroy__ = _rwMotorVoltage.delete_RWArrayConfigFswMsg
__del__ = lambda self: None
RWArrayConfigFswMsg_swigregister = _rwMotorVoltage.RWArrayConfigFswMsg_swigregister
RWArrayConfigFswMsg_swigregister(RWArrayConfigFswMsg)
import sys
protectAllClasses(sys.modules[__name__])
# This file is compatible with both classic and new-style classes.
# File: sms.py (repo: varunotelli/Gujarat, MIT license)
import urllib.request, urllib.error, urllib.parse
import http.cookiejar
from getpass import getpass
import sys


def send(number, scheme):
    username = "9791011603"
    passwd = "D5222M"
    message = "You have successfully been enrolled for " + scheme
    '''
    username = input("Enter Username: ")
    passwd = getpass()
    message = input("Enter Message: ")
    number = input("Enter Mobile number:")
    '''
    message = "+".join(message.split(' '))

    # Logging into the SMS site
    url = 'http://site24.way2sms.com/Login1.action?'
    data = 'username=' + username + '&password=' + passwd + '&Submit=Sign+in'

    # For cookies:
    cj = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))

    # Adding header detail:
    opener.addheaders = [('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36')]
    try:
        usock = opener.open(url, data.encode('utf-8'))
    except IOError:
        print("Error while logging in.")
        return False

    jsession_id = str(cj).split('~')[1].split(' ')[0]
    send_sms_url = 'http://site24.way2sms.com/smstoss.action?'
    send_sms_data = 'ssaction=ss&Token=' + jsession_id + '&mobile=' + number + '&message=' + message + '&msgLen=136'
    opener.addheaders = [('Referer', 'http://site25.way2sms.com/sendSMS?Token=' + jsession_id)]
    try:
        sms_sent_page = opener.open(send_sms_url, send_sms_data.encode('utf-8'))
    except IOError:
        print("Error while sending message")
        return False
    print("SMS has been sent.")
    return True

#send("9791011603","hello")
#send("8870173154","piyu")
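The script above URL-encodes the message by replacing spaces with `+` via `split`/`join`; `urllib.parse` covers the general case. A hedged sketch (field names copied from the script, the scheme name is a made-up sample, and the endpoint itself is not exercised):

```python
from urllib.parse import quote_plus, urlencode

message = "You have successfully been enrolled for PMJDY"
# quote_plus turns spaces into '+' and also escapes characters such as '&'
# that the split/join trick would pass through unescaped.
print(quote_plus(message))

# urlencode builds the whole form body in one call.
print(urlencode({"username": "user", "password": "secret", "Submit": "Sign in"}))
```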

# ---- File: tests/python-playground/tv_1d_0.py (repo: marcocannici/scs, license: MIT) ----
# This is automatically-generated code.
# Uses the jinja2 library for templating.
import cvxpy as cp
import numpy as np
import scipy as sp
# setup
problemID = "tv_1d_0"
prob = None
opt_val = None
# Variable declarations
np.random.seed(0)
n = 100000
k = max(int(np.sqrt(n)/2), 1)
x0 = np.ones((n,1))
idxs = np.random.randint(0, n, (k,2))
idxs.sort()
for a, b in idxs:
    x0[a:b] += 10*(np.random.rand()-0.5)
b = x0 + np.random.randn(n, 1)
lam = np.sqrt(n)
# Problem construction
x = cp.Variable(n)
f = 0.5*cp.sum_squares(x-b) + lam*cp.norm1(x[1:]-x[:-1])
prob = cp.Problem(cp.Minimize(f))
# Problem collection
# Single problem collection
problemDict = {
    "problemID": problemID,
    "problem": prob,
    "opt_val": opt_val
}
problems = [problemDict]
# For debugging individual problems:
if __name__ == "__main__":
    def printResults(problemID="", problem=None, opt_val=None):
        print(problemID)
        problem.solve()
        print("\tstatus: {}".format(problem.status))
        print("\toptimal value: {}".format(problem.value))
        print("\ttrue optimal value: {}".format(opt_val))
    printResults(**problems[0])
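The objective above combines a least-squares term with a 1-D total-variation penalty `lam*cp.norm1(x[1:]-x[:-1])`. A dependency-free sketch of evaluating that objective for a candidate signal (plain Python, no cvxpy):

```python
def tv_objective(x, b, lam):
    # 0.5 * sum((x_i - b_i)^2) + lam * sum(|x_{i+1} - x_i|)
    fit = 0.5 * sum((xi - bi) ** 2 for xi, bi in zip(x, b))
    tv = sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))
    return fit + lam * tv

# A perfectly fitted two-level signal: fit term 0, one jump of height 1.
print(tv_objective([1.0, 1.0, 2.0, 2.0], [1.0, 1.0, 2.0, 2.0], 1.0))  # -> 1.0
```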

# ---- File: src/pipeline.py (repo: iyunbo/logz, license: MIT) ----
"""Construction of the master pipeline.
"""
from typing import Dict
from kedro.pipeline import Pipeline
from .data import pipeline as de
from .models import pipeline as ds
###########################################################################
# Here you can find an example pipeline, made of two modular pipelines.
#
# Delete this when you start working on your own Kedro project as
# well as pipelines/data_science AND pipelines/data_engineering
# -------------------------------------------------------------------------
def create_pipelines(**kwargs) -> Dict[str, Pipeline]:
    """Create the project's pipeline.

    Args:
        kwargs: Ignore any additional arguments added in the future.

    Returns:
        A mapping from a pipeline name to a ``Pipeline`` object.
    """
    data_engineering_pipeline = de.create_pipeline()
    data_science_pipeline = ds.create_pipeline()
    return {
        "de": data_engineering_pipeline,
        "ds": data_science_pipeline,
        "__default__": data_engineering_pipeline + data_science_pipeline,
    }

# ---- File: Drivers/PS-228xS/PS228xS_Python_Sockets_Driver/PS228xS_Example.py (repo: 398786172/keithley, license: BSD-Source-Code) ----
#!/usr/bin/python
import socket
import struct
import math
import time
import Keithley_PS228xS_Sockets_Driver as ps
echoCmd = 1
#===== MAIN PROGRAM STARTS HERE =====
ipAddress1 = "134.63.78.214"
ipAddress2 = "134.63.74.152"
ipAddress3 = "134.63.78.214"
port = 5025
timeout = 20.0
t1 = time.time()
#ps.instrConnect(s1, ipAddress1, port, timeout, 0, 0)
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1, idStr = ps.PowerSupply_Connect(s1, ipAddress1, port, timeout, echoCmd, 1, 1)
print(idStr)
ps.PowerSupply_SetVoltage(s1, 10.0)
ps.PowerSupply_SetCurrent(s1, 1.5)
ps.PowerSupply_SetVoltageProtection(s1, 33.0)
ps.PowerSupply_SetCurrentProtection(s1, 2.0)
print(ps.PowerSupply_GetVoltage(s1))
print(ps.PowerSupply_GetCurrent(s1))
ps.PowerSupply_SetDataFormat(s1, 1, 0, 0)
ps.PowerSupply_SetOutputState(s1, 1)
ps.PowerSupply_SetDisplayText(s1, "Powering On DUT...")
print(ps.PowerSupply_GetOutputState(s1))
time.sleep(3.0)
print(ps.PowerSupply_MeasureCurrent(s1))
print(ps.PowerSupply_MeasureVoltage(s1))
time.sleep(1.0)
ps.PowerSupply_SetOutputState(s1, 0)
ps.PowerSupply_SetDisplayText(s1, "Powering Off DUT...")
ps.PowerSupply_Disconnect(s1)
t2 = time.time()
# Notify the user of completion and the test time achieved.
print("done")
print("{0:.6f} s".format(t2-t1))
input("Press Enter to continue...")
exit()
exit()

# ---- File: baseline/eval_sent.py (repo: parallelcrawl/DataCollection, license: Apache-2.0) ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
from strip_language_from_uri import LanguageStripper
import urlparse
correct, wrong = [], []
def strip_uri(uri, language_stripper):
    parsed_uri = urlparse.urlparse(uri)
    matched_language = language_stripper.match(parsed_uri.path)
    if not matched_language:
        matched_language = language_stripper.match(parsed_uri.query)
    assert matched_language
    stripped_path = language_stripper.strip(parsed_uri.path)
    stripped_query = language_stripper.strip(parsed_uri.query)
    stripped_uri = urlparse.ParseResult(parsed_uri.scheme,
                                        parsed_uri.netloc,
                                        stripped_path,
                                        parsed_uri.params,
                                        stripped_query,
                                        parsed_uri.fragment).geturl()
    return matched_language, stripped_uri


if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'infile', type=argparse.FileType('r'), default=sys.stdin)
    parser.add_argument(
        '--outfile', type=argparse.FileType('w'))
    parser.add_argument('-filter', action="store_true")
    args = parser.parse_args()

    stripper = LanguageStripper()
    source_uris, target_uris = set(), set()
    for line in args.infile:
        source_uri, target_uri, source, target, score = line.split("\t")
        source_lang, stripped_source_uri = strip_uri(source_uri, stripper)
        target_lang, stripped_target_uri = strip_uri(target_uri, stripper)
        source_uris.add(source_uri)
        target_uris.add(target_uri)
        if stripped_source_uri != stripped_target_uri:
            wrong.append((stripped_source_uri, stripped_target_uri))
        else:
            if args.outfile:
                args.outfile.write(line)
            correct.append((stripped_source_uri, stripped_target_uri))

    print "found %s source and %s target uris" % (len(source_uris), len(target_uris))
    total = len(wrong) + len(correct)
    total_unique = len(set(wrong).union(set(correct)))
    if wrong:
        print "Wrong: ", len(wrong), len(set(wrong))
    if correct:
        print "Correct", len(correct), len(set(correct))
    if total > 0:
        print "Acc1", float(len(wrong)) / total
        print "Acc2", float(len(set(wrong))) / total_unique
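The script above is Python 2 (`urlparse` module, `print` statements). For reference, under Python 3 the same `ParseResult` rebuild moves to `urllib.parse`; a small sketch with a hypothetical stripper standing in for `LanguageStripper`:

```python
from urllib.parse import urlparse, ParseResult

def rebuild(uri, strip):
    # Strip the language marker from path and query, keep everything else.
    p = urlparse(uri)
    return ParseResult(p.scheme, p.netloc, strip(p.path),
                       p.params, strip(p.query), p.fragment).geturl()

# Hypothetical stripper: drop a trailing "/en" language segment.
strip_en = lambda s: s[:-3] if s.endswith("/en") else s
print(rebuild("http://example.com/docs/en", strip_en))  # -> http://example.com/docs
```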

# ---- File: UseImportJumpCommand.py (repo: tinwatchman/Sublime-UseImport, license: MIT) ----
import sublime, sublime_plugin
import json
import useutil
class UseImportJumpCommand(sublime_plugin.TextCommand):

    def description(self):
        return 'Jump to File (Use-Import)'

    def is_enabled(self):
        return self.is_javascript_view()

    def is_visible(self):
        return self.is_javascript_view() and self.is_use_import_name()

    def run(self, edit):
        if self.is_javascript_view():
            name = self.find_use_import_name()
            if name != False:
                data = self.get_config()
                if name in data:
                    relpath = data.get(name)
                    configpath = self.view.settings().get('UseImport_use_json_path')
                    abspath = useutil.get_abs_filepath(relpath, configpath)
                    if abspath != False:
                        self.view.window().open_file(abspath)

    def is_javascript_view(self):
        file_syntax = self.view.settings().get('syntax')
        return useutil.is_javascript_syntax(file_syntax)

    def is_use_import_name(self):
        sels = self.view.sel()
        for sel in sels:
            curline = self.view.substr(self.view.line(sel))
            m = useutil.parse_use_import_name(curline)
            if m != False:
                return True
        return False

    def find_use_import_name(self):
        sels = self.view.sel()
        for sel in sels:
            curline = self.view.substr(self.view.line(sel))
            m = useutil.parse_use_import_name(curline)
            if m != False:
                return m
        return False

    def get_config(self):
        if self.view.settings().has('UseImport_use_json_path'):
            filepath = self.view.settings().get('UseImport_use_json_path')
        else:
            filepath = useutil.search(self.view.file_name())
            self.view.settings().set('UseImport_use_json_path', filepath)
        if filepath != False:
            return self.load_file(filepath)
        return False

    def load_file(self, filepath):
        with open(filepath, 'r') as myfile:
            rawdata = myfile.read()
        return json.loads(rawdata)

# ---- File: ExerciciosPython/67- Tabuada.py (repo: lucadomingues/Python, license: MIT) ----
while True:
    print('\n--- MULTIPLICATION TABLE ---')
    num = int(input('Type an integer: '))
    if num < 0:
        break
    for c in range(1, 11):
        print(f'{c} X {num} = {c*num}')
print('END PROGRAM')

# ---- File: src/command_modules/azure-cli-monitor/azure/cli/command_modules/monitor/_exception_handler.py (repo: viananth/azure-cli, license: MIT) ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from azure.cli.core.util import CLIError

def monitor_exception_handler(ex):
    from azure.mgmt.monitor.models import ErrorResponseException
    if hasattr(ex, 'inner_exception') and 'MonitoringService' in ex.inner_exception.message:
        raise CLIError(ex.inner_exception.code)
    elif isinstance(ex, ErrorResponseException):
        raise CLIError(ex)
    else:
        import sys
        from six import reraise
        reraise(*sys.exc_info())
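`six.reraise(*sys.exc_info())` re-raises the active exception with its original traceback on both Python 2 and 3. On Python 3 alone, the standard library suffices; a minimal sketch:

```python
import sys

def reraise_current():
    # Python 3 equivalent of six.reraise(*sys.exc_info()):
    # re-raise the active exception with its original traceback.
    _, exc_value, tb = sys.exc_info()
    raise exc_value.with_traceback(tb)

caught = None
try:
    try:
        1 / 0
    except ZeroDivisionError:
        reraise_current()
except ZeroDivisionError as err:
    caught = err
print(type(caught).__name__)  # -> ZeroDivisionError
```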

# ---- File: racoon/view/error/custom.py (repo: onukura/Racoon, license: MIT) ----
# -*- coding: utf-8 -*-
from flask import Blueprint, render_template
bp_error = Blueprint("bp_error", __name__, url_prefix="/error")
# Specific Error Handlers
@bp_error.route("/default")
def default():
    return render_template(
        "error/error_base.html",
        error_code=500,
        header_name="Error",
        error_message="We will work on fixing that right away.",
    )


@bp_error.route("/unauthorized")
def unauthorized():
    return render_template(
        "error/error_base.html",
        error_code=500,
        header_name="Unauthorized",
        error_message="Not allowed to access this content.",
    )

# ---- File: classes/Friend.py (repo: brandonwarech/book-tracker-capstone, license: Apache-2.0) ----
import logging
import sys
import classes.iDb as db
# Set Logging Level
logging.basicConfig(level=logging.INFO)
class Friend:
    def __init__(self, User, Friend):
        self.user_id = User.user_id
        self.friend_id = Friend.user_id

    def addFriend(self):
        pass

    def removeFriend(self):
        pass

    @staticmethod
    def getFriends(user_id):
        print(user_id)
        try:
            sql = "SELECT * FROM FRIEND WHERE USER1 = \'" + str(user_id) + "\' OR USER2 = \'" + str(user_id) + "'"
            # Calls database with constructed SQL from imported db class
            # favs = db.db.callDbFetch(sql)
            friends_query_obj = db.dbQuery(sql)
            friends = db.dbQuery.callDbFetch(friends_query_obj)
            # Log results of the DB call and return them
            logging.debug("successful connect to db2")
            logging.info("favorites response: " + str(friends))
            if friends != [False]:
                return friends
            else:
                return {
                    "statusCode": 400,
                    "headers": {"Content-Type": "application/json"},
                    "body": {"error": str(sql) + str(sys.exc_info())}
                }
        except Exception:
            logging.error("Oops! " + str(sys.exc_info()) + " occurred.")
            return {
                "statusCode": 400,
                "headers": {"Content-Type": "application/json"},
                "body": {"error": str(sql) + str(sys.exc_info())}
            }
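`getFriends` above builds its SQL by concatenating `user_id` into the query string, which is injection-prone. A hedged sketch of the same query through driver placeholders, using stdlib `sqlite3` in place of the project's `classes.iDb` wrapper (MySQLdb would use `%s` instead of `?`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FRIEND (USER1 TEXT, USER2 TEXT)")
conn.execute("INSERT INTO FRIEND VALUES ('alice', 'bob')")

def get_friends(conn, user_id):
    # The driver quotes user_id itself, so a crafted id cannot alter the query.
    sql = "SELECT * FROM FRIEND WHERE USER1 = ? OR USER2 = ?"
    return conn.execute(sql, (user_id, user_id)).fetchall()

print(get_friends(conn, "alice"))  # -> [('alice', 'bob')]
```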

# ---- File: setup.py (repo: ovod88/studentsdb, license: MIT) ----
from setuptools import find_packages, setup
setup(
    name='django-studentsdb-app',
    version='1.0',
    author=u'Vova Khrystenko',
    author_email='ovod88@bigmir.net',
    packages=find_packages(),
    license='BSD licence, see LICENCE.txt',
    description='Students DB application',
    long_description=open('README.txt').read(),
    zip_safe=False,
    include_package_data=True,
    package_data={
        'students': ['requirements.txt']
    },
)

# ---- File: Scripts-python/addRulesIptables.py (repo: Brotic66/Script-Python, license: CNRI-Python) ----
# coding=utf-8
'''
This script links a web application (PHP 5.6 with Symfony and Doctrine) to the
administration of a server, in particular its firewall (iptables).
It opens ports for IP addresses fetched from the database after they were added
through the web application.
'''
__author__ = 'brice VICO'

import os
import MySQLdb
import re

PORTNUMBER = {"Silvaco Enseignement": 1000, "Cadence Enseignement": 1001, "Synopsys Enseignement": 1002,
              "Memscap Enseignement": 1003, "Coventor Enseignement": 1004}

'''
Extracts a list of IP addresses into an array from a Doctrine ArrayCollection
element stored in the database (an odd serialized SQL array...).
'''
def getIps(data):
    listeIp = []
    match = re.search(r'.*a:\d+:{(.*)}}', data)
    listeNonParse = match.group(1)
    result = re.sub(r'i:\d+;s:\d+:', '', listeNonParse)
    result2 = re.search(r'"(.*?)";', result)
    while result != '':
        result2 = re.search(r'"(.*?)";', result)
        result = re.sub(r'("(.*?)";)?', '', result, 1)
        listeIp.append(result2.group(1))
    return listeIp

'''
Fetches the list of installations that have been validated by a manager through
the web application.
'''
def recupererInstallationsValide():
    db = MySQLdb.connect("localhost", "xxxxxx", "xxxxxx", "Crcc")
    cursor = db.cursor()
    query = "SELECT i.ips, l.nom FROM Installation i, Logiciel l WHERE valide = 1 AND l.id = i.logiciel_id"
    lines = cursor.execute(query)
    data = cursor.fetchall()
    db.close()
    return data

'''
Groups the lists of IP addresses by software name in a dictionary.
'''
def classerParLogiciel(data):
    listeFinale = {}
    for installation in data:
        if installation[1] in listeFinale:
            listeFinale[installation[1]].append(installation[0])
        else:
            listeFinale[installation[1]] = [installation[0]]
    return listeFinale

'''
########## Main ##########
'''
data = recupererInstallationsValide()
listeFinale = {}
if data:
    listeFinale = classerParLogiciel(data)
else:
    print "No data to process"

print os.popen("iptables -F").read()
for logiciel in listeFinale:
    print logiciel + ' : '
    for listeIp in listeFinale[logiciel]:
        ips = getIps(listeIp)
        for ip in ips:
            print os.popen("iptables -A INPUT -s " + ip + " -p tcp --dport " + str(
                PORTNUMBER[logiciel]) + " -j ACCEPT").read()
            print os.popen("iptables -A INPUT -s " + ip + " -p udp --dport " + str(
                PORTNUMBER[logiciel]) + " -j ACCEPT").read()
for logiciel in PORTNUMBER:
    print os.popen('iptables -A INPUT -p tcp --dport ' + str(PORTNUMBER[logiciel]) + " -j DROP").read()
    print os.popen('iptables -A INPUT -p udp --dport ' + str(PORTNUMBER[logiciel]) + " -j DROP").read()

# Keeps the iptables update across a reboot (power outage or otherwise...)
print os.popen('service iptables-persistent save').read()
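The `getIps` regexes above peel quoted values out of the serialized Doctrine string one at a time, mutating `result` on every pass. When only the quoted payloads matter, a single `re.findall` does the same job; a sketch on a made-up serialized sample (the exact Doctrine format may differ):

```python
import re

sample = 'a:2:{i:0;s:7:"1.2.3.4";i:1;s:7:"5.6.7.8";}'
# Non-greedy capture pulls each quoted value out in one pass.
ips = re.findall(r'"(.*?)"', sample)
print(ips)  # -> ['1.2.3.4', '5.6.7.8']
```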

# ---- File: yolov5/temp.py (repo: shuyansy/A-detection-and-recognition-pipeline-of-complex-meters-in-wild, license: MIT) ----
import os
import cv2
import numpy as np
from utils.augmentations import Albumentations, augment_hsv, copy_paste, letterbox
from models.common import DetectMultiBackend
from utils.datasets import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams
from utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr,
increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device, time_sync
# Load model
device='0'
weights='runs/train/exp6/weights/best.pt'
data='data/mydata.yaml'
device = select_device(device)
model = DetectMultiBackend(weights, device=device, dnn=False, data=data)
stride, names, pt, jit, onnx, engine = model.stride, model.names, model.pt, model.jit, model.onnx, model.engine
imgsz = check_img_size([640], s=stride) # check image size
path='/home/sy/ocr/datasets/all_meter_image/'
img_dir=os.listdir(path)
for i in img_dir:
    img = cv2.imread(path + i)
    # Padded resize
    img = letterbox(img, [640], stride=stride, auto=pt)[0]

    # Convert
    img = img.transpose((2, 0, 1))[::-1]  # HWC to CHW, BGR to RGB
    img = np.ascontiguousarray(img)
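`letterbox` resizes each image so it fits in a 640 px square while keeping the aspect ratio, then pads the remainder. The scale-and-pad arithmetic can be sketched without OpenCV (the real yolov5 helper additionally aligns the padding to the model stride and splits it evenly on both sides):

```python
def letterbox_dims(h, w, new=640):
    # Uniform scale so the longer side becomes `new`, then pad the short side.
    r = min(new / h, new / w)
    resized = (round(h * r), round(w * r))
    padding = (new - resized[0], new - resized[1])
    return resized, padding

print(letterbox_dims(480, 640))  # -> ((480, 640), (160, 0))
```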

# ---- File: common/migrations/0002_auto_20180722_1316.py (repo: Red-Teapot/bbyaworld.com-django, license: MIT) ----
# Generated by Django 2.0.7 on 2018-07-22 10:16
from django.db import migrations, models

class Migration(migrations.Migration):

    dependencies = [
        ('common', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='miscstorageentry',
            name='value',
            field=models.TextField(),
        ),
    ]

# ---- File: contrib/bulk_operations/metadata.py (repo: tzhaoredhat/automation, license: MIT) ----
#
# Copyright (c) 2015 Red Hat
# Licensed under The MIT License (MIT)
# http://opensource.org/licenses/MIT
#
"""
Use the provided metadata generator if you wish to support OPTIONS requests on
list url of resources that support bulk operations. The only difference from
the generator provided by REST Framework is that it does not try to check
object permissions when the request would be bulk update.
To use the class, add this to your settings:
REST_FRAMEWORK = {
'DEFAULT_METADATA_CLASS': 'contrib.bulk_operations.metadata.BulkMetadata'
}
"""
from django.core.exceptions import PermissionDenied
from django.http import Http404
from rest_framework import exceptions
from rest_framework import metadata
from rest_framework.request import clone_request
class BulkMetadata(metadata.SimpleMetadata):
"""
Simple wrapper around `SimpleMetadata` provided by REST Framework. This
class can handle views supporting bulk operations by not checking object
permissions on list URL.
"""
def determine_actions(self, request, view):
"""
For generic class based views we return information about the fields
that are accepted for 'PUT' and 'POST' methods.
This method expects that `get_object` may actually fail and gracefully
handles it.
Most of the code in this method is copied from the parent class.
"""
actions = {}
for method in set(['PUT', 'POST']) & set(view.allowed_methods):
view.request = clone_request(request, method)
try:
# Test global permissions
if hasattr(view, 'check_permissions'):
view.check_permissions(view.request)
# Test object permissions. This will fail on list url for
# resources supporting bulk operations. In such case
# permissions are not checked.
if method == 'PUT' and hasattr(view, 'get_object'):
try:
view.get_object()
except (AssertionError, KeyError):
pass
except (exceptions.APIException, PermissionDenied, Http404):
pass
else:
# If user has appropriate permissions for the view, include
# appropriate metadata about the fields that should be supplied.
serializer = view.get_serializer()
actions[method] = self.get_serializer_info(serializer)
finally:
view.request = request
return actions
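The request-swapping pattern in `determine_actions` — probe each allowed method with a cloned request and always restore the original request in `finally` — can be sketched standalone. `DummyView` and `probe_actions` below are hypothetical stand-ins for illustration, not DRF code:

```python
class DummyView:
    allowed_methods = ['GET', 'PUT', 'POST']

def probe_actions(view, request):
    actions = {}
    for method in {'PUT', 'POST'} & set(view.allowed_methods):
        view.request = (method, request)      # stand-in for clone_request()
        try:
            # permission checks and serializer introspection would go here
            actions[method] = 'serializer info for ' + method
        finally:
            view.request = request            # original request always restored
    return actions

view = DummyView()
actions = probe_actions(view, 'original-request')
print(sorted(actions))   # ['POST', 'PUT']
print(view.request)      # original-request
```

The `finally` clause is the important part: even if a permission check raises, the view is left holding its original request.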
| 37.710145 | 81 | 0.645273 | 300 | 2,602 | 5.533333 | 0.44 | 0.046988 | 0.016265 | 0.027711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00545 | 0.294773 | 2,602 | 68 | 82 | 38.264706 | 0.899183 | 0.485396 | 0 | 0.153846 | 0 | 0 | 0.029767 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 1 | 0.038462 | false | 0.076923 | 0.192308 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f4cc9923dfd51c7eac4621172705c16b6f05d784 | 878 | py | Python | test/py/t-searchmem.py | alexbudmsft/dbgscript | 76dc77109bbeb8f09a893e9dd56012ff8a4b601f | [
"PSF-2.0"
] | 27 | 2015-11-05T22:19:34.000Z | 2021-08-21T02:03:52.000Z | test/py/t-searchmem.py | alexbudmsft/dbgscript | 76dc77109bbeb8f09a893e9dd56012ff8a4b601f | [
"PSF-2.0"
] | null | null | null | test/py/t-searchmem.py | alexbudmsft/dbgscript | 76dc77109bbeb8f09a893e9dd56012ff8a4b601f | [
"PSF-2.0"
] | 2 | 2015-11-06T04:32:31.000Z | 2016-08-22T18:24:20.000Z | from utils import *
car = get_car()
# Positive cases. Can't print the result because the address may change
# from run to run.
#
dbgscript.search_memory(car['name'].address-16, 100, b'FooCar', 1)
dbgscript.search_memory(car['name'].address-16, 100, b'FooCar', 2)
# Negative cases.
#
# 4 is not a multiple of the pattern length.
#
try:
dbgscript.search_memory(car['name'].address-16, 100, b'FooCar', 4)
except ValueError:
print('Swallowed ValueError')
# Try a non-existent pattern.
#
try:
dbgscript.search_memory(car['name'].address-16, 100, b'AbcDefAb', 4)
except LookupError:
print('Swallowed LookupError')
# 3 is a multiple of the pat. len, but the pattern won't be found on a
# 3 byte granularity.
#
try:
dbgscript.search_memory(car['name'].address-16, 100, b'FooCar', 3)
except LookupError:
print('Swallowed LookupError')
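The constraints these cases exercise can be mirrored by a pure-Python stand-in. The `search_memory` below is a hypothetical model of the dbgscript call's contract, not its real implementation: the pattern length must be a multiple of the granularity, and matches are only tested at offsets on that granularity.

```python
def search_memory(buf, pattern, granularity):
    # Granularity must evenly divide the pattern length.
    if len(pattern) % granularity != 0:
        raise ValueError('pattern length not a multiple of granularity')
    # Only probe offsets on the granularity grid.
    for off in range(0, len(buf) - len(pattern) + 1, granularity):
        if buf[off:off + len(pattern)] == pattern:
            return off
    raise LookupError('pattern not found')

buf = b'xxFooCarxx'
print(search_memory(buf, b'FooCar', 2))   # 2
try:
    search_memory(buf, b'FooCar', 4)      # 6 is not a multiple of 4
except ValueError:
    print('Swallowed ValueError')
try:
    search_memory(buf, b'FooCar', 3)      # match sits at offset 2, off the 3-byte grid
except LookupError:
    print('Swallowed LookupError')
```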
| 25.085714 | 72 | 0.692483 | 131 | 878 | 4.59542 | 0.419847 | 0.124585 | 0.174419 | 0.199336 | 0.534884 | 0.395349 | 0.395349 | 0.395349 | 0.395349 | 0.395349 | 0 | 0.045643 | 0.176538 | 878 | 34 | 73 | 25.823529 | 0.786999 | 0.298405 | 0 | 0.4375 | 0 | 0 | 0.200351 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4d365edd139d60dd0919dc1d55ad6225c11b8e7 | 3,657 | py | Python | pysnmp/TPLINK-COMMANDER-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/TPLINK-COMMANDER-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/TPLINK-COMMANDER-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module TPLINK-COMMANDER-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/TPLINK-COMMANDER-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 21:16:55 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
ObjectIdentifier, Integer, OctetString = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "Integer", "OctetString")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsUnion, ValueRangeConstraint, ConstraintsIntersection, ValueSizeConstraint, SingleValueConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsUnion", "ValueRangeConstraint", "ConstraintsIntersection", "ValueSizeConstraint", "SingleValueConstraint")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
NotificationType, Unsigned32, MibScalar, MibTable, MibTableRow, MibTableColumn, TimeTicks, MibIdentifier, iso, IpAddress, ModuleIdentity, ObjectIdentity, Counter64, Bits, Integer32, Counter32, Gauge32 = mibBuilder.importSymbols("SNMPv2-SMI", "NotificationType", "Unsigned32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "TimeTicks", "MibIdentifier", "iso", "IpAddress", "ModuleIdentity", "ObjectIdentity", "Counter64", "Bits", "Integer32", "Counter32", "Gauge32")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
clusterManage, = mibBuilder.importSymbols("TPLINK-CLUSTER-MIB", "clusterManage")
clusterConfig = MibIdentifier((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2))
commanderConfig = MibIdentifier((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2, 4))
clusterName = MibScalar((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2, 1), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 16))).setMaxAccess("readonly")
if mibBuilder.loadTexts: clusterName.setStatus('current')
clusterHoldTime = MibScalar((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 255))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: clusterHoldTime.setStatus('current')
clusterIntervalTime = MibScalar((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 255))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: clusterIntervalTime.setStatus('current')
commanderClusterName = MibScalar((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2, 4, 1), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 16))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: commanderClusterName.setStatus('current')
clusterIp = MibScalar((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2, 4, 2), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: clusterIp.setStatus('current')
clusterIpMask = MibScalar((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2, 4, 3), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: clusterIpMask.setStatus('current')
clusterCommit = MibScalar((1, 3, 6, 1, 4, 1, 11863, 6, 33, 1, 1, 3, 2, 4, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1))).clone(namedValues=NamedValues(("commit", 1)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: clusterCommit.setStatus('current')
mibBuilder.exportSymbols("TPLINK-COMMANDER-MIB", commanderConfig=commanderConfig, clusterIntervalTime=clusterIntervalTime, clusterConfig=clusterConfig, clusterIp=clusterIp, clusterHoldTime=clusterHoldTime, clusterIpMask=clusterIpMask, clusterName=clusterName, clusterCommit=clusterCommit, commanderClusterName=commanderClusterName)
| 114.28125 | 477 | 0.76292 | 418 | 3,657 | 6.674641 | 0.277512 | 0.012903 | 0.009677 | 0.012903 | 0.459498 | 0.362724 | 0.326165 | 0.326165 | 0.278853 | 0.278853 | 0 | 0.079175 | 0.084769 | 3,657 | 31 | 478 | 117.967742 | 0.754407 | 0.091332 | 0 | 0 | 0 | 0 | 0.183464 | 0.013277 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.291667 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4d92ae8855426c9f556531456dc67c3b915c5e8 | 1,653 | py | Python | src/routers/user.py | momonoki1990/fastapi-todo-list | fffcc072ab1181ce63f163c2bf0614551b3e40ed | [
"MIT"
] | 1 | 2022-02-17T07:35:43.000Z | 2022-02-17T07:35:43.000Z | src/routers/user.py | momonoki1990/fastapi-todo-list-api | fffcc072ab1181ce63f163c2bf0614551b3e40ed | [
"MIT"
] | 2 | 2021-12-05T06:37:35.000Z | 2022-01-04T11:08:10.000Z | src/routers/user.py | momonoki1990/fastapi-todo-list | fffcc072ab1181ce63f163c2bf0614551b3e40ed | [
"MIT"
] | 1 | 2022-01-11T02:02:31.000Z | 2022-01-11T02:02:31.000Z | from typing import List
from fastapi import APIRouter, Depends, HTTPException, status
from fastapi.security import OAuth2PasswordRequestForm
from sqlalchemy.ext.asyncio import AsyncSession
from src.schema import user as user_schema, task as task_schema
from src.cruds import user as user_crud
from src.libs import authenticate
from src.db import get_db
router = APIRouter(prefix="", tags=["user"])
@router.post("/user", response_model=user_schema.Token)
async def register_user(
form_data: user_schema.UserCreate = Depends(),
db: AsyncSession = Depends(get_db)
):
form_data.password = authenticate.get_hashed_password(form_data.password)
user = await user_crud.create_user(db, form_data)
access_token = authenticate.create_access_token(user.username)
return {"access_token": access_token, "token_type": "bearer"}
@router.post("/token", response_model=user_schema.Token)
async def login_for_access_token(
form_data: OAuth2PasswordRequestForm = Depends(),
db: AsyncSession = Depends(get_db)
):
user = await authenticate.authenticate_user(db, form_data.username, form_data.password)
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Bearer"}
)
access_token = authenticate.create_access_token(user.username)
return {"access_token": access_token, "token_type": "bearer"}
@router.get("/users/me", response_model=user_schema.User)
async def read_users_me(current_user: user_schema.User = Depends(authenticate.get_current_active_user)):
return current_user | 42.384615 | 104 | 0.762855 | 215 | 1,653 | 5.623256 | 0.311628 | 0.081886 | 0.042184 | 0.057072 | 0.281224 | 0.281224 | 0.226634 | 0.16708 | 0.16708 | 0.16708 | 0 | 0.003524 | 0.141561 | 1,653 | 39 | 105 | 42.384615 | 0.848485 | 0 | 0 | 0.228571 | 0 | 0 | 0.079807 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.142857 | 0.228571 | 0 | 0.314286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
f4dc218ab525c8b2b47959492cb994fb65259ae5 | 13,238 | py | Python | isolates/download_accession_list.py | josl/ASM_challenge | f6bc31ab29d7589e259e1f3a2acbb613db6f03f3 | [
"Apache-2.0"
] | 2 | 2015-11-12T11:18:11.000Z | 2015-11-12T22:29:59.000Z | isolates/download_accession_list.py | josl/ASM_challenge | f6bc31ab29d7589e259e1f3a2acbb613db6f03f3 | [
"Apache-2.0"
] | null | null | null | isolates/download_accession_list.py | josl/ASM_challenge | f6bc31ab29d7589e259e1f3a2acbb613db6f03f3 | [
"Apache-2.0"
] | 1 | 2015-11-10T16:10:36.000Z | 2015-11-10T16:10:36.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This is a skeleton file that can serve as a starting point for a Python
console script. To run this script uncomment the following line in the
entry_points section in setup.cfg:
Then run `python setup.py install` which will install the command `download`
inside your current environment.
Besides console scripts, the header (i.e. until _logger...) of this file can
also be used as template for Python modules.
Note: This skeleton file can be safely removed if not needed!
"""
from __future__ import division, print_function, absolute_import
import os
import sys
import json
import argparse
from shutil import move
from progressbar import Bar, Percentage, ProgressBar, ETA
from isolates import __version__, TemporaryDirectory
from isolates.log import _logger
from isolates.metadata import (ExtractExperimentMetadata,
ExtractExperimentIDs_acc)
from isolates.sequence import Sequence
from isolates.source import acctypes
__author__ = "Jose Luis Bellod Cisneros"
__coauthor__ = "Martin C F Thomsen"
__copyright__ = "Jose Luis Bellod Cisneros"
__license__ = "none"
def parse_args_accessions(args):
"""
Parse command line parameters
:param args: command line parameters as list of strings
:return: command line parameters as :obj:`argparse.Namespace`
"""
parser = argparse.ArgumentParser(
description="Download script of isolates from" +
"ENA taxonomy or Accession list")
parser.add_argument(
'--version',
action='version',
version='isolates {ver}'.format(ver=__version__))
parser.add_argument(
'-a',
nargs=1,
metavar=('PATH'),
help='Format: [PATH]\n' +
'to file containing list of ACCESSION IDs, 1 per line\n' +
'Name of the file is used to identify the isolates downloaded.'
)
parser.add_argument(
'-m',
nargs=1,
type=argparse.FileType('r'),
metavar=('METADATA'),
default=None,
help='JSON file with seed attributes and mandatory fields\n'
)
parser.add_argument(
'-out',
nargs=1,
metavar=('OUTPUT'),
required=True,
help='Path to save isolates'
)
parser.add_argument(
'-p',
'--preserve',
action="store_true",
dest="preserve",
default=False,
help='preserve any existing SRA and fastq files\n'
)
parser.add_argument(
'--all_runs_as_samples',
action="store_true",
dest="all_runs_as_samples",
default=False,
help=('Treat all runs associated to a sample as separate samples. '
'Default is to combine them into one run.\n')
)
parser.add_argument(
'--skip_files',
action="store_true",
dest="skip_files",
default=False,
        help=('Skip downloading sequence files; only the sample '
              'directories and metadata files are created.\n')
)
return parser.parse_args(args)
def DownloadRunFiles(runid, tmpdir):
# Download run files
try:
s = Sequence(runid, tmpdir)
s.download_fastq()
if not s.error:
_logger.info("Downloaded files: %s", ','.join(s.files))
return s.files
else: return None
    except ValueError as e:
_logger.error(e)
return None
def CreateSampleDir(sfiles, m, sample_dir, preserve=False, skip_files=False):
sample_dir = str(sample_dir)
if not skip_files and len(sfiles) == 0:
_logger.error("Error: No files were found! (%s)", sample_dir)
return False
if not os.path.exists(sample_dir):
_logger.info("Create sample dir: %s", sample_dir)
# Create 'sample' dir
os.mkdir(sample_dir)
# Move files from tmpdir to sample dir
for sf in sfiles: move(sf, sample_dir)
elif not preserve and not skip_files:
# Empty sample directory
for fn in os.listdir(sample_dir):
os.unlink("%s/%s"%(sample_dir, fn))
# Move files from tmpdir to sample dir
for sf in sfiles: move(sf, sample_dir)
# Update and create metadata file
try:
m.metadata["file_names"] = ' '.join(
[os.path.basename(sf).replace(' ','_')
for sf in sfiles
if not os.path.basename(sf) == 'meta.json']
)
m.save_metadata(sample_dir)
    except ValueError as e:
_logger.error(e)
return False
else:
return True
def download_fastq_from_list(accession_list, output, json, preserve=False, all_runs_as_samples=False, skip_files=False):
"""
Get Fastq from list of IDs
:param accession_list: List of accessions
:param dir: Output folder
"""
metadata = []
cwd = os.getcwd()
with open(accession_list, 'r') as f:
# Setup batch dir
batch_dir = "%s/%s/"%(cwd, output)
if not os.path.exists(batch_dir): os.mkdir(batch_dir)
os.chdir(batch_dir)
# Set logging
_logger.Set(filename="%s/download-acceession-list.log"%batch_dir)
# Count samples in accession_list
n_samples = sum(1 for l in f)
f.seek(0)
_logger.info("Number of samples to download: %s", n_samples)
# Start progress bar
pbar = ProgressBar(
widgets = [ETA(), ' - ', Percentage(), ' : ', Bar()],
maxval = n_samples
).start()
pbar.update(0)
failed_accession = []
sample_dir_id = 0
for i, l in enumerate(f):
accession = l.strip()
if accession == '': continue
# Determine accession type
if accession[:3] in acctypes:
accession_type = acctypes[accession[:3]]
else:
_logger.error("unknown accession type for '%s'!", accession)
failed_accession.append(accession)
continue
_logger.info("Acc Found: %s (%s)", accession, accession_type)
            if accession_type in ['study', 'sample']:
                for experiment_id in ExtractExperimentIDs_acc(accession):
                    sample_dir_id = ProcessExperiment(
                        experiment_id, json, batch_dir, sample_dir_id, preserve,
                        failed_accession, all_runs_as_samples, skip_files)
            elif accession_type in ['experiment', 'run']:
                sample_dir_id = ProcessExperiment(
                    accession, json, batch_dir, sample_dir_id, preserve,
                    failed_accession, all_runs_as_samples, skip_files)
pbar.update(i)
pbar.finish()
if failed_accession:
_logger.info("The following accessions were not downloaded!")
_logger.info('\n'.join(failed_accession))
else:
_logger.info("All accessions downloaded succesfully!")
def ProcessExperiment(experiment_id, json, batch_dir, sample_dir_id, preserve, failed_accession, all_runs_as_samples, skip_files=False):
_logger.info("Processing %s...", experiment_id)
if all_runs_as_samples:
sample_dir_id = ProcessExperimentSeparate(
experiment_id, json, batch_dir, sample_dir_id,
preserve, failed_accession, skip_files)
else:
sample_dir_id = ProcessExperimentCombined(
experiment_id, json, batch_dir, sample_dir_id,
preserve, failed_accession, skip_files)
return sample_dir_id
def ProcessExperimentSeparate(experiment_id, json, batch_dir, sample_dir_id, preserve, failed_accession, skip_files=False):
m = ExtractExperimentMetadata(experiment_id, json)
if m.valid_metadata():
# Check if a run ID was submitted, and if so only process that
if experiment_id in m.runIDs: m.runIDs = [experiment_id]
# Process the runIDs as samples
_logger.info("Found Following Runs: %s", ', '.join(m.runIDs))
for runid in m.runIDs:
with TemporaryDirectory() as tmpdir:
os.chdir(batch_dir)
sample_dir = "%s/%s/"%(batch_dir, sample_dir_id)
if os.path.exists(sample_dir):
sfiles = [x for x in os.listdir(sample_dir) if any([y in x for y in ['fq','fastq']])]
else:
sfiles = []
if not preserve or not skip_files or len(sfiles) == 0:
sfiles = DownloadRunFiles(runid, tmpdir)
if sfiles is not None:
success = CreateSampleDir(sfiles, m, sample_dir, preserve, skip_files)
if success:
sample_dir_id += 1
else:
failed_accession.append(runid)
else:
_logger.error("Files could not be retrieved! (%s)", runid)
failed_accession.append(runid)
else:
_logger.error("Metadata Invalid! (%s) - %s", experiment_id, m.metadata.items())
failed_accession.append(experiment_id)
return sample_dir_id
def ProcessExperimentCombined(experiment_id, json, batch_dir, sample_dir_id, preserve, failed_accession, skip_files=False):
m = ExtractExperimentMetadata(experiment_id, json)
if m.valid_metadata():
# Check if a run ID was submitted, and if so only process that
if experiment_id in m.runIDs: m.runIDs = [experiment_id]
# Process the runs as one sample
_logger.info("Found Following Runs: %s", ', '.join(m.runIDs))
with TemporaryDirectory() as tmpdir:
os.chdir(batch_dir)
sample_dir = "%s/%s/"%(batch_dir, sample_dir_id)
csfiles = []
if preserve and os.path.exists(sample_dir):
csfiles = [x for x in os.listdir(sample_dir) if any([y in x for y in ['fq','fastq']])]
if csfiles == [] and not skip_files:
sfiles = []
for runid in m.runIDs:
sf = DownloadRunFiles(runid, tmpdir)
if sf is not None:
sfiles.append(sf)
else:
_logger.error("Run files could not be retrieved! (%s)",
runid)
_logger.info("Found Following files sets:\n%s\n",
'\n'.join([', '.join(sf) for sf in sfiles]))
# Combine sfiles into one entry
if len(sfiles) > 1:
for file_no, file_set in enumerate(zip(*sfiles)):
ext = '.'.join(file_set[0].split('/')[-1].split('.')[1:])
if len(sfiles[0]) > 1:
new_file = "%s_%s.combined.%s"%(experiment_id,file_no+1, ext)
else:
new_file = "%s.combined.%s"%(experiment_id, ext)
with open(new_file, 'w') as nf:
for fn in file_set:
with open(fn, 'rb') as f:
nf.write(f.read())
if os.path.exists(new_file):
csfiles.append(new_file)
else:
_logger.error("Combined file creation failed! (%s: %s)",
experiment_id, file_no)
break
elif isinstance(sfiles[0], list):
csfiles = sfiles[0]
if csfiles == []:
_logger.error("Files could not be combined! (%s)",
experiment_id)
failed_accession.append(experiment_id)
if csfiles != [] or skip_files:
success = CreateSampleDir(csfiles, m, sample_dir, preserve, skip_files)
if success:
sample_dir_id += 1
else:
failed_accession.append(experiment_id)
else:
_logger.error("Files could not be retrieved! (%s)",
experiment_id)
failed_accession.append(experiment_id)
else:
_logger.error("Metadata Invalid! (%s) - %s", experiment_id, m.metadata.items())
failed_accession.append(experiment_id)
return sample_dir_id
def download_accession_list():
args = parse_args_accessions(sys.argv[1:])
if args.a is not None:
if args.m is not None:
try:
default = json.load(args.m[0])
except ValueError as e:
print("ERROR: Json file has the wrong format!\n", e)
exit()
else:
default = None
download_fastq_from_list(args.a[0], args.out[0], default, args.preserve, args.all_runs_as_samples, args.skip_files)
else:
        print('Usage: -a PATH -out PATH [-m JSON]')
if __name__ == "__main__":
download_accession_list()
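The dispatch at the top of `download_fastq_from_list` keys on the first three characters of each accession. A toy version of that lookup is below; the mapping is an illustrative subset of ENA/SRA prefix conventions, not the real `isolates.source.acctypes`:

```python
# Hypothetical subset of the prefix-to-type mapping for illustration only.
acctypes = {'SRR': 'run', 'ERR': 'run', 'SRX': 'experiment',
            'SRS': 'sample', 'SRP': 'study'}

def classify(accession):
    # Unknown prefixes yield None; the real script logs and skips them.
    return acctypes.get(accession[:3])

print(classify('SRR1234567'))  # run
print(classify('SRP0000001'))  # study
print(classify('XYZ0000001'))  # None
```

Studies and samples fan out into their experiments, while experiment and run accessions are processed directly.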
| 40.48318 | 136 | 0.5769 | 1,555 | 13,238 | 4.723473 | 0.186495 | 0.05514 | 0.03145 | 0.027774 | 0.37808 | 0.347992 | 0.336283 | 0.317767 | 0.303744 | 0.282914 | 0 | 0.003143 | 0.327013 | 13,238 | 326 | 137 | 40.607362 | 0.821304 | 0.03981 | 0 | 0.354244 | 0 | 0 | 0.132886 | 0.004362 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.04428 | null | null | 0.01107 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4dfb6b1e34d706321fd96235fbfa3fb0950437c | 2,977 | py | Python | blogproject/blog/views.py | MrWolffy/django-tutorial | 4b00e35092d47e9a04a7019c3f803c5b09630ec6 | [
"MIT"
] | null | null | null | blogproject/blog/views.py | MrWolffy/django-tutorial | 4b00e35092d47e9a04a7019c3f803c5b09630ec6 | [
"MIT"
] | null | null | null | blogproject/blog/views.py | MrWolffy/django-tutorial | 4b00e35092d47e9a04a7019c3f803c5b09630ec6 | [
"MIT"
] | null | null | null | import re
import markdown
from django.contrib import messages
from django.db.models import Q
from django.shortcuts import render, get_object_or_404, redirect
from django.utils.text import slugify
from django.views.generic import ListView, DetailView
from markdown.extensions.toc import TocExtension
from pure_pagination.mixins import PaginationMixin
from .models import Post, Category, Tag
class IndexView(PaginationMixin, ListView):
model = Post
template_name = 'blog/index.html'
context_object_name = 'post_list'
    # Setting paginate_by enables pagination; its value is the number of posts per page
paginate_by = 10
class PostDetailView(DetailView):
    # These attributes mean the same as in ListView
model = Post
template_name = 'blog/detail.html'
context_object_name = 'post'
def get(self, request, *args, **kwargs):
        # get is overridden because every visit to a post should increase
        # its view count by 1. get returns an HttpResponse instance.
        # The parent get must run first: only after it does self.object
        # exist, holding the Post model instance for the post being visited.
response = super(PostDetailView, self).get(request, *args, **kwargs)
        # Increase the view count by 1
        # Note: self.object is the post being visited
self.object.increase_views()
        # A view must return an HttpResponse object
return response
def get_object(self, queryset=None):
        # get_object is overridden because the post body needs to be rendered
post = super().get_object(queryset=None)
md = markdown.Markdown(extensions=[
'markdown.extensions.extra',
'markdown.extensions.codehilite',
            # Remember to import TocExtension and slugify at the top
TocExtension(slugify=slugify),
])
post.body = md.convert(post.body)
m = re.search(r'<div class="toc">\s*<ul>(.*)</ul>\s*</div>', md.toc, re.S)
post.toc = m.group(1) if m is not None else ''
return post
class ArchiveView(IndexView):
def get_queryset(self):
year = self.kwargs.get('year')
month = self.kwargs.get('month')
return super(ArchiveView, self).get_queryset().filter(created_time__year=year,
created_time__month=month)
class CategoryView(IndexView):
def get_queryset(self):
cate = get_object_or_404(Category, pk=self.kwargs.get('pk'))
return super(CategoryView, self).get_queryset().filter(category=cate)
class TagView(ListView):
model = Tag
template_name = 'blog/index.html'
context_object_name = 'post_list'
def get_queryset(self):
t = get_object_or_404(Tag, pk=self.kwargs.get('pk'))
return super(TagView, self).get_queryset().filter(tags=t)
def search(request):
q = request.GET.get('q')
if not q:
error_msg = "请输入搜索关键词"
messages.add_message(request, messages.ERROR, error_msg, extra_tags='danger')
return redirect('blog:index')
post_list = Post.objects.filter(Q(title__icontains=q) | Q(body__icontains=q))
return render(request, 'blog/index.html', {'post_list': post_list})
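The TOC extraction in `PostDetailView.get_object` can be exercised standalone: the regex pulls the inner `<ul>` contents out of the wrapper `<div class="toc">` that python-markdown's TOC extension emits. The sample HTML below is illustrative:

```python
import re

toc_html = '<div class="toc">\n<ul>\n<li><a href="#intro">Intro</a></li>\n</ul>\n</div>'
# re.S lets '.' span the newlines inside the list.
m = re.search(r'<div class="toc">\s*<ul>(.*)</ul>\s*</div>', toc_html, re.S)
toc = m.group(1) if m is not None else ''
print(toc.strip())  # <li><a href="#intro">Intro</a></li>
```

If the post has no headings, markdown renders an empty TOC wrapper, the match fails, and `post.toc` falls back to the empty string.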
| 31.336842 | 88 | 0.664091 | 373 | 2,977 | 5.16622 | 0.343164 | 0.028023 | 0.026985 | 0.021796 | 0.139595 | 0.080955 | 0.080955 | 0.051894 | 0.051894 | 0.051894 | 0 | 0.006108 | 0.230097 | 2,977 | 94 | 89 | 31.670213 | 0.834642 | 0.11824 | 0 | 0.152542 | 0 | 0 | 0.08694 | 0.035236 | 0 | 0 | 0 | 0 | 0 | 1 | 0.101695 | false | 0 | 0.169492 | 0 | 0.644068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f4e45601dde17142d5458f76f7251eb01214f0cd | 1,921 | py | Python | hashing/pdq/python/pdqhashing/utils/matrix.py | larrycameron80/ThreatExchange | 00f9c140360fd6189e2be7de4ad680d474cbeebb | [
"BSD-3-Clause"
] | 1 | 2021-10-11T21:43:04.000Z | 2021-10-11T21:43:04.000Z | preprocess/third_party/pdqhashing/utils/matrix.py | vegetable68/sbb | 5949632fbd95a9dd6f40fca806b9a9d56b41652a | [
"CC0-1.0"
] | null | null | null | preprocess/third_party/pdqhashing/utils/matrix.py | vegetable68/sbb | 5949632fbd95a9dd6f40fca806b9a9d56b41652a | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python
class MatrixUtil:
@classmethod
def allocateMatrix(cls, numRows, numCols):
rv = [0.0] * numRows
for i in range(numRows):
rv[i] = [0.0] * numCols
return rv
@classmethod
def allocateMatrixAsRowMajorArray(cls, numRows, numCols):
return [0.0] * numRows * numCols
@classmethod
def torben(cls, m, numRows, numCols):
n = numRows * numCols
midn = int((n + 1) / 2)
less = int()
greater = int()
equal = int()
min = float()
max = float()
guess = float()
maxltguess = float()
mingtguess = float()
min = max = m[0][0]
for i in range(numRows):
for j in range(numCols):
v = m[i][j]
if v < min:
min = v
if v > max:
max = v
while True:
guess = float((min + max) / 2)
less = 0
greater = 0
equal = 0
maxltguess = min
mingtguess = max
for _i in range(numRows):
for _j in range(numCols):
v = m[_i][_j]
if v < guess:
less += 1
if v > maxltguess:
maxltguess = v
elif v > guess:
greater += 1
if v < mingtguess:
mingtguess = v
else:
equal += 1
if less <= midn and greater <= midn:
break
elif less > greater:
max = maxltguess
else:
min = mingtguess
if less >= midn:
return maxltguess
elif less + equal >= midn:
return guess
else:
return mingtguess
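A quick reference check of `torben`'s contract on an illustrative 3x3 matrix (not from the original tests): for odd n it returns the ((n+1)//2)-th smallest element, i.e. the true median, without ever sorting the matrix. The sort-based helper below is only the reference behavior, not the Torben algorithm itself.

```python
def sorted_median(m):
    flat = sorted(v for row in m for v in row)
    return flat[(len(flat) - 1) // 2]   # lower median for even n

m = [[9.0, 1.0, 8.0], [2.0, 7.0, 3.0], [6.0, 4.0, 5.0]]
print(sorted_median(m))   # 5.0 -- MatrixUtil.torben(m, 3, 3) converges to the same value
```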
| 27.442857 | 61 | 0.401353 | 182 | 1,921 | 4.214286 | 0.247253 | 0.091265 | 0.023468 | 0.043025 | 0.135593 | 0.112125 | 0.112125 | 0.112125 | 0.112125 | 0.112125 | 0 | 0.01826 | 0.515357 | 1,921 | 69 | 62 | 27.84058 | 0.805585 | 0.010411 | 0 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048387 | false | 0 | 0 | 0.016129 | 0.145161 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4f8ac48bcc18b69231760309200fe77cd389fd6 | 2,404 | py | Python | napari_svg/_tests/test_get_writer.py | nclack/napari-svg | 0e222857c3a61befe1ea4b4e97fd48b285877d02 | [
"BSD-3-Clause"
] | 1 | 2020-04-13T12:20:00.000Z | 2020-04-13T12:20:00.000Z | napari_svg/_tests/test_get_writer.py | nclack/napari-svg | 0e222857c3a61befe1ea4b4e97fd48b285877d02 | [
"BSD-3-Clause"
] | 13 | 2020-04-26T04:27:12.000Z | 2021-12-17T16:56:46.000Z | napari_svg/_tests/test_get_writer.py | nclack/napari-svg | 0e222857c3a61befe1ea4b4e97fd48b285877d02 | [
"BSD-3-Clause"
] | 8 | 2020-04-19T21:47:37.000Z | 2022-01-25T16:39:01.000Z | import os
import numpy as np
import pytest
from napari_svg import napari_get_writer
from napari.layers import Image, Labels, Points, Shapes, Vectors
@pytest.fixture
def layer_data_and_types():
np.random.seed(0)
layers = [
Image(np.random.rand(20, 20)),
Labels(np.random.randint(10, size=(20, 2))),
Points(np.random.rand(20, 2)),
Shapes(np.random.rand(10, 2, 2)),
Vectors(np.random.rand(10, 2, 2)),
]
layer_data = [l.as_layer_data_tuple() for l in layers]
layer_types = [ld[2] for ld in layer_data]
return layer_data, layer_types
def test_get_writer(tmpdir, layer_data_and_types):
"""Test writing layers data."""
layer_data, layer_types = layer_data_and_types
path = os.path.join(tmpdir, 'layers_file.svg')
writer = napari_get_writer(path, layer_types)
assert writer is not None
# Check file does not exist
assert not os.path.isfile(path)
# Write data
return_path = writer(path, layer_data)
assert return_path == path
# Check file now exists
assert os.path.isfile(path)
def test_get_writer_no_extension(tmpdir, layer_data_and_types):
"""Test writing layers data with no extension."""
layer_data, layer_types = layer_data_and_types
path = os.path.join(tmpdir, 'layers_file')
writer = napari_get_writer(path, layer_types)
assert writer is not None
# Check file does not exist
assert not os.path.isfile(path)
# Write data
return_path = writer(path, layer_data)
assert return_path == path + '.svg'
# Check file now exists
assert os.path.isfile(path + '.svg')
def test_get_writer_bad_extension(tmpdir, layer_data_and_types):
"""Test not writing layers data with bad extension."""
layer_data, layer_types = layer_data_and_types
path = os.path.join(tmpdir, 'layers_file.csv')
writer = napari_get_writer(path, layer_types)
assert writer is None
# Check file does not exist
assert not os.path.isfile(path)
def test_get_writer_bad_layer_types(tmpdir):
"""Test not writing layers data with bad extension."""
layer_types = ['image', 'points', 'bad_type']
path = os.path.join(tmpdir, 'layers_file.svg')
writer = napari_get_writer(path, layer_types)
assert writer is None
# Check file does not exist
assert not os.path.isfile(path)
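The extension handling these tests assert can be sketched in plain Python. This is a simplified stand-in for the writer's path normalization as the tests describe it, not napari-svg's actual code:

```python
import os

def normalize_svg_path(path):
    root, ext = os.path.splitext(path)
    if ext == '':
        return path + '.svg'        # no extension: append .svg
    if ext.lower() != '.svg':
        return None                 # bad extension: refuse to write
    return path

print(normalize_svg_path('layers_file.svg'))  # layers_file.svg
print(normalize_svg_path('layers_file'))      # layers_file.svg
print(normalize_svg_path('layers_file.csv'))  # None
```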
| 26.417582 | 64 | 0.682196 | 356 | 2,404 | 4.390449 | 0.171348 | 0.092131 | 0.053743 | 0.076136 | 0.730006 | 0.715931 | 0.695457 | 0.669226 | 0.648752 | 0.492642 | 0 | 0.011715 | 0.218802 | 2,404 | 90 | 65 | 26.711111 | 0.820554 | 0.140599 | 0 | 0.395833 | 0 | 0 | 0.040726 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.104167 | false | 0 | 0.104167 | 0 | 0.229167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f4fd4b6da503b9271301852273d340f511178dff | 505 | py | Python | button.py | anas652/button | ce1436332f491815e1a0e760c725589372d3c0cb | [
"MIT"
] | null | null | null | button.py | anas652/button | ce1436332f491815e1a0e760c725589372d3c0cb | [
"MIT"
] | null | null | null | button.py | anas652/button | ce1436332f491815e1a0e760c725589372d3c0cb | [
"MIT"
] | null | null | null | n kjhimport Tkinter
window = Tkinter.Tk()
button = Tkinter.Button(window, text="do not press this because i will kill you.", width=40)
button.pack(padx=10, pady=10)
clickCount=0
def onClick(event):
global clickCount
clickCount = clickCount + 1
if clickCount == 1:
button.configure(text="seriously? do. not. press. it.")
elif clickCount == 2:
button.configure(text="gah! Next time, no more button.")
else:
button.pack_forget()
button.bind("<ButtonRelease-1>", onClick)
window.mainloop() | 20.2 | 92 | 0.716832 | 72 | 505 | 5.013889 | 0.625 | 0.027701 | 0.055402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025522 | 0.146535 | 505 | 25 | 93 | 20.2 | 0.812065 | 0 | 0 | 0 | 0 | 0 | 0.237154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
760122620b3c96cc231d8b54ec99cb4d4690794d | 380 | py | Python | setup.py | edjdavid/aiml | 6035cf3575137a8022fd373b8be9cfe16ee4ec61 | [
"Apache-2.0"
] | null | null | null | setup.py | edjdavid/aiml | 6035cf3575137a8022fd373b8be9cfe16ee4ec61 | [
"Apache-2.0"
] | null | null | null | setup.py | edjdavid/aiml | 6035cf3575137a8022fd373b8be9cfe16ee4ec61 | [
"Apache-2.0"
] | null | null | null | from setuptools import setup, find_packages
setup(name='aiml',
      version='1.0',
      description='ML Automation',
      author='MSDS ML',
      author_email='edjdavid@users.noreply.github.com',
      packages=find_packages(),
      install_requires=[
          'numpy',
          'pandas',
          'matplotlib',
          'scikit-learn',
          'tqdm'
      ],
)
| 22.352941 | 55 | 0.55 | 36 | 380 | 5.694444 | 0.833333 | 0.117073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007692 | 0.315789 | 380 | 16 | 56 | 23.75 | 0.780769 | 0 | 0 | 0 | 0 | 0 | 0.255263 | 0.086842 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
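The setup.py above relies on `find_packages()` to discover the package tree automatically: any directory containing an `__init__.py` is picked up. A small throwaway demonstration (the `aiml/models` layout is made up for illustration):

```python
import os
import tempfile
from setuptools import find_packages

# Build a disposable package tree: aiml/ and aiml/models/, each with __init__.py.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "aiml", "models"))
for d in ("aiml", os.path.join("aiml", "models")):
    open(os.path.join(root, d, "__init__.py"), "w").close()

# find_packages walks the tree and returns dotted package names.
packages = sorted(find_packages(where=root))
```

Directories without an `__init__.py` would be skipped, which is why `find_packages()` is preferred over hand-maintaining a `packages=` list.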
76052e516679e30b2741000942a893a95667ab5b | 1,730 | py | Python | classes/strategies.py | noaillypau/Polyvalent_Backtester_3 | 19c11014300b5cf629c037cc5a6b18123237647f | [
"MIT"
] | 2 | 2021-04-30T21:36:29.000Z | 2021-06-10T23:34:38.000Z | classes/strategies.py | noaillypau/Polyvalent_Backtester_3 | 19c11014300b5cf629c037cc5a6b18123237647f | [
"MIT"
] | null | null | null | classes/strategies.py | noaillypau/Polyvalent_Backtester_3 | 19c11014300b5cf629c037cc5a6b18123237647f | [
"MIT"
] | null | null | null | import numpy as np, pandas as pd, json, os, datetime, time
from order import Order
class Strategies():
    def __init__(self):
        self._dict = {}

    def add(self, name, strategy):
        self._dict[name] = strategy

    def compute(self, arrs, index):
        list_order = []
        for name, strategy in self._dict.items():
            list_order = list_order + strategy.compute(arrs, index)
        return list_order

    def get_dic_symbol_strategy(self):
        dic_strategy = {}
        for name, strategy in self._dict.items():
            if strategy.symbol not in dic_strategy:
                dic_strategy[strategy.symbol] = []
            dic_strategy[strategy.symbol].append(strategy)
        return dic_strategy

    def __repr__(self):
        txt = 'Strategies:\n'
        for key, item in self._dict.items():
            txt += f'\nName: {key}\n{item}\n'
        return txt
'''
Strategy object

Create a strategy object with:
    trigger: function of a dict of numpy arrays (datas._dic) and an index; determines whether
        to send orders or not depending on the index, returning the list of orders to be sent
    symbol: string (or list of strings) setting which asset the orders will be passed on
    params: parameters forwarded to the trigger function
'''
class Strategy():
    def __init__(self, trigger, symbol, params):
        self.trigger = trigger
        self.params = params
        self.symbol = symbol

    def compute(self, arrs, index):
        return self.trigger(arrs, self.params, self.symbol, index)

    def __repr__(self):
        txt = f'Strategy on {self.symbol} with params:'
        for key, item in self.params.items():
            txt += f'\n {key}: {item}'
        return txt
| 28.833333 | 126 | 0.602312 | 223 | 1,730 | 4.511211 | 0.32287 | 0.039761 | 0.029821 | 0.044732 | 0.166998 | 0.059642 | 0.059642 | 0 | 0 | 0 | 0 | 0 | 0.302312 | 1,730 | 59 | 127 | 29.322034 | 0.833471 | 0 | 0 | 0.222222 | 0 | 0 | 0.067407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.055556 | 0.027778 | 0.472222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
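The Strategy/Strategies pair above is a simple composite: each Strategy wraps a trigger callable, and Strategies fans a single `compute(arrs, index)` call out to every registered strategy and concatenates the resulting order lists. A condensed, self-contained sketch of the pattern with a toy trigger (the price data, threshold, and tuple-shaped "orders" are made up for illustration; the real code returns Order objects):

```python
class Strategy:
    def __init__(self, trigger, symbol, params):
        self.trigger = trigger
        self.symbol = symbol
        self.params = params

    def compute(self, arrs, index):
        return self.trigger(arrs, self.params, self.symbol, index)


class Strategies:
    def __init__(self):
        self._dict = {}

    def add(self, name, strategy):
        self._dict[name] = strategy

    def compute(self, arrs, index):
        # concatenate the orders produced by every registered strategy
        orders = []
        for strategy in self._dict.values():
            orders += strategy.compute(arrs, index)
        return orders


def buy_below(arrs, params, symbol, index):
    # toy trigger: buy when the price dips under a threshold
    if arrs[symbol][index] < params['threshold']:
        return [('buy', symbol)]
    return []


strategies = Strategies()
strategies.add('dip_buyer', Strategy(buy_below, 'BTC', {'threshold': 100}))
prices = {'BTC': [120, 90, 110]}
```

Because each trigger only sees `(arrs, params, symbol, index)`, new strategies can be added without the backtester loop changing at all.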
760878fec74dfca0b18e914746679e0b6733291a | 411 | py | Python | __findLengthsNoEmptyStrings.py | simdevex/01.Basics | cf4f372384e66f4b26e4887d2f5d815a1f8e929c | [
"MIT"
] | null | null | null | __findLengthsNoEmptyStrings.py | simdevex/01.Basics | cf4f372384e66f4b26e4887d2f5d815a1f8e929c | [
"MIT"
] | null | null | null | __findLengthsNoEmptyStrings.py | simdevex/01.Basics | cf4f372384e66f4b26e4887d2f5d815a1f8e929c | [
"MIT"
] | null | null | null | #License: https://bit.ly/3oLErEI
def test(strs):
    return [*map(len, strs)]
strs = ['cat', 'car', 'fear', 'center']
print("Original strings:")
print(strs)
print("Lengths of the said list of non-empty strings:")
print(test(strs))
strs = ['cat', 'dog', 'shatter', 'donut', 'at', 'todo', '']
print("\nOriginal strings:")
print(strs)
print("Lengths of the said list of non-empty strings:")
print(test(strs))
| 27.4 | 60 | 0.652068 | 60 | 411 | 4.466667 | 0.516667 | 0.179104 | 0.08209 | 0.156716 | 0.529851 | 0.529851 | 0.529851 | 0.529851 | 0.529851 | 0.529851 | 0 | 0.002809 | 0.13382 | 411 | 14 | 61 | 29.357143 | 0.75 | 0.075426 | 0 | 0.5 | 0 | 0 | 0.443272 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0.083333 | 0.166667 | 0.666667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
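Note that `test` above maps `len` over every element, so the empty string in the second list still contributes a 0 even though the printed label says "non-empty strings". If the intent (per the filename) is to skip empty strings entirely, a filtering variant might look like this (a sketch, not from the original file):

```python
def lengths_non_empty(strs):
    # drop empty strings before measuring, per the filename's intent
    return [len(s) for s in strs if s]
```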
7615add91618b64986aa97f7ee8e80579bbbb020 | 1,263 | py | Python | JPS_NLP/python/caresjpsnlq/NLQ_Chunker.py | mdhillmancmcl/TheWorldAvatar-CMCL-Fork | 011aee78c016b76762eaf511c78fabe3f98189f4 | [
"MIT"
] | 21 | 2021-03-08T01:58:25.000Z | 2022-03-09T15:46:16.000Z | JPS_NLP/python/caresjpsnlq/NLQ_Chunker.py | mdhillmancmcl/TheWorldAvatar-CMCL-Fork | 011aee78c016b76762eaf511c78fabe3f98189f4 | [
"MIT"
] | 63 | 2021-05-04T15:05:30.000Z | 2022-03-23T14:32:29.000Z | JPS_NLP/python/caresjpsnlq/NLQ_Chunker.py | mdhillmancmcl/TheWorldAvatar-CMCL-Fork | 011aee78c016b76762eaf511c78fabe3f98189f4 | [
"MIT"
] | 15 | 2021-03-08T07:52:03.000Z | 2022-03-29T04:46:20.000Z | import NLQ_Preprocessor as preProcessor
import NLP_Engine as nlpEngine
import NLQ_Interpreter as interpreter
import nltk
import time
class NLQ_Chunker:
    def __init__(self):
        self.preprocessor = preProcessor.PreProcessor()
        self.nlp_engine = nlpEngine.NLP_Engine()
        self.interpreter = interpreter.Interpreter()

    def chunk_a_sentence(self, sentence):
        sentence = self.preprocessor.replace_special_words(sentence)['sentence']
        # this method returns an object {'sentence': xxxx, 'origional_sentence': xxxx}
        tokens = self.preprocessor.filter_tokens_result(nltk.word_tokenize(sentence))
        tags = self.preprocessor.recify_tagging_result(nltk.pos_tag(tokens))
        # get the bigram of the sentence, which tells subjects/objects from other elements
        bigram = self.nlp_engine.bigram_chunk_sentence(tags)
        final_gram = self.nlp_engine.top_pattern_recognizer(bigram)  # the fully processed tree that contains all the info needed.
        # final_gram.draw()
        return self.interpreter.main_tree_navigator(final_gram)
#
#
#
#
#
# chunker = NLQ_Chunker()
# sentence = input('Ask: ')
# start = time.time()
# chunker.chunk_a_sentence(sentence)
# print('took ' , time.time() - start, 'seconds') | 31.575 | 129 | 0.726841 | 154 | 1,263 | 5.733766 | 0.467532 | 0.050963 | 0.044168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178939 | 1,263 | 40 | 130 | 31.575 | 0.851495 | 0.307205 | 0 | 0 | 0 | 0 | 0.009292 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.294118 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
76189c1c0f0a1c092eecb8d9cdb55104a570a87d | 9,020 | py | Python | oskb/ui_editkey.py | rushic24/oskb | d453a707d2a1d78d859d5e1648fe3804e40b4148 | [
"MIT"
] | 6 | 2020-05-06T16:59:48.000Z | 2021-09-18T12:48:21.000Z | oskb/ui_editkey.py | rushic24/oskb | d453a707d2a1d78d859d5e1648fe3804e40b4148 | [
"MIT"
] | 1 | 2022-03-24T19:19:11.000Z | 2022-03-24T19:19:11.000Z | oskb/ui_editkey.py | rushic24/oskb | d453a707d2a1d78d859d5e1648fe3804e40b4148 | [
"MIT"
] | 3 | 2020-05-06T16:59:52.000Z | 2021-09-18T12:48:54.000Z | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'EditKey.ui'
#
# Created by: PyQt5 UI code generator 5.14.0
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_EditKey(object):
    def setupUi(self, EditKey):
        EditKey.setObjectName("EditKey")
        EditKey.resize(632, 444)
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(EditKey.sizePolicy().hasHeightForWidth())
        EditKey.setSizePolicy(sizePolicy)
        EditKey.setMinimumSize(QtCore.QSize(632, 444))
        EditKey.setMaximumSize(QtCore.QSize(632, 444))
        EditKey.setModal(True)
        self.cancelsavebuttons = QtWidgets.QDialogButtonBox(EditKey)
        self.cancelsavebuttons.setGeometry(QtCore.QRect(330, 400, 291, 41))
        self.cancelsavebuttons.setOrientation(QtCore.Qt.Horizontal)
        self.cancelsavebuttons.setStandardButtons(QtWidgets.QDialogButtonBox.Cancel|QtWidgets.QDialogButtonBox.Save)
        self.cancelsavebuttons.setObjectName("cancelsavebuttons")
        self.maintabs = QtWidgets.QTabWidget(EditKey)
        self.maintabs.setGeometry(QtCore.QRect(10, 10, 611, 381))
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.maintabs.sizePolicy().hasHeightForWidth())
        self.maintabs.setSizePolicy(sizePolicy)
        self.maintabs.setMinimumSize(QtCore.QSize(611, 381))
        self.maintabs.setMaximumSize(QtCore.QSize(611, 381))
        self.maintabs.setObjectName("maintabs")
        self.appearance = QtWidgets.QWidget()
        self.appearance.setObjectName("appearance")
        self.lbl_4 = QtWidgets.QLabel(self.appearance)
        self.lbl_4.setGeometry(QtCore.QRect(300, 10, 151, 16))
        self.lbl_4.setAlignment(QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignVCenter)
        self.lbl_4.setObjectName("lbl_4")
        self.caption = QtWidgets.QLineEdit(self.appearance)
        self.caption.setGeometry(QtCore.QRect(110, 20, 101, 21))
        self.caption.setObjectName("caption")
        self.cssclass = QtWidgets.QLineEdit(self.appearance)
        self.cssclass.setGeometry(QtCore.QRect(40, 130, 221, 19))
        self.cssclass.setObjectName("cssclass")
        self.extracaptions = QtWidgets.QTableWidget(self.appearance)
        self.extracaptions.setGeometry(QtCore.QRect(300, 30, 261, 121))
        font = QtGui.QFont()
        font.setPointSize(11)
        self.extracaptions.setFont(font)
        self.extracaptions.setShowGrid(True)
        self.extracaptions.setRowCount(0)
        self.extracaptions.setColumnCount(2)
        self.extracaptions.setObjectName("extracaptions")
        item = QtWidgets.QTableWidgetItem()
        self.extracaptions.setHorizontalHeaderItem(0, item)
        item = QtWidgets.QTableWidgetItem()
        self.extracaptions.setHorizontalHeaderItem(1, item)
        self.extracaptions.horizontalHeader().setCascadingSectionResizes(False)
        self.extracaptions.horizontalHeader().setStretchLastSection(True)
        self.extracaptions.verticalHeader().setVisible(False)
        self.extracaptions.verticalHeader().setStretchLastSection(False)
        self.lbl_3 = QtWidgets.QLabel(self.appearance)
        self.lbl_3.setGeometry(QtCore.QRect(40, 90, 191, 41))
        self.lbl_3.setAlignment(QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignVCenter)
        self.lbl_3.setWordWrap(True)
        self.lbl_3.setObjectName("lbl_3")
        self.lbl_1 = QtWidgets.QLabel(self.appearance)
        self.lbl_1.setGeometry(QtCore.QRect(20, 20, 81, 16))
        self.lbl_1.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
        self.lbl_1.setObjectName("lbl_1")
        self.style = QtWidgets.QPlainTextEdit(self.appearance)
        self.style.setGeometry(QtCore.QRect(20, 190, 541, 131))
        self.style.setObjectName("style")
        self.lbl_5 = QtWidgets.QLabel(self.appearance)
        self.lbl_5.setGeometry(QtCore.QRect(40, 170, 291, 16))
        self.lbl_5.setAlignment(QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignVCenter)
        self.lbl_5.setObjectName("lbl_5")
        self.lbl_6 = QtWidgets.QLabel(self.appearance)
        self.lbl_6.setGeometry(QtCore.QRect(20, 320, 541, 16))
        self.lbl_6.setAlignment(QtCore.Qt.AlignCenter)
        self.lbl_6.setObjectName("lbl_6")
        self.lbl_2 = QtWidgets.QLabel(self.appearance)
        self.lbl_2.setGeometry(QtCore.QRect(20, 50, 81, 16))
        self.lbl_2.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
        self.lbl_2.setObjectName("lbl_2")
        self.width = QtWidgets.QDoubleSpinBox(self.appearance)
        self.width.setGeometry(QtCore.QRect(110, 50, 81, 24))
        self.width.setDecimals(1)
        self.width.setMinimum(0.1)
        self.width.setSingleStep(0.1)
        self.width.setProperty("value", 1.0)
        self.width.setObjectName("width")
        self.deletecaption = QtWidgets.QPushButton(self.appearance)
        self.deletecaption.setGeometry(QtCore.QRect(530, 150, 31, 21))
        font = QtGui.QFont()
        font.setBold(True)
        font.setWeight(75)
        self.deletecaption.setFont(font)
        self.deletecaption.setDefault(False)
        self.deletecaption.setFlat(False)
        self.deletecaption.setObjectName("deletecaption")
        self.addcaption = QtWidgets.QPushButton(self.appearance)
        self.addcaption.setGeometry(QtCore.QRect(500, 150, 31, 21))
        self.addcaption.setDefault(False)
        self.addcaption.setFlat(False)
        self.addcaption.setObjectName("addcaption")
        self.maintabs.addTab(self.appearance, "")
        self.action = QtWidgets.QWidget()
        self.action.setObjectName("action")
        self.actiontabs = QtWidgets.QTabWidget(self.action)
        self.actiontabs.setGeometry(QtCore.QRect(20, 20, 571, 321))
        self.actiontabs.setObjectName("actiontabs")
        self.single = QtWidgets.QWidget()
        self.single.setObjectName("single")
        self.actiontabs.addTab(self.single, "")
        self.double = QtWidgets.QWidget()
        self.double.setObjectName("double")
        self.actiontabs.addTab(self.double, "")
        self.long = QtWidgets.QWidget()
        self.long.setObjectName("long")
        self.actiontabs.addTab(self.long, "")
        self.maintabs.addTab(self.action, "")

        self.retranslateUi(EditKey)
        self.maintabs.setCurrentIndex(0)
        self.actiontabs.setCurrentIndex(0)
        self.cancelsavebuttons.accepted.connect(EditKey.accept)
        self.cancelsavebuttons.rejected.connect(EditKey.reject)
        QtCore.QMetaObject.connectSlotsByName(EditKey)
        EditKey.setTabOrder(self.maintabs, self.caption)
        EditKey.setTabOrder(self.caption, self.width)
        EditKey.setTabOrder(self.width, self.cssclass)
        EditKey.setTabOrder(self.cssclass, self.extracaptions)
        EditKey.setTabOrder(self.extracaptions, self.addcaption)
        EditKey.setTabOrder(self.addcaption, self.deletecaption)
        EditKey.setTabOrder(self.deletecaption, self.style)
        EditKey.setTabOrder(self.style, self.actiontabs)

    def retranslateUi(self, EditKey):
        _translate = QtCore.QCoreApplication.translate
        EditKey.setWindowTitle(_translate("EditKey", "Edit key properties"))
        self.lbl_4.setText(_translate("EditKey", "Additional captions:"))
        item = self.extracaptions.horizontalHeaderItem(0)
        item.setText(_translate("EditKey", "CSS class"))
        item = self.extracaptions.horizontalHeaderItem(1)
        item.setText(_translate("EditKey", "Caption"))
        self.lbl_3.setText(_translate("EditKey", "Additional CSS classes, separated by spaces:"))
        self.lbl_1.setText(_translate("EditKey", "Caption:"))
        self.lbl_5.setText(_translate("EditKey", "CSS StyleSheet specific to this key:"))
        self.lbl_6.setText(_translate("EditKey", "(Better to add CSS class and put style info in the keyboard stylesheet)"))
        self.lbl_2.setText(_translate("EditKey", "Key width:"))
        self.deletecaption.setText(_translate("EditKey", "-"))
        self.addcaption.setText(_translate("EditKey", "+"))
        self.maintabs.setTabText(self.maintabs.indexOf(self.appearance), _translate("EditKey", "Appearance"))
        self.actiontabs.setTabText(self.actiontabs.indexOf(self.single), _translate("EditKey", "Single Tap"))
        self.actiontabs.setTabText(self.actiontabs.indexOf(self.double), _translate("EditKey", "Double Tap"))
        self.actiontabs.setTabText(self.actiontabs.indexOf(self.long), _translate("EditKey", "Press and hold"))
        self.maintabs.setTabText(self.maintabs.indexOf(self.action), _translate("EditKey", "Action"))
| 54.337349 | 124 | 0.704656 | 965 | 9,020 | 6.529534 | 0.211399 | 0.034439 | 0.055864 | 0.027615 | 0.269005 | 0.229646 | 0.152515 | 0.130455 | 0.11395 | 0.11395 | 0 | 0.034135 | 0.175055 | 9,020 | 165 | 125 | 54.666667 | 0.81266 | 0.020067 | 0 | 0.065789 | 1 | 0 | 0.063179 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013158 | false | 0 | 0.006579 | 0 | 0.026316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
761906b76105e1a9bca359fb807ff73effa0fbb1 | 5,584 | py | Python | variaveis.py | OHolandes/Athena-Public | ed697b012b0507d31e906026607be69dcb3460ce | [
"BSD-3-Clause"
] | null | null | null | variaveis.py | OHolandes/Athena-Public | ed697b012b0507d31e906026607be69dcb3460ce | [
"BSD-3-Clause"
] | null | null | null | variaveis.py | OHolandes/Athena-Public | ed697b012b0507d31e906026607be69dcb3460ce | [
"BSD-3-Clause"
] | null | null | null | CANAIS_ADM = {
    "diretoria": 441263190832185350,
    "secretaria": 731689039853518848
}
SAUDACOES = ["Olá!", "Oi!", "Iai!"]
GUIA_ANONIMA_ID = 956319073568976967
msg_ajuda = "**::ola** | **::oi** | **::iai** | **::athena**: Mande um ola caloroso para mim, e responderei!\n" \
"**::cool** `texto`: Você pode me perguntar se algo é COOl (provavelmente sou eu).\n" \
"**::pitagoras** `expressão...`: Resolvo uma expressão matemática no estilo Pitágoras.\n" \
'**::rola** | **::dado** `NdN`: Consigo rolar uns dados para você se for conveniente.\n' \
"**::escolha** | **::prefere** `opções...`: Vou escolher a melhor opção entre algumas opções.\n" \
"**::stalk**: Envio algumas informações suas... Anda stalkeando você mesmo(a)!?.\n" \
"**::privilegios** `membro...`: Mostro suas permissões nesse canal ou de outra pessoa.\n" \
"**::convite**: Mando o convite do servidor.\n" \
"**::chegamais** `menções...`: Separo um canal para você e mais pessoas ficarem a vontade.\n" \
"**::ajuda** | **::comandos**: Esse já é um pouco autoexplicativo não?" \
"\n\n" \
"**Administração**:\n\n" \
'**::teste** `N vezes` `palavra`: Repito uma mensagem para saber se estou "di Boa"\n' \
'**::prompt**: Abro meu console para você interagir com meu código ||pervertido(a)!||.\n' \
"**::ping**: Mando a minha latência (morar nos E.U.A é para poucos).\n" \
"**::cep**: Mando o ID do canal atual.\n" \
"**::cpf**: Envio o ID de alguém.\n" \
"**::relatorio**: Faço um relatório geral do servidor." \
"(n de membros, n de boosts, nivel, n de canais, n de categorias, n de cargos...).\n" \
"**::faxina** `limite`: Dou uma limpeza das últimas (100 por padrão) mensagens no canal atual.\n" \
"\n" \
"**::log** `membro`: Faço um pequeno histórico escolar de um membro especifico. " \
"Ou o seu, caso não for especificado. Por padrão o limite é 15.\n" \
"\n" \
"**::basta**: Mando todas as pessoas **comuns** calarem a boca.\n" \
"**::liberado**: Descalo a boca de todos (talvez não seja uma boa ideia).\n" \
"**::aviso**: Muto alguém pelos seus crimes contra a nação.\n" \
"\n" \
"**::kick** `membro` `motivo`: Dou uma voadora em algum membro...\n" \
"Você pode **kickar** sem um motivo especificado, porém isso seria abuso de autoridade...\n" \
"**::ban** `membro` `motivo`: Excluo um membro da sociedade.\n" \
"Você pode **banir** sem um motivo especificado, porém isso seria abuso de autoridade..." \
"\n\n\n" \
"Você ainda pode pedir uma explicação de alto calão de certos comandos usando **::ajuda** `comando`." \
" Os que tenho alto conhecimento:" \
"`cool`; `soma`; `rola`; `escolha`; `chegamais`; `basta`; `log`; `ban`/`kick`; `aviso`." \
"\n" \
"Também se quiser saber mais sobre as permissões de `Administração`, mande um `::ajuda adms`."
msg_adms = """
Vou dizer resumidamente quem pode oquê aqui e as permissões minimas do cargo mais alto seu.
**Comando** | **Permissão**
`::teste` | Gerenciar canais
`::prompt` | Administrador
`::ping` | Gerenciar canais
`::cep` | Gerenciar canais
`::cpf` | Gerenciar canais
`::relatorio`| Administrador
`::faxina` | Gerenciar mensagens
`::log` | Gerenciar mensagens
`::basta` | Gerenciar mensagens
`::liberado` | Gerenciar mensagens
`::aviso` | Gerenciar mensagens
`::kick` | Expulsar membros
`::ban` | Banir membros
"""
alta_ajuda = {
    "adms": msg_adms,
    "cool": "Digo se algo é _cool_, como por exemplo: ::cool athena",
    "pitagoras": "Calculo uma expressão matemática, como: `(23 + 2) * 9 - 2**3`.\nAinda pode usar exponenciação = `**`, e resto de divisão = `%`",
    "rola": "Rolo um dado descompromissadamente: ::rola 1d20 = 1 dado de 20",
    "escolha": "Use para eu escolher coisas aleatórias, manda as opções em sequência: ::escolha loritta athena disboard",
    "chegamais": """Tenho um sistema de mensagens anônimas.
Entre em um desses canais para usufruir:
<#956301680679473253>
<#957638065596272680>
<#957638119560192090>
Use `::chegamais` `menções` (onde "menções" são as menções dos membros que queira convidar), o canal será fechado para todos com o cargo **everyone** com exceção de vocês (logicamente os outros como administradores e moderadores poderão ver as mensagens) e será aberto depois de _10 minutos_ de inatividade (fique tranquilo, antes disso eu vou apagar tudo).
Obs: Sendo que os de patente alta podem ver as mensagens, não passem os limites, olhem <#441263333807751178> para terem certeza.
""",
    "basta": "Todos com somente o cargo **everyone** serão impedidos de falar no canal com o comando invocado.",
    "log": "Envio as últimas mensagens de alguém.",
    "aviso": "Dou o cargo @Avisado para um membro e ele não poderá mandar mensagens em qualquer canal, para descastiga-lo use o comando novamente.",
    "kick": "Use para por alguém nas rédias, use-o no canal em que o membro tenha acesso (para deixar as coisas um pouco mais democráticas).",
    "ban": "Use para por alguém nas rédias, use-o no canal em que o membro tenha acesso (para deixar as coisas um pouco mais democráticas)."
} | 62.044444 | 377 | 0.604047 | 695 | 5,584 | 4.83741 | 0.446043 | 0.004164 | 0.004164 | 0.006544 | 0.092207 | 0.092207 | 0.092207 | 0.092207 | 0.092207 | 0.092207 | 0 | 0.034747 | 0.252686 | 5,584 | 90 | 378 | 62.044444 | 0.770908 | 0 | 0 | 0.049383 | 0 | 0.148148 | 0.82829 | 0.026679 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.012346 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
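The `::pitagoras` help text above advertises arithmetic evaluation including `**` and `%`. The bot's actual implementation is not shown here, but as a hedged sketch, one safe way to evaluate that kind of expression without `eval()` is to walk the `ast` of the parsed expression, allowing only numeric literals and the advertised operators:

```python
import ast
import operator

# Whitelisted binary operators: the ones the ::pitagoras help text mentions.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.Mod: operator.mod,
}


def safe_eval(expr):
    """Evaluate a purely arithmetic expression, rejecting anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))
```

Anything outside the whitelist (names, calls, attribute access) raises, which is the point of walking the AST instead of handing user input to `eval()`.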
761ca9f6ac724fb2550f3f650971c0642f340690 | 6,077 | py | Python | hxmirador/settings/prod.py | nmaekawa/hxmirador | c5e364a92c3631126a7fd9335af506270f52fe68 | [
"BSD-3-Clause"
] | null | null | null | hxmirador/settings/prod.py | nmaekawa/hxmirador | c5e364a92c3631126a7fd9335af506270f52fe68 | [
"BSD-3-Clause"
] | null | null | null | hxmirador/settings/prod.py | nmaekawa/hxmirador | c5e364a92c3631126a7fd9335af506270f52fe68 | [
"BSD-3-Clause"
] | null | null | null | """
Django settings for hxmirador project.
Generated by 'django-admin startproject' using Django 2.0.7.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
SETTINGS_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
BASE_DIR = os.path.dirname(SETTINGS_DIR)
PROJECT_NAME = "hxmirador"
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get("HXMIRADOR_DJANGO_SECRET_KEY", "CHANGE_ME")
# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]
allowed_hosts_other = os.environ.get("HXMIRADOR_ALLOWED_HOSTS", "")
if allowed_hosts_other:
    ALLOWED_HOSTS.extend(allowed_hosts_other.split())
# Application definition
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "hxlti",
    "mirador",
    "corsheaders",
]
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
]
ROOT_URLCONF = PROJECT_NAME + ".urls"
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [os.path.join(BASE_DIR, "templates")],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]
WSGI_APPLICATION = PROJECT_NAME + ".wsgi.application"
# Database
# https://docs.djangoproject.com/en/2.0/ref/settings/#databases
HXMIRADOR_DB_PATH = os.environ.get(
    "HXMIRADOR_DB_PATH", os.path.join(BASE_DIR, PROJECT_NAME + "_sqlite3.db")
)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": HXMIRADOR_DB_PATH,
    }
}
# Password validation
# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    # {
    #     'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    # },
    # {
    #     'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    # },
    # {
    #     'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    # },
    # {
    #     'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    # },
]
# Logging config
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {
            "format": (
                "%(asctime)s|%(levelname)s [%(filename)s:%(funcName)s]" " %(message)s"
            )
        },
    },
    "handlers": {
        "console": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
        },
        "errorfile_handler": {
            "level": "DEBUG",
            "class": "logging.handlers.RotatingFileHandler",
            "formatter": "simple",
            "filename": os.path.join(BASE_DIR, PROJECT_NAME + "_errors.log"),
            "maxBytes": 10485760,  # 10MB
            "backupCount": 7,
            "encoding": "utf8",
        },
    },
    "loggers": {
        "mirador": {"level": "DEBUG", "handlers": ["console"], "propagate": True},
        "hxlti_dj": {"level": "DEBUG", "handlers": ["console"], "propagate": True},
        "oauthlib": {
            "level": "DEBUG",
            "handlers": ["console"],
            "propagate": True,
        },
        "root": {
            "level": "DEBUG",
            "handlers": ["console"],
        },
    },
}
# Internationalization
# https://docs.djangoproject.com/en/2.0/topics/i18n/
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
STATIC_URL = "/static/"
STATIC_ROOT = os.environ.get("HXMIRADOR_STATIC_ROOT", os.path.join(BASE_DIR, "static/"))
# hxlti app settings
# assuming ssl terminator in front of django (nginx reverse proxy)
use_ssl = os.environ.get("HXLTI_ENFORCE_SSL", "false")
HXLTI_ENFORCE_SSL = use_ssl.lower() == "true"
HXLTI_DUMMY_CONSUMER_KEY = os.environ.get(
    "HXLTI_DUMMY_CONSUMER_KEY",
    "dummy_42237E2AB9614C4EAB0C089A96B40686B1C97DE114EC40659E64F1CE3C195AAC",
)
HXLTI_REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
#
# settings for django-cors-headers
#
CORS_ORIGIN_ALLOW_ALL = True # accept requests from anyone
# hxmirador lti params mapping
HXMIRADOR_CUSTOM_PARAMETERS_MAP = {
    "custom_canvas_ids": {
        "ptype": "list",
        "mapto": "canvases",
    },
    "custom_object_ids": {
        "ptype": "list",
        "mapto": "manifests",
    },
    "custom_layout": {
        "ptype": "string",
        "mapto": "layout",
    },
    "custom_view_type": {
        "ptype": "string",
        "mapto": "view_type",
    },
    # if multiple params map to the same var name and the request sends
    # several of them (say with different values), the last one defined
    # in this MAP takes precedence.
    "custom_manifests": {
        "ptype": "list",
        "mapto": "manifests",
    },
}
| 28.665094 | 96 | 0.632549 | 642 | 6,077 | 5.833333 | 0.373832 | 0.052069 | 0.041122 | 0.046729 | 0.180507 | 0.158344 | 0.086248 | 0.071295 | 0.032043 | 0 | 0 | 0.01878 | 0.220174 | 6,077 | 211 | 97 | 28.800948 | 0.771471 | 0.276946 | 0 | 0.110294 | 1 | 0 | 0.420883 | 0.228381 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.007353 | 0.007353 | 0 | 0.007353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
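The `HXMIRADOR_CUSTOM_PARAMETERS_MAP` setting above maps raw LTI custom parameters onto viewer options, with a `ptype` controlling how each value is coerced and a `mapto` naming the target key. The app's actual consumer of this map is not shown here; as a sketch, a hypothetical helper applying such a map might look like this (the comma-separated list convention is an assumption):

```python
# Hypothetical helper, not part of hxmirador: translate raw LTI custom
# parameters into viewer options using a ptype/mapto map.
PARAM_MAP = {
    "custom_object_ids": {"ptype": "list", "mapto": "manifests"},
    "custom_layout": {"ptype": "string", "mapto": "layout"},
}


def apply_param_map(raw_params, param_map):
    options = {}
    for name, spec in param_map.items():
        if name not in raw_params:
            continue
        value = raw_params[name]
        if spec["ptype"] == "list":
            # assumption: list-typed params arrive as comma-separated strings
            value = [v.strip() for v in value.split(",")]
        # later entries overwrite earlier ones that share a mapto target,
        # matching the precedence note in the settings comment
        options[spec["mapto"]] = value
    return options
```

Because insertion order of the map decides overwrite order, the comment in the settings about the last-defined entry winning falls out of plain dict iteration.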
7620fcf40ca8ac2c72d7f34f2ed6b0553720f0af | 2,613 | py | Python | Start.py | nuti23/StudentsAssignmentsManagement | ca3cb8a6f38c31cd0544a63179691f139a02612d | [
"Apache-2.0"
] | null | null | null | Start.py | nuti23/StudentsAssignmentsManagement | ca3cb8a6f38c31cd0544a63179691f139a02612d | [
"Apache-2.0"
] | null | null | null | Start.py | nuti23/StudentsAssignmentsManagement | ca3cb8a6f38c31cd0544a63179691f139a02612d | [
"Apache-2.0"
] | null | null | null | from Domain.student import Student
from Repository.assignment_repository import AssignmentRepository
from Repository.grade_repository import GradeRepository
from Repository.student_repository import StudentRepository
from Repository_Binary_File.assignment_repository_binary_file import AssignmentRepositoryBinaryFile
from Repository_Binary_File.grade_repository_binary_file import GradeRepositoryBinaryFile
from Repository_Binary_File.student_repository_binary_file import StudentRepositoryBinaryFile
from Repository_TextFile.assignment_repository_text_file import AssignmentRepositoryTextFile
from Repository_TextFile.grade_repository_text_file import GradeRepositoryTextFile
from Repository_TextFile.student_repository_text_file import StudentRepositoryTextFile
from Service.assignment_service import AssignmentService
from Service.grade_service import GradeService
from Service.settings_properties import SettingsProperties
from Service.student_service import StudentService
from Ui.console import Ui
from Undo.undo_service import UndoRedoService
from Validators.assignment_validator import AssignmentValidator
from Validators.grade_validator import GradeValidator
from Validators.student_validator import StudentValidator
settings_properties = SettingsProperties()
dictionary = settings_properties.settings_data
if dictionary["repository"] == "inmemory":
    student_repository = StudentRepository()
    assignment_repository = AssignmentRepository()
    grade_repository = GradeRepository()
elif dictionary["repository"] == "textfiles":
    student_repository = StudentRepositoryTextFile(dictionary['student'])
    assignment_repository = AssignmentRepositoryTextFile(dictionary['assignment'])
    grade_repository = GradeRepositoryTextFile(dictionary['grade'])
elif dictionary["repository"] == "binaryfiles":
    student_repository = StudentRepositoryBinaryFile(dictionary['student'])
    assignment_repository = AssignmentRepositoryBinaryFile(dictionary['assignment'])
    grade_repository = GradeRepositoryBinaryFile(dictionary['grade'])
undo_redo_service = UndoRedoService()
grade_validator = GradeValidator()
grade_service = GradeService(grade_repository, grade_validator, undo_redo_service)
student_validator = StudentValidator()
student_service = StudentService(student_repository, student_validator, grade_service, undo_redo_service)
assignment_validator = AssignmentValidator()
assignment_service = AssignmentService(assignment_repository, assignment_validator, grade_service, undo_redo_service)
ui = Ui(student_service, assignment_service, grade_service, undo_redo_service)
ui.start()
| 46.660714 | 117 | 0.867585 | 252 | 2,613 | 8.690476 | 0.178571 | 0.057534 | 0.054795 | 0.032877 | 0.047032 | 0.047032 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078837 | 2,613 | 55 | 118 | 47.509091 | 0.909846 | 0 | 0 | 0 | 0 | 0 | 0.03908 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.452381 | 0 | 0.452381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
76243510a1536cc8eb0c0666f32e3fd946357134 | 5,503 | py | Python | setup.py | ProjexSoftware/orb | 575be2689cb269e65a0a2678232ff940acc19e5a | [
"MIT"
] | 7 | 2016-03-30T18:15:46.000Z | 2021-02-19T14:55:01.000Z | setup.py | orb-framework/orb | 575be2689cb269e65a0a2678232ff940acc19e5a | [
"MIT"
] | 25 | 2016-02-02T20:52:35.000Z | 2017-12-12T06:14:21.000Z | setup.py | orb-framework/orb | 575be2689cb269e65a0a2678232ff940acc19e5a | [
"MIT"
] | 3 | 2015-12-30T22:27:02.000Z | 2016-08-24T22:33:42.000Z | import os
import re
import subprocess
from setuptools import setup, find_packages, Command
from setuptools.command.test import test as TestCommand
__author__ = 'Eric Hulser'
__email__ = 'eric.hulser@gmail.com'
__license__ = 'MIT'
INSTALL_REQUIRES = []
DEPENDENCY_LINKS = []
TESTS_REQUIRE = []
LONG_DESCRIPTION = ''
class Tox(TestCommand):
def run_tests(self):
import tox
tox.cmdline()
class MakeDocs(Command):
description = 'Generates documentation'
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
os.system('pip install -r requirements-dev.txt')
os.system('sphinx-apidoc -f -o docs/source/api orb')
os.system('sphinx-build -b html docs/source docs/build')
class Release(Command):
description = 'Runs the tests and releases a new version of the script'
user_options = [
('no-tests', None, 'Bypass the test validation before releasing')
]
def initialize_options(self):
self.no_tests = True # for now, default this to true...
def finalize_options(self):
pass
def run(self):
if self.no_tests:
print('[WARNING] No tests have been run for this release!')
if not self.no_tests and os.system('python setup.py test'):
print('[ERROR] Could not release, tests are failing!')
else:
os.system('python setup.py tag')
os.system('python setup.py bdist_wheel bdist_egg upload')
class Tag(Command):
    description = 'Tags the repo with a new release version generated from git describe.'
user_options = [
('no-tag', None, 'Do not tag the repo before releasing')
]
def initialize_options(self):
self.no_tag = False
def finalize_options(self):
pass
def run(self):
# generate the version information from the current git commit
cmd = ['git', 'describe', '--match', 'v[0-9]*.[0-9]*.0']
        desc = subprocess.check_output(cmd).strip().decode()
        result = re.match(r'v([0-9]+)\.([0-9]+)\.0-([0-9]+)-(.*)', desc)
        print('generating version information from:', desc)
with open('./orb/_version.py', 'w') as f:
f.write('__major__ = {0}\n'.format(result.group(1)))
f.write('__minor__ = {0}\n'.format(result.group(2)))
f.write('__revision__ = "{0}"\n'.format(result.group(3)))
f.write('__hash__ = "{0}"'.format(result.group(4)))
# tag this new release version
if not self.no_tag:
version = '.'.join([result.group(1), result.group(2), result.group(3)])
                print('creating git tag:', 'v' + version)
os.system('git tag -a v{0} -m "releasing {0}"'.format(version))
os.system('git push --tags')
else:
            print('warning: tagging ignored...')
def read_requirements_file(path):
"""
reads requirements.txt file and handles PyPI index URLs
:param path: (str) path to requirements.txt file
:return: (tuple of lists)
"""
last_pypi_url = None
with open(path) as f:
requires = []
pypi_urls = []
for line in f.readlines():
if not line:
continue
if '--' in line:
match = re.match(r'--index-url\s+([\w\d:/.-]+)\s', line)
if match:
last_pypi_url = match.group(1)
if not last_pypi_url.endswith("/"):
last_pypi_url += "/"
else:
if last_pypi_url:
pypi_urls.append(last_pypi_url + line.strip().lower())
requires.append(line)
return requires, pypi_urls
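# A standalone sketch of the same --index-url handling as read_requirements_file
# above, parsing lines from a list instead of a file (the function name and sample
# URL are illustrative): the last index URL seen is remembered and paired with
# every subsequent package line as a dependency link.

```python
import re


def parse_requirement_lines(lines):
    last_pypi_url = None
    requires, pypi_urls = [], []
    for line in lines:
        if not line.strip():
            continue
        if '--' in line:
            # Capture the index URL; later package lines are resolved against it.
            match = re.match(r'--index-url\s+([\w\d:/.-]+)\s', line)
            if match:
                last_pypi_url = match.group(1)
                if not last_pypi_url.endswith('/'):
                    last_pypi_url += '/'
        else:
            if last_pypi_url:
                pypi_urls.append(last_pypi_url + line.strip().lower())
            requires.append(line.strip())
    return requires, pypi_urls


reqs, links = parse_requirement_lines([
    '--index-url https://pypi.example.com/simple \n',
    'projexui\n',
])
```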
if __name__ == '__main__':
try:
with open('orb/_version.py', 'r') as f:
content = f.read()
        major = re.search(r'__major__ = (\d+)', content).group(1)
        minor = re.search(r'__minor__ = (\d+)', content).group(1)
        rev = re.search(r'__revision__ = "([^"]+)"', content).group(1)
        VERSION = '.'.join((major, minor, rev))
    except Exception:
VERSION = '0.0.0'
# parse the requirements file
if os.path.isfile('requirements.txt'):
_install_requires, _pypi_urls = read_requirements_file('requirements.txt')
INSTALL_REQUIRES.extend(_install_requires)
DEPENDENCY_LINKS.extend(_pypi_urls)
if os.path.isfile('tests/requirements.txt'):
_tests_require, _pypi_urls = read_requirements_file('tests/requirements.txt')
TESTS_REQUIRE.extend(_tests_require)
DEPENDENCY_LINKS.extend(_pypi_urls)
# Get the long description from the relevant file
if os.path.isfile('README.md'):
with open('README.md') as f:
LONG_DESCRIPTION = f.read()
setup(
name='orb-api',
version=VERSION,
author=__author__,
author_email=__email__,
maintainer=__author__,
maintainer_email=__email__,
description='Database ORM and API builder.',
license=__license__,
keywords='',
url='https://github.com/orb-framework/orb',
install_requires=INSTALL_REQUIRES,
packages=find_packages(),
tests_require=TESTS_REQUIRE,
test_suite='tests',
long_description=LONG_DESCRIPTION,
cmdclass={
'tag': Tag,
'release': Release,
'mkdocs': MakeDocs,
'test': Tox
}
)
# examples/example_plugin/example_plugin/tables.py (susanhooks/nautobot, Apache-2.0 license)
import django_tables2 as tables
from nautobot.utilities.tables import (
BaseTable,
ButtonsColumn,
ToggleColumn,
)
from example_plugin.models import AnotherExampleModel, ExampleModel
class ExampleModelTable(BaseTable):
"""Table for list view of `ExampleModel` objects."""
pk = ToggleColumn()
name = tables.LinkColumn()
actions = ButtonsColumn(ExampleModel)
class Meta(BaseTable.Meta):
model = ExampleModel
fields = ["pk", "name", "number"]
class AnotherExampleModelTable(BaseTable):
"""Table for list view of `AnotherExampleModel` objects."""
pk = ToggleColumn()
name = tables.LinkColumn()
actions = ButtonsColumn(AnotherExampleModel)
class Meta(BaseTable.Meta):
model = AnotherExampleModel
fields = ["pk", "name", "number"]
#!/usr/bin/env python
# sample_linear.py (Oleg-Krivosheev/Sample-Gamma-small-alpha, MIT license)
import math
import random
import matplotlib.pyplot as plt
# sample from linear distribution and plot it
bins = [0.1 * i for i in range(12)]
plt.hist([(1.0 - math.sqrt(random.random())) for k in range(10000)], bins)
plt.show()
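# The sampling expression above is an inverse-transform draw from the linear
# density f(x) = 2(1 - x) on [0, 1]: the CDF is F(x) = 2x - x^2, its inverse is
# 1 - sqrt(1 - u), and because 1 - U is itself uniform, 1 - sqrt(U) has the same
# distribution. A quick seeded check of the resulting mean, which should be 1/3:

```python
import math
import random

# Draw many samples and compare the empirical mean to the analytic value 1/3
# (variance of the distribution is 1/18, so 100k samples pin the mean tightly).
random.seed(0)
samples = [1.0 - math.sqrt(random.random()) for _ in range(100000)]
sample_mean = sum(samples) / len(samples)
```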
# -*- coding: utf-8 -*-
# cloudkitty-9.0.0/cloudkitty/rating/pyscripts/db/api.py (scottwedge/OpenStack-Stein, Apache-2.0 license)
# Copyright 2015 Objectif Libre
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Stéphane Albert
#
import abc
from oslo_config import cfg
from oslo_db import api as db_api
import six
_BACKEND_MAPPING = {
'sqlalchemy': 'cloudkitty.rating.pyscripts.db.sqlalchemy.api'}
IMPL = db_api.DBAPI.from_config(cfg.CONF,
backend_mapping=_BACKEND_MAPPING,
lazy=True)
def get_instance():
"""Return a DB API instance."""
return IMPL
class NoSuchScript(Exception):
"""Raised when the script doesn't exist."""
def __init__(self, name=None, uuid=None):
super(NoSuchScript, self).__init__(
"No such script: %s (UUID: %s)" % (name, uuid))
self.name = name
self.uuid = uuid
class ScriptAlreadyExists(Exception):
"""Raised when the script already exists."""
def __init__(self, name, uuid):
super(ScriptAlreadyExists, self).__init__(
"Script %s already exists (UUID: %s)" % (name, uuid))
self.name = name
self.uuid = uuid
@six.add_metaclass(abc.ABCMeta)
class PyScripts(object):
"""Base class for pyscripts configuration."""
@abc.abstractmethod
def get_migration(self):
"""Return a migrate manager.
"""
@abc.abstractmethod
def get_script(self, name=None, uuid=None):
"""Return a script object.
:param name: Filter on a script name.
:param uuid: The uuid of the script to get.
"""
@abc.abstractmethod
def list_scripts(self):
"""Return a UUID list of every scripts available.
"""
@abc.abstractmethod
def create_script(self, name, data):
"""Create a new script.
:param name: Name of the script to create.
:param data: Content of the python script.
"""
@abc.abstractmethod
def update_script(self, uuid, **kwargs):
"""Update a script.
        :param uuid: UUID of the script to modify.
:param data: Script data.
"""
@abc.abstractmethod
def delete_script(self, name=None, uuid=None):
"""Delete a list.
:param name: Name of the script to delete.
:param uuid: UUID of the script to delete.
"""
# Chapter8/datacurator.py (PacktPublishing/Testing-Time-Machines, MIT license)
import datetime
import pandas as pd
# Retrieve the test cases from the csv into a dictionary
dataFrame = pd.read_csv('testcases.csv', index_col=0)
dataDictionary = dataFrame.transpose().to_dict()
# Create a new dictionary to save the results
result = {}
# Data optimization -> get the maximum number of steps and of differences
oldest = 0
maxSteps = 0
for i in dataDictionary:
createdDate = datetime.datetime.fromisoformat(dataDictionary[i]['CreatedDate'])
currentDate = datetime.datetime.now()
difference = currentDate - createdDate
daysOfDifference = difference.days
dataDictionary[i]['daysDiff'] = daysOfDifference
if daysOfDifference > oldest:
oldest = daysOfDifference
if dataDictionary[i]['Steps'] > maxSteps:
maxSteps = dataDictionary[i]['Steps']
# Save the data in the new dictionary
for i in dataDictionary:
result[i] = {}
passes = dataDictionary[i]['passes']
fails = dataDictionary[i]['fails']
# Data optimization - get the percentage of passes
passingTest = (passes / (fails + passes)) * 100
# Data optimization - get the percentage of test age
testAge = (dataDictionary[i]['daysDiff'] / oldest) * 100
priorityTest = dataDictionary[i]['Priority']
# Data optimization - get the percentage of number steps
numSteps = (dataDictionary[i]['Steps'] / maxSteps) * 100
# Save the new values
result[i]['Priority'] = float(priorityTest)
result[i]['Age'] = float(testAge)
result[i]['Steps'] = float(numSteps)
result[i]['Passing'] = float(passingTest)
result[i]['Expected'] = float(dataDictionary[i]['LastResult'])
# Save to the new Data File
dataFrame = pd.DataFrame.from_dict(result, orient='index')
dataFile = open('newData.csv', 'w')
dataFrame.to_csv(dataFile, sep=',')
dataFile.close()
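# The "data optimization" steps above all follow the same recipe: divide each raw
# value by the column maximum and multiply by 100, so Age, Steps, and Passing all
# land on a comparable 0-100 scale. As a one-function sketch (the name is
# illustrative):

```python
def scale_to_percent(values):
    # Express every value as a percentage of the column maximum.
    top = max(values)
    return [100.0 * v / top for v in values]


scaled_steps = scale_to_percent([5, 10, 20])  # -> [25.0, 50.0, 100.0]
```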
# utils/GeneralUtils.py (nanohedra/nanohedra, MIT license)
import numpy as np
# Copyright 2020 Joshua Laniado and Todd O. Yeates.
__author__ = "Joshua Laniado and Todd O. Yeates"
__copyright__ = "Copyright 2020, Nanohedra"
__version__ = "1.0"
def euclidean_squared_3d(coordinates_1, coordinates_2):
if len(coordinates_1) != 3 or len(coordinates_2) != 3:
raise ValueError("len(coordinate list) != 3")
elif type(coordinates_1) is not list or type(coordinates_2) is not list:
raise TypeError("input parameters are not of type list")
else:
x1, y1, z1 = coordinates_1[0], coordinates_1[1], coordinates_1[2]
x2, y2, z2 = coordinates_2[0], coordinates_2[1], coordinates_2[2]
return (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
def center_of_mass_3d(coordinates):
n = len(coordinates)
if n != 0:
cm = [0. for j in range(3)]
for i in range(n):
for j in range(3):
cm[j] = cm[j] + coordinates[i][j]
for j in range(3):
cm[j] = cm[j] / n
return cm
else:
print "ERROR CALCULATING CENTER OF MASS"
return None
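# A standalone restatement of the two helpers above as a quick sanity check
# (function names here are illustrative): the squared distance just skips the
# sqrt, and the center of mass is the per-axis mean of the points.

```python
def squared_distance_3d(p, q):
    # Sum of squared per-axis differences; no sqrt, matching euclidean_squared_3d.
    return sum((a - b) ** 2 for a, b in zip(p, q))


def centroid_3d(points):
    # Per-axis mean over all points, matching center_of_mass_3d.
    n = len(points)
    return [sum(p[axis] for p in points) / n for axis in range(3)]


d2 = squared_distance_3d([0.0, 0.0, 0.0], [1.0, 2.0, 2.0])  # 1 + 4 + 4 = 9
cm = centroid_3d([[0.0, 0.0, 0.0], [2.0, 4.0, 6.0]])        # [1.0, 2.0, 3.0]
```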
def rot_txint_set_txext_frag_coord_sets(coord_sets, rot_mat=None, internal_tx_vec=None, set_mat=None, ext_tx_vec=None):
if coord_sets != list():
# Get the length of each coordinate set
coord_set_lens = []
for coord_set in coord_sets:
coord_set_lens.append(len(coord_set))
# Stack coordinate set arrays in sequence vertically (row wise)
coord_sets_vstacked = np.vstack(coord_sets)
# Rotate stacked coordinates if rotation matrix is provided
if rot_mat is not None:
rot_mat_T = np.transpose(rot_mat)
coord_sets_vstacked = np.matmul(coord_sets_vstacked, rot_mat_T)
# Translate stacked coordinates if internal translation vector is provided
if internal_tx_vec is not None:
coord_sets_vstacked = coord_sets_vstacked + internal_tx_vec
# Set stacked coordinates if setting matrix is provided
if set_mat is not None:
set_mat_T = np.transpose(set_mat)
coord_sets_vstacked = np.matmul(coord_sets_vstacked, set_mat_T)
# Translate stacked coordinates if external translation vector is provided
if ext_tx_vec is not None:
coord_sets_vstacked = coord_sets_vstacked + ext_tx_vec
# Slice stacked coordinates back into coordinate sets
transformed_coord_sets = []
slice_index_1 = 0
for coord_set_len in coord_set_lens:
slice_index_2 = slice_index_1 + coord_set_len
transformed_coord_sets.append(coord_sets_vstacked[slice_index_1:slice_index_2].tolist())
slice_index_1 += coord_set_len
return transformed_coord_sets
else:
return []
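# The stack-transform-slice pattern above, shown on two tiny coordinate sets with
# a 90-degree rotation about z (a sketch; the matrix and data are made up for the
# demo). The transpose matters: rotating row vectors via matmul(coords, R.T) is
# equivalent to applying R to each point as a column vector.

```python
import numpy as np

# Rotation by 90 degrees about the z axis.
rot_z_90 = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])

sets = [[[1.0, 0.0, 0.0]],
        [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]]
lens = [len(s) for s in sets]

stacked = np.vstack(sets)                 # stack all points into one (3, 3) array
rotated = np.matmul(stacked, rot_z_90.T)  # transform every point in one matmul

# Slice the stacked result back into the original per-set groupings.
out, start = [], 0
for n in lens:
    out.append(rotated[start:start + n].tolist())
    start += n
```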
# dlapp/forms.py (tavershimafx/E-library, Apache-2.0 license)
from django import forms
from . models import Holdings
class HoldingsForm(forms.ModelForm):
class Meta:
model = Holdings
fields = ['title', 'holding', 'authors', 'category']
widgets={
'title': forms.TextInput(attrs={'class': 'form-control'}),
'holding': forms.FileInput(attrs={'type': 'file'}),
'authors': forms.TextInput(attrs={'class': 'form-control'}),
'category': forms.Select(attrs={'class': 'form-control'}),
}
class SearchForm(forms.Form): # create a search form
    # Note: plain forms.Form ignores Meta.widgets (that hook is for ModelForm),
    # so the widget is attached to the field directly.
    query = forms.CharField(max_length=250,
                            widget=forms.TextInput(attrs={'class': 'form-control'}))
# tests/aircraft/deploys/ubuntu/models/v1beta3/storage_config_data_test.py (relaxdiego/aircraft, Apache-2.0 license)
import pytest
from aircraft.deploys.ubuntu.models.v1beta3 import StorageConfigData
@pytest.fixture
def input_config(request):
marker = request.node.get_closest_marker('data_kwargs')
config = dict(
disks=[
{
'path': '/dev/sda',
'partitions': [
{
'size': 536870912, # 512MB
'format': 'fat32',
'mount_path': '/boot/efi',
'flag': 'boot',
'grub_device': True,
},
{
'size': 1073741824, # 1GB
'format': 'ext4',
'mount_path': '/boot',
},
{
'id': 'partition-for-ubuntu-vg',
'size': 429496729600, # 400GB
},
],
},
],
lvm_volgroups=[
{
'name': 'ubuntu-vg',
'devices': [
'partition-for-ubuntu-vg'
],
'logical_volumes': marker.kwargs.get(
'logical_volumes',
[
{
'name': 'ubuntu-lv',
'size': 397284474880, # 370GB
'format': 'ext4',
'mount_path': '/',
}
]
)
}
]
)
return config
@pytest.fixture
def no_lvm_volgroups(request):
config = dict(
disks=[
{
'path': '/dev/sda',
'partitions': [
{
'size': 536870912, # 512MB
'format': 'fat32',
'mount_path': '/boot/efi',
'flag': 'boot',
'grub_device': True,
},
{
'size': 1073741824, # 1GB
'format': 'ext4',
'mount_path': '/boot',
},
{
'id': 'partition-for-ubuntu-vg',
'size': 429496729600, # 400GB
},
],
},
],
)
return config
@pytest.mark.data_kwargs(logical_volumes=[])
def test__does_not_error_out_when_lvm_lvs_are_empty(input_config):
assert StorageConfigData(**input_config).export_lvm_logical_volumes() == []
def test__does_not_error_out_when_lvm_volgroups_is_empty(no_lvm_volgroups):
assert StorageConfigData(**no_lvm_volgroups).export_lvm_logical_volumes() == []
# tests/tests_core/test_definitions.py (markuswiertarkus/simbatch, MIT license)
from simbatch.core import core
from simbatch.core.definitions import SingleAction
import pytest
# TODO check dir on prepare tests
TESTING_AREA_DIR = "S:\\simbatch\\data\\"
@pytest.fixture(scope="module")
def simbatch():
# TODO pytest-datadir pytest-datafiles vs ( path.dirname( path.realpath(sys.argv[0]) )
sib = core.SimBatch(5, ini_file="config_tests.ini")
sib.clear_all_memory_data()
sib.prj.create_example_project_data(do_save=False)
sib.prj.update_current_from_index(1)
sib.sch.create_example_schemas_data(do_save=False)
return sib
def test_exist_definitions_data(simbatch):
assert simbatch.comfun.file_exists(simbatch.sts.store_definitions_directory) is True
def test_load_definitions(simbatch):
assert simbatch.dfn.load_definitions() is True
# def test_load_definitions(sib):
# assert sib.dfn.load_definitions() is True
# def test_clear_all_definion_data
# def test_action_create(simbatch):
# sa = SingleAction()
# assert sa.
# game/client/view/pad/pad.py (AntonYermilov/progue, MIT license)
from abc import ABC, abstractmethod
class Pad(ABC):
def __init__(self, view, x0: int, y0: int, x1: int, y1: int):
"""
Creates pad with corners in specified coordinates
:param view: base view instance
:param x0: x-coordinate of top left corner (included)
:param y0: y-coordinate of top left corner (included)
:param x1: x-coordinate of bottom right corner (excluded)
:param y1: y-coordinate of bottom right corner (excluded)
"""
self.view = view
self.x0 = x0
self.y0 = y0
self.x1 = x1
self.y1 = y1
@abstractmethod
def refresh(self):
        pass

# web/poisson_web.py (DreadyBear/codereview, Unlicense)
# -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>
# <codecell>
import os
import psycopg2
import numpy as np
import pandas as pd
import patsy
import statsmodels.api as sm
import pickle
import random
from math import floor, exp
from datetime import *
import pytz
from dateutil.relativedelta import *
import calendar
from config import config
# Connect to postgres db
conn = psycopg2.connect("dbname= %s user= %s host=%s" % (config()["DB"], config()["USER"], config()["DB_URL"]))
# <codecell>
def get_station_data(station_id):
# Pulls Data for Given Station_id and Converts to Pandas Dataframe
cur = conn.cursor()
# Fetch data for station 17 in Washington, DC - 16th & Harvard St NW, terminalName: 31103
cur.execute("SELECT * FROM bike_ind_washingtondc WHERE tfl_id = %s;" % station_id)
station_data = cur.fetchall()
# Put data in pandas dataframe
station_updates = pd.DataFrame.from_records(station_data, columns = ["station_id", "bikes_available", "spaces_available", "timestamp"], index = "timestamp")
# Convert UTC timezone of the timestamps to DC's Eastern time
station_updates.index = station_updates.index.tz_localize('UTC').tz_convert('US/Eastern')
return station_updates
# <codecell>
def fit_poisson(station_updates):
# Find changes (deltas) in bike count
bikes_available = station_updates.bikes_available
deltas = bikes_available - bikes_available.shift()
# Show the histogram of the deltas. Need to remove outliers first.
# clipped_deltas = deltas[(deltas > -6) & (deltas < 6)]
# clipped_deltas.hist(bins=11)
# Separate positive and negative deltas
pos_deltas = deltas[deltas > 0]
neg_deltas = abs(deltas[deltas < 0])
# Count the number of positive and negative deltas per half hour per day, add them to new dataframe.
time_interval = '1H'
    pos_interval_counts_null = pos_deltas.resample(time_interval).sum()
    neg_interval_counts_null = neg_deltas.resample(time_interval).sum()
# Set NaN delta counts to 0
# By default the resampling step puts NaN (null values) into the data when there were no observations
# to count up during those thirty minutes.
arrivals = pos_interval_counts_null.fillna(0)
departures = neg_interval_counts_null.fillna(0)
arrivals_departures = pd.DataFrame(arrivals, columns=["arrivals"])
arrivals_departures['departures'] = departures
# Extract months for Month feature, add to model data
delta_months = arrivals_departures.index.month
arrivals_departures['months'] = delta_months
# Extract hours for Hour feature
delta_hours = arrivals_departures.index.hour
arrivals_departures['hours'] = delta_hours
# Extract weekday vs. weekend variable
delta_dayofweek = arrivals_departures.index.weekday
delta_weekday_dummy = delta_dayofweek.copy()
delta_weekday_dummy[delta_dayofweek < 5] = 1
delta_weekday_dummy[delta_dayofweek >= 5] = 0
arrivals_departures['weekday_dummy'] = delta_weekday_dummy
# print arrivals_departures
# print arrivals_departures.head(20)
# Create design matrix for months, hours, and weekday vs. weekend.
# We can't just create a "month" column to toss into our model, because it doesnt
# understand what "June" is. Instead, we need to create a column for each month
# and code each row according to what month it's in. Ditto for hours and weekday (=1).
y_arr, X_arr = patsy.dmatrices("arrivals ~ C(months, Treatment) + C(hours, Treatment) + C(weekday_dummy, Treatment)", arrivals_departures, return_type='dataframe')
y_dep, X_dep = patsy.dmatrices("departures ~ C(months, Treatment) + C(hours, Treatment) + C(weekday_dummy, Treatment)", arrivals_departures, return_type='dataframe')
y_dep[pd.isnull(y_dep)] = 0
# Fit poisson distributions for arrivals and departures, print results
arr_poisson_model = sm.Poisson(y_arr, X_arr)
arr_poisson_results = arr_poisson_model.fit()
dep_poisson_model = sm.Poisson(y_dep, X_dep)
dep_poisson_results = dep_poisson_model.fit()
# print arr_poisson_results.summary(), dep_poisson_results.summary()
poisson_results = [arr_poisson_results, dep_poisson_results]
return poisson_results
# <codecell>
# Predict *net* lambda value some time in the future, using the list of hours created above.
# You can predict any number of hours ahead using interval_length, default is set to 1 hour.
# The arrival lambda at 12pm actually means the expected arrival rate from 12pm to 1pm. But if the
# current time is 12:15pm and you're estimating an hour ahead to 1:15pm, you need to find
# 3/4ths of the lambda from 12pm - 1pm and add it to 1/4th of the lambda from 1pm to 2pm.
# This section returns the total lambda over that interval, during which the rate is changing.
# It also works for predictions multiple hours ahead, as all those lambdas will be summed
# and yield a large expected value, which makes sense if you're counting bikes over several hours.
# The function predicts arrival lambdas across the time interval, does the same thing independently
# for departure lambdas, and finds their difference to get the net lambda at that time - the change in bikes
# you'll see at the station in an hour. Add the net lambda to the current number of bikes to get
# the prediction of the expected value of how many bikes will be there.
def lambda_calc(month, time, weekday, poisson_results):
"Compute the lambda value for a specific month, time (hour), and weekday."
    # Pull out coefficient estimates for the factored covariates
    estimates = poisson_results.params
# Fetch intercept
intercept = estimates['Intercept']
# Fetch coefficient estimate that corresponds to the month..
if month == 1:
month_estimate = 0
else:
month_estimate = estimates['C(months, Treatment)[T.'+str(month)+']']
    # .. to the hour (hours run 0-23, so the Treatment baseline level is hour 0)
    hour = floor(time)
    if hour == 0:
        hour_estimate = 0
    else:
        hour_estimate = estimates['C(hours, Treatment)[T.'+str(int(hour))+']']
# .. and to the weekday status.
if weekday == 0:
weekday_estimate = 0
else:
weekday_estimate = estimates['C(weekday_dummy, Treatment)[T.'+str(weekday)+']']
# Compute log lambda, which is linear function of the hour, month, and weekday coefficient estimates
log_lambda = intercept + month_estimate + hour_estimate + weekday_estimate
# Raise e to the computed log lambda to find the estimated value of the Poisson distribution for these covariates.
est_lambda = exp(log_lambda)
return est_lambda
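# The return value above comes from the Poisson log link: the model fits
# log(lambda) as a linear function of the dummy variables, so the rate is exp of
# the summed coefficients, i.e. a product of exp(coefficient) multipliers. A
# worked example with made-up coefficient values:

```python
from math import exp

# Hypothetical fitted coefficients for one (month, hour, weekday) combination.
intercept, month_coef, hour_coef, weekday_coef = 0.5, 0.2, -0.1, 0.3

log_lam = intercept + month_coef + hour_coef + weekday_coef  # linear predictor, 0.9
lam = exp(log_lam)                                           # expected count per hour
```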
def predict_net_lambda(current_time, prediction_interval, month, weekday, poisson_results):
# Takes the current time, a prediction interval (in hours), the month, and the
# weekday status, and returns the net lambda - expected arrivals minus expected
# departures - over that interval, using the covariate coefficients estimated above.
# Create list of hours in between the current time and the prediction time
# Need to do this to calculate cumulative rate of arrivals and departures
prediction_time = current_time + prediction_interval
time_list = [current_time]
next_step = current_time
while next_step != prediction_time:
if floor(next_step) + 1 < prediction_time:
next_step = floor(next_step) + 1
time_list.append(next_step)
else:
next_step = prediction_time
time_list.append(next_step)
# Calculate the cumulative lambda rate over the prediction interval
arr_cum_lambda = 0
dep_cum_lambda = 0
# Find cumulative lambda for arrivals..
for i in range(1, len(time_list)):
est_lambda = lambda_calc(month, time_list[ i - 1 ], weekday, poisson_results[0])
hour_proportion = time_list[i] - time_list[ i - 1 ]
interval_lambda = est_lambda * hour_proportion
arr_cum_lambda += interval_lambda
# .. and departures
for i in range(1, len(time_list)):
est_lambda = lambda_calc(month, time_list[ i - 1 ], weekday, poisson_results[1])
hour_proportion = time_list[i] - time_list[ i - 1 ]
interval_lambda = est_lambda * hour_proportion
dep_cum_lambda += interval_lambda
net_lambda = arr_cum_lambda - dep_cum_lambda
return net_lambda
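A self-contained sketch of the hour-boundary splitting that `predict_net_lambda` performs, with a made-up per-hour rate table (the rates here are illustrative, not fitted values):

```python
from math import floor

def split_at_hours(start, end):
    """Breakpoints from start to end, cut at whole-hour boundaries."""
    points = [start]
    t = start
    while t != end:
        t = floor(t) + 1 if floor(t) + 1 < end else end
        points.append(t)
    return points

def cumulative_rate(start, end, rate_by_hour):
    """Sum each hour's rate weighted by the fraction of that hour covered."""
    points = split_at_hours(start, end)
    return sum(rate_by_hour[floor(points[i - 1])] * (points[i] - points[i - 1])
               for i in range(1, len(points)))
```

Predicting from 12:15 to 13:15 with rates {12: 4, 13: 8} gives 0.75 * 4 + 0.25 * 8 = 5.0, mirroring the 3/4 + 1/4 weighting described in the comment at the top of this section.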
# <codecell>
# Estimate the poisson!
def save_poisson_results():
print ("saving")
# station_ids = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74]
# station_ids = getStations()
for station in station_ids:
station_id = station[0]
if (os.path.isfile("%spoisson_results_%s.p" % (pickle_folder, station_id))):
continue
station_updates = get_station_data(station_id)
print("Got data, now fitting")
poisson_results = fit_poisson(station_updates)
file_out = open("%spoisson_results_%s.p" % (pickle_folder, station_id), "wb")
to_save_ps = (poisson_results[0].params, poisson_results[1].params)
pickle.dump(to_save_ps, file_out)
file_out.close()
print("finished %s" % station_id)
print("done saving")
# <codecell>
# pickle_folder = "/mnt/data1/BikeShare/pickles/"
pickle_folder = "/Users/darkzeroman/dssg/bikeshare/web/static/pickles/"
# save_poisson_results()
def load_poisson_result(station_id):
temp = pickle.load(open("%spoisson_results_%s.p" % (pickle_folder, station_id), "rb"))
return (dict(params=temp[0]), dict(params=temp[1]))
# <codecell>
'''
# Auxiliary code
# Try to predict!
current_time = 17.5
prediction_interval = 1
month = 5
weekday = 0
bike_change = predict_net_lambda(current_time, prediction_interval, month, weekday, poisson_results)
# print "The change in bikes at time %s and month %s is %s" % (str(floor(current_time)), str(month), str(bike_change))
# Plot predictions of available bikes by hour for given covariates
init_bikes = 18
bike = init_bikes
bikes = [init_bikes]
hours_of_day = range(1,24)
for hour in hours_of_day:
bike += predict_net_lambda(hour, prediction_interval, month, weekday, poisson_results)
bikes.append(bike)
pd.Series(bikes).plot()
'''
# <codecell>
# Validate the model!
# min_time_pt = datetime.datetime(2010,10,8)
# prediction_interval =
# time_step =
#def validate_model(min_time_pt):
# Generate list of time points incremented by the time_step
# Get observations before timepoint
# smaller_updates = station_updates[station_updates.index < min_time_pt]
# print station_updates
# print smaller_updates
#validate_model(min_time_pt)
# <codecell>
# Simulate bike availability at station 17 for next half hour
# We're doing this to flag when station is full or empty, which
# is what bikeshare operators want.
#import sys
def simulate_bikes(station_id, starting_time, final_time, max_slots, starting_bikes_available, month, weekday, poisson_results):
bikes_available = starting_bikes_available
current_time = starting_time
go_empty = 0
go_full = 0
while current_time < final_time:
# Calculate the Appropriate Up and Down Rate Terms
up_lambda = lambda_calc(month,current_time,weekday, poisson_results[0])
down_lambda = lambda_calc(month,current_time,weekday, poisson_results[1])
total_lambda = float(up_lambda + down_lambda)
next_obs_time = random.expovariate(total_lambda)
chance_up = up_lambda / total_lambda
# Update the Current Time to the Next Observation Time
current_time += next_obs_time
if current_time < final_time:
if random.uniform(0,1) > chance_up:
bikes_available -= 1
else:
bikes_available += 1
# Adjust Bikes Available to Sit Inside Range
if bikes_available < 0:
bikes_available = 0
elif bikes_available > max_slots:
bikes_available = max_slots
if bikes_available == 0:
go_empty = 1
if bikes_available == max_slots:
go_full = 1
return (bikes_available, go_empty, go_full)
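One step of `simulate_bikes` is a competing-Poisson draw: the wait to the next event is exponential with rate `up + down`, and the event is an arrival with probability `up / (up + down)`. A seeded stdlib sketch of that single step (the rates are arbitrary):

```python
import random

def next_event(up_lambda, down_lambda, rng):
    """Return (wait_in_hours, is_arrival) for one competing-Poisson step."""
    total = float(up_lambda + down_lambda)
    wait = rng.expovariate(total)        # time to the next event of either kind
    is_arrival = rng.uniform(0, 1) <= up_lambda / total
    return wait, is_arrival

rng = random.Random(42)                  # seeded so the sketch is reproducible
wait, is_arrival = next_event(2.0, 1.0, rng)
```

Repeating this step until the wait carries the clock past the end time reproduces the simulation loop above.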
def simulation(station_id, starting_time, final_time, max_slots, starting_bikes_available, month, weekday, simulate_bikes, trials=250):
poisson_results = load_poisson_result(station_id)
bikes_results = [] # numbikes at the station at the end of each trial
go_empty_results = [] #
go_full_results = [] #
for i in range(trials):  # range(1, trials) would run one trial too few
bikes, empty, full = simulate_bikes(station_id, starting_time,final_time,max_slots,starting_bikes_available,month,weekday, poisson_results)
bikes_results.append(bikes)
go_empty_results.append(empty)
go_full_results.append(full)
return (bikes_results, go_empty_results, go_full_results)
# <codecell>
def make_prediction(station, how_many_mins):
try:
station_id = station[0]
starting_datetime = datetime.now(pytz.timezone('US/Eastern'))
ending_datetime = starting_datetime + relativedelta(minutes=how_many_mins)
# protect sql injection later?
cur = conn.cursor()
cur.execute("select * from bike_ind_washingtondc where tfl_id = %s order by timestamp desc limit 1;" % station_id)
_, starting_bikes_available, num_spaces, _ = cur.fetchall()[0] #(station_id, bikes, spaces, timestamp)
max_slots = starting_bikes_available + num_spaces
month = starting_datetime.month # Between 1-12
weekday = 0
if starting_datetime.isoweekday() in (6, 7):  # isoweekday() is a method returning 1 (Mon) to 7 (Sun)
weekday = 1
starting_time = round(starting_datetime.hour + (starting_datetime.minute / float(60)), 3)
ending_time = round(ending_datetime.hour + (ending_datetime.minute / float(60)), 3)
bikes_results, empty_results, full_results = simulation(station_id, starting_time, ending_time, max_slots, \
starting_bikes_available, month, weekday, simulate_bikes, 250)
week_dict = {'0': 'Week', '1' : 'Weekend'}
# net_lambda = predict_net_lambda(starting_time, final_time - starting_time, month, weekday, poisson_results)
# print ("In %s during the %s" % (calendar.month_name[month], week_dict[str(weekday)]))
# print ("For Starting Time: %0.2f and Ending Time: %0.2f with Initial Bikes: %d out of a Maximum: %d" % (starting_time, ending_time, starting_bikes_available, max_slots))
# print ('Expected Number of Bikes at %s: %0.2f' % (ending_time, round(np.mean(bikes_results),2)))
# print 'Other Expected Value : ', starting_bikes_available + net_lambda
# print ('Probability of Being (Empty, Full) Any Time in the Next %0.2f hours: (%0.2f, %0.2f)' % \
# (ending_time - starting_time, round(np.mean(empty_results),2), round(np.mean(full_results),2)))
print(", ".join(map(str, [how_many_mins, station_id])))
temp_res = (int(station_id), round(np.mean(bikes_results),2), round(np.mean(empty_results),2), \
round(np.mean(full_results),2), station[2], station[3], station[1], starting_bikes_available, max_slots)
res_names = ("station_id", "expected_num_bikes", "prob_empty", "prob_full", "lat", "lon", "name", "current_bikes", "max_slots")
return dict(zip(res_names, temp_res))
except KeyError:
return (int(station_id), "Prediction Error")
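The weekend flag above is meant to mark Saturdays and Sundays; note that `isoweekday` is a method returning 1 (Monday) through 7 (Sunday), so the check should be `isoweekday() in (6, 7)`. A stdlib sketch of the corrected check:

```python
import datetime

def weekend_flag(dt):
    """1 for Saturday/Sunday, 0 otherwise (isoweekday: Mon=1 .. Sun=7)."""
    return 1 if dt.isoweekday() in (6, 7) else 0
```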
# %time make_prediction('17', 15*4)
# <codecell>
def run_code():
starting_time = 6.0
final_time = 6.5
starting_bikes_available = 21
max_slots = 25
month = 8
weekday = 0
station_id = '17'
#starting_time, final_time, max_slots, starting_bikes_available, month, weekday,
bikes_results, empty_results, full_results = simulation(station_id, starting_time, final_time, max_slots, starting_bikes_available, \
month, weekday,simulate_bikes, 500)
expected_num_bikes = round(np.mean(bikes_results), 2)
prob_empty_any_time = round(np.mean(empty_results), 2)
prob_full_any_time = round(np.mean(full_results), 2)
#print (expected_num_bikes, prob_empty_any_time, prob_full_any_time)
# %timeit run_code()
# <codecell>
def getStations():
cur = conn.cursor()
cur.execute("SELECT DISTINCT * FROM metadata_washingtondc order by id;")
station_ids = cur.fetchall()
station_list = []
for station in station_ids:
station_list.append(station)
return station_list
# print getStations()
# ----------------------------------------------------------------------
# File: editor/urls.py  (repo: AndersonBY/p5py, MIT license)
# ----------------------------------------------------------------------
# -*- coding: utf-8 -*-
# @Author: Anderson
# @Date: 2019-04-25 00:30:09
# @Last Modified by: ander
# @Last Modified time: 2019-12-07 01:14:16
from django.urls import path
from . import views
urlpatterns = [
path("", views.editor, name="editor"),
path("upload_code", views.upload_code, name="upload_code")
]
| 23 | 62 | 0.658385 | 48 | 322 | 4.354167 | 0.6875 | 0.143541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108614 | 0.170807 | 322 | 13 | 63 | 24.769231 | 0.674157 | 0.42236 | 0 | 0 | 0 | 0 | 0.155556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
# ----------------------------------------------------------------------
# File: tests/hpp/tests/test_generation_utils.py
# (repo: javisenberg/addonpayments-Python-SDK, MIT license)
# ----------------------------------------------------------------------
# -*- encoding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
import re
from addonpayments.utils import GenerationUtils
class TestGenerationUtils:
def test_generate_hash(self):
"""
Test Hash generation success case.
"""
test_string = '20120926112654.thestore.ORD453-11.00.Successful.3737468273643.79347'
secret = 'mysecret'
expected_result = '368df010076481d47a21e777871012b62b976339'
result = GenerationUtils.generate_hash(test_string, secret)
assert expected_result == result
def test_generate_timestamp(self):
"""
Test timestamp generation. Hard to test this in a meaningful way. Checking length and valid characters.
"""
result = GenerationUtils().generate_timestamp()
match = re.match(r'([0-9]{14})', result)
assert match
def test_generate_order_id(self):
"""
Test order Id generation. Hard to test this in a meaningful way. Checking length and valid characters.
"""
result = GenerationUtils().generate_order_id()
match = re.match(r'[A-Za-z0-9-_]{32}', result)
assert match
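The hash the first test expects is a 40-character hex SHA-1 digest. A stdlib sketch of producing such a digest (the exact string the SDK concatenates and hashes is its own concern; `"abc"` is just a well-known test vector):

```python
import hashlib
import re

def sha1_hex(text):
    """Hex SHA-1 digest of a UTF-8 string: 40 lowercase hex characters."""
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

digest = sha1_hex("abc")
```

`sha1_hex("abc")` is the standard FIPS vector `a9993e364706816aba3e25717850c26c9cd0d89d`, which matches the 40-hex-character shape the tests assert on.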
# ----------------------------------------------------------------------
# File: form-ex2/main.py  (repo: acandreani/ads_web_exercicios, MIT license)
# ----------------------------------------------------------------------
from flask import Flask, render_template, request
app = Flask(__name__)
bd={"usuario":"alexandre.c.andreani@gmail.com","senha":"12345"}
def usuario_existe(usuario):
return usuario == bd["usuario"]
def verifica_senha(usuario,senha):
return usuario == bd["usuario"] and senha==bd["senha"]
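The two helpers above can be exercised without starting Flask; a self-contained restatement (the credentials here are placeholders, not the real ones):

```python
# Placeholder in-memory credential store, mirroring the bd dict above
bd = {"usuario": "user@example.com", "senha": "secret"}

def usuario_existe(usuario):
    return usuario == bd["usuario"]

def verifica_senha(usuario, senha):
    return usuario_existe(usuario) and senha == bd["senha"]
```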
@app.route("/")
def student():
return render_template("aluno.html")
@app.route("/login")
def login():
return render_template("login.html")
@app.route("/loginresult",methods=['POST'])
def login_result():
if request.method == "POST":
result = request.form
print("result")
print(result)
if usuario_existe(result["email"]):
if verifica_senha(result["email"],result["senha"]):
return render_template("loginresult.html")
else:
return render_template("loginresult_senha_incorreta.html")
else:
return render_template("loginresult_usuario_incorreto.html")
@app.route("/result",methods=['POST'])
def result():
if request.method == "POST":
result = request.form
print("result")
print(result)
return render_template("result.html")
if __name__== "__main__":
app.run(host="0.0.0.0",debug= True)
# ----------------------------------------------------------------------
# File: CodeWars/7 Kyu/Broken sequence.py
# (repo: anubhab-code/Competitive-Programming, MIT license)
# ----------------------------------------------------------------------
def find_missing_number(sequence):
try:
numbers = sorted(int(word) for word in sequence.split(" ") if word)
except ValueError:
return 1
return next((i + 1 for i, n in enumerate(numbers) if i + 1 != n), 0)
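A usage sketch (the function is restated so the example runs on its own): the kata returns the first missing position, 0 for a complete sequence, and 1 when parsing fails:

```python
def find_missing(sequence):
    """First missing position in a space-separated 1..n sequence."""
    try:
        numbers = sorted(int(word) for word in sequence.split(" ") if word)
    except ValueError:
        return 1  # non-numeric token: treat the sequence as broken
    # First index where the sorted value does not equal its 1-based position
    return next((i + 1 for i, n in enumerate(numbers) if i + 1 != n), 0)
```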
# ----------------------------------------------------------------------
# File: nlp/router.py  (repo: kirollosHossam/MachineLearningTask, MIT license)
# ----------------------------------------------------------------------
from __future__ import annotations
from typing import Dict, List, Optional, Union
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from nlp.nlp import Trainer
app = FastAPI()
trainer = Trainer()
# BaseModel subclasses act as data validators: FastAPI uses them to validate
# incoming JSON payloads and to handle validation errors automatically.
class TestingData(BaseModel):
texts: List[str]
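A stdlib-only sketch of the shape check that the `TestingData` model above delegates to pydantic (the function name is mine, not part of FastAPI):

```python
def validate_testing_data(payload):
    """Accept {"texts": [str, ...]} and return the list, else raise ValueError."""
    texts = payload.get("texts") if isinstance(payload, dict) else None
    if not isinstance(texts, list) or not all(isinstance(t, str) for t in texts):
        raise ValueError("texts must be a list of strings")
    return texts
```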
class QueryText(BaseModel):
text: str
class StatusObject(BaseModel):
status: str
timestamp: str
classes: List[str]
evaluation: Dict
class PredictionObject(BaseModel):
text: str
predictions: Dict
class PredictionsObject(BaseModel):
predictions: List[PredictionObject]
@app.get("/status", summary="Get current status of the system")
def get_status():
status = trainer.get_status()
return StatusObject(**status)
@app.get("/trainMachineLearning", summary="Train a new Machine Learning model")
def train_machine_learning():
try:
trainer.trainMachineLearning(trainer.merge().text, trainer.merge().dialect)
status = trainer.get_status()
return StatusObject(**status)
except Exception as e:
raise HTTPException(status_code=503, detail=str(e))
@app.get("/trainDeepLearning", summary="Train a new Deep Learning model")
def train_deep_learning():
try:
trainer.trainDeepLearning(trainer.merge().text, trainer.merge().dialect)
status = trainer.get_status()
return StatusObject(**status)
except Exception as e:
raise HTTPException(status_code=503, detail=str(e))
@app.post("/predict", summary="Predict single input")
def predict(query_text: QueryText):
try:
prediction = trainer.predict([query_text.text])[0]
return PredictionObject(**prediction)
except Exception as e:
raise HTTPException(status_code=503, detail=str(e))
@app.post("/predict-batch", summary="predict a batch of sentences")
def predict_batch(testing_data:TestingData):
try:
predictions = trainer.predict(testing_data.texts)
return PredictionsObject(predictions=predictions)
except Exception as e:
raise HTTPException(status_code=503, detail=str(e))
@app.get("/")
def home():
return({"message": "System is up"})
# ----------------------------------------------------------------------
# File: resources/trials/maya/rotateOrderSym.py
# (repo: adrienparis/Gapalion, MIT license)
# ----------------------------------------------------------------------
#!/bin/env mayapy
# -- coding: utf-8 --
u"""This test checks that each [LEFT] transform has the same rotateOrder as the matching [RIGHT] transform.
[does not exist] entries indicate a naming-convention problem, which prevents testing rotateOrder symmetry for those transforms."""
__author__ = "Adrien PARIS"
__email__ = "a.paris.cs@gmail.com"
import maya.cmds as cmds
title = u"Checking rotate-order symmetry"
image = ""
def test():
passed = True
errors = []
for s in cmds.ls(type="transform"):
if s[-2:] == "_L":
r = s[:-2] + "_R"
if not cmds.objExists(r):
errors.append("does not exists : " + r)
continue
if cmds.getAttr(s + ".rotateOrder") != cmds.getAttr(r + ".rotateOrder"):
passed = False
errors.append("not symetric : {0: <20} -> \t \t {1: <24}".format(s, r))
if s[-2:] == "_R":
r = s[:-2] + "_L"
if not cmds.objExists(r):
errors.append("does not exists : " + r)
continue
return passed, errors
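The `_L`/`_R` pairing logic in the loop above, extracted as a stdlib helper so it can be checked without Maya (the helper name is mine):

```python
def mirror_name(name):
    """Mirrored transform name for *_L / *_R, or None if unsuffixed."""
    if name.endswith("_L"):
        return name[:-2] + "_R"
    if name.endswith("_R"):
        return name[:-2] + "_L"
    return None
```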
# ----------------------------------------------------------------------
# File: freyr_app/core/forms.py  (repo: blanchefort/freyrmonitoring, MIT license)
# ----------------------------------------------------------------------
from django import forms
class AddUrlForm(forms.Form):
"""Add a URL to be checked
"""
url = forms.URLField(initial='https://', label='Enter a link to check')
class UploadHappinessIndex(forms.Form):
"""Form for uploading a file with a custom happiness index
"""
name = forms.CharField(max_length=256, label='Report name')
file = forms.FileField(label='CSV file with the report')
class SearchItem(forms.Form):
"""Search over indexed materials
"""
search_query = forms.CharField(max_length=512, label='✨✨✨Search!')
# ----------------------------------------------------------------------
# File: tools/make_folders.py  (repo: leonzucchini/recipes, MIT license)
# ----------------------------------------------------------------------
import os
import sys
import shutil
import re
def make_output_folder(folder_path, debug=False):
""" Make folder for output, checking for previous results """
# Skip if debug (avoids replace prompt)
if debug:
print "FolderSetup warning: Not creating directory because debug = True"
pass
else:
# If destination folder does not exist then create it
if not os.path.exists(folder_path):
os.mkdir(folder_path)
else:
# Otherwise give a choice to replace (overwrite), use, or exit
confirm_prompt = "The following folder exists:" + "\n" + \
str(folder_path) + "\n" + \
"Would you like to add to it ('a'), overwrite ('o'), or exit ('e'): "
confirm = raw_input(confirm_prompt)
# Prompt until the input is exactly one of the valid choices (a/o/e);
# the anchored pattern avoids accepting any string that merely contains a, e, or o
while not re.match(r'^[aeo]$', confirm):
confirm_prompt = "Please confirm what you want to do." + "\n" + \
"Would you like to add to it ('a'), overwrite ('o'), or exit ('e'):"
confirm = raw_input(confirm_prompt)
# If exit
if confirm == "e":
print "OK exiting."
sys.exit(1)
# Else if overwrite
elif confirm == "o":
# Make folder path
shutil.rmtree(folder_path)
os.mkdir(folder_path)
print "Created output folder: %s" %(folder_path)
# Else if add
elif confirm == "a":
print "OK adding to folder"
return None
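The prompt loop above keeps asking until it gets exactly one of a/o/e; the acceptance rule can be isolated and tested without stdin (the helper name is mine):

```python
def normalize_choice(answer, allowed=("a", "o", "e")):
    """Lower-cased, stripped answer if it is a valid choice, else None."""
    answer = answer.strip().lower()
    return answer if answer in allowed else None
```

Note that a substring match like `re.search(r'[aeo]', answer)` would wrongly accept inputs such as `"banana"`; an exact-membership check does not.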
# ----------------------------------------------------------------------
# File: src/dynamic_programming/python/coins/tests/test_coins.py
# (repo: djeada/GraphAlgorithms, MIT license)
# ----------------------------------------------------------------------
import unittest
import os
import sys
file_dir = os.path.dirname(os.path.dirname(__file__))
sys.path.append(file_dir + "/src")
from coins import coin_change_basic, coin_change_memo, coin_change_tab
class TestCoinChangeBasic(unittest.TestCase):
def test_negative(self):
num = 0
coins = [3, 2, 1]
result = 0
self.assertEqual(coin_change_basic(num, coins), result)
def test_positive(self):
num = 25
coins = [5, 10]
result = 3
self.assertEqual(coin_change_basic(num, coins), result)
class TestCoinChangeMemo(unittest.TestCase):
def test_negative(self):
num = 0
coins = [3, 2, 1]
result = 0
self.assertEqual(coin_change_memo(num, coins), result)
def test_positive(self):
num = 67
coins = [1, 5, 10, 25]
result = 6
self.assertEqual(coin_change_memo(num, coins), result)
class TestCoinChangeTab(unittest.TestCase):
def test_negative(self):
num = 0
coins = [3, 2, 1]
result = 0
self.assertEqual(coin_change_tab(num, coins), result)
def test_positive(self):
num = 67
coins = [1, 5, 10, 25]
result = 6
self.assertEqual(coin_change_tab(num, coins), result)
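The implementations under test live in `src/coins.py` and are not shown here; a hypothetical minimal tabulation version consistent with the expected results in the tests above:

```python
def min_coins(amount, coins):
    """Fewest coins summing to amount via tabulation (assumes it is reachable)."""
    INF = float("inf")
    table = [0] + [INF] * amount          # table[a] = fewest coins for amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and table[a - c] + 1 < table[a]:
                table[a] = table[a - c] + 1
    return table[amount]
```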
if __name__ == "__main__":
unittest.main()
# ----------------------------------------------------------------------
# File: find_duplicates.py  (repo: dallascard/LN_tools, Apache-2.0 license)
# ----------------------------------------------------------------------
# -*- coding: utf-8 -*-
"""
This script looks through the JSON files created by parse_LN_to_JSON.py and looks
for duplicates using the Jaccard Coefficient of k-grams of the article text (body).
It creates a CSV file with all of the cases listed and duplicates marked, and
optionally stores the same information in a JSON file for use by make_sample.py
@author: dcard
"""
#import modules
from os import path, makedirs
from optparse import OptionParser
from json import loads, dump
import codecs
import re
import glob
import csv
import datetime
# This function updates a dictionary if the given value is greater than the present value
# Inputs:
# d: a dictionary
# key: a key for the new dictionary
# value: a value to insert at that key (if it's bigger than the current value)
def insert_max(d, key, value):
# check to see if the key exists
if d.has_key(key):
# if it does, check the value
if value > d[key]:
# if the new value is bigger, update
d[key] = value
# if it doesn't exist, just store the value
else:
d[key] = value
# This function compares two sets of n-grams and returns the Jaccard Coefficient
# Inputs:
# i_shingles, j_shingles: sets of n-grams
def compare_shingles(i_shingles,j_shingles):
# take the intersection between i and j
shared_shingles = i_shingles.intersection(j_shingles)
# get the size of the intersection
shared_count = len(shared_shingles)
# divide the shared count by the size of the union (unless the union is empty)
total_count = len(i_shingles) + len(j_shingles) - shared_count
if total_count > 0:
JC = float(shared_count) / float(total_count)
else:
JC = 0
# return the Jaccard Coefficient
return JC
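How the shingle sets compared above are typically built: slide a window of `k` tokens over the text and collect the k-grams as a set (this builder is a sketch; the script's own tokenization may differ):

```python
def shingle(tokens, k=4):
    """Set of k-grams (as tuples) from a token list."""
    return set(tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1))

def jaccard(a, b):
    """Jaccard Coefficient of two sets; 0 when both are empty."""
    union = len(a) + len(b) - len(a & b)
    return float(len(a & b)) / union if union else 0.0
```

Two near-duplicate sentences share most of their shingles, so their coefficient sits between 0 and 1 and can be compared against the `-t` threshold.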
# This function is called when duplicates are found to update a dictionary
# which stores the case ids of all associate duplictes (including itself)
# Inputs:
# duplicates: a dictionary of duplicates
# i_id, j_id: case_ids of the two duplicates
def store_duplicates(duplicates, i_id, j_id):
# start with case_id i
# if the dictionary already has it as a duplicate then update the set
dup_i = set()
dup_j = set()
if duplicates.has_key(i_id):
dup_i = duplicates[i_id]
else:
dup_i = {i_id,j_id}
if duplicates.has_key(j_id):
dup_j = duplicates[j_id]
else:
dup_j = {i_id,j_id}
new_dup = dup_i.union(dup_j)
new_dup = new_dup.union({i_id, j_id})
for d in new_dup:
if duplicates.has_key(d):
duplicates[d].update(new_dup)
else:
duplicates[d] = new_dup
# return the modified dictionary
return duplicates
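A compact sketch of the same grouping behaviour: merging (i, j) pairs so that every member of a duplicate group maps to the whole group (names are mine):

```python
def merge_pair(groups, i_id, j_id):
    """Record i_id and j_id as duplicates, merging any existing groups."""
    group = groups.get(i_id, {i_id}) | groups.get(j_id, {j_id}) | {i_id, j_id}
    for member in group:
        groups[member] = groups.get(member, set()) | group
    return groups

groups = {}
merge_pair(groups, "a", "b")
merge_pair(groups, "b", "c")  # "a" is pulled into the merged group transitively
```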
### MAIN ###
# set up an options parser
usage = "\n%prog project_dir [options]"
parser = OptionParser(usage=usage)
parser.add_option('-k', help='use K-grams for deduplication [default = %default]', metavar='K', default=4)
parser.add_option('-r', help='range (in days) over which to look for duplicates [default = %default]', metavar='RANGE', default = 62)
parser.add_option('-t', help='Threshold above which to consider similar articles as duplicates [default = %default]', metavar='THRESH', default=0.2)
# make a dictionary of months to look for
MONTHS = {u'january':1, u'february':2, u'march':3, u'april':4, u'may':5, u'june':6, u'july':7, u'august':8, u'september':9, u'october':10, u'november':11, u'december':12}
(options, args) = parser.parse_args()
if len(args) < 1:
exit("Error: Please provide a project directory")
# Make sure we can find the input directory
project_dir = args[0]
if not path.exists(project_dir):
exit("Error: Cannot find project directory")
input_dir = project_dir + '/json/'
output_dir = project_dir + '/metadata/'
if not path.exists(output_dir):
makedirs(output_dir)
# Open the csv file for writing
csv_file_name = output_dir + 'duplicates.csv'
csv_file = open(csv_file_name, 'wb')
writer = csv.writer(csv_file)
# Get a list of all the files in the input directory
files = glob.glob(input_dir + '/*.json')
files.sort()
print "Found", len(files), " files."
date_hash = {} # a dictionary of files (articles) indexed by date
case_years = {} # a dictionary of years, indexed by case id
shingle_k = int(options.k) # the size of shingles to use (k in k-grams)
shingle_thresh = float(options.t) # the threshold for the JC above which to consider duplicates
date_range = int(options.r) # the range (in days) over which to look for duplicates
# Start an empty list of case_ids
case_ids = []
# Go through all the files one by one
count = 0
for f in files:
# open the file and unpack the json object into dictionary
input_file_name = f
name_parts = input_file_name.split('/')
file_name = name_parts[-1]
input_file = codecs.open(input_file_name, encoding='utf-8')
input_text = input_file.read()
input_file.close()
doc = loads(input_text, encoding='utf-8')
# set default (blank) values for various strings
case_id = u'' # case_id
orig_date = u'' # the date string as written in the article
day = u'' # the day from the date string
month = u'' # the month from the date string
year = u'' # the year from the date string
fulldate = u'' # the date in the format YYYYMMDD
# Look for the case_id from this file and add it to the list
if doc.has_key(u'CASE_ID'):
case_id = doc[u'CASE_ID']
case_ids.append(case_id)
# Look for the date from this file and parse it
if doc.has_key(u'DATE'):
orig_date = doc[u'DATE']
year = doc[u'YEAR']
month = doc[u'MONTH']
if doc.has_key(u'DAY'):
day = doc[u'DAY']
else:
day = 0
if day == 0:
            day = 15
date = datetime.date(int(year), int(month), int(day))
# store this file in the dictionary of files indexed by date
if date_hash.has_key(date):
# if the date exists as a key, add this file to the list at that key
file_list = list(date_hash[date])
file_list.append(file_name)
date_hash[date] = file_list
else:
# otherwise, start a new list
date_hash[date] = [file_name]
# also store the year of this article
case_years[case_id] = int(year)
# keep a count for user feedback
count += 1
if (count%1000 == 0):
print "Processed", count, "files."
# get all the dates for which articles exist and sort them
dates = date_hash.keys()
dates.sort()
# set up some variables
first_date = dates[0] # the earliest date for which we have an article
last_date = dates[-1] # the last date for which we have an article
current_date = first_date # the date we're currently considering
nCases = len(case_ids) # the total number of articles
active_dates = [] # a list of lists of cases currently comparing against
duplicates = {} # a dictionary of duplicates, indexed by case_id
max_JCs = {} # the max Jaccard Coefficient (similarity) indexed by case_id
count = 0 # the number of pairs we have processed
one_day = datetime.timedelta(1) # a constant for incrementing by one day
print first_date, last_date
print "Starting loop"
# go through every day, starting with the first
while current_date <= last_date:
# if our list of active dates is full, pop off the first one added
if len(active_dates) == date_range:
# pop the oldest date
active_dates.pop(0)
# then add a new list for the current date
active_dates.append([])
# look for any files associated with the current date
if (date_hash.has_key(current_date)):
# start an empty list of case_ids
cases = []
        # get all the files associated with the current date
files = date_hash[current_date]
# process each file
for f in files:
# read in the json file and unpack it, as above
input_file_name = input_dir + '/' + f
input_file = codecs.open(input_file_name, encoding='utf-8')
input_text = input_file.read()
input_file.close()
doc = loads(input_text, encoding='utf-8')
# get the case id
if doc.has_key(u'CASE_ID'):
case_id = doc[u'CASE_ID']
# get the text of the article
if doc.has_key(u'BODY'):
body = doc[u'BODY']
text = ''
# combine the paragraphs
for b in body:
text += b + u' '
# split the text into words
words = text.split()
shingles = set()
# create a set of all the n-grams in the article
for w in range(len(words) - shingle_k + 1):
shingle = u''
# create a shingle from k words
for j in range(shingle_k):
                        shingle += words[w+j] + u' '  # separate words so adjacent k-grams don't run together
# add it to the set
shingles.add(shingle)
# add this case and its shingles to the list of cases for this date
cases.append((case_id, shingles))
# add this list of cases from this date to the list of active cases
active_dates[-1] = cases
# compute similarities among new cases and with old cases
# check to see if anything was added this iteration
new_cases = active_dates[-1]
    # if at least one case was added
for i in range(len(new_cases)):
# get the case_id from the tuple for this case
i_id = new_cases[i][0]
# get the set of shingles from the tuple for this case
i_shingles = new_cases[i][1]
# compare it to other new cases
for j in range(i+1, len(new_cases)):
j_id = new_cases[j][0]
j_shingles = new_cases[j][1]
# compute the Jaccard Coefficient between shingles
JC = compare_shingles(i_shingles, j_shingles)
# store the max JC for these cases
insert_max(max_JCs, i_id, JC)
insert_max(max_JCs, j_id, JC)
# if the JC is above our threshold, consider these to be duplicates
if (JC > shingle_thresh):
duplicates = store_duplicates(duplicates, i_id, j_id)
# keep a count for user feedback
count += 1
if (count%10000 == 0):
print "Processed", count, "pairs"
# now compare each new case to all old cases in the active range
# go through each date in the active range
for k in range(len(active_dates)-1):
# get each case associated with that date
for j in range(len(active_dates[k])):
# compare as above
j_id = active_dates[k][j][0]
j_shingles = active_dates[k][j][1]
JC = compare_shingles(i_shingles, j_shingles)
insert_max(max_JCs, i_id, JC)
insert_max(max_JCs, j_id, JC)
if (JC > shingle_thresh):
duplicates = store_duplicates(duplicates, i_id, j_id)
count += 1
if (count%10000 == 0):
print "Processed", count, "pairs"
# go to the next date
current_date = current_date + one_day
# output the results as a csv file
case_ids.sort()
# for each case write case_id, max_JC, and list of duplicates
for c in case_ids:
row = [c]
if max_JCs.has_key(c):
row.append(max_JCs[c])
if duplicates.has_key(c):
dup_list = list(duplicates[c])
dup_list.sort()
row.append(dup_list)
writer.writerow(row)
csv_file.close()
# also write this information as a JSON object
output = {}
# for each case, save the case_id, year, and list of duplicates
for c in case_ids:
case = []
if case_years.has_key(c):
case.append(case_years[c])
else:
case.append(0)
if duplicates.has_key(c):
case.append(list(duplicates[c]))
else:
case.append([])
output[c] = case
# save the output to a json file
output_file_name = output_dir + 'duplicates.json'
output_file = codecs.open(output_file_name, mode='w', encoding='utf-8')
dump(output, output_file, ensure_ascii=False, indent=2)
output_file.close()
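# The loop above builds word k-grams ("shingles") per article and flags pairs
# whose Jaccard Coefficient exceeds the threshold. A minimal, self-contained
# sketch of that comparison follows; compare_shingles (defined elsewhere) is
# assumed to compute |A & B| / |A | B|, and the sample sentences are illustrative.

```python
def make_shingles(text, k=4):
    # one shingle per window of k consecutive words
    words = text.split()
    return set(u' '.join(words[i:i + k]) for i in range(len(words) - k + 1))

def jaccard(a, b):
    # Jaccard Coefficient of two shingle sets
    return len(a & b) / float(len(a | b)) if (a or b) else 0.0

s1 = make_shingles(u"the quick brown fox jumps over the lazy dog")
s2 = make_shingles(u"the quick brown fox leaps over the lazy dog")
print(jaccard(s1, s2))  # 0.2 -> exactly at the default threshold, so not a duplicate
```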
# LovelyFlask/App/views.py (repo: legendary6666/AxfFlask, Apache-2.0)
from flask import Blueprint, request, render_template, url_for
from LovelyFlask.App.models import HomeWheel, HomeNav, HomeMustBuy, HomeMainShow, HomeShop, FoodTypes, Goods, CartModel
blue = Blueprint("first_blue", __name__, url_prefix="/api/")
def init_first_blue(app):
app.register_blueprint(blueprint=blue)
ALL_TYPE = 0
@blue.route("/home/", methods=["GET", "POST"])
def home():
wheels = HomeWheel.query.all()
navs = HomeNav.query.all()
mustbuys = HomeMustBuy.query.all()
mainshows = HomeMainShow.query.all()
shops = HomeShop.query.all()
shops0_1 = shops[0:1]
shops1_3 = shops[1:3]
shops3_7 = shops[3:7]
shops7_11 = shops[7:11]
data = {
"wheels": wheels,
"navs": navs,
"mustbuys": mustbuys,
'shops0_1': shops0_1,
'shops1_3': shops1_3,
'shops3_7': shops3_7,
'shops7_11': shops7_11,
'mainshows': mainshows,
}
    return render_template("test.html", **data)
def market(request):
    # a default value is supplied first, so we can go straight to this page
return render_template(url_for("marketWithParams"))
def marketWithParams(request, categoryid,childcid):
foods = FoodTypes.objects.all()
if ALL_TYPE == 0:
goods_list = Goods.objects.all().filter(categoryid=categoryid)
else:
goods_list = Goods.objects.all().filter(categoryid = categoryid).filter(childcid=childcid)
food = FoodTypes.query.get(typeid=categoryid)
    # food is the category object for this categoryid
    # its childtypenames attribute is a single string whose entries are separated by '#'
    childtypestr = food.childtypenames
    # split on '#' to get a list of "label:id" entries
    childtypelist = childtypestr.split("#")
    # split each entry on ':' and collect the pieces, yielding a nested list
    childlist = []
    for child in childtypelist:
        childlist.append(child.split(":"))
    # e.g. [["all categories", "0"], ["imported fruit", "110"]]
    # index 0 of each sub-list is the label displayed on the page
data = {
"title":"闪购",
"foods":foods,
"goods_list":goods_list,
"categoryid":categoryid,
"childlist":childlist,
}
return render_template('/html/market/market.html',context=data)
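# A self-contained sketch of the '#'/':' parsing performed in marketWithParams
# above; the sample childtypenames string here is hypothetical.

```python
# entries are separated by '#', and each entry is "label:id"
childtypestr = u"all:0#imported fruit:110"
childlist = [child.split(":") for child in childtypestr.split("#")]
print(childlist)  # [['all', '0'], ['imported fruit', '110']]
```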
def cart(request):
userid = request.session.get("user_id")
if not userid:
return render_template(url_for(""))
carts = CartModel.objects.filter(c_user_id=userid)
is_all_select = True
totalprice = 0
    # the total price should be computed as soon as the page loads, counting every selected item
for cart in carts:
if not cart.c_goods_select:
is_all_select = False
            # once an unselected item is found, is_all_select becomes False; selected items still have their prices summed
else:
totalprice = totalprice + cart.c_goods_num * cart.c_goods.price
data = {
"title":"购物车",
"carts":carts,
"is_all_select":is_all_select,
"totalprice":totalprice
}
return render_template('/html/cart/cart.html',context=data)
# src/tests/wikipedia.py (repo: alexseitsinger/page-scrapers, BSD-2-Clause)
from page_scrapers.wikipedia.scrapers.film import WikipediaFilmScraper
query_1 = "hellraiser 2"
scraper_1 = WikipediaFilmScraper(query_1)
scraped_1 = scraper_1.scrape()
filtered_1 = scraper_1.filter()
print(filtered_1)
query_2 = "hellraiser judgement"
scraper_2 = WikipediaFilmScraper(query_2)
scraped_2 = scraper_2.scrape()
filtered_2 = scraper_2.filter()
print(filtered_2)
# eden/modifier/rna/lib_forgi.py (repo: zaidurrehman/EDeN, MIT)
#!/usr/bin/env python
"""bulge_graph.py: A graph representation of RNA secondary structure based
on its decomposition into primitive structure types: stems, hairpins,
interior loops, multiloops, etc...
for eden and graphlearn we stripped forgi down to this single file.
forgi: https://github.com/pkerpedjiev/forgi
"""
import sys
import collections as col
import itertools as it
import os
import operator as oper
import contextlib
import random
import shutil
import tempfile as tf
__author__ = "Peter Kerpedjiev"
__copyright__ = "Copyright 2012, 2013, 2014"
__version__ = "0.2"
__maintainer__ = "Peter Kerpedjiev"
__email__ = "pkerp@tbi.univie.ac.at"
bracket_left = "([{<ABCDEFGHIJKLMNOPQRSTUVWXYZ"
bracket_right = ")]}>abcdefghijklmnopqrstuvwxyz"
def gen_random_sequence(l):
'''
Generate a random RNA sequence of length l.
'''
return "".join([random.choice(['A', 'C', 'G', 'U']) for i in range(l)])
@contextlib.contextmanager
def make_temp_directory():
'''
Yanked from:
http://stackoverflow.com/questions/13379742/right-way-to-clean-up-a-temporary-folder-in-python-class
'''
temp_dir = tf.mkdtemp()
yield temp_dir
shutil.rmtree(temp_dir)
def insert_into_stack(stack, i, j):
# print "add", i,j
k = 0
while len(stack[k]) > 0 and stack[k][len(stack[k]) - 1] < j:
k += 1
stack[k].append(j)
return k
def delete_from_stack(stack, j):
# print "del", j
k = 0
while len(stack[k]) == 0 or stack[k][len(stack[k]) - 1] != j:
k += 1
stack[k].pop()
return k
def pairtable_to_dotbracket(pt):
"""
Converts arbitrary pair table array (ViennaRNA format) to structure in dot bracket format.
"""
stack = col.defaultdict(list)
seen = set()
res = ""
for i in range(1, pt[0] + 1):
if pt[i] != 0 and pt[i] in seen:
raise ValueError('Invalid pairtable contains duplicate entries')
seen.add(pt[i])
if pt[i] == 0:
res += '.'
else:
if pt[i] > i: # '(' check if we can stack it...
res += bracket_left[insert_into_stack(stack, i, pt[i])]
else: # ')'
res += bracket_right[delete_from_stack(stack, i)]
return res
def inverse_brackets(bracket):
res = col.defaultdict(int)
for i, a in enumerate(bracket):
res[a] = i
return res
def dotbracket_to_pairtable(struct):
"""
Converts arbitrary structure in dot bracket format to pair table (ViennaRNA format).
"""
pt = [0] * (len(struct) + 1)
pt[0] = len(struct)
stack = col.defaultdict(list)
inverse_bracket_left = inverse_brackets(bracket_left)
inverse_bracket_right = inverse_brackets(bracket_right)
for i, a in enumerate(struct):
i += 1
# print i,a, pt
if a == ".":
pt[i] = 0
else:
if a in inverse_bracket_left:
stack[inverse_bracket_left[a]].append(i)
else:
if len(stack[inverse_bracket_right[a]]) == 0:
raise ValueError('Too many closing brackets!')
j = stack[inverse_bracket_right[a]].pop()
pt[i] = j
pt[j] = i
if len(stack[inverse_bracket_left[a]]) != 0:
raise ValueError('Too many opening brackets!')
return pt
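# A minimal, self-contained sketch of the dot-bracket -> pair-table conversion
# implemented above, restricted to the plain '(' / ')' bracket type (no
# pseudoknot brackets, so no bracket-level bookkeeping is needed).

```python
def db_to_pt(struct):
    pt = [0] * (len(struct) + 1)
    pt[0] = len(struct)  # ViennaRNA convention: entry 0 holds the length
    stack = []
    for i, a in enumerate(struct, start=1):
        if a == '(':
            stack.append(i)
        elif a == ')':
            j = stack.pop()
            pt[i], pt[j] = j, i  # record the pairing in both directions
    return pt

print(db_to_pt("((..))"))  # [6, 6, 5, 0, 0, 2, 1]
```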
def pairtable_to_tuples(pt):
'''
Convert a pairtable to a list of base pair tuples.
i.e. [4,3,4,1,2] -> [(1,3),(2,4),(3,1),(4,2)]
:param pt: A pairtable
:return: A list paired tuples
'''
pt = iter(pt)
# get rid of the first element which contains the length
# of the sequence. We'll figure it out after the traversal
pt.next()
tuples = []
for i, p in enumerate(pt):
tuples += [(i + 1, p)]
return tuples
def tuples_to_pairtable(pair_tuples, seq_length=None):
'''
Convert a representation of an RNA consisting of a list of tuples
to a pair table:
i.e. [(1,3),(2,4),(3,1),(4,2)] -> [4,3,4,1,2]
    :param pair_tuples: A list of pair tuples
:param seq_length: How long is the sequence? Only needs to be passed in when
the unpaired nucleotides aren't passed in as (x,0) tuples.
:return: A pair table
'''
if seq_length is None:
max_bp = max([max(x) for x in pair_tuples])
else:
max_bp = seq_length
pt = [0] * (max_bp + 1)
pt[0] = max_bp
for tup in pair_tuples:
pt[tup[0]] = tup[1]
return pt
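# A self-contained sketch of the pairtable <-> tuple-list round trip implemented
# by pairtable_to_tuples and tuples_to_pairtable above, with seq_length passed
# explicitly.

```python
def pt_to_tuples(pt):
    # drop the leading length entry, then pair each position with its partner
    return [(i, p) for i, p in enumerate(pt[1:], start=1)]

def tuples_to_pt(tuples, seq_length):
    pt = [0] * (seq_length + 1)
    pt[0] = seq_length
    for pos, partner in tuples:
        pt[pos] = partner
    return pt

pt = [4, 3, 4, 1, 2]
tups = pt_to_tuples(pt)
print(tups)                         # [(1, 3), (2, 4), (3, 1), (4, 2)]
print(tuples_to_pt(tups, 4) == pt)  # True
```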
def add_bulge(bulges, bulge, context, message):
"""
A wrapper for a simple dictionary addition
Added so that debugging can be made easier
:param bulges:
:param bulge:
:param context:
:param message:
:return:
"""
# bulge = (context, bulge)
bulges[context] = bulges.get(context, []) + [bulge]
return bulges
def any_difference_of_one(stem, bulge):
"""
See if there's any difference of one between the two
ends of the stem [(a,b),(c,d)] and a bulge (e,f)
:param stem: A couple of couples (2 x 2-tuple) indicating the start and end
nucleotides of the stem in the form ((s1, e1), (s2, e2))
:param bulge: A couple (2-tuple) indicating the first and last position
of the bulge.
:return: True if there is an overlap between the stem nucleotides and the
bulge nucleotides. False otherwise
"""
for stem_part in stem:
for part in stem_part:
for bulge_part in bulge:
if abs(bulge_part - part) == 1:
return True
return False
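# A self-contained demonstration of the adjacency test above: a stem
# ((s1, e1), (s2, e2)) and a bulge (s, e) are considered connected when any
# stem end is exactly one nucleotide away from a bulge end. The function body
# is copied here so the example runs on its own.

```python
def any_difference_of_one(stem, bulge):
    for stem_part in stem:
        for part in stem_part:
            for bulge_part in bulge:
                if abs(bulge_part - part) == 1:
                    return True
    return False

print(any_difference_of_one(((0, 10), (3, 7)), (11, 14)))  # True: 11 is next to 10
print(any_difference_of_one(((0, 10), (3, 7)), (20, 25)))  # False: no adjacency
```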
def print_bulges(bulges):
"""
Print the names and definitions of the bulges.
:param bulges: A list of tuples of the form [(s, e)] where s and e are the
numbers of the nucleotides at the start and end of the bulge.
"""
for i in range(len(bulges)):
# print "bulge:", bulge
bulge_str = "define b{} 1".format(i)
bulge = bulges[i]
bulge_str += " {} {}".format(bulge[0] + 1, bulge[1] + 1)
print bulge_str
def condense_stem_pairs(stem_pairs):
"""
Given a list of stem pairs, condense them into stem definitions
I.e. the pairs (0,10),(1,9),(2,8),(3,7) can be condensed into
just the ends of the stem: [(0,10),(3,7)]
:param stem_pairs: A list of tuples containing paired base numbers.
:returns: A list of tuples of tuples of the form [((s1, e1), (s2, e2))]
where s1 and e1 are the nucleotides at one end of the stem
and s2 and e2 are the nucleotides at the other.
"""
stem_pairs.sort()
prev_pair = (-10, -10)
stems = []
start_pair = None
for pair in stem_pairs:
# There's a potential bug here since we don't check the direction
# but hopefully it won't bite us in the ass later
if abs(pair[0] - prev_pair[0]) != 1 or abs(pair[1] - prev_pair[1]) != 1:
if start_pair is not None:
stems += [(start_pair, prev_pair)]
start_pair = pair
prev_pair = pair
if start_pair is not None:
stems += [(start_pair, prev_pair)]
return stems
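# A self-contained copy of the stem condensation above, for illustration: runs
# of consecutive pairs (i, j), (i+1, j-1), ... collapse to their two end pairs.

```python
def condense(stem_pairs):
    stem_pairs = sorted(stem_pairs)
    prev_pair, start_pair, stems = (-10, -10), None, []
    for pair in stem_pairs:
        # a break in consecutiveness on either strand starts a new stem
        if abs(pair[0] - prev_pair[0]) != 1 or abs(pair[1] - prev_pair[1]) != 1:
            if start_pair is not None:
                stems.append((start_pair, prev_pair))
            start_pair = pair
        prev_pair = pair
    if start_pair is not None:
        stems.append((start_pair, prev_pair))
    return stems

print(condense([(0, 10), (1, 9), (2, 8), (3, 7)]))  # [((0, 10), (3, 7))]
```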
def print_brackets(brackets):
"""
Print the brackets and a numbering, for debugging purposes
:param brackets: A string with the dotplot passed as input to this script.
"""
numbers = [chr(ord('0') + i % 10) for i in range(len(brackets))]
tens = [chr(ord('0') + i / 10) for i in range(len(brackets))]
print "brackets:\n", brackets, "\n", "".join(tens), "\n", "".join(numbers)
def find_bulges_and_stems(brackets):
"""
Iterate through the structure and enumerate the bulges and the stems that are
present.
The returned stems are of the form [[(s1, s2), (e1,e2)], [(s1,s2),(e1,e2)],...]
where (s1,s2) are the residue numbers of one end of the stem and (e1,e2) are the
residue numbers at the other end of the stem
(see condense_stem_pairs)
The returned bulges are of the form [(s,e), (s,e),...] where s is the start of a bulge
and e is the end of a bulge
:param brackets: A string with the dotplot passed as input to this script.
"""
prev = 'x'
context = 0
bulges = dict()
finished_bulges = []
context_depths = dict()
opens = []
stem_pairs = []
dots_start = 0
context_depths[0] = 0
i = 0
for i in range(len(brackets)):
if brackets[i] == '(':
opens.append(i)
if prev == '(':
context_depths[context] = context_depths.get(context, 0) + 1
continue
else:
context += 1
context_depths[context] = 1
if prev == '.':
dots_end = i - 1
bulges = add_bulge(
bulges, (dots_start, dots_end), context, "4")
if brackets[i] == ')':
if len(opens) == 0:
raise Exception("Unmatched close bracket")
stem_pairs.append((opens.pop(), i))
context_depths[context] -= 1
if context_depths[context] == 0:
if context in bulges:
finished_bulges += bulges[context]
bulges[context] = []
context -= 1
if prev == '.':
dots_end = i - 1
bulges = add_bulge(
bulges, (dots_start, dots_end), context, "2")
if brackets[i] == '.':
if prev == '.':
continue
dots_start = i
prev = brackets[i]
if prev == '.':
dots_end = i
bulges = add_bulge(bulges, (dots_start, dots_end), context, "7")
elif prev == '(':
print >> sys.stderr, "Unmatched bracket at the end"
sys.exit(1)
"""
elif prev == ')':
bulges = add_bulge(bulges, (i+1, i+1), context, "8")
"""
if context in bulges.keys():
finished_bulges += bulges[context]
if len(opens) > 0:
raise Exception("Unmatched open bracket")
stem_pairs.sort()
stems = condense_stem_pairs(stem_pairs)
return finished_bulges, stems
def print_name(filename):
print "name", os.path.splitext(filename)[0]
class BulgeGraph(object):
def __init__(self, bg_file=None, dotbracket_str='', seq=''):
self.seq_length = 0
self.ang_types = None
self.mst = None
self.build_order = None
self.name = "untitled"
self.defines = dict()
self.edges = col.defaultdict(set)
self.longrange = col.defaultdict(set)
self.weights = dict()
# sort the coordinate basis for each stem
self.bases = dict()
self.stem_invs = dict()
self.seq_ids = []
self.name_counter = 0
if dotbracket_str != '':
self.from_dotbracket(dotbracket_str)
self.seq = seq
for i, s in enumerate(seq):
self.seq_ids += [(' ', str(i + 1), ' ')]
# if bg_file is not None:
# self.from_bg_file(bg_file)
# get an internal index for a named vertex
# this applies to both stems and edges
def get_vertex(self, name=None):
"""
Return a new unique vertex name.
"""
if name is None:
name = "x{}".format(self.name_counter)
self.name_counter += 1
return name
def element_length(self, key):
"""
Get the number of residues that are contained within this element.
:param key: The name of the element.
"""
d = self.defines[key]
length = 0
for i in range(0, len(d), 2):
length += d[i + 1] - d[i] + 1
return length
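# A standalone sketch of the flat define-list arithmetic used by element_length
# above: defines store residue ranges as [start1, end1, start2, end2, ...], and
# the element length is the total residue count over all ranges (ends inclusive).

```python
def element_length(define):
    # sum (end - start + 1) over each (start, end) pair in the flat list
    return sum(define[i + 1] - define[i] + 1 for i in range(0, len(define), 2))

print(element_length([2, 3, 5, 7]))  # 2 + 3 = 5 residues
```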
def stem_length(self, key):
"""
Get the length of a particular element. If it's a stem, it's equal to
the number of paired bases. If it's an interior loop, it's equal to the
number of unpaired bases on the strand with less unpaired bases. If
it's a multiloop, then it's the number of unpaired bases.
"""
d = self.defines[key]
if key[0] == 's' or key[0] == 'y':
return (d[1] - d[0]) + 1
elif key[0] == 'f':
return self.get_bulge_dimensions(key)[0]
elif key[0] == 't':
return self.get_bulge_dimensions(key)[1]
elif key[0] == 'h':
return self.get_bulge_dimensions(key)[0]
else:
return min(self.get_bulge_dimensions(key))
def get_single_define_str(self, key):
"""
Get a define string for a single key.
"""
return "define {} {}".format(key, " ".join([str(d) for d in self.defines[key]]))
def get_define_str(self):
"""
Convert the defines into a string.
Format:
define [name] [start_res1] [end_res1] [start_res2] [end_res2]
"""
defines_str = ''
# a method for sorting the defines
def define_sorter(k):
drni = self.define_residue_num_iterator(k, adjacent=True)
return drni.next()
for key in sorted(self.defines.keys(), key=define_sorter):
defines_str += self.get_single_define_str(key)
# defines_str += "define %s %s" % ( key, " ".join([str(d) for d in
# self.defines[key]]))
defines_str += '\n'
return defines_str
def get_length_str(self):
return "length " + str(self.seq_length) + '\n'
def get_connect_str(self):
"""
Get the connections of the bulges in the graph.
Format:
connect [from] [to1] [to2] [to3]
"""
whole_str = ''
for key in self.edges:
if len(self.edges[key]) == 0:
continue
# Our graph will be defined by the stems and the bulges they
# connect to
name = key
if name[0] == 's':
out_str = "connect {}".format(name)
for dest in self.edges[key]:
out_str += " {}".format(dest)
whole_str += out_str
whole_str += '\n'
return whole_str
def get_sequence_str(self):
"""
Return the sequence along with its keyword. I.e.
seq ACGGGCC
"""
if len(self.seq) > 0:
return "seq {}\n".format(self.seq)
else:
return ""
def get_name_str(self):
"""
Return the name of this structure along with its keyword:
name 1y26
"""
return "name {}\n".format(self.name)
def to_bg_string(self):
"""
Output a string representation that can be stored and reloaded.
"""
out_str = ''
out_str += self.get_name_str()
out_str += self.get_length_str()
out_str += self.get_sequence_str()
out_str += self.get_define_str()
out_str += self.get_connect_str()
return out_str
def to_file(self, filename):
with open(filename, 'w') as f:
out_str = self.to_bg_string()
f.write(out_str)
def to_element_string(self):
"""
Create a string similar to dotbracket notation that identifies what
type of element is present at each location.
For example the following dotbracket:
..((..))..
Should yield the following element string:
ffsshhsstt
Indicating that it begins with a fiveprime region, continues with a
stem, has a hairpin after the stem, the stem continues and it is terminated
by a threeprime region.
"""
output_str = [' '] * (self.seq_length + 1)
for d in self.defines.keys():
for resi in self.define_residue_num_iterator(d, adjacent=False):
output_str[resi] = d[0]
return "".join(output_str).strip()
def define_range_iterator(self, node, adjacent=False, seq_ids=False):
"""
Return the ranges of the nucleotides in the define.
In other words, if a define contains the following: [1,2,7,8]
The ranges will be [1,2] and [7,8].
:param adjacent: Use the nucleotides in the neighboring element which
connect to this element as the range starts and ends.
:return: A list of two-element lists
"""
a = iter(self.defines[node])
ranges = it.izip(a, a)
if node[0] == 'i':
# interior loops have to be treated specially because
# they might have a bulge that has no unpaired nucleotides on one
# strand
if adjacent:
conns = self.connections(node)
s1 = self.defines[conns[0]]
s2 = self.defines[conns[1]]
# offset by one, which will be reversed in the yield step
# below
ranges = [[s1[1] + 1, s2[0] - 1], [s2[3] + 1, s1[2] - 1]]
if node[0] == 'm':
if adjacent:
conns = self.connections(node)
s1 = self.get_sides_plus(conns[0], node)[0]
s2 = self.get_sides_plus(conns[1], node)[0]
rnge = sorted([self.defines[conns[0]][s1],
self.defines[conns[1]][s2]])
ranges = [[rnge[0] + 1, rnge[1] - 1]]
for (ds1, ds2) in ranges:
if adjacent:
if ds1 > 1:
ds1 -= 1
if ds2 < self.seq_length:
ds2 += 1
if seq_ids:
# this will cause problems if the nucleotide has insertion
# codes
yield [self.seq_ids[ds1 - 1], self.seq_ids[ds2 - 1]]
else:
yield [ds1, ds2]
def define_residue_num_iterator(self, node, adjacent=False, seq_ids=False):
"""
Iterate over the residue numbers that belong to this node.
:param node: The name of the node
"""
visited = set()
for r in self.define_range_iterator(node, adjacent, seq_ids=False):
for i in range(r[0], r[1] + 1):
if seq_ids:
if self.seq_ids[i - 1] not in visited:
visited.add(self.seq_ids[i - 1])
yield self.seq_ids[i - 1]
else:
if i not in visited:
visited.add(i)
yield i
def iterate_over_seqid_range(self, start_id, end_id):
"""
Iterate over the seq_ids between the start_id and end_id.
"""
i1 = self.seq_ids.index(start_id)
i2 = self.seq_ids.index(end_id)
for i in range(i1, i2 + 1):
yield self.seq_ids[i]
def create_bulge_graph(self, stems, bulges):
"""
Find out which stems connect to which bulges
Stems and bulges which share a nucleotide are considered connected.
:param stems: A list of tuples of tuples of the form [((s1, e1), (s2, e2))]
where s1 and e1 are the nucleotides at one end of the stem
and s2 and e2 are the nucleotides at the other.
:param bulges: A list of tuples of the form [(s, e)] where s and e are the
numbers of the nucleotides at the start and end of the bulge.
"""
for i in range(len(stems)):
stem = stems[i]
for j in range(len(bulges)):
bulge = bulges[j]
if any_difference_of_one(stem, bulge):
self.edges['y{}'.format(i)].add('b{}'.format(j))
self.edges['b{}'.format(j)].add('y{}'.format(i))
def create_stem_graph(self, stems, bulge_counter):
"""
Determine which stems are connected to each other. A stem can be connected to
another stem when there is an interior loop with an unpaired nucleotide on
one side. In this case, a bulge will be created on the other side, but it
will only consist of the two paired bases around where the unpaired base
would be if it existed.
The defines for these bulges will be printed as well as the connection strings
for the stems they are connected to.
:param stems: A list of tuples of tuples of the form [((s1, e1), (s2, e2))]
where s1 and e1 are the nucleotides at one end of the stem
and s2 and e2 are the nucleotides at the other.
:param bulge_counter: The number of bulges that have been encountered so far.
:returns: A dictionary indexed by the number of a stem, containing a set of the
other stems that the index is connected to.
"""
# print "stems:", stems
stem_stems = dict()
for i in range(len(stems)):
for j in range(i + 1, len(stems)):
for k1 in range(2):
# don't fear the for loop
for k2 in range(2):
for l1 in range(2):
for l2 in range(2):
s1 = stems[i][k1][l1]
s2 = stems[j][k2][l2]
if abs(s1 - s2) == 1:
stem_stems_set = stem_stems.get(i, set())
if j not in stem_stems_set:
bn = 'b{}'.format(bulge_counter)
# self.defines[bn] = [min(s1, s2)+1,
# max(s1, s2)+1]
self.defines[bn] = []
self.weights[bn] = 1
self.edges['y{}'.format(i)].add(bn)
self.edges[bn].add('y{}'.format(i))
self.edges['y{}'.format(j)].add(bn)
self.edges[bn].add('y{}'.format(j))
bulge_counter += 1
stem_stems_set.add(j)
stem_stems[i] = stem_stems_set
for d in self.defines.keys():
if d[0] != 'y':
continue
(s1, e1, s2, e2) = self.defines[d]
if abs(s2 - e1) == 1:
bn = 'b{}'.format(bulge_counter)
self.defines[bn] = []
self.weights[bn] = 1
self.edges[bn].add(d)
self.edges[d].add(bn)
bulge_counter += 1
return stem_stems
def remove_vertex(self, v):
"""
Delete a node after merging it with another
:param v: The name of the node
"""
# delete all edges to this node
for key in self.edges[v]:
self.edges[key].remove(v)
for edge in self.edges:
if v in self.edges[edge]:
self.edges[edge].remove(v)
# delete all edges from this node
del self.edges[v]
del self.defines[v]
def reduce_defines(self):
"""
Make defines like this:
define x0 2 124 124 3 4 125 127 5 5
Into this:
define x0 2 3 5 124 127
That is, consolidate contiguous bulge region defines.
"""
for key in self.defines.keys():
if key[0] != 's':
assert (len(self.defines[key]) % 2 == 0)
new_j = 0
while new_j < len(self.defines[key]):
j = new_j
new_j += j + 2
(f1, t1) = (
int(self.defines[key][j]), int(self.defines[key][j + 1]))
# remove bulges of length 0
if f1 == -1 and t1 == -2:
del self.defines[key][j]
del self.defines[key][j]
new_j = 0
continue
# merge contiguous bulge regions
for k in range(j + 2, len(self.defines[key]), 2):
if key[0] == 'y':
# we can have stems with defines like: [1,2,3,4]
                            # which would imply a non-existent loop at its end
continue
(f2, t2) = (
int(self.defines[key][k]), int(self.defines[key][k + 1]))
if t2 + 1 != f1 and t1 + 1 != f2:
continue
if t2 + 1 == f1:
self.defines[key][j] = str(f2)
self.defines[key][j + 1] = str(t1)
elif t1 + 1 == f2:
self.defines[key][j] = str(f1)
self.defines[key][j + 1] = str(t2)
del self.defines[key][k]
del self.defines[key][k]
new_j = 0
break
def merge_vertices(self, vertices):
"""
This is done when two of the outgoing strands of a stem
go to different bulges
It is assumed that the two ends are on the same sides because
at least one vertex has a weight of 2, implying that it accounts
for all of the edges going out of one side of the stem
:param vertices: A list of vertex names to combine into one.
"""
merge_str = ""
new_vertex = self.get_vertex()
self.weights[new_vertex] = 0
# assert(len(vertices) == 2)
connections = set()
for v in vertices:
merge_str += " {}".format(v)
# what are we gonna merge?
for item in self.edges[v]:
connections.add(item)
# Add the definition of this vertex to the new vertex
# self.merge_defs[new_vertex] = self.merge_defs.get(new_vertex, [])
# + [v]
if v[0] == 's':
self.defines[new_vertex] = self.defines.get(
new_vertex, []) + [self.defines[v][0],
self.defines[v][2]] + [
self.defines[v][1], self.defines[v][3]]
else:
self.defines[new_vertex] = self.defines.get(
new_vertex, []) + self.defines[v]
self.weights[new_vertex] += 1
# remove the old vertex, since it's been replaced by new_vertex
self.remove_vertex(v)
self.reduce_defines()
# self.weights[new_vertex] = 2
for connection in connections:
self.edges[new_vertex].add(connection)
self.edges[connection].add(new_vertex)
return new_vertex
def nucleotides_to_elements(self, nucleotides):
"""
Convert a list of nucleotides to element names.
Remove redundant entries and return a set.
"""
return set([self.get_node_from_residue_num(n) for n in nucleotides])
def find_bulge_loop(self, vertex, max_length=4):
"""
Find a set of nodes that form a loop containing the
given vertex and no longer than max_length nodes.
:param vertex: The vertex to start the search from.
:param max_length: The maximum number of nodes allowed in the loop.
:returns: A list of the nodes in the loop, or an empty list if no such loop exists.
"""
visited = set()
to_visit = [(key, 1) for key in self.edges[vertex]]
visited.add(vertex)
in_path = [vertex]
while len(to_visit) > 0:
(current, depth) = to_visit.pop()
visited.add(current)
in_path = in_path[:depth]
in_path.append(current)
for key in self.edges[current]:
if key == vertex and depth > 1:
if len(in_path[:depth + 1]) > max_length:
continue
else:
return in_path[:depth + 1]
if key not in visited:
to_visit.append((key, depth + 1))
return []
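The depth-limited search above can be exercised on a plain adjacency dictionary. This standalone sketch (the name `find_loop` is illustrative, not part of this module) mirrors the same stack-based traversal:

```python
def find_loop(edges, vertex, max_length=4):
    # Depth-limited DFS: keep the current path, and report a cycle back
    # to `vertex` containing at most `max_length` nodes.
    visited = {vertex}
    to_visit = [(key, 1) for key in edges[vertex]]
    in_path = [vertex]
    while to_visit:
        current, depth = to_visit.pop()
        visited.add(current)
        in_path = in_path[:depth]
        in_path.append(current)
        for key in edges[current]:
            if key == vertex and depth > 1:
                if len(in_path[:depth + 1]) <= max_length:
                    return in_path[:depth + 1]
            elif key not in visited:
                to_visit.append((key, depth + 1))
    return []

# A triangle a-b-c with a pendant node d: only the triangle is a loop.
edges = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
print(find_loop(edges, 'a'))
```

The loop through 'a' comes back as a three-node path starting at 'a'; a vertex that sits on no cycle (such as 'd') yields an empty list.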
def add_node(self, name, edges, define, weight=1):
self.defines[name] = define
self.edges[name] = edges
self.weights[name] = weight
for edge in self.edges[name]:
self.edges[edge].add(name)
def dissolve_stem(self, key):
"""
Remove a stem. This means that we need
to reconfigure all of the adjacent elements in such a manner
that they now include the nucleotides that were formerly
in this stem.
"""
st = list(self.stem_bp_iterator(key))
self.remove_base_pairs(st)
def remove_base_pairs(self, to_remove):
"""
Remove all of the base pairs listed in to_remove.
:param to_remove: A list of tuples, each containing the numbers of the two paired nucleotides.
:return: nothing
"""
pt = self.to_pair_tuples()
nt = []
for p in pt:
to_add = p
for s in to_remove:
if sorted(p) == sorted(s):
to_add = (p[0], 0)
break
nt += [to_add]
self.defines = dict()
# self.edges = dict()
self.from_tuples(nt)
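The filtering step above can be shown on bare pair tuples; this is a standalone sketch of the matching logic only (`remove_pairs` is a hypothetical helper, and the real method goes on to rebuild the graph with from_tuples):

```python
def remove_pairs(pair_tuples, to_remove):
    # Rewrite every tuple that matches an entry of `to_remove`
    # (in either orientation) as an unpaired entry (partner 0).
    removed = [tuple(sorted(p)) for p in to_remove]
    new_tuples = []
    for p in pair_tuples:
        if tuple(sorted(p)) in removed:
            new_tuples.append((p[0], 0))
        else:
            new_tuples.append(p)
    return new_tuples

pt = [(1, 5), (2, 4), (3, 0), (4, 2), (5, 1)]
print(remove_pairs(pt, [(1, 5)]))
# [(1, 0), (2, 4), (3, 0), (4, 2), (5, 0)]
```

Note that both orientations of the removed pair, (1, 5) and (5, 1), are rewritten.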
def collapse(self):
"""
If any vertices form a loop, then they are either a bulge region or
a fork region. The bulge (interior loop) regions will be condensed
into one node.
"""
new_vertex = True
while new_vertex:
new_vertex = False
bulges = [k for k in self.defines if k[0] != 'y']
for (b1, b2) in it.combinations(bulges, r=2):
if self.edges[b1] == self.edges[b2] and len(self.edges[b1]) > 1:
connections = self.connections(b1)
all_connections = [sorted(
(self.get_sides_plus(connections[0], b1)[0],
self.get_sides_plus(
connections[0], b2)[0])),
sorted(
(self.get_sides_plus(connections[
1], b1)[0],
self.get_sides_plus(connections[1], b2)[0]))]
if all_connections == [[1, 2], [0, 3]]:
# interior loop
self.merge_vertices([b1, b2])
new_vertex = True
break
def interior_loop_iterator(self):
"""
Iterate over all of the interior loops.
An interior loop can only have two connections: to the two stems which it links.
"""
for key in self.defines.keys():
if key[0] == 'i':
yield key
def relabel_node(self, old_name, new_name):
"""
Change the name of a node.
:param old_name: The previous name of the node
:param new_name: The new name of the node
"""
# replace the define name
define = self.defines[old_name]
del self.defines[old_name]
self.defines[new_name] = define
# replace the index into the edges array
edge = self.edges[old_name]
del self.edges[old_name]
self.edges[new_name] = edge
# replace the name of any edge that pointed to old_name
for k in self.edges.keys():
new_edges = set()
for e in self.edges[k]:
if e == old_name:
new_edges.add(new_name)
else:
new_edges.add(e)
self.edges[k] = new_edges
def compare_stems(self, b):
"""
A function that can be passed in as the key to a sort.
"""
return (self.defines[b][0], 0)
def compare_bulges(self, b):
connections = self.connections(b)
return (self.defines[connections[0]][0],
self.defines[connections[1]][0])
def compare_hairpins(self, b):
connections = self.connections(b)
return (self.defines[connections[0]][1], sys.maxsize)
def relabel_nodes(self):
"""
Change the labels of the nodes to be more indicative of their nature.
s: stem
h: hairpin
i: interior loop
m: multiloop
f: five-prime unpaired
t: three-prime unpaired
"""
stems = []
hairpins = []
interior_loops = []
multiloops = []
fiveprimes = []
threeprimes = []
for d in self.defines.keys():
if d[0] == 'y' or d[0] == 's':
stems += [d]
stems.sort(key=self.compare_stems)
continue
if len(self.defines[d]) == 0 and len(self.edges[d]) == 1:
hairpins += [d]
continue
if len(self.defines[d]) == 0 and len(self.edges[d]) == 2:
multiloops += [d]
continue
if len(self.edges[d]) <= 1 and self.defines[d][0] == 1:
fiveprimes += [d]
continue
if len(self.edges[d]) == 1 and self.defines[d][1] == self.seq_length:
threeprimes += [d]
continue
if (len(self.edges[d]) == 1 and
self.defines[d][0] != 1 and
self.defines[d][1] != self.seq_length):
hairpins += [d]
hairpins.sort(key=self.compare_hairpins)
continue
if d[0] == 'm' or (d[0] != 'i' and len(self.edges[d]) == 2 and
self.weights[d] == 1 and
self.defines[d][0] != 1 and
self.defines[d][1] != self.seq_length):
multiloops += [d]
multiloops.sort(key=self.compare_bulges)
continue
if d[0] == 'i' or self.weights[d] == 2:
interior_loops += [d]
interior_loops.sort(key=self.compare_stems)
for d in fiveprimes:
self.relabel_node(d, 'f1')
for d in threeprimes:
self.relabel_node(d, 't1')
for i, d in enumerate(stems):
self.relabel_node(d, 's%d' % (i))
for i, d in enumerate(interior_loops):
self.relabel_node(d, 'i%d' % (i))
for i, d in enumerate(multiloops):
self.relabel_node(d, 'm%d' % (i))
for i, d in enumerate(hairpins):
self.relabel_node(d, 'h%d' % (i))
def has_connection(self, v1, v2):
""" Is there an edge between these two nodes """
if v2 in self.edges[v1]:
return True
else:
# two multiloops can be connected at the end of a stem
for e in self.edges[v1]:
if e[0] != 's':
continue
if v2 in self.edges[e]:
(s1b, s1e) = self.get_sides(e, v1)
(s2b, s2e) = self.get_sides(e, v2)
if s1b == s2b:
return True
return False
def connection_type(self, define, connections):
"""
Classify the way that two stems are connected according to the type
of bulge that separates them.
Potential angle types for single stranded segments, and the ends of
the stems they connect:
1 2 (1, 1) #pseudoknot
1 0 (1, 0)
3 2 (0, 1)
3 0 (0, 0)
:param define: The name of the bulge separating the two stems
:param connections: The two stems and their separation
"""
if define[0] == 'i':
# interior loop, we just have to check if
# connections[0] < connections[1]
if self.defines[connections[0]][0] < self.defines[connections[1]][0]:
return 1
else:
return -1
elif define[0] == 'm':
(s1c, b1c) = self.get_sides_plus(connections[0], define)
(s2c, b2c) = self.get_sides_plus(connections[1], define)
if (s1c, s2c) == (1, 0):
return 2
elif (s1c, s2c) == (0, 1):
return -2
elif (s1c, s2c) == (3, 0):
return 3
elif (s1c, s2c) == (0, 3):
return -3
elif (s1c, s2c) == (2, 3):
return 4
elif (s1c, s2c) == (3, 2):
return -4
# the next two refer to pseudoknots
elif (s1c, s2c) == (2, 1):
return 5
elif (s1c, s2c) == (1, 2):
return -5
else:
raise Exception("Weird angle type: (s1c, s2c) = (%d, %d)" %
(s1c, s2c))
else:
raise Exception(
"connection_type called on non-interior loop/multiloop")
def connection_ends(self, connection_type):
"""
Find out which ends of the stems are connected by a particular angle
type.
:param connection_type: The angle type, as determined by which corners
of a stem are connected
:return: (s1e, s2b)
"""
ends = ()
if abs(connection_type) == 1:
ends = (1, 0)
elif abs(connection_type) == 2:
ends = (1, 0)
elif abs(connection_type) == 3:
ends = (0, 0)
elif abs(connection_type) == 4:
ends = (1, 0)
elif abs(connection_type) == 5:
ends = (1, 1)
else:
raise Exception('Unknown connection type: %d' % (connection_type))
if connection_type < 0:
return ends[::-1]
else:
return ends
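The mapping above is a small table plus a sign rule (negative types swap the two ends). A standalone sketch, assuming the same convention:

```python
def connection_ends(connection_type):
    # Ends of the two stems joined by each (unsigned) angle type;
    # a negative type means the stems are traversed in the other
    # order, so the ends are reversed.
    ends_by_type = {1: (1, 0), 2: (1, 0), 3: (0, 0), 4: (1, 0), 5: (1, 1)}
    if abs(connection_type) not in ends_by_type:
        raise ValueError('Unknown connection type: %d' % connection_type)
    ends = ends_by_type[abs(connection_type)]
    return ends[::-1] if connection_type < 0 else ends

print(connection_ends(3))   # (0, 0)
print(connection_ends(-2))  # (0, 1)
```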
def get_multiloop_nucleotides(self, multiloop_loop):
"""
Return a list of nucleotides which make up a particular
multiloop.
:param multiloop_loop: The elements which make up this multiloop
:return: A list of nucleotides
"""
stems = [d for d in multiloop_loop if d[0] == 's']
multis = [d for d in multiloop_loop if d[0] == 'm']
residues = []
for s in stems:
relevant_edges = [c for c in self.edges[s] if c in multiloop_loop]
sides = [self.get_sides_plus(s, c)[0] for c in relevant_edges]
sides.sort()
# the whole stem is part of this multiloop
if sides == [2, 3] or sides == [0, 1]:
residues += range(
self.defines[s][sides[0]], self.defines[s][sides[1]] + 1)
else:
residues += [
self.defines[s][sides[0]], self.defines[s][sides[1]]]
for m in multis:
residues += self.define_residue_num_iterator(m, adjacent=False)
return residues
def find_external_loops(self):
'''
Return all of the elements which are part of
an external loop.
:return: A list containing the external loops in this molecule
(i.e. ['f0', 'm3', 'm5', 't0'])
'''
ext_loop = []
for d in it.chain(self.floop_iterator(),
self.tloop_iterator(),
self.mloop_iterator()):
loop_nts = self.shortest_bg_loop(d)
if len(loop_nts) == 0:
ext_loop += [d]
return ext_loop
def find_multiloop_loops(self):
"""
Find out which defines are connected in a multiloop.
:return: Two lists: one containing the sets of elements comprising each multiloop
and the other containing the sets of nucleotides comprising each of the shortest loops.
"""
loops = set()
for d in self.mloop_iterator():
loop_nts = self.shortest_bg_loop(d)
if len(loop_nts) > 0:
loops.add(tuple(sorted(loop_nts)))
loops = list(loops)
loop_elems = []
for loop in loops:
all_loops = set([self.get_node_from_residue_num(n) for n in loop])
# some multiloops might not contain any nucleotides, so we
# have to explicitly add these
for a, b in it.combinations(all_loops, r=2):
common_edges = set.intersection(self.edges[a], self.edges[b])
for e in common_edges:
all_loops.add(e)
loop_elems += [all_loops]
return loop_elems, loops
def seq_ids_from_seq(self):
"""
Get the sequence ids of the sequence.
"""
self.seq_ids = []
# when provided with just a sequence, we presume that the
# residue ids are numbered from 1-up
for i, s in enumerate(self.seq):
self.seq_ids += [(' ', i + 1, ' ')]
def remove_degenerate_nodes(self):
"""
For now just remove all hairpins that have no length.
"""
to_remove = []
for d in self.defines:
if d[0] == 'h' and len(self.defines[d]) == 0:
to_remove += [d]
for r in to_remove:
self.remove_vertex(r)
def from_stems_and_bulges(self, stems, bulges):
"""
Create the graph from the list of stems and bulges.
:param stems: A list of tuples of two two-tuples, each containing the start
and end nucleotides of each strand of the stem.
:param bulges: A list of tuples containing the starts and ends of the
bulge regions.
:return: Nothing, just make the bulgegraph
"""
for i in range(len(stems)):
# one is added to each coordinate to make up for the fact that
# residues are 1-based
ss1 = stems[i][0][0] + 1
ss2 = stems[i][0][1] + 1
se1 = stems[i][1][0] + 1
se2 = stems[i][1][1] + 1
self.defines['y%d' % (i)] = [min(ss1, se1), max(ss1, se1),
min(ss2, se2), max(ss2, se2)]
self.weights['y%d' % (i)] = 1
for i in range(len(bulges)):
bulge = bulges[i]
self.defines['b%d' % (i)] = sorted([bulge[0] + 1, bulge[1] + 1])
self.weights['b%d' % (i)] = 1
self.create_bulge_graph(stems, bulges)
self.create_stem_graph(stems, len(bulges))
self.collapse()
self.relabel_nodes()
self.remove_degenerate_nodes()
self.sort_defines()
def dissolve_length_one_stems(self):
# dissolve all stems which have a length of one
repeat = True
while repeat:
repeat = False
for k in self.defines:
if k[0] == 's' and self.stem_length(k) == 1:
self.dissolve_stem(k)
repeat = True
break
def from_dotbracket(self, dotbracket_str, dissolve_length_one_stems=False):
"""
Populate the BulgeGraph structure from a dotbracket representation.
ie: ..((..))..
:param dotbracket_str: A string containing the dotbracket representation
of the structure
"""
self.__init__()
self.dotbracket_str = dotbracket_str
self.seq_length = len(dotbracket_str)
if len(dotbracket_str) == 0:
return
pt = dotbracket_to_pairtable(dotbracket_str)
tuples = pairtable_to_tuples(pt)
self.from_tuples(tuples)
if dissolve_length_one_stems:
self.dissolve_length_one_stems()
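`dotbracket_to_pairtable` and `pairtable_to_tuples` are defined elsewhere in this module; a minimal sketch of the first, handling plain round brackets only (no pseudoknot bracket levels), could look like:

```python
def dotbracket_to_pairtable(db):
    # pt[0] holds the sequence length; pt[i] holds the partner of
    # nucleotide i, with 0 meaning unpaired.
    pt = [len(db)] + [0] * len(db)
    stack = []
    for i, c in enumerate(db, start=1):
        if c == '(':
            stack.append(i)
        elif c == ')':
            j = stack.pop()
            pt[i], pt[j] = j, i
    return pt

print(dotbracket_to_pairtable('((.))'))
# [5, 5, 4, 0, 2, 1]
```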
def to_pair_table(self):
"""
Create a pair table from the list of elements.
The first element in the returned list indicates the number of
nucleotides in the structure.
i.e. [5,5,4,0,2,1]
"""
pair_tuples = self.to_pair_tuples()
return tuples_to_pairtable(pair_tuples)
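`tuples_to_pairtable` lives elsewhere in the module; a minimal standalone sketch, assuming the tuples cover every nucleotide exactly once:

```python
def tuples_to_pairtable(pair_tuples):
    # Index 0 stores the number of nucleotides; index i stores the
    # pairing partner of nucleotide i (0 when unpaired).
    length = max(t[0] for t in pair_tuples)
    pt = [0] * (length + 1)
    pt[0] = length
    for a, b in pair_tuples:
        pt[a] = b
    return pt

print(tuples_to_pairtable([(1, 5), (2, 4), (3, 0), (4, 2), (5, 1)]))
# [5, 5, 4, 0, 2, 1]
```

This reproduces the example pair table given in the docstring above.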
def to_pair_tuples(self):
"""
Create a list of tuples corresponding to all of the base pairs in the
structure. Unpaired bases will be shown as being paired with a
nucleotide numbered 0.
i.e. [(1,5),(2,4),(3,0),(4,2),(5,1)]
"""
# iterate over each element
table = []
for d in self.defines:
# iterate over each nucleotide in each element
for b in self.define_residue_num_iterator(d):
p = self.pairing_partner(b)
if p is None:
p = 0
table += [(b, p)]
return table
def to_bpseq_string(self):
"""
Create a bpseq string from this structure.
"""
out_str = ''
for i in range(1, self.seq_length + 1):
pp = self.pairing_partner(i)
if pp is None:
pp = 0
out_str += "{} {} {}\n".format(i, self.seq[i - 1], pp)
return out_str
def bpseq_to_tuples_and_seq(self, bpseq_str):
"""
Convert a bpseq string to a list of pair tuples and a sequence
dictionary. The return value is a tuple of the list of pair tuples
and a sequence string.
:param bpseq_str: The bpseq string
:return: ([(1,5),(2,4),(3,0),(4,2),(5,1)], 'ACCAA')
"""
lines = bpseq_str.split('\n')
seq = []
tuples = []
for line in lines:
parts = line.split()
if len(parts) == 0:
continue
(t1, s, t2) = (int(parts[0]), parts[1], int(parts[2]))
tuples += [(t1, t2)]
seq += [s]
seq = "".join(seq).upper().replace('T', 'U')
return (tuples, seq)
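A quick standalone check of the parsing loop above (re-implemented outside the class as a hypothetical `parse_bpseq`) shows the blank-line handling and the T-to-U normalization:

```python
def parse_bpseq(bpseq_str):
    # Each non-empty line is "<index> <base> <partner>"; partner 0
    # means unpaired. DNA-style T is normalized to U.
    seq, tuples = [], []
    for line in bpseq_str.split('\n'):
        parts = line.split()
        if not parts:
            continue
        tuples.append((int(parts[0]), int(parts[2])))
        seq.append(parts[1])
    return tuples, ''.join(seq).upper().replace('T', 'U')

bpseq = "1 a 5\n2 C 4\n3 c 0\n4 g 2\n5 t 1\n"
print(parse_bpseq(bpseq))
# ([(1, 5), (2, 4), (3, 0), (4, 2), (5, 1)], 'ACCGU')
```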
def from_tuples(self, tuples):
"""
Create a bulge_graph from a list of pair tuples. Unpaired
nucleotides have a pairing partner of 0.
"""
stems = []
bulges = []
tuples.sort()
tuples = iter(tuples)
(t1, t2) = next(tuples)
prev_from = t1
prev_to = t2
start_from = prev_from
start_to = prev_to
last_paired = prev_from
for t1, t2 in tuples:
(from_bp, to_bp) = (t1, t2)
if abs(to_bp - prev_to) == 1 and prev_to != 0:
# stem
if (((prev_to - prev_from > 0 and to_bp - from_bp > 0) or
(prev_to - prev_from < 0 and to_bp - from_bp < 0)) and
(to_bp - prev_to) == -(from_bp - prev_from)):
(prev_from, prev_to) = (from_bp, to_bp)
last_paired = from_bp
continue
if to_bp == 0 and prev_to == 0:
# bulge
(prev_from, prev_to) = (from_bp, to_bp)
continue
else:
if prev_to != 0:
new_stem = tuple(
sorted([tuple(sorted([start_from - 1, start_to - 1])),
tuple(sorted([prev_from - 1, prev_to - 1]))]))
if new_stem not in stems:
stems += [new_stem]
last_paired = from_bp
start_from = from_bp
start_to = to_bp
else:
new_bulge = ((last_paired - 1, prev_from - 1))
bulges += [new_bulge]
start_from = from_bp
start_to = to_bp
prev_from = from_bp
prev_to = to_bp
# Take care of the last element
if prev_to != 0:
new_stem = tuple(
sorted([tuple(sorted([start_from - 1, start_to - 1])),
tuple(sorted([prev_from - 1, prev_to - 1]))]))
if new_stem not in stems:
stems += [new_stem]
if prev_to == 0:
new_bulge = ((last_paired - 1, prev_from - 1))
bulges += [new_bulge]
self.from_stems_and_bulges(stems, bulges)
def sort_defines(self):
"""
Sort the defines of interior loops and stems so that the 5' region
is always first.
"""
for k in self.defines.keys():
d = self.defines[k]
if len(d) == 4:
if d[0] > d[2]:
new_d = [d[2], d[3], d[0], d[1]]
self.defines[k] = new_d
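The swap can be seen on a single four-number define; this hypothetical helper applies the same rule to one define at a time:

```python
def sort_define(d):
    # Four-number defines describe two strands; if the second strand
    # starts before the first, swap the halves so the 5' strand is first.
    if len(d) == 4 and d[0] > d[2]:
        return [d[2], d[3], d[0], d[1]]
    return d

print(sort_define([8, 10, 1, 3]))   # [1, 3, 8, 10]
print(sort_define([1, 3, 8, 10]))   # already sorted, unchanged
```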
def to_dotbracket_string(self):
"""
Convert the BulgeGraph representation to a dot-bracket string
and return it.
:return: A dot-bracket representation of this BulgeGraph
"""
pt = self.to_pair_table()
return pairtable_to_dotbracket(pt)
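`pairtable_to_dotbracket` is also defined elsewhere in this module; a minimal sketch that handles nested (non-pseudoknotted) structures only:

```python
def pairtable_to_dotbracket(pt):
    # pt[0] is the length; pt[i] is the partner of nucleotide i
    # (0 = unpaired). Opening brackets point to a higher-numbered
    # partner, closing brackets to a lower-numbered one.
    out = []
    for i in range(1, pt[0] + 1):
        if pt[i] == 0:
            out.append('.')
        elif pt[i] > i:
            out.append('(')
        else:
            out.append(')')
    return ''.join(out)

print(pairtable_to_dotbracket([5, 5, 4, 0, 2, 1]))
# ((.))
```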
def sorted_stem_iterator(self):
"""
Iterate over a list of the stems sorted by the lowest numbered
nucleotide in each stem.
"""
stems = [d for d in self.defines if d[0] == 's']
stems.sort(key=lambda s: self.defines[s][0])
for s in stems:
yield s
def is_single_stranded(self, node):
"""
Does this node represent a single-stranded region?
Single stranded regions are five-prime and three-prime unpaired
regions, multiloops, and hairpins
:param node: The name of the node
:return: True if yes, False if no
"""
if node[0] == 'f' or node[0] == 't' or node[0] == 'm' or node[0] == 'h':
return True
else:
return False
def get_node_dimensions(self, node):
"""
Return the dimensions of a node.
If the node is a stem, then the dimensions will be (l, l) where l is
the length of the stem.
Otherwise, see get_bulge_dimensions(node)
:param node: The name of the node
:return: A pair containing its dimensions
"""
if node[0] == 's':
return (self.stem_length(node), self.stem_length(node))
"""
return (self.defines[node][1] - self.defines[node][0] + 1,
self.defines[node][1] - self.defines[node][0] + 1)
"""
else:
return self.get_bulge_dimensions(node)
def adjacent_stem_pairs_iterator(self):
"""
Iterate over all pairs of stems which are separated by some element.
This will always yield triples of the form (s1, e1, s2) where s1 and
s2 are the stem identifiers and e1 denotes the element that separates
them.
"""
for d in self.defines.keys():
if len(self.edges[d]) == 2:
edges = list(self.edges[d])
if edges[0][0] == 's' and edges[1][0] == 's':
yield (edges[0], d, edges[1])
def stem_bp_iterator(self, stem):
"""
Iterate over all the base pairs in the stem.
"""
d = self.defines[stem]
stem_length = self.stem_length(stem)
for i in range(stem_length):
yield (d[0] + i, d[3] - i)
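For a stem define [f1, t1, f2, t2] the iterator walks inward from the outermost pair (f1, t2); a standalone sketch of the same arithmetic:

```python
def stem_bps(define):
    # define = [f1, t1, f2, t2]; the stem length is t1 - f1 + 1 and
    # the i-th pair joins f1 + i with t2 - i.
    f1, t1, f2, t2 = define
    for i in range(t1 - f1 + 1):
        yield (f1 + i, t2 - i)

print(list(stem_bps([1, 3, 8, 10])))
# [(1, 10), (2, 9), (3, 8)]
```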
def get_connected_residues(self, s1, s2):
"""
Get the nucleotides which are connected by the element separating
s1 and s2. They should be adjacent stems.
The connected nucleotides are those which are spanned by a single
interior loop or multiloop. In the case of an interior loop, this
function will return a list of two tuples and in the case of multiloops
it will be a list of one tuple.
If the two stems are not separated by a single element, then return
an empty list.
"""
# sort the stems according to the number of their first nucleotide
stems = [s1, s2]
stems.sort(key=lambda x: self.defines[x][0])
c1 = self.edges[s1]
c2 = self.edges[s2]
# find out which edges they share
common_edges = c1.intersection(c2)
if len(common_edges) == 0:
# not connected
return []
if len(common_edges) > 1:
raise Exception("Too many connections between the stems")
# the element linking the two stems
conn = list(common_edges)[0]
# find out the sides of the stems that face the bulge
(s1b, s1e) = self.get_sides(s1, conn)
(s2b, s2e) = self.get_sides(s2, conn)
# get the nucleotides on the side facing the stem
s1_nucleotides = self.get_side_nucleotides(s1, s1b)
s2_nucleotides = self.get_side_nucleotides(s2, s2b)
# find out the distances between all the nucleotides flanking
# the bulge
dists = []
for n1 in s1_nucleotides:
for n2 in s2_nucleotides:
dists += [(abs(n2 - n1), n1, n2)]
dists.sort()
# return the ones which are closest to each other
if conn[0] == 'i':
return sorted([sorted(dists[0][1:]), sorted(dists[1][1:])])
else:
return sorted([sorted(dists[0][1:])])
def get_side_nucleotides(self, stem, side):
"""
Get the nucleotide numbers on the given side of
the stem. Side 0 corresponds to the 5' end of the
stem whereas side 1 corresponds to the 3' side
of the stem.
:param stem: The name of the stem
:param side: Either 0 or 1, indicating the 5' or 3' end of the stem
:return: A tuple of the nucleotide numbers on the given side of
the stem.
"""
if side == 0:
return (self.defines[stem][0], self.defines[stem][3])
elif side == 1:
return (self.defines[stem][1], self.defines[stem][2])
raise Exception("Invalid side (%d) for the stem (%s)." % (stem, side))
def get_any_sides(self, e1, e2):
"""
Get the side of e1 that e2 is on. The only difference from the get_sides
method is the fact that e1 does not have to be a stem.
0 indicates that e2 is on the side with lower numbered
nucleotides and 1 indicates that e2 is on the side with
greater nucleotide numbers.
:param e1: The name of the first element.
:param e2: The name of the second element.
:return: A tuple indicating the side of e1 adjacent to e2 and the side of e2
adjacent to e1
"""
if e1[0] == 's':
return self.get_sides(e1, e2)
elif e2[0] == 's':
return self.get_sides(e2, e1)[::-1]
return None
def get_sides(self, s1, b):
"""
Get the side of s1 that is next to b.
s1e -> s1b -> b
:param s1: The stem.
:param b: The bulge.
:return: A tuple indicating which side is the one next to the bulge
and which is away from the bulge.
"""
s1d = self.defines[s1]
bd = self.defines[b]
# if the bulge is a length 0, multiloop then use the adjacent
# stem to determine its side
if len(bd) == 0:
edges = self.edges[b]
for e in edges:
if e != s1:
bd = self.defines[e]
break
for i in range(4):
for k in range(len(bd)):
if s1d[i] - bd[k] == 1:
if i == 0:
s1b = 0
break
if i == 2:
s1b = 1
break
elif s1d[i] - bd[k] == -1:
if i == 1:
s1b = 1
break
if i == 3:
s1b = 0
break
if s1b == 0:
s1e = 1
else:
s1e = 0
return (s1b, s1e)
def get_sides_plus(self, s1, b):
"""
Get the side of s1 that is next to b.
s1e -> s1b -> b
:param s1: The stem.
:param b: The bulge.
:return: A tuple indicating the corner of the stem that connects
to the bulge as well as the corner of the bulge that connects
to the stem.
"""
s1d = self.defines[s1]
bd = self.defines[b]
if len(bd) == 0:
edges = self.edges[b]
for e in edges:
if e != s1:
bd = self.defines[e]
break
for k in range(len(bd)):
# before the stem on the 5' strand
if s1d[0] - bd[k] == 1:
return (0, k)
# after the stem on the 5' strand
elif bd[k] - s1d[1] == 1:
return (1, k)
# before the stem on the 3' strand
elif s1d[2] - bd[k] == 1:
return (2, k)
# after the stem on the 3' strand
elif bd[k] - s1d[3] == 1:
return (3, k)
raise Exception("Faulty multiloop %s connecting %s"
% (" ".join(map(str, bd)),
" ".join(map(str, s1d))))
def stem_side_vres_to_resn(self, stem, side, vres):
"""
Return the residue number given the stem name, the strand (side) it's on
and the virtual residue number.
"""
d = self.defines[stem]
if side == 0:
return d[0] + vres
else:
return d[3] - vres
def stem_iterator(self):
"""
Iterator over all of the stems in the structure.
"""
for d in self.defines.keys():
if d[0] == 's':
yield d
def hloop_iterator(self):
"""
Iterator over all of the hairpin loops in the structure.
"""
for d in self.defines.keys():
if d[0] == 'h':
yield d
def mloop_iterator(self):
"""
Iterator over all of the multiloops in the structure.
"""
for d in self.defines.keys():
if d[0] == 'm':
yield d
def iloop_iterator(self):
"""
Iterator over all of the interior loops in the structure.
"""
for d in self.defines.keys():
if d[0] == 'i':
yield d
def floop_iterator(self):
"""
Yield the name of the 5' unpaired region if it is
present in the structure.
"""
if 'f1' in self.defines.keys():
yield 'f1'
def tloop_iterator(self):
"""
Yield the name of the 3' unpaired region if it is
present in the structure.
"""
if 't1' in self.defines.keys():
yield 't1'
def pairing_partner(self, nucleotide_number):
"""
Return the base pairing partner of the nucleotide at position
nucleotide_number. If this nucleotide is unpaired, return None.
:param nucleotide_number: The position of the query nucleotide in the
sequence.
:return: The number of the nucleotide base paired with the one at
position nucleotide_number.
"""
for d in self.stem_iterator():
for (r1, r2) in self.stem_bp_iterator(d):
if r1 == nucleotide_number:
return r2
elif r2 == nucleotide_number:
return r1
return None
def connections(self, bulge):
"""
Return the edges that connect to a bulge in a list form,
sorted by lowest res number of the connection.
"""
def sort_key(x):
if len(self.defines[x]) > 0:
if self.defines[x][0] == 1:
# special case for stems at the beginning since there is no
# adjacent nucleotide 0
return 0
return list(self.define_residue_num_iterator(x, adjacent=True))[0]
connections = list(self.edges[bulge])
connections.sort(key=sort_key)
return connections
def get_define_seq_str(self, d, adjacent=False):
"""
Get an array containing the sequences for the given define.
Non-stem sequences will contain the sequence without the overlapping
stem residues that are part of the define.
:param d: The define for which to get the sequences
:param adjacent: Whether to include the adjacent stem residues flanking each region
:return: An array containing the sequences corresponding to the defines
"""
define = self.defines[d]
ranges = zip(*[iter(define)] * 2)
c = self.connections(d)
if d[0] == 'i':
s1 = self.defines[c[0]]
s2 = self.defines[c[1]]
if adjacent:
return [self.seq[s1[1] - 1:s2[0]],
self.seq[s2[3] - 1:s1[2]]]
else:
return [self.seq[s1[1]:s2[0] - 1],
self.seq[s2[3]:s1[2] - 1]]
if d[0] == 'm':
s1 = self.defines[c[0]]
s2 = self.defines[c[1]]
i1 = s1[self.get_sides_plus(c[0], d)[0]]
i2 = s2[self.get_sides_plus(c[1], d)[0]]
(i1, i2) = (min(i1, i2), max(i1, i2))
if adjacent:
return [self.seq[i1 - 1:i2]]
else:
return [self.seq[i1:i2 - 1]]
else:
seqs = []
for r in ranges:
if d[0] == 's':
seqs += [self.seq[r[0] - 1:r[1]]]
else:
if adjacent:
if r[0] > 1:
seqs += [self.seq[r[0] - 2:r[1] + 1]]
else:
seqs += [self.seq[r[0] - 1:r[1] + 1]]
else:
seqs += [self.seq[r[0] - 1:r[1]]]
return seqs
def get_stem_direction(self, s1, s2):
"""
Return 0 if the lowest numbered residue in s1
is lower than the lowest numbered residue in s2.
"""
if self.defines[s1][0] < self.defines[s2][0]:
return 0
return 1
def get_multiloop_side(self, m):
"""
Find out which strand a multiloop is on. An example of a situation in
which loops can occur on either strand can be seen in the three-stemmed
structure below:
(.().().)
In this case, the first multiloop section comes off of the 5' strand of
the first stem (the prior stem is always the one with the lower numbered
first residue). The second multiloop section comes off of the 3' strand of
the second stem and the third loop comes off of the 3' strand of the third
stem.
"""
c = self.connections(m)
p1 = self.get_sides_plus(c[0], m)
p2 = self.get_sides_plus(c[1], m)
return (p1[0], p2[0])
def get_strand(self, multiloop):
"""
Get the strand on which this multiloop is located.
:param multiloop: The name of the multiloop
:return: 0 if the multiloop is on the lower numbered strand, 1 if it is
on the higher numbered strand, and 2 otherwise.
"""
conn = self.connections(multiloop)
t = self.connection_type(multiloop, conn)
if abs(t) == 2:
return 1
elif abs(t) == 3:
return 0
else:
return 2
def get_bulge_dimensions(self, bulge):
"""
Return the dimensions of the bulge.
If it is single stranded it will be (0, x). Otherwise it will be (x, y).
:param bulge: The name of the bulge.
:return: A pair containing its dimensions
"""
bd = self.defines[bulge]
c = self.connections(bulge)
if bulge[0] == 'i':
# if this interior loop only has one unpaired region
# then we have to find out if it's on the 5' strand or
# the 3' strand
# Example:
# s1 1 3
# 23 25
# s2 5 10
# 15 20
s1 = self.defines[c[0]]
s2 = self.defines[c[1]]
dims = (s2[0] - s1[1] - 1, s1[2] - s2[3] - 1)
if bulge[0] == 'm':
# Multiloops are also pretty easy
if len(bd) == 2:
dims = (bd[1] - bd[0] + 1, 1000)
else:
dims = (0, 1000)
if bulge[0] == 'f' or bulge[0] == 't':
dims = (bd[1] - bd[0] + 1, -1)
if bulge[0] == 'h':
dims = (bd[1] - bd[0] + 1, -1)
return dims
def get_node_from_residue_num(self, base_num, seq_id=False):
"""
Iterate over the defines and see which one encompasses this base.
"""
for key in self.defines.keys():
define = self.defines[key]
for i in range(0, len(define), 2):
a = [int(define[i]), int(define[i + 1])]
a.sort()
if seq_id:
for i in range(a[0], a[1] + 1):
if self.seq_ids[i - 1][1] == base_num:
return key
else:
if base_num >= a[0] and base_num <= a[1]:
return key
raise Exception(
"Base number %d not found in the defines." % (base_num))
def get_length(self, vertex):
"""
Get the minimum length of a vertex.
If it's a stem, then the result is its length (in base pairs).
If it's a bulge, then the length is the smaller of its dimensions.
:param vertex: The name of the vertex.
"""
if vertex[0] == 's':
return abs(self.defines[vertex][1] - self.defines[vertex][0]) + 1
else:
if len(self.edges[vertex]) == 1:
return self.defines[vertex][1] - self.defines[vertex][0] + 1
else:
dims = list(self.get_bulge_dimensions(vertex))
dims.sort()
if vertex[0] == 'i':
return sum(dims) / float(len(dims))
else:
return min(dims)
def get_flanking_region(self, bulge_name, side=0):
"""
If a bulge is flanked by stems, return the lowest residue number
of the previous stem and the highest residue number of the next
stem.
:param bulge_name: The name of the bulge
:param side: The side of the bulge (indicating the strand)
"""
c = self.connections(bulge_name)
if bulge_name[0] == 'h':
s1 = self.defines[c[0]]
return (s1[0], s1[3])
s1 = self.defines[c[0]]
s2 = self.defines[c[1]]
if bulge_name[0] == 'i':
# interior loop
if side == 0:
return (s1[0], s2[1])
else:
return (s2[2], s1[3])
elif bulge_name[0] == 'm':
ss = self.get_multiloop_side(bulge_name)
st = [s1, s2]
ends = []
# go through the two sides and stems and pick
# the other end of the same strand
for i, s in enumerate(ss):
if s == 0:
ends += [st[i][1]]
elif s == 1:
ends += [st[i][0]]
elif s == 2:
ends += [st[i][3]]
elif s == 3:
ends += [st[i][2]]
else:
raise Exception("Weird multiloop sides: %s" %
bulge_name)
ends.sort()
return tuple(ends)
# not flanked by two stems
return (None, None)
def get_flanking_sequence(self, bulge_name, side=0):
if len(self.seq) == 0:
raise Exception(
"No sequence present in the bulge_graph: %s" % (self.name))
(m1, m2) = self.get_flanking_region(bulge_name, side)
return self.seq[m1 - 1:m2]
def get_flanking_handles(self, bulge_name, side=0):
"""
Get the indices of the residues for fitting bulge regions.
So if there is a loop like so (between residues 7 and 16):
(((...))))
7890123456
  ^   ^
Then residues 9 and 13 will be used as the handles against which
to align the fitted region.
In the fitted region, the residues (2,6) will be the ones that will
be aligned to the handles.
:return: (orig_chain_res1, orig_chain_res2, flanking_res1, flanking_res2)
"""
f1 = self.get_flanking_region(bulge_name, side)
c = self.connections(bulge_name)
if bulge_name[0] == 'h':
s1 = self.defines[c[0]]
ab = [s1[1], s1[2]]
return (ab[0], ab[1], ab[0] - f1[0], ab[1] - f1[0])
s1 = self.defines[c[0]]
s2 = self.defines[c[1]]
if bulge_name[0] == 'm':
sides = self.get_multiloop_side(bulge_name)
ab = [s1[sides[0]], s2[sides[1]]]
ab.sort()
return (ab[0], ab[1], ab[0] - f1[0], ab[1] - f1[0])
if bulge_name[0] == 'i':
if side == 0:
ab = [s1[1], s2[0]]
else:
ab = [s2[3], s1[2]]
return (ab[0], ab[1], ab[0] - f1[0], ab[1] - f1[0])
# probably still have to include the 5' and 3' regions, but that
# will come a little later
return None
def are_adjacent_stems(self, s1, s2, multiloops_count=True):
"""
Are two stems separated by only one element. If multiloops should not
count as edges, then the appropriate parameter should be set.
:param s1: The name of the first stem
:param s2: The name of the second stem
:param multiloops_count: Whether to count multiloops as an edge linking
two stems
"""
for e in self.edges[s1]:
if not multiloops_count and e[0] == 'm':
continue
if s2 in self.edges[e]:
return True
return False
def random_subgraph(self, subgraph_length=None):
"""
Return a random subgraph of this graph.
:return: A list containing a the nodes comprising a random subgraph
"""
if subgraph_length is None:
subgraph_length = random.randint(1, len(self.defines))
start_node = random.choice(list(self.defines.keys()))
curr_length = 0
visited = set()
next_nodes = [start_node]
new_graph = []
while curr_length < subgraph_length:
curr_node = random.choice(next_nodes)
if curr_node[0] == 'i' or curr_node[0] == 'm':
# if it's an interior loop or a multiloop, then we have to
# add the adjacent stems
for e in self.edges[curr_node]:
if e in new_graph:
continue
visited.add(e)
new_graph += [e]
next_nodes += list(self.edges[e])
curr_length += 1
visited.add(curr_node)
next_nodes += list(self.edges[curr_node])
next_nodes = [n for n in next_nodes if n not in visited]
new_graph += [curr_node]
curr_length += 1 # self.element_length(curr_node)
return new_graph
def same_stem_end(self, sd):
"""
Return the index of the define that is on the same end of the
stem as the index sd.
:param sd: An index into a define.
:return: The index pointing to the nucleotide on the other strand
on the same side as the stem.
"""
if sd == 0:
return 3
elif sd == 1:
return 2
elif sd == 2:
return 1
else:
return 0
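The branch above is equivalent to the identity 3 - sd, since corners 0 and 3 share one end of the stem and corners 1 and 2 share the other; a quick check:

```python
def same_stem_end(sd):
    # Corner indices run 0..3; 0 pairs with 3 and 1 pairs with 2,
    # so the matching corner is always 3 - sd.
    return 3 - sd

print([same_stem_end(i) for i in range(4)])
# [3, 2, 1, 0]
```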
def get_resseqs(self, define, seq_ids=True):
"""
Return the pdb ids of the nucleotides in this define.
:param define: The name of this element.
:param seq_ids: If True return the seq_ids, otherwise the plain residue numbers
:return: A tuple of two arrays containing the residue ids
on each strand
"""
resnames = []
ranges = zip(*[iter(self.defines[define])] * 2)
for r in ranges:
strand_resnames = []
for x in range(r[0], r[1] + 1):
if seq_ids:
strand_resnames += [self.seq_ids[x - 1]]
else:
strand_resnames += [x]
resnames += [strand_resnames]
return resnames
def connected_stem_iterator(self):
"""
Iterate over all pairs of connected stems.
"""
for l in it.chain(self.mloop_iterator(), self.iloop_iterator()):
edge_list = list(self.edges[l])
yield (edge_list[0], l, edge_list[1])
def get_mst(self):
"""
Create a minimum spanning tree from this BulgeGraph. This is useful
for constructing a structure where each section of a multiloop is
sampled independently and we want to introduce a break at the largest
multiloop section.
"""
priority = {'s': 1, 'i': 2, 'm': 3, 'f': 4, 't': 5}
# keep track of all linked nodes
edges = sorted(it.chain(self.mloop_iterator(),
self.iloop_iterator()),
key=lambda x: (priority[x[0]], min(self.get_node_dimensions(x))))
mst = set(it.chain(self.stem_iterator(),
self.floop_iterator(),
self.tloop_iterator()))
# store all of the disconnected trees
forest = [set([m]) for m in mst]
# get the tree containing a particular element
def get_tree(elem):
for t in forest:
if elem in t:
return t
while len(edges) > 0:
conn = edges.pop(0)
neighbors = list(self.edges[conn])
# get the trees containing the neighbors of this node
# the node should be an interior loop or multiloop so
# the neighbors should necessarily be stems, 5' or 3'
t1 = get_tree(neighbors[0])
t2 = get_tree(neighbors[1])
if len(set.intersection(t1, t2)) == 0:
# if this node connects two disparate trees, then add it to the
# mst
new_tree = t1.union(t2)
forest.remove(t1)
forest.remove(t2)
forest.append(new_tree)
mst.add(conn)
return mst
def traverse_graph(self):
"""
Traverse the graph to get the angle types. The angle type depends on
which corners of the stem are connected by the multiloop or internal
loop.
"""
if self.mst is None:
self.mst = self.get_mst()
build_order = []
to_visit = [('s0', 'start')]
visited = set(['s0'])
build_paths = col.defaultdict(list)
while len(to_visit) > 0:
to_visit.sort(key=lambda x: min(self.get_node_dimensions(x[0])))
(current, prev) = to_visit.pop(0)
for e in self.edges[current]:
if e not in visited and e in self.mst:
# make sure the node hasn't been visited
# and is in the minimum spanning tree
to_visit.append((e, current))
build_paths[e] += [e]
build_paths[e] += build_paths[current]
visited.add(e)
if current[0] != 's' and len(self.edges[current]) == 2:
# multiloop or interior loop
# overkill method of getting the stem that isn't
# equal to prev
next_stem = set.difference(self.edges[current],
set([prev]))
build_order += [(prev, current, list(next_stem)[0])]
self.build_paths = build_paths
self.build_order = build_order
return build_order
def set_angle_types(self):
"""
Fill in the angle types based on the build order
"""
if self.build_order is None:
self.traverse_graph()
self.ang_types = dict()
for (s1, b, s2) in self.build_order:
self.ang_types[b] = self.connection_type(b, [s1, s2])
def get_angle_type(self, bulge):
"""
Return what type of angle this bulge is, based on the way this
would be built using a breadth-first traversal along the minimum
spanning tree.
"""
if self.ang_types is None:
self.set_angle_types()
if bulge in self.ang_types:
return self.ang_types[bulge]
else:
return None
def is_node_pseudoknot(self, d):
"""
Is a particular multiloop part of a pseudoknot?
"""
conn = self.connections(d)
ct = self.connection_type(d, conn)
if abs(ct) == 5:
return True
return False
def is_loop_pseudoknot(self, loop):
"""
Is a particular loop a pseudoknot?
:param loop: A list of elements that are part of the loop.
:return: Either True or false
"""
allowed_ang_types = [2, 3, 4]
found_ang_types = set()
for l in loop:
if l[0] != 'm':
continue
conn = self.connections(l)
ctype = self.connection_type(l, conn)
if ctype not in allowed_ang_types:
return True
found_ang_types.add(ctype)
if len(found_ang_types) == 3:
return False
return True
def is_pseudoknot(self):
"""
Is this bulge part of a pseudoknot?
"""
for d in self.mloop_iterator():
if self.is_node_pseudoknot(d):
return True
return False
    def to_networkx(self):
        """
        Convert this graph to a networkx representation. This representation
        will contain all of the nucleotides as nodes and all of the base pairs
        as edges as well as the adjacent nucleotides.
        """
        import networkx as nx
        G = nx.Graph()
        residues = []
        for d in self.defines:
            prev = None
            for r in self.define_residue_num_iterator(d):
                G.add_node(r)
                residues += [r]
        residues.sort()
        prev = None
        for r in residues:
            if prev is not None:
                G.add_edge(prev, r)
            prev = r
        for s in self.stem_iterator():
            for (f, t) in self.stem_bp_iterator(s):
                G.add_edge(f, t)
        return G
def ss_distance(self, e1, e2):
'''
Calculate the distance between two elements (e1, e2)
along the secondary structure. The distance only starts
at the edge of each element, and is the closest distance
between the two elements.
:param e1: The name of the first element
:param e2: The name of the second element
:return: The integer distance between the two along the secondary
structure.
'''
# get the edge nucleotides
# thanks to:
# http://stackoverflow.com/questions/2154249/identify-groups-of-continuous-numbers-in-a-list
# we get the edges, except that they might be one too close because we use adjacent
# nucleotides, nevertheless we'll take care of that later
d1_corners = []
d2_corners = []
        for key, group in it.groupby(
                enumerate(self.define_residue_num_iterator(e1, adjacent=True)),
                lambda ix: ix[0] - ix[1]):
            group = [g[1] for g in group]
            d1_corners += group
        for key, group in it.groupby(
                enumerate(self.define_residue_num_iterator(e2, adjacent=True)),
                lambda ix: ix[0] - ix[1]):
            group = [g[1] for g in group]
            d2_corners += group
import networkx as nx
G = self.to_networkx()
path_lengths = []
for c1, c2 in it.product(d1_corners, d2_corners):
path_lengths += [nx.shortest_path_length(G, c1, c2)]
if e1 == e2:
return 0
if e1 in self.edges[e2]:
return min(path_lengths) + 1
# make some exceptions for edges which have length 0
common_edges = set.intersection(self.edges[e1], self.edges[e2])
for e in common_edges:
if e[0] == 'i' and len(self.defines[e]) < 4:
return min(path_lengths) + 1
elif e[0] == 'm' and len(self.defines[e]) < 2:
return min(path_lengths) + 1
return min(path_lengths) + 2
def get_position_in_element(self, resnum):
node = self.get_node_from_residue_num(resnum)
if node[0] == 's':
if self.defines[node][0] <= resnum <= self.defines[node][1]:
return resnum - self.defines[node][0], self.defines[node][1] - self.defines[node][0]
else:
return abs(resnum - self.defines[node][3]), self.defines[node][1] - self.defines[node][0]
elif node[0] == 'i':
s0, s1 = self.connections(node)
if self.defines[s0][1] <= resnum <= self.defines[s1][0]:
return resnum - self.defines[s0][1], self.defines[s1][0] - self.defines[s0][1]
else:
return abs(resnum - self.defines[s0][2]) - 1, self.defines[s0][2] - self.defines[s1][3]
elif node[0] == 'h':
pos1 = resnum - self.defines[node][0]
pos2 = abs(resnum - self.defines[node][1])
            return min(pos1, pos2) + 1, (self.defines[node][1] - self.defines[node][0] + 2) // 2
i = 0
while i < len(self.defines[node]):
s = self.defines[node][i]
e = self.defines[node][i + 1]
if s <= resnum <= e:
return resnum - s + 1, e - s + 2
i += 2
return None
def connected(self, n1, n2):
'''
Are the nucleotides n1 and n2 connected?
@param n1: A node in the BulgeGraph
@param n2: Another node in the BulgeGraph
@return: True or False indicating whether they are connected.
'''
if n1 in self.edges[n2] or n2 in self.edges[n1]:
return True
# two multiloops can be considered connected if they both
# link to the same side of the same stem
if n1[0] == 'm' and n2[0] == 'm':
common_stems = list(
set.intersection(self.edges[n1], self.edges[n2]))
if len(common_stems) == 0:
return False
common_stem = common_stems[0]
(s1c, b1c) = self.get_sides_plus(common_stem, n1)
            (s2c, b2c) = self.get_sides_plus(common_stem, n2)
if sorted([s1c, s2c]) == [0, 3] or sorted([s1c, s2c]) == [1, 2]:
return True
return False
def bg_from_subgraph(bg, sg):
"""
Create a BulgeGraph from a list containing the nodes
to take from the original.
WARNING: The sequence information is not copied
"""
nbg = BulgeGraph()
nbg.seq_length = 0
for d in sg:
# copy the define
nbg.defines[d] = bg.defines[d][::]
# copy edges only if they connect elements which
# are also in the new structure
for e in bg.edges.keys():
for conn in bg.edges[e]:
if conn in sg:
nbg.edges[e].add(conn)
return nbg
#!/usr/bin/env python
import _init_paths
from evb.test import test_net
from evb.config import cfg, cfg_from_file, cfg_from_list
from datasets.factory import get_imdb
import caffe
import argparse
import pprint
import time, os, sys
def parse_args():
"""
Parse input arguments
"""
parser = argparse.ArgumentParser(description='Test a Fast R-CNN network')
parser.add_argument('--gpu', dest='gpu_id', help='GPU id to use',
default=0, type=int)
parser.add_argument('--def', dest='prototxt',
help='prototxt file defining the network',
default=None, type=str)
parser.add_argument('--net', dest='caffemodel',
help='model to test',
default=None, type=str)
parser.add_argument('--cfg', dest='cfg_file',
help='optional config file', default=None, type=str)
parser.add_argument('--wait', dest='wait',
help='wait until net file exists',
default=True, type=bool)
parser.add_argument('--imdb', dest='imdb_name',
help='dataset to test',
default='detrac', type=str)
parser.add_argument('--comp', dest='comp_mode', help='competition mode',
action='store_true')
parser.add_argument('--set', dest='set_cfgs',
help='set config keys', default=None,
nargs=argparse.REMAINDER)
parser.add_argument('--vis', dest='vis', help='visualize detections',
action='store_true')
parser.add_argument('--num_dets', dest='max_per_image',
help='max number of detections per image',
default=100, type=int)
parser.add_argument('--test', dest='test_order',
help='test file',
                        default=1, type=int)
parser.add_argument('--data', dest='data_path',
help='set training and testing data path', default=None, type=str)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
args = parser.parse_args()
return args
if __name__ == '__main__':
args = parse_args()
print('Called with args:')
print(args)
if args.cfg_file is not None:
cfg_from_file(args.cfg_file)
if args.set_cfgs is not None:
cfg_from_list(args.set_cfgs)
cfg.GPU_ID = args.gpu_id
cfg.DATASET_DIR = args.data_path
print('Using config:')
pprint.pprint(cfg)
while not os.path.exists(args.caffemodel) and args.wait:
print('Waiting for {} to exist...'.format(args.caffemodel))
time.sleep(10)
caffe.set_mode_gpu()
caffe.set_device(args.gpu_id)
net = caffe.Net(args.prototxt, args.caffemodel, caffe.TEST)
net.name = os.path.splitext(os.path.basename(args.caffemodel))[0]
imdb = get_imdb(args.imdb_name)
imdb.competition_mode(args.comp_mode)
    test_file = "%02d" % args.test_order
    print('test_file: {}'.format(test_file))
    print('max per image: {}'.format(args.max_per_image))
test_net(net, imdb, max_per_image=args.max_per_image, vis=args.vis,test=test_file)
# Generated by Django 3.2.6 on 2021-08-24 09:26
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Category',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('cat_parent', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='product.category')),
],
),
migrations.CreateModel(
name='Productbase',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('stock', models.BooleanField(default=False)),
('device', models.CharField(choices=[('ps4', 'ps4'), ('ps5', 'ps5'), ('all', 'all'), ('xbox', 'xbox'), ('nintendo', 'nintendo switch')], max_length=20)),
('description', models.TextField(blank=True, null=True)),
('price', models.FloatField(default=0.0)),
('added_time', models.DateTimeField(auto_now_add=True)),
('category', models.ForeignKey(on_delete=django.db.models.deletion.RESTRICT, to='product.category')),
],
),
migrations.CreateModel(
name='ImageProduct',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('image', models.ImageField(upload_to='images/')),
('default', models.BooleanField(default=False)),
('product', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='product.productbase')),
],
),
]
# Generated by Django 2.2.1 on 2019-05-09 21:08
from django.conf import settings
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('matchapp', '0003_available_doctors'),
]
operations = [
migrations.RenameModel(
old_name='UserProfile',
new_name='Person',
),
]
import os
import json
from .models import User
from flask import Blueprint, send_from_directory, redirect, url_for, current_app, request, jsonify
main = Blueprint('main', __name__)
@main.route('/join', methods=['GET', 'POST'])
def join():
params = request.get_json()['params']
params['ip'] = compute_ip()
user = User.create(**params)
return jsonify(user.to_json())
@main.route('/rooms')
def get_rooms():
basedir = current_app.config.get('BASE_DIR')
rooms = json.load(open(os.path.join(basedir, 'rooms.json')))
adventures = json.load(open(os.path.join(basedir, 'adventures.json')))
for adventure in adventures.values():
for node_name, adventure_node in adventure.items():
rooms[node_name] = {
'name': adventure_node.get('name', ''),
'type': 'adventure',
'text': adventure_node['text'],
'buttons': adventure_node['buttons']
}
if adventure_node.get('map'):
rooms[node_name]['map'] = adventure_node['map']
return jsonify(rooms)
@main.route('/', defaults={'path': ''})
@main.route('/<path:path>')
def serve(path):
if path and not path.startswith('client'):
return redirect(url_for('main.serve'))
return send_from_directory(current_app.static_folder, 'index.html')
def compute_ip():
headers_list = request.headers.getlist("X-Forwarded-For")
# using the 0th index of headers_list is dangerous for stuff explained here: http://esd.io/blog/flask-apps-heroku-real-ip-spoofing.html
# TODO find a way to make this ALWAYS get client IP
user_ip = headers_list[0] if headers_list else request.remote_addr
return user_ip
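One hedged way to address the spoofing problem flagged in the comment above (a sketch, not part of this codebase: `client_ip` and `trusted_proxies` are illustrative names, and it assumes you know how many trusted proxies sit in front of the app):

```python
def client_ip(forwarded_for, remote_addr, trusted_proxies=1):
    """Pick a client IP from an X-Forwarded-For header value.

    Each proxy appends the address it received the connection from, so with
    N trusted proxies the last N entries are trustworthy and the client is
    the N-th entry from the right. Anything earlier is attacker-controlled.
    """
    if not forwarded_for:
        return remote_addr
    hops = [h.strip() for h in forwarded_for.split(",")]
    if len(hops) < trusted_proxies:
        return remote_addr  # malformed header: fewer hops than trusted proxies
    return hops[-trusted_proxies]

print(client_ip("spoofed.example, 203.0.113.7", "10.0.0.2"))  # 203.0.113.7
```

Alternatively, Werkzeug's `ProxyFix` middleware can rewrite `request.remote_addr` for you when configured with the number of trusted hops.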
#! /usr/bin/env python
"""Compute a (somewhat more) human readable format for message
digests. This is port of the perl module Digest-BubbleBabble-0.01
(http://search.cpan.org/~btrott/Digest-BubbleBabble-0.01/)
"""
vowels = "aeiouy"
consonants = "bcdfghklmnprstvzx"
def bubblebabble(digest):
    """compute bubblebabble representation of digest.

    @param digest: raw string representation of digest (e.g. what md5.digest returns)
    @type digest: str
    @return: bubblebabble representation of digest
    @rtype: str
    """
    digest = [ord(x) for x in digest]
    dlen = len(digest)
    seed = 1
    rounds = 1 + dlen // 2
    retval = "x"
    for i in range(rounds):
        if i + 1 < rounds or dlen % 2:
            idx0 = (((digest[2 * i] >> 6) & 3) + seed) % 6
            idx1 = (digest[2 * i] >> 2) & 15
            idx2 = ((digest[2 * i] & 3) + seed // 6) % 6
            retval += "%s%s%s" % (vowels[idx0], consonants[idx1], vowels[idx2])
            if i + 1 < rounds:
                idx3 = (digest[2 * i + 1] >> 4) & 15
                idx4 = digest[2 * i + 1] & 15
                retval += "%s-%s" % (consonants[idx3], consonants[idx4])
                seed = (seed * 5 + digest[2 * i] * 7 +
                        digest[2 * i + 1]) % 36
        else:
            idx0 = seed % 6
            idx1 = 16
            idx2 = seed // 6
            retval += "%s%s%s" % (vowels[idx0], consonants[idx1], vowels[idx2])
    retval += "x"
    return retval
def hexstring2string(s):
"""convert hex representation of digest back to raw digest"""
assert (len(s) % 2 == 0)
if s.startswith("0x") or s.startswith("0X"):
s = s[2:]
    return "".join(chr(int(s[i:i + 2], 16)) for i in range(0, len(s), 2))
def _test():
tests = """432cc46b5c67c9adaabdcc6c69e23d6d xibod-sycik-rilak-lydap-tipur-tifyk-sipuv-dazok-tixox
5a1edbe07020525fd28cba1ea3b76694 xikic-vikyv-besed-begyh-zagim-sevic-vomer-lunon-gexex
1c453603cdc914c1f2eeb1abddae2e03 xelag-hatyb-fafes-nehys-cysyv-vasop-rylop-vorab-fuxux
df8ec33d78ae78280e10873f5e58d5ad xulom-vebyf-tevyp-vevid-mufic-bucef-zylyh-mehyp-tuxax
02b682a73739a9fb062370eaa8bcaec9 xebir-kybyp-latif-napoz-ricid-fusiv-popir-soras-nixyx"""
# ...as computed by perl
tests = [x.split()[:2] for x in tests.split("\n")]
    for digest, expected in tests:
        res = bubblebabble(hexstring2string(digest))
        print(digest, res, ("failure", "ok")[expected == res])
if __name__=="__main__":
_test()
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# SPDX-License-Identifier: MIT
from __future__ import absolute_import
from __future__ import print_function
from glob import glob
from os.path import basename
from os.path import splitext
from setuptools import find_packages
from setuptools import setup
setup(
name='packageurl-python',
version='0.9.4',
license='MIT',
description='A "purl" aka. Package URL parser and builder',
long_description='Python library to parse and build "purl" aka. Package URLs. '
'This is a microlibrary implementing the purl spec at https://github.com/package-url',
author='the purl authors',
url='https://github.com/package-url/packageurl-python',
packages=find_packages('src'),
package_dir={'': 'src'},
py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],
include_package_data=True,
zip_safe=False,
platforms='any',
keywords='package, url, package manager, package url',
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Topic :: Software Development :: Libraries',
'Topic :: Utilities',
],
)
from cms.plugin_base import CMSPluginBase
from cms.models.pluginmodel import CMSPlugin
from cms.plugin_pool import plugin_pool
from fduser import models
from django.utils.translation import ugettext as _
from django.contrib.sites.models import Site
class Users(CMSPluginBase):
model = CMSPlugin # Model where data about this plugin is saved
module = _("UserList")
name = _("User List") # Name of the plugin
render_template = "fduser/list.html" # template to render the plugin with
def render(self, context, instance, placeholder):
context['users'] = models.User.objects.all
context['site'] = Site.objects.get_current()
return context
plugin_pool.register_plugin(Users) # register the plugin
# CHAPTER 7: Convolutional Neural Networks (CNN)
# Overall structure
# A convolution layer and a pooling layer are added.
# The networks we have seen so far connect every neuron to all neurons in the adjacent layer. This is
# called fully connected, and we implemented such layers under the name Affine layer.
##########################################
# Convolution on 2-D arrays
##########################################
import numpy as np
data = np.array(range(0,81)).reshape(9,9)
filter = np.array(range(0,16)).reshape(4,4)
def find_pad(data, filter, s, oh):
h = len(data)
fh = len(filter)
return (((oh-1)*s)+fh-h) / 2
def padding(data, x):
if x%1 == 0:
x = int(x)
return np.pad(data, pad_width=x, mode='constant', constant_values=0)
else:
x1 = int(x+0.5)
x2 = int(x-0.5)
return np.pad(data, pad_width=((x1,x2), (x1,x2)), mode='constant', constant_values=0)
def output(data, filter):
num = len(data) - len(filter) + 1
result = []
for rn in range(num):
for cn in range(num):
result.append(np.sum(data[rn:rn+len(filter), cn:cn+len(filter)] * filter))
return np.array(result).reshape(num, num)
f_p = find_pad(data, filter, 1, 9)  # stride s=1, desired output size oh=9
data = padding(data, f_p)
print('q3\n', output(data, filter))
print('q4\n', output(data, filter) * 3)
############################################
# Convolution on 3-D arrays
############################################
import numpy as np
def find_pad(data, filter, s, oh):
h = len(data[0])
fh = len(filter[0])
return (((oh-1)*s)+fh-h) / 2
def padding(data, x):
if x%1 == 0:
x = int(x)
lst = []
for i in range(len(data)):
lst.append(np.pad(data[i], pad_width=x, mode='constant', constant_values=0))
lst = np.array(lst)
return lst
else:
x1 = int(x+0.5)
x2 = int(x-0.5)
lst = []
for i in range(len(data)):
lst.append(np.pad(data[i], pad_width=((x1,x2), (x1,x2)), mode='constant', constant_values=0))
lst = np.array(lst)
return lst
def output(data, filter):
    num = len(data[0]) - len(filter[0]) + 1  # input and filter share the outermost (channel) dimension; rows and columns are square, but the depth differs, so the volume is not a cube
result = []
for i in range(len(data)):
for rn in range(num):
for cn in range(num):
result.append(np.sum(data[i, rn:rn+len(filter[0]), cn:cn+len(filter[0])] * filter[i]))
return np.array(result).reshape(len(data), num, num)
data = np.array([[[1,2,0,0], [0,1,-2,0], [0,0,1,2], [2,0,0,1]], [[1,0,0,0], [0,0,-2,-1], [3,0,1,0], [2,0,0,1]]])
filter = np.array([[[-1,0,3], [2,0,-1], [0,2,1]], [[0,0,0], [2,0,-1], [0,-2,1]]])
f_p = find_pad(data, filter, 1, 3)  # stride s=1, desired output size oh=3
data = padding(data, f_p)
print('q\n', output(data, filter))
# Thinking in blocks
# 3-D convolution is easiest to picture with the data and filters as rectangular blocks.
# When representing 3-D data as a multidimensional array, the order is (channel, height, width).
# channel: C, height: H, width: W // filter channel: C, filter height: FH, filter width: FW
# The output of such a convolution is a single feature map.
# So how do we emit multiple channels from a convolution?
# The answer is to use multiple filters (weights).
# Applying FN filters yields FN output maps; stacking them gives a block of shape (FN, OH, OW).
# (Handing this completed block to the next layer is the processing flow of a CNN.)
# As above, a convolution must therefore also account for the number of filters. The filter weights
# are 4-D data, written in the order (output channels, input channels, height, width). See p.238!
# The bias consists of one value per channel. Its shape differs from the output's, but NumPy
# broadcasting handles the addition easily.
# Batch processing
# We want convolution to support batch processing as well.
# input (N,C,H,W) --> filter (FN,C,FH,FW) --> (N,FN,OH,OW) + bias (FN,1,1) --> output (N,FN,OH,OW)
# Data thus flows through each layer as 4-D arrays.
# Note that every time one 4-D array flows through the network, the convolution is applied to all N items.
# In other words, N passes are processed in a single step.
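The shape flow above can be checked with a naive reference loop (a sketch for verifying shapes and bias broadcasting only, not the im2col-based implementation used later; all array names here are illustrative):

```python
import numpy as np

# input (N, C, H, W) * filter (FN, C, FH, FW) -> (N, FN, OH, OW), plus bias (FN, 1, 1)
N, C, H, W = 2, 3, 7, 7
FN, FH, FW = 5, 3, 3
stride, pad = 1, 0
OH = (H + 2 * pad - FH) // stride + 1
OW = (W + 2 * pad - FW) // stride + 1

x = np.random.rand(N, C, H, W)
w = np.random.rand(FN, C, FH, FW)
b = np.random.rand(FN, 1, 1)           # one bias value per output channel

out = np.zeros((N, FN, OH, OW))
for n in range(N):
    for fn in range(FN):
        for i in range(OH):
            for j in range(OW):
                region = x[n, :, i*stride:i*stride+FH, j*stride:j*stride+FW]
                out[n, fn, i, j] = np.sum(region * w[fn])
out += b                               # broadcasting adds the bias channel-wise
print(out.shape)                       # (2, 5, 5, 5)
```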
# Pooling layer
# Pooling shrinks the spatial (height and width) extent: it replaces each region with a single representative value.
# p.240 illustrates 2x2 max pooling processed with stride 2.
# Max pooling takes the maximum value from each window.
# Note that the pooling window size and the stride are usually set to the same value, so the windows
# tile the input without overlap.
# Average pooling also exists, but image recognition mainly uses max pooling.
# Features of the pooling layer
# 1. No parameters to learn.
# Unlike the convolution layer, the pooling layer has nothing to learn.
# 2. The number of channels does not change.
# Pooling passes the input's channel count through to the output unchanged.
# 3. Robust to small input variations.
# Small shifts in the input barely change the pooling result.
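A minimal max-pooling example (a sketch using NumPy's reshape trick, which only works in the typical case where the window size equals the stride):

```python
import numpy as np

x = np.array([[1, 2, 1, 0],
              [0, 1, 2, 3],
              [3, 0, 1, 2],
              [2, 4, 0, 1]])

# 2x2 max pooling, stride 2: split into non-overlapping 2x2 blocks, take each max
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[2 3]
               #  [4 2]]
```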
# Implementing the convolution / pooling layers
# 4-D arrays
# Data of shape (10,1,28,28) means 10 items of height 28, width 28, and 1 channel.
# In Python this looks like the following.
import numpy as np
x = np.random.rand(10, 1, 28, 28)
x.shape
x[0].shape
x[1].shape
x[0, 0]  # to access the data, do the following
x[0][0]
###########################
# Unrolling data with im2col
###########################
import numpy as np
def im2col(input_data, filter_h, filter_w, stride=1, pad=0):
    """Flatten a batch of images into a 2-D array (im2col).

    Parameters
    ----------
    input_data : 4-D input array (number of images, channels, height, width)
    filter_h : filter height
    filter_w : filter width
    stride : stride
    pad : padding

    Returns
    -------
    col : 2-D array
    """
N, C, H, W = input_data.shape
out_h = (H + 2 * pad - filter_h) // stride + 1
out_w = (W + 2 * pad - filter_w) // stride + 1
img = np.pad(input_data, [(0, 0), (0, 0), (pad, pad), (pad, pad)], 'constant')
col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))
for y in range(filter_h):
y_max = y + stride * out_h
for x in range(filter_w):
x_max = x + stride * out_w
col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]
col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N * out_h * out_w, -1)
return col
##################################
##################################
##################################
import sys, os
sys.path.append(os.pardir)
from common.util import im2col

x1 = np.random.rand(1, 3, 7, 7)
col1 = im2col(x1, 5, 5, stride=1, pad=0)
print(col1.shape)  # (9, 75): 3x3 output positions, 5*5*3 values per window

x2 = np.random.rand(10, 3, 7, 7)
col2 = im2col(x2, 5, 5, stride=1, pad=0)
print(col2.shape)  # (90, 75): 10 images * 9 positions each
####################################################
################## Convolution layer implementation ###################
####################################################
class Convolution:
    def __init__(self, W, b, stride=1, pad=0):
        self.W = W
        self.b = b
        self.stride = stride
        self.pad = pad

    def forward(self, x):
        FN, C, FH, FW = self.W.shape
        N, C, H, W = x.shape
        out_h = int(1 + (H + 2 * self.pad - FH) / self.stride)
        out_w = int(1 + (W + 2 * self.pad - FW) / self.stride)  # FW, not FH

        col = im2col(x, FH, FW, self.stride, self.pad)
        col_W = self.W.reshape(FN, -1).T
        out = np.dot(col, col_W) + self.b

        out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2)
        return out  # the original forgot to return the result
##################################################
################## Pooling layer implementation ###################
##################################################
class Pooling:
    def __init__(self, pool_h, pool_w, stride=1, pad=0):
        self.pool_h = pool_h
        self.pool_w = pool_w
        self.stride = stride
        self.pad = pad

    def forward(self, x):
        N, C, H, W = x.shape
        out_h = int(1 + (H - self.pool_h) / self.stride)
        out_w = int(1 + (W - self.pool_w) / self.stride)

        col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad)  # expand (1)
        col = col.reshape(-1, self.pool_h * self.pool_w)

        out = np.max(col, axis=1)  # max per row (2)
        out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)  # reshape (3)
        return out
# As in Figure 7-22, the pooling layer is implemented in three steps:
# 1. Expand the input data.
# 2. Take the maximum of each row.
# 3. Reshape to the appropriate output shape.
# As the code above shows, each step takes only a line or two.
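# The three steps can be traced on a tiny standalone example (one 4x4 single-channel
# image, 2x2 window, stride 2). This sketch mimics the class above without im2col:

```python
import numpy as np

x = np.arange(16).reshape(1, 1, 4, 4).astype(float)  # (N, C, H, W)
N, C, H, W = x.shape
ph, pw, stride = 2, 2, 2
out_h, out_w = (H - ph) // stride + 1, (W - pw) // stride + 1

# 1. expand: one row per pooling window
col = np.array([x[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw].ravel()
                for n in range(N) for i in range(out_h) for j in range(out_w)
                for c in range(C)])
# 2. max per row
out = col.max(axis=1)
# 3. reshape to (N, C, out_h, out_w)
out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)
print(out[0, 0])  # [[ 5.  7.]
                  #  [13. 15.]]
```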
# Implementing a CNN
# Combining convolution and pooling layers, we can assemble a CNN that recognizes handwritten digits.
# Architecture of the simple CNN:
# conv -> relu -> pooling -> affine -> relu -> affine -> softmax
# Below is a CNN that flows in the above order.
# Arguments taken at initialization:
# input_dim - dimensions of the input data (channels, height, width)
# conv_param - hyperparameters of the convolution layer (a dictionary) with the following keys:
#   filter_num - number of filters
#   filter_size - filter size
#   stride - stride
#   pad - padding
# hidden_size - number of neurons in the (fully connected) hidden layer
# output_size - number of neurons in the (fully connected) output layer
# weight_init_std - standard deviation of the weights at initialization
# The convolution layer's parameters are passed as the dictionary conv_param,
# e.g. {'filter_num':30, 'filter_size':5, 'pad':0, 'stride':1}.
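# With these example hyperparameters and a (1, 28, 28) input, the conv and pool output
# sizes computed in __init__ work out as follows (plain arithmetic, assuming the
# 2x2/stride-2 pooling layer used by SimpleConvNet):

```python
input_size = 28
conv_param = {'filter_num': 30, 'filter_size': 5, 'pad': 0, 'stride': 1}

conv_output_size = (input_size - conv_param['filter_size']
                    + 2 * conv_param['pad']) // conv_param['stride'] + 1
pool_output_size = int(conv_param['filter_num']
                       * (conv_output_size / 2) * (conv_output_size / 2))

print(conv_output_size)  # 24   (each 28x28 map shrinks to 24x24)
print(pool_output_size)  # 4320 (30 filters * 12 * 12 after 2x2 pooling)
```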
from collections import OrderedDict
# Relu, Affine and SoftmaxWithLoss come from the book's common.layers module.

class SimpleConvNet:  # CNN initialization
    def __init__(self, input_dim=(1, 28, 28),
                 conv_param={'filter_num': 30, 'filter_size': 5,
                             'pad': 0, 'stride': 1},
                 hidden_size=100, output_size=10, weight_init_std=0.01):
        # pull the convolution layer's hyperparameters out of the dictionary
        filter_num = conv_param['filter_num']
        filter_size = conv_param['filter_size']
        filter_pad = conv_param['pad']
        filter_stride = conv_param['stride']
        input_size = input_dim[1]
        conv_output_size = (input_size - filter_size + 2 * filter_pad) / filter_stride + 1
        # compute the output size after the convolution layer (and 2x2 pooling)
        pool_output_size = int(filter_num * (conv_output_size / 2) * (conv_output_size / 2))

        self.params = {}
        self.params['W1'] = weight_init_std * np.random.randn(filter_num, input_dim[0], filter_size, filter_size)
        self.params['b1'] = np.zeros(filter_num)
        self.params['W2'] = weight_init_std * np.random.randn(pool_output_size, hidden_size)
        self.params['b2'] = np.zeros(hidden_size)
        self.params['W3'] = weight_init_std * np.random.randn(hidden_size, output_size)
        self.params['b3'] = np.zeros(output_size)

        # an ordered dictionary -> add the layers to self.layers in order
        self.layers = OrderedDict()
        self.layers['Conv1'] = Convolution(self.params['W1'],
                                           self.params['b1'],
                                           conv_param['stride'],
                                           conv_param['pad'])
        self.layers['Relu1'] = Relu()
        self.layers['Pool1'] = Pooling(pool_h=2, pool_w=2, stride=2)
        self.layers['Affine1'] = Affine(self.params['W2'],
                                        self.params['b2'])
        self.layers['Relu2'] = Relu()
        self.layers['Affine2'] = Affine(self.params['W3'],
                                        self.params['b3'])
        self.last_layer = SoftmaxWithLoss()  # the last layer is stored separately

    def predict(self, x):
        for layer in self.layers.values():
            x = layer.forward(x)
        return x

    def loss(self, x, t):
        y = self.predict(x)
        return self.last_layer.forward(y, t)

    # the parameter gradients are computed with backpropagation,
    # which repeats a forward pass and a backward pass
    def gradient(self, x, t):
        self.loss(x, t)  # forward
        dout = 1  # backward
        dout = self.last_layer.backward(dout)
        layers = list(self.layers.values())
        layers.reverse()
        for layer in layers:
            dout = layer.backward(dout)
        grads = {}
        grads['W1'] = self.layers['Conv1'].dW
        grads['b1'] = self.layers['Conv1'].db  # bias gradient is db, not dW
        grads['W2'] = self.layers['Affine1'].dW
        grads['b2'] = self.layers['Affine1'].db
        grads['W3'] = self.layers['Affine2'].dW
        grads['b3'] = self.layers['Affine2'].db
        return grads
# Visualizing a CNN
# Let's look at what the convolution layers of a CNN are "seeing" in the input image.
# Visualizing the weights of the first layer
# The first convolution layer's weights have shape (30, 1, 5, 5): 30 filters, 1 channel, 5x5 size - grayscale filters!
# After training, the filters show regular, structured patterns.
# How the extracted information changes with depth
# The deeper the layer, the more abstract the extracted information (more precisely, what the strongly responding neurons encode) becomes.
# As the layers deepen, more complex and abstract information is extracted: the first layer responds to simple edges, then textures.
# As the layers deepen, what the neurons respond to shifts from simple shapes to "higher-level" information.
# Representative CNNs
# LeNet is a network for recognizing handwritten digits, proposed in 1998.
# It repeats convolution and pooling layers (more precisely, subsampling that only shrinks the elements), then passes through fully connected layers to produce the output.
# Comparing LeNet with "modern CNNs", there are a few differences:
# 1. Activation function - LeNet uses sigmoid, modern CNNs use ReLU.
# 2. LeNet uses subsampling to shrink the intermediate data; today max pooling is the mainstream.
# AlexNet played a major role in sparking the deep learning boom.
# AlexNet stacks convolution and pooling layers, then passes through fully connected layers to produce the output.
# AlexNet uses ReLU as the activation function.
# It uses LRN (local response normalization) layers.
# It uses dropout.
# Summary
# A CNN adds convolution and pooling layers to the fully connected networks seen so far.
# Convolution and pooling layers can be implemented simply and efficiently using im2col.
# Visualizing a CNN shows that higher-level information is extracted as the layers deepen.
# Representative CNNs include LeNet and AlexNet.
# Big data and GPUs have contributed to the advancement of deep learning.
"""
django:
https://docs.djangoproject.com/en/3.0/ref/settings/#databases
"""
from ..env import env
from .paths import SQLITE_PATH
DATABASE_TYPE = env("HCAP__DATABASE_TYPE", default="sqlite")
if DATABASE_TYPE == "sqlite":
    DATABASES = {"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": str(SQLITE_PATH)}}
elif DATABASE_TYPE == "postgresql":
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": env("HCAP__POSTGRES_DB", default="hcap"),
            "USER": env("HCAP__POSTGRES_USER", default="pydemic"),
            "PASSWORD": env("HCAP__POSTGRES_PASSWORD", default="pydemic"),
            "HOST": env("HCAP__POSTGRES_HOST", default="postgres"),
            "PORT": env("HCAP__POSTGRES_PORT", default=5432),
        }
    }
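# The same switching pattern can be sketched with plain os.environ. This is a simplified
# stand-in for the project's env helper; the function name is our own, and only a subset
# of the settings keys is shown:

```python
import os

def database_config():
    db_type = os.environ.get("HCAP__DATABASE_TYPE", "sqlite")
    if db_type == "sqlite":
        return {"ENGINE": "django.db.backends.sqlite3", "NAME": "db.sqlite3"}
    if db_type == "postgresql":
        return {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ.get("HCAP__POSTGRES_DB", "hcap"),
        }
    raise ValueError(f"unknown HCAP__DATABASE_TYPE: {db_type}")

os.environ.pop("HCAP__DATABASE_TYPE", None)  # fall back to the sqlite default
print(database_config()["ENGINE"])  # django.db.backends.sqlite3
```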
# Write a program that reads the lengths of three line segments
# and tells the user whether or not they can form a triangle.
print("-=-" * 15)
print("Vamos analisar um triângulo...")
print('-=-' * 15)
r1 = float(input('Informe o primeiro segmento: '))
r2 = float(input('Informe o segundo seguimento: '))
r3 = float(input('Informe o terceiro seguimento: '))
if r1 < r2 + r3 and r2 < r1 + r3 and r3 < r1 + r2:
    print('Os seguimentos a cima podem formar um triangulo!')
else:
    print('Os seguimentos não podem formar um triangulo!')
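# The triangle-inequality test above can be wrapped in a reusable function
# (a sketch; the function name is our own):

```python
def can_form_triangle(a: float, b: float, c: float) -> bool:
    """Three segments form a triangle iff each side is shorter than the sum of the other two."""
    return a < b + c and b < a + c and c < a + b

print(can_form_triangle(3, 4, 5))   # True
print(can_form_triangle(1, 2, 10))  # False
```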
import numpy as np
import os
import os.path as path
from keras.applications import vgg16, inception_v3, resnet50, mobilenet
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
import kmri
base_path = path.dirname(path.realpath(__file__))
img_path = path.join(base_path, 'img')
## Load the VGG model
# model = vgg16.VGG16(weights='imagenet')
# normalize_pixels = True
## Load the MobileNet model
# model = mobilenet.MobileNet(weights='imagenet')
# normalize_pixels = True
## Load the ResNet50 model
model = resnet50.ResNet50(weights='imagenet')
normalize_pixels = False
def get_img(file_name):
    image = load_img(path.join(img_path, file_name), target_size=(224, 224))
    if normalize_pixels:
        return img_to_array(image) / 256
    else:
        return img_to_array(image)
img_input = np.array([get_img(file_name) for file_name in os.listdir(img_path)])
kmri.visualize_model(model, img_input)
import brownie
def test_set_minter_admin_only(accounts, token):
    with brownie.reverts("dev: admin only"):
        token.set_minter(accounts[2], {"from": accounts[1]})


def test_set_admin_admin_only(accounts, token):
    with brownie.reverts("dev: admin only"):
        token.set_admin(accounts[2], {"from": accounts[1]})


def test_set_name_admin_only(accounts, token):
    with brownie.reverts("Only admin is allowed to change name"):
        token.set_name("Foo Token", "FOO", {"from": accounts[1]})


def test_set_minter(accounts, token):
    token.set_minter(accounts[1], {"from": accounts[0]})
    assert token.minter() == accounts[1]


def test_set_admin(accounts, token):
    token.set_admin(accounts[1], {"from": accounts[0]})
    assert token.admin() == accounts[1]


def test_set_name(accounts, token):
    token.set_name("Foo Token", "FOO", {"from": accounts[0]})
    assert token.name() == "Foo Token"
    assert token.symbol() == "FOO"
"""
Mount /sys/fs/cgroup Option
"""
from typing import Callable
import click
def cgroup_mount_option(command: Callable[..., None]) -> Callable[..., None]:
    """
    Option for choosing to mount `/sys/fs/cgroup` into the container.
    """
    function = click.option(
        '--mount-sys-fs-cgroup/--no-mount-sys-fs-cgroup',
        default=True,
        show_default=True,
        help=(
            'Mounting ``/sys/fs/cgroup`` from the host is required to run '
            'applications which require ``cgroup`` isolation. '
            'Choose to not mount ``/sys/fs/cgroup`` if it is not available on '
            'the host.'
        ),
    )(command)  # type: Callable[..., None]
    return function
import os
from selenium import webdriver
import random
import time
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
import linecache
import sys
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver import ActionChains
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import requests
import MySQLdb
import datetime
import mysql.connector
import logging
import re
import urllib
from pyvirtualdisplay import Display
import subprocess
# setup and main functions
try:
min_perccent = "39"
allowed_min_percent = 0.37
# logging errors
# Create a logging instance
logger = logging.getLogger('SteamOrderPrices')
logger.setLevel(logging.INFO) # you can set this to be DEBUG, INFO, ERROR
# Assign a file-handler to that instance
fh = logging.FileHandler("ErrorsSteamOrderPrices.txt")
fh.setLevel(logging.ERROR) # again, you can set this differently
# Format your logs (optional)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter) # This will set the format to the file handler
# Add the handler to your logging instance
logger.addHandler(fh)
def PrintException():
    exc_type, exc_obj, tb = sys.exc_info()
    f = tb.tb_frame
    lineno = tb.tb_lineno
    filename = f.f_code.co_filename
    linecache.checkcache(filename)
    line = linecache.getline(filename, lineno, f.f_globals)
    try:
        print('EXCEPTION IN ({}, LINE {} "{}"): {}'.format(filename, lineno, line.strip(), exc_obj))
    except:
        pass
    close_script()
def PrintException_only_print():
    succsess = 0
    print("was an error")
    # exception output
    exc_type, exc_obj, tb = sys.exc_info()
    f = tb.tb_frame
    lineno = tb.tb_lineno
    filename = f.f_code.co_filename
    linecache.checkcache(filename)
    line = linecache.getline(filename, lineno, f.f_globals)
    print('EXCEPTION IN ({}, LINE {} "{}"): {}'.format(filename, lineno, line.strip(), exc_obj))
def close_modals():
    try:
        driver.find_element_by_css_selector("body > div.newmodal > div.newmodal_header_border > div > div.newmodal_close").click()
    except:
        pass
def close_mysql_connection():
    global mydb, mycursor
    try:
        mycursor.close()  # close must be called, not just referenced
    except:
        pass
    try:
        mydb.close()
    except:
        pass
def close_script():
    try:
        mycursor.close()
    except:
        pass
    try:
        mydb.close()
    except:
        pass
    try:
        driver.close()
    except:
        pass
    try:
        driver.quit()
    except:
        pass
    sys.exit()
# rub_usd
rub_usd = requests.get("https://www.cbr-xml-daily.ru/daily_json.js")
rub_usd = rub_usd.json()
rub_usd = float(rub_usd["Valute"]["USD"]["Value"])
print("current exchange rate", rub_usd)
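# The CBR endpoint returns JSON shaped like {"Valute": {"USD": {"Value": ...}}}. The
# extraction above can be tested offline against a stub payload (the helper name and
# the numeric value here are made up):

```python
def usd_rate(payload: dict) -> float:
    """Extract the USD/RUB rate from a CBR daily_json.js-style payload."""
    return float(payload["Valute"]["USD"]["Value"])

stub = {"Valute": {"USD": {"Value": 73.5}}}
print(usd_rate(stub))  # 73.5
```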
#start driver
display = Display(visible=0, size=(1600, 900), backend='xvfb')
display.start()
chrome_options = Options()
chrome_options.add_argument("user-data-dir=/home/work/profilesForAll/SteamOrders2") # linux
chrome_options.add_argument("window-size=1600,900")
driver = webdriver.Chrome(executable_path='/usr/bin/chromedriver', chrome_options=chrome_options)
driver.set_window_size(1600, 900)
print("-- check if need to loging into steam")
wait = WebDriverWait(driver, 15)
driver.get("https://steamcommunity.com/market/")
time.sleep(5)
try:
element = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "#marketWalletBalanceAmount")))
except:
print("cant verify login into steam on the first try")
time.sleep(5)
driver.get("https://steamcommunity.com/market/")
try:
element = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "#marketWalletBalanceAmount")))
except:
print("login into steam")
# log in to Steam
driver.find_element_by_css_selector("#global_action_menu > a").click()
time.sleep(1)
# login
input_field = driver.find_element_by_css_selector('#input_username')
input_field.clear()
time.sleep(2)
# pass
input_field = driver.find_element_by_css_selector('#input_password')
input_field.clear()
time.sleep(3)
input_field = driver.find_element_by_css_selector('#twofactorcode_entry')
# get 2fa code
os.chdir("/home/work/steamguard-cli")
guard = subprocess.check_output('build/steamguard 2fa', shell=True).decode("utf-8").strip()
print(guard)
input_field.clear()
input_field.send_keys(guard)
time.sleep(2)
driver.find_element_by_css_selector(
"#login_twofactorauth_buttonset_entercode > div.auth_button.leftbtn > div.auth_button_h5").click()
time.sleep(10)
# check again whether we are logged in
try:
element = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "#marketWalletBalanceAmount")))
except:
msg = "need to login into steam"
logger.exception(msg)
raise ValueError('need to login into steam')
# get items from tryskins
print("-- get items from tryskins")
wait = WebDriverWait(driver, 60)
skins_from_parser = []
dates_diff = 777
if dates_diff > 60:
driver.get(
"https://table.altskins.com/site/items?ItemsFilter%5Bknife%5D=0&ItemsFilter%5Bknife%5D=1&ItemsFilter%5Bstattrak%5D=0&ItemsFilter%5Bstattrak%5D=1&ItemsFilter%5Bsouvenir%5D=0&ItemsFilter%5Bsouvenir%5D=1&ItemsFilter%5Bsticker%5D=0&ItemsFilter%5Bsticker%5D=1&ItemsFilter%5Btype%5D=1&ItemsFilter%5Bservice1%5D=showsteama&ItemsFilter%5Bservice2%5D=showcsmoneyw&ItemsFilter%5Bunstable1%5D=1&ItemsFilter%5Bunstable2%5D=0&ItemsFilter%5Bhours1%5D=192&ItemsFilter%5Bhours2%5D=192&ItemsFilter%5BpriceFrom1%5D=22&ItemsFilter%5BpriceTo1%5D=&ItemsFilter%5BpriceFrom2%5D=&ItemsFilter%5BpriceTo2%5D=&ItemsFilter%5BsalesBS%5D=&ItemsFilter%5BsalesTM%5D=&ItemsFilter%5BsalesST%5D=3&ItemsFilter%5Bname%5D=&ItemsFilter%5Bservice1Minutes%5D=301&ItemsFilter%5Bservice2Minutes%5D=301&ItemsFilter%5BpercentFrom1%5D="+ min_perccent +"&ItemsFilter%5BpercentFrom2%5D=&ItemsFilter%5Btimeout%5D=777&ItemsFilter%5Bservice1CountFrom%5D=&ItemsFilter%5Bservice1CountTo%5D=&ItemsFilter%5Bservice2CountFrom%5D=&ItemsFilter%5Bservice2CountTo%5D=&ItemsFilter%5BpercentTo1%5D=&ItemsFilter%5BpercentTo2%5D=&refreshonoff=1")
time.sleep(6)
element = 0
try:
element = driver.find_element_by_css_selector("#page-wrapper > div.row.border-bottom > nav > ul > li:nth-child(4) > a > img")
except:pass
if element != 0:
driver.find_element_by_css_selector("#page-wrapper > div.row.border-bottom > nav > ul > li:nth-child(4) > a > img").click()
time.sleep(5)
driver.find_element_by_css_selector("#imageLogin").click()
time.sleep(5)
driver.get("https://www.google.com/")
time.sleep(3)
driver.get(
"https://table.altskins.com/site/items?ItemsFilter%5Bknife%5D=0&ItemsFilter%5Bknife%5D=1&ItemsFilter%5Bstattrak%5D=0&ItemsFilter%5Bstattrak%5D=1&ItemsFilter%5Bsouvenir%5D=0&ItemsFilter%5Bsouvenir%5D=1&ItemsFilter%5Bsticker%5D=0&ItemsFilter%5Bsticker%5D=1&ItemsFilter%5Btype%5D=1&ItemsFilter%5Bservice1%5D=showsteama&ItemsFilter%5Bservice2%5D=showcsmoneyw&ItemsFilter%5Bunstable1%5D=1&ItemsFilter%5Bunstable2%5D=0&ItemsFilter%5Bhours1%5D=192&ItemsFilter%5Bhours2%5D=192&ItemsFilter%5BpriceFrom1%5D=22&ItemsFilter%5BpriceTo1%5D=&ItemsFilter%5BpriceFrom2%5D=&ItemsFilter%5BpriceTo2%5D=&ItemsFilter%5BsalesBS%5D=&ItemsFilter%5BsalesTM%5D=&ItemsFilter%5BsalesST%5D=3&ItemsFilter%5Bname%5D=&ItemsFilter%5Bservice1Minutes%5D=301&ItemsFilter%5Bservice2Minutes%5D=301&ItemsFilter%5BpercentFrom1%5D="+ min_perccent +"&ItemsFilter%5BpercentFrom2%5D=&ItemsFilter%5Btimeout%5D=777&ItemsFilter%5Bservice1CountFrom%5D=&ItemsFilter%5Bservice1CountTo%5D=&ItemsFilter%5Bservice2CountFrom%5D=&ItemsFilter%5Bservice2CountTo%5D=&ItemsFilter%5BpercentTo1%5D=&ItemsFilter%5BpercentTo2%5D=&refreshonoff=1")
time.sleep(8)
element = 0
try:
element = driver.find_element_by_css_selector("#page-wrapper > div.row.border-bottom > nav > ul > li:nth-child(4) > a > img")
except: pass
if element != 0:
raise ValueError('tryskins login error')
# load all the elements from the parser
for x in range(2):
try:
mainBlocks = driver.find_elements_by_css_selector('table > tbody > tr:nth-child(n)')
len_start = len(mainBlocks)
element = driver.find_element_by_css_selector(
'table > tbody > tr:nth-child(' + str(len(mainBlocks)) + ')')
element.location_once_scrolled_into_view
time.sleep(0.2)
element = driver.find_element_by_css_selector(
'table > tbody > tr:nth-child(' + str(len(mainBlocks) - 29) + ')')
element.location_once_scrolled_into_view
time.sleep(0.2)
element = driver.find_element_by_css_selector(
'table > tbody > tr:nth-child(' + str(len(mainBlocks)) + ')')
element.location_once_scrolled_into_view
except: continue
for ind in range (10):
mainBlocks = driver.find_elements_by_css_selector('table > tbody > tr:nth-child(n)')
len_after_scroll = len(mainBlocks)
if len_start == len_after_scroll:
time.sleep(0.5)
if len_start != len_after_scroll:
break
mainBlocks = driver.find_elements_by_css_selector('table > tbody > tr:nth-child(n)')
len_after_scroll = len(mainBlocks)
if len_start == len_after_scroll:
break
XML = driver.find_element_by_css_selector('#w0 > table > tbody').get_attribute('innerHTML')
XML = XML.split('<tr class="tr"')
del XML[0]
for item_xml in XML:
try:
name = re.search('market_hash_name=([^<]*)&sort_by=price', item_xml)
name = name[1].strip()
price_csm = re.search('attribute="pricecsmoneyw">([^<]*)</span><span', item_xml).group(1).strip()
sales = re.search('class="sales">([^<]*)</div><img src="/images/steam', item_xml)
sales = sales[1].strip()
purchased_count = 0
# get how many of these items were bought in the last 7 days
close_mysql_connection()
mycursor.execute("SELECT name FROM PurchasedItems WHERE DATE(date) > (NOW() - INTERVAL 7 DAY);")
items_mysql_last7days = mycursor.fetchall()
close_mysql_connection()
connect_to_mysql("SteamBuyOrders")
for item_mysql in items_mysql_last7days:
if item_mysql == name:
print("added count")
purchased_count += 1
skins_from_parser.append({"name":name, "price_csm": float(price_csm), "sales": sales, "overstock": -1, "db_id": -1, "purchased_count": purchased_count, "allowed_count": 0, "buy_order_price": 0})
except Exception as e:
PrintException_only_print()
continue
print("******************************")
print(len(skins_from_parser))
print("******************************")
# check whether we even need to fetch overstock from the site (this must be done every 24 hours)
need_to_check_on_cs_money = False
mycursor.execute("SELECT name,quanity,date,id FROM csMoneyLimits")
items_mysql = mycursor.fetchall()
now = datetime.datetime.now()
for item_db in items_mysql:
for item_parser in skins_from_parser:
if item_db[0] == item_parser["name"]:
item_parser["db_id"] = item_db[3] #add table's id
delta = now - item_db[2]
# number of hours
time_diff_in_hours = int(delta.total_seconds()) / 60 / 60
print('time difference: ', time_diff_in_hours)
if time_diff_in_hours < 24:
#print("string before change -", item_parser) #debug
#print("change overstock to DB value (DB value -)", item_db[1]) #debug
item_parser["overstock"] = item_db[1]
#print("string after change -", item_parser) #debug
#input("test input")
break
print("******************************")
print(len(skins_from_parser))
print("******************************")
# check whether we need to visit csmoney to get overstock (are there items with overstock == -1)
for item in skins_from_parser:
if item["overstock"] == -1:
need_to_check_on_cs_money = True
# we need to get the overstocks from the csmoney site
if need_to_check_on_cs_money == True:
print("-- getting overstocks from csmoney")
#check if I'm logged into csMoney
driver.get("https://old.cs.money/")
try:
element = wait.until(EC.visibility_of_element_located(
(By.CSS_SELECTOR, "div.header_menu_mobile > div.balance_header.superclass_space.block_balance")))
time.sleep(1)
except:
# log in to csMoney
print("start entering into csMoney")
driver.find_element_by_css_selector("#authenticate_button > a").click()
time.sleep(1)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "#imageLogin")))
driver.find_element_by_css_selector("#imageLogin").click()
time.sleep(5)
# check again whether we are logged in
try:
element = wait.until(EC.visibility_of_element_located(
(By.CSS_SELECTOR, "div.header_menu_mobile > div.balance_header.superclass_space.block_balance")))
except:
raise ValueError('need to login into csMoney')
# the page has to load completely
try:
time.sleep(20)
element = wait.until(EC.visibility_of_element_located(
(By.CSS_SELECTOR, "#main_container_bot > div.items > div:nth-child(1)")))
except:
raise ValueError('cant properly load csmoney')
# check the overstocks
driver.find_element_by_css_selector("#header_panel > ul.header_menu > li:nth-child(1) > a").click()
time.sleep(5)
for item in skins_from_parser:
if item["overstock"] == -1:
input_field = driver.find_element_by_css_selector('#universal_skin_input')
input_field.clear()
time.sleep(0.2)
input_field.send_keys(item["name"])
time.sleep(0.2)
driver.find_element_by_css_selector("#check_skin_status_btn > a").click()
time.sleep(2)
try:
overstock_csm = driver.find_element_by_css_selector("#overstock_difference").text
overstock_csm = int(overstock_csm)
print("overstock -", overstock_csm)
except:
print("was error getting overstock_difference")
continue
# either update the item's info in the DB or add a new record
# if the item was not in the DB
if item["db_id"] == -1:
mycursor.execute(
"INSERT INTO csMoneyLimits (name, quanity) VALUES (%s, %s)",
(item["name"], str(overstock_csm),))
mydb.commit()
# if the item was already in the DB
if item["db_id"] != -1:
print("item exists in DB (id not -1)", item["db_id"]) #debug
# first delete the stale item from the DB
mycursor.execute(
"DELETE FROM `csMoneyLimits` WHERE id = %s",
(item["db_id"],))
mydb.commit()
print("deleted from DB") #debug
# now add the new one
mycursor.execute(
"INSERT INTO csMoneyLimits (name, quanity) VALUES (%s, %s)",
(item["name"], str(overstock_csm),))
mydb.commit()
print("added to DB") #debug
# set the overstock
item["overstock"] = int(overstock_csm)
# sort and set the allowed number of purchases
index = -1
for item in skins_from_parser:
index += 1
if item["overstock"] == -1:
del skins_from_parser[index]
continue
predicted_purchases = 0
if int(item["sales"]) > 0:
predicted_purchases = int(int(item["sales"]) / 3)
allowed_count = int(item["overstock"]) - (predicted_purchases + int(item["purchased_count"]))
item["allowed_count"] = allowed_count
if allowed_count < 1:
del skins_from_parser[index]
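# The allowed-purchase rule from the loop above, as a standalone function
# (a sketch of the same arithmetic; the function name is our own):

```python
def allowed_purchases(overstock: int, sales: int, already_bought: int) -> int:
    """How many more items may be bought: overstock minus predicted (sales/3) and already purchased."""
    predicted = sales // 3 if sales > 0 else 0
    return overstock - (predicted + already_bought)

print(allowed_purchases(overstock=10, sales=9, already_bought=2))  # 5
print(allowed_purchases(overstock=2, sales=0, already_bought=2))   # 0
```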
# moved down: we have to open each skin's page anyway to place the buy order (and will take the min price there)
"""
# get usd-rub; for this we may need to load several items
# check whether we already have a ready rate in the DB (not older than 3 hours)
mycursor.execute("SELECT quanity, date FROM csMoneyLimits WHERE id = 38")
exchange_rate_and_date = mycursor.fetchone()
exchange_rate = 0
# use it if it is not older than 4 hours
now = datetime.datetime.now()
delta = now - exchange_rate_and_date[1]
# number of hours
time_diff_in_hours = int(delta.total_seconds()) / 60 / 60
print('time difference exchange rate: ', time_diff_in_hours)
if time_diff_in_hours < 4:
exchange_rate = exchange_rate_and_date[0]
# get the exchange rate (by comparing the parser's dollar price and the Steam ruble price of several items)
if exchange_rate == 0:
skins_for_exchange_rate = []
# get the minimum order price in rubles
for index in range(3):
print("index exchange rate -", index)
url_name = urllib.parse.quote(skins_from_parser[index]["name"])
driver.get("https://steamcommunity.com/market/listings/730/"+url_name)
try:
element = wait.until(EC.visibility_of_element_located(
(By.CSS_SELECTOR, "span.market_commodity_orders_header_promote:last-child")))
except:
raise ValueError('cant get exchange rate')
try:
driver.find_element_by_css_selector("#market_buyorder_info_show_details > span").click() #View more details
except: pass
lowest_prices = driver.find_elements_by_css_selector("span.market_commodity_orders_header_promote:last-child")
if len(lowest_prices) == 1:
print("lenth is 1")
lowest_price = lowest_prices[0].text
if len(lowest_prices) == 2:
print("lenth is 2")
lowest_price = lowest_prices[1].text
lowest_price = float(lowest_price.strip().replace(',', '.')[:-5])
print("name -", skins_from_parser[index]["name"], "lowest price -", lowest_price)
skins_for_exchange_rate.append({"name": skins_from_parser[index]["name"], "price_usd": skins_from_parser[index]["price"], "price_rub": lowest_price})
# get the exchange rate and sanity-check it
print("skins_for_exchange_rate", skins_for_exchange_rate)
rates = []
# get the rate for three items
index = 0
for item in skins_for_exchange_rate:
index += 1
compared_price = float(item["price_rub"]) / float(item["price_usd"])
rates.append({"name":index, "exchange_rate": compared_price})
print("rates", rates)
#сравниваем курсы валют
for rate_1 in rates:
for rate_2 in rates:
if rate_1["name"] == rate_2["name"]:
continue
rates_difference = rate_1["exchange_rate"] - rate_2["exchange_rate"]
if rates_difference < 0:
rates_difference = rates_difference * -1
#если значение больше, то берем!
print("rates_difference -", rates_difference)
if rates_difference < 0.1:
print("gotcha!")
#берем наибольший курс валют
if rate_1["exchange_rate"] > rate_2["exchange_rate"]:
exchange_rate = rate_1["exchange_rate"]
if rate_1["exchange_rate"] < rate_2["exchange_rate"]:
exchange_rate = rate_2["exchange_rate"]
break
#проставляем курс валют в бд
if exchange_rate != 0:
now = datetime.datetime.now()
mycursor.execute(
"UPDATE `csMoneyLimits` SET `quanity`=%s,`date`=%s WHERE id = 38",
(exchange_rate, now,))
mydb.commit()
print("exchange rate -", exchange_rate)
# проставляем цену для байордера для каждого предмета
for item in skins_from_parser:
#usd min market price #exchange rate #plus 1 rubble
item["buy_order_price"] = item["price"] * float(exchange_rate) + 1
"""
# удаляем все выставленные байордера
print("-- start deleting buyorders")
driver.get("https://steamcommunity.com/market/")
time.sleep(3)
try:
element = wait.until(EC.visibility_of_element_located(
(By.CSS_SELECTOR, "#result_0")))
except:
raise ValueError('cant load steam market page')
# получаем баланс, чтобы отсеить дорогие предметы
balance = driver.find_element_by_css_selector("#header_wallet_balance").text.strip()[:-5]
try:
balance = balance.replace(',', '.') # если целое число (без запятой)
except:
print("integer price")
pass
balance = float(balance)
high_limit_of_orders = balance * 10
# удаляем байордера
buy_orders_cancel_buttons = driver.find_elements_by_css_selector(
"#tabContentsMyListings > div:last-child > div.market_listing_row.market_recent_listing_row > div.market_listing_edit_buttons.actual_content > div > a")
print("length items to del", len(buy_orders_cancel_buttons))
for index in range(len(buy_orders_cancel_buttons)):
driver.find_element_by_css_selector(
"#tabContentsMyListings > div:last-child > div:nth-child(3) > div.market_listing_edit_buttons.actual_content > div > a").click() # кликаем всегда на первую кнопку cancel, тк они пропадают после клика
buy_orders_cancel_buttons = driver.find_elements_by_css_selector("#tabContentsMyListings > div:last-child > div.market_listing_row.market_recent_listing_row > div.market_listing_edit_buttons.actual_content > div > a")
print("length items to del", len(buy_orders_cancel_buttons))
time.sleep(6)
    # place the buy orders
    errors = 0
    spent_money = 0
    for item in skins_from_parser:
        # if we hit >= 3 errors in one pass, stop and check the logs
        if errors == 3:
            raise ValueError('got 3 errors while placing buy orders')
        # open the item page
        url_name = urllib.parse.quote(item["name"])
        driver.get("https://steamcommunity.com/market/listings/730/" + url_name)
        time.sleep(7)
        try:
            element = wait.until(EC.visibility_of_element_located(
                (By.CSS_SELECTOR, "span.market_commodity_orders_header_promote:last-child")))
        except:
            print("cant load items page +1 error")
            errors += 1
            continue
        try:
            driver.find_element_by_css_selector("#market_buyorder_info_show_details > span").click()  # View more details
        except:
            pass
        # read the minimum price
        lowest_prices = driver.find_elements_by_css_selector("span.market_commodity_orders_header_promote:last-child")
        if len(lowest_prices) == 1:
            print("length is 1")
            lowest_price = lowest_prices[0].text
        if len(lowest_prices) == 2:
            print("length is 2")
            lowest_price = lowest_prices[1].text
        lowest_price = lowest_price.strip()[:-5]
        try:
            lowest_price = lowest_price.replace(',', '.')  # in case it is a whole number (no comma)
        except:
            print("integer price")
            pass
        # check whether the item clears the profit threshold
        print("lowest_price", lowest_price)
        lowest_price = float(lowest_price) + 3  # bump the price, in rubles
        dep_price_rub = item["price_csm"] * rub_usd * 0.96
        expected_profit = (lowest_price / dep_price_rub - 1) * -1
        print("expected profit -", expected_profit)
        if expected_profit < allowed_min_percent:
            print("expected profit is less than allowed min profit")
            continue
        # check whether the balance is enough to buy the item
        if lowest_price > balance:
            print("items's price is more than balance")
            continue
        # read the item name from the page (extra check)
        name_of_the_item_on_page = driver.find_element_by_css_selector("#mainContents > div.market_listing_nav_container > div.market_listing_nav > a:nth-child(2)").text.strip()
        if name_of_the_item_on_page != item["name"]:
            print("name on the market page doesn't match with the actual item name")
            continue
        spent_money += lowest_price
        # on Steam the upper limit for open orders is 10x the balance
        if spent_money > high_limit_of_orders:
            print("limit exceeded", spent_money, high_limit_of_orders)
            break
        # place the order
        if len(lowest_prices) == 1:
            driver.find_element_by_css_selector("#market_buyorder_info > div:nth-child(1) > div:nth-child(1) > a > span").click()  # place buy order
        if len(lowest_prices) == 2:
            driver.find_element_by_css_selector("#market_commodity_order_spread > div:nth-child(2) > div > div.market_commodity_orders_header > a > span").click()
        time.sleep(1)
        input_field = driver.find_element_by_css_selector('#market_buy_commodity_input_price')
        input_field.clear()
        time.sleep(0.2)
        input_field.send_keys(str(lowest_price))  # enter the price
        time.sleep(0.2)
        driver.find_element_by_css_selector("#market_buyorder_dialog_purchase > span").click()  # submit the order
        time.sleep(0.2)
        # check whether an error appeared while placing the order (the agreement checkbox must be ticked)
        error_text = ''  # initialise so the comparison below cannot hit an undefined name
        try:
            error_text = driver.find_element_by_css_selector("#market_buyorder_dialog_error_text").text.strip()
        except:
            pass
        if error_text == "You must agree to the terms of the Steam Subscriber Agreement to complete this transaction.":
            print("tick!")
            driver.find_element_by_css_selector("#market_buyorder_dialog_accept_ssa").click()  # tick the checkbox
            time.sleep(0.2)
            driver.find_element_by_css_selector("#market_buyorder_dialog_purchase > span").click()  # submit the order again
            time.sleep(0.2)
        print("add one more buy order with a price -", lowest_price)
        time.sleep(4)
        close_modals()
except Exception as e:
    telegram_bot_sendtext("SteamOrderPrices: An error occurred, needs investigating")
    logger.exception(e)  # Will send the errors to the file
    PrintException()
    close_script()

# test print
# for item in skins_from_parser:
#     print(item)

print("Successful")
close_script()
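The pairwise sanity check above (accept an exchange rate only when two independent estimates agree within 0.1, then keep the higher one) can be distilled into a standalone helper. This is a sketch with illustrative names, not part of the original script:

```python
def pick_exchange_rate(rates, tolerance=0.1):
    """Return the higher of the first two rate estimates that agree within
    `tolerance`, or 0 if no pair agrees (mirrors the nested loop above)."""
    for i, a in enumerate(rates):
        for j, b in enumerate(rates):
            if i == j:
                continue
            if abs(a - b) < tolerance:
                return max(a, b)
    return 0

# three RUB/USD estimates from three items; the first two agree closely
print(pick_exchange_rate([74.95, 75.01, 81.4]))  # 75.01
```

Cross-checking several independently derived rates like this protects the bot from a single mispriced item skewing the conversion.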
# File: scripts/venv/lib/python2.7/site-packages/cogent/align/progressive.py (repo: sauloal/cnidaria, license: MIT)

#!/usr/bin/env python
from __future__ import with_statement
from cogent import LoadTree
from cogent.phylo import nj as NJ
from cogent.phylo.distance import EstimateDistances
from cogent.core.info import Info
from cogent.util import progress_display as UI

__author__ = "Peter Maxwell"
__copyright__ = "Copyright 2007-2012, The Cogent Project"
__credits__ = ["Peter Maxwell", "Gavin Huttley"]
__license__ = "GPL"
__version__ = "1.5.3"
__maintainer__ = "Peter Maxwell"
__email__ = "pm67nz@gmail.com"
__status__ = "Production"


@UI.display_wrap
def TreeAlign(model, seqs, tree=None, indel_rate=0.01, indel_length=0.01,
              ui=None, ests_from_pairwise=True, param_vals=None):
    """Returns a multiple alignment and tree.

    Uses the provided substitution model and a tree for determining the
    progressive order. If a tree is not provided a Neighbour Joining tree is
    constructed from pairwise distances estimated from pairwise aligning the
    sequences. If running in parallel, only the distance estimation is
    parallelised and only the master CPU returns the alignment and tree, other
    CPU's return None, None.

    Arguments:
        - model: a substitution model
        - seqs: a sequence collection
        - indel_rate, indel_length: parameters for the progressive pair-HMM
        - ests_from_pairwise: if no tree provided and True, the median value
          of the substitution model parameters are used
        - param_vals: named key, value pairs for model parameters. These
          override ests_from_pairwise.
    """
    _exclude_params = ['mprobs', 'rate', 'bin_switch']
    if param_vals:
        param_vals = dict(param_vals)
    else:
        param_vals = {}
    if isinstance(seqs, dict):
        seq_names = seqs.keys()
    else:
        seq_names = seqs.getSeqNames()

    two_seqs = len(seq_names) == 2

    if tree:
        tip_names = tree.getTipNames()
        tip_names.sort()
        seq_names.sort()
        assert tip_names == seq_names, \
            "names don't match between seqs and tree: tree=%s; seqs=%s" % \
            (tip_names, seq_names)
        ests_from_pairwise = False
    elif two_seqs:
        tree = LoadTree(tip_names=seqs.getSeqNames())
        ests_from_pairwise = False
    else:
        if ests_from_pairwise:
            est_params = [param for param in model.getParamList()
                          if param not in _exclude_params]
        else:
            est_params = None

        dcalc = EstimateDistances(seqs, model, do_pair_align=True,
                                  est_params=est_params)
        dcalc.run()
        dists = dcalc.getPairwiseDistances()
        tree = NJ.nj(dists)

    LF = model.makeLikelihoodFunction(tree.bifurcating(name_unnamed=True), aligned=False)

    if ests_from_pairwise and not param_vals:
        # we use the Median to avoid the influence of outlier pairs
        param_vals = {}
        for param in est_params:
            numbers = dcalc.getParamValues(param)
            print "Param Estimate Summary Stats: %s" % param
            print numbers.summarize()
            param_vals[param] = numbers.Median

    ui.display("Doing %s alignment" % ["progressive", "pairwise"][two_seqs])
    with LF.updatesPostponed():
        for param, val in param_vals.items():
            LF.setParamRule(param, value=val, is_constant=True)
        LF.setParamRule('indel_rate', value=indel_rate, is_constant=True)
        LF.setParamRule('indel_length', value=indel_length, is_constant=True)
        LF.setSequences(seqs)

    edge = LF.getLogLikelihood().edge
    (vtLnL, align) = edge.getViterbiScoreAndAlignment(0.5)
    info = Info()
    info["AlignParams"] = param_vals
    info["AlignParams"].update(dict(indel_length=indel_length, indel_rate=indel_rate))
    align.Info = info
    return align, tree
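The comment above motivates taking the median of the pairwise parameter estimates rather than the mean. A quick standard-library illustration of why: a single outlier pair drags the mean substantially while barely moving the median (the numbers are made up for the demo):

```python
from statistics import mean, median

# four consistent pairwise estimates plus one outlier pair
estimates = [1.9, 2.0, 2.1, 2.0, 9.5]

print(mean(estimates))    # about 3.5, pulled toward the outlier
print(median(estimates))  # 2.0, robust to the outlier
```

This is exactly the robustness property `TreeAlign` relies on when seeding `param_vals` from noisy pairwise fits.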
# File: Day Pogress - 18~100/Day 17/constructor.py (repo: Abbhiishek/Python, license: MIT)

"""
how to create a class

a class is constructed through a constructor
=> a class can be constructed through a specific method called "__init__(self)"
"""


# let's create a class
class Person:
    """
    A class to create a person with the following attributes:
        age, marks, phone, user_id, profession, goal
    Has the following methods:
        get_age()
        get_marks()
        get_phone()
        get_user_id()
        get_profession()
        get_goal()
        set_age(age)
        set_marks(marks)
        set_phone(phone)
        set_user_id(user_id)
        set_profession(profession)
        set_goal(goal)
        print_details()
    """

    # now we have to define the shape of the class; for that we use __init__
    def __init__(self, age, marks, phone, user_id, profession, goal):
        # this is a special method for initialising the variables;
        # it constructs the shape, and new parameters are added to __init__
        """
        this is a function to initialise the variables of the class
        :param age: int
        :param marks: int
        :param phone: int
        :param user_id: int
        :param profession: string
        :param goal: string
        """
        self.age = age
        self.marks = marks
        self.phone = phone
        self.user_id = user_id
        self.profession = profession
        self.goal = goal

    def get_age(self):
        """
        this is a function to get the age of the class
        :return: int
        """
        return self.age

    def get_marks(self):
        """
        this is a function to get the marks of the class
        :return: int
        """
        return self.marks

    def get_phone(self):
        """
        this is a function to get the phone of the class
        :return: int
        """
        return self.phone

    def get_user_id(self):
        """
        this is a function to get the user_id of the class
        :return: int
        """
        return self.user_id

    def get_profession(self):
        """
        this is a function to get the profession of the class
        :return: string
        """
        return self.profession

    def get_goal(self):
        """
        this is a function to get the goal of the class
        :return: string
        """
        return self.goal

    def set_age(self, age):
        """
        this is a function to set the age of the class
        :param age: int
        :return: None
        """
        self.age = age

    def set_marks(self, marks):
        """
        this is a function to set the marks of the class
        :param marks: int
        :return: None
        """
        self.marks = marks

    def set_phone(self, phone):
        """
        this is a function to set the phone of the class
        :param phone: int
        :return: None
        """
        self.phone = phone

    def set_user_id(self, user_id):
        """
        this is a function to set the user_id of the class
        :param user_id: int
        :return: None
        """
        self.user_id = user_id

    def set_profession(self, profession):
        """
        this is a function to set the profession of the class
        :param profession: string
        :return: None
        """
        self.profession = profession

    def set_goal(self, goal):
        """
        this is a function to set the goal of the class
        :param goal: string
        :return: None
        """
        self.goal = goal

    def print_details(self):
        """
        this is a function to print the details of the class:
        age, marks, phone, user_id, profession, goal
        :return: None
        """
        print("age: ", self.age)
        print("marks: ", self.marks)
        print("phone: ", self.phone)
        print("user_id: ", self.user_id)
        print("profession: ", self.profession)
        print("goal: ", self.goal)
abhishek = Person(21, 90, 31114111, 12345, "student", "to be a good programmer")
abhishek.set_age(20)
abhishek.set_marks(90)
abhishek.set_phone(9674144556) # yess this is a valid phone number of mine :)
abhishek.set_user_id(1)
abhishek.set_profession("student")
abhishek.set_goal("to be a good programmer")
print("the address is ")
print(abhishek) # this is the address of the object
print(abhishek.__dict__) # this is a dictionary of the class
print(abhishek.__doc__) # this is a special function to print the docstring of the class
abhishek.print_details() # this is to print the details of the class
profession = abhishek.get_profession() # this is to get the profession of the class
print("the profession is " + profession) # this is to print the profession of the class | 26.747126 | 89 | 0.595617 | 632 | 4,654 | 4.265823 | 0.129747 | 0.048961 | 0.074184 | 0.077893 | 0.372774 | 0.259644 | 0.228858 | 0.060089 | 0 | 0 | 0 | 0.010117 | 0.32037 | 4,654 | 174 | 90 | 26.747126 | 0.842238 | 0.470348 | 0 | 0.226415 | 0 | 0 | 0.075216 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.264151 | false | 0 | 0 | 0 | 0.396226 | 0.245283 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# File: tests/x7/lib/test_shell_tools.py (repo: gribbg/x7-lib, license: BSD-2-Clause)

# Originally auto-generated on 2021-02-15-12:14:36 -0500 EST
# By '--verbose --verbose x7.lib.shell_tools'

from unittest import TestCase
from x7.lib.annotations import tests
from x7.testing.support import Capture
from x7.lib import shell_tools
from x7.lib.shell_tools_load import ShellTool


@tests(shell_tools)
class TestModShellTools(TestCase):
    """Tests for stand-alone functions in x7.lib.shell_tools module"""

    @tests(shell_tools.Dir)
    def test_dir(self):
        self.assertIn('__init__', dir(self))
        self.assertNotIn('__init__', shell_tools.Dir(self))
        self.assertIn('test_dir', shell_tools.Dir(self))

    @tests(shell_tools.help)
    def test_help(self):
        with Capture() as orig:
            help(shell_tools.Dir)
        with Capture() as modified:
            shell_tools.help(shell_tools.Dir)
        self.assertEqual(orig.stdout(), modified.stdout())
        self.assertIn('Like dir(v), but only non __ names', orig.stdout())

        st_dir = ShellTool('Dir', shell_tools.Dir)
        with Capture() as as_shell_tool:
            shell_tools.help(st_dir)
        self.assertEqual(orig.stdout(), as_shell_tool.stdout())
        self.assertNotIn('__init__', as_shell_tool.stdout())
        with Capture() as orig_as_shell_tool:
            help(st_dir)
        self.assertIn('__init__', orig_as_shell_tool.stdout())

    @tests(shell_tools.help)
    def test_help_on_help(self):
        with Capture() as orig:
            help(help)
        with Capture() as modified:
            shell_tools.help(ShellTool('help', shell_tools.help))
        self.assertEqual(orig.stdout(), modified.stdout())

    @tests(shell_tools.tools)
    def test_tools(self):
        with Capture() as out:
            shell_tools.tools()
        self.assertIn('Help for tools', out.stdout())
        self.assertGreaterEqual(out.stdout().count('\n'), 5)
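The `Capture` context manager used above is x7's own helper, but the underlying stdout-capture pattern is available in the standard library via `contextlib.redirect_stdout`. A minimal sketch of the same technique:

```python
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    help(len)  # output that would normally print is captured instead

captured = buf.getvalue()
# the captured help text mentions the documented object
print("len" in captured)  # True
```

Capturing `help()` like this is exactly what lets the tests compare the built-in output against the shell-tools wrapper.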
# File: elvaapp/elvaapp/doctype/vache/vache_dashboard.py (repo: ovresko/elvadesk, license: MIT)

from frappe import _

def get_data():
    return {
        'fieldname': 'vache',
        'non_standard_fieldnames': {
            'Insemination': 'vache'
        },
        'transactions': [
            {
                'label': _('Reproduction'),
                'items': ['Insemination', 'Velage', 'Diagnostique']
            },
            {
                'label': _('Production'),
                'items': ['Lactation item']
            },
            {
                'label': _('Sante'),
                'items': ['Dossier medical', 'Poids']
            }
        ]
    }
# File: tests/emulator/kernel/test_kernel.py (repo: FKD13/RCPU, license: MIT)

from .utils import init_kernel

def test_read_string():
    k = init_kernel()
    k.RAM.load([ord(char) for char in "ABCDE"] + [0])
    assert k.read_string(0) == "ABCDE"
    k.RAM.load([ord(char) for char in "PYTHON"] + [0], base_address=40)
    assert k.read_string(40) == "PYTHON"


def test_write_string():
    k = init_kernel()
    k.write_string(0, "Hello World!")
    assert k.read_string(0) == "Hello World!"
    # Test empty string
    k = init_kernel()
    k.write_string(20, '')
    assert k.read_string(20) == ''


def test_read_string_empty():
    k = init_kernel()
    k.RAM.set(0, 0)
    assert k.read_string(0) == ""
    k.RAM.set(40, 0)
    assert k.read_string(40) == ""
# File: google_analytics/predictor.py (repo: hadisotudeh/jads_kaggle, license: MIT)

import numpy as np
from sklearn.metrics import mean_squared_error, make_scorer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.base import BaseEstimator

from utils import timing  # noqa

TUNING_OUTPUT_DEFAULT = 'tuning.txt'
RANDOM_STATE = 42  # Used for reproducible results


class Predictor(BaseEstimator):
    """
    An abstract class modeling our notion of a predictor.
    Concrete implementations should follow the predictor's
    interface
    """
    name = 'Abstract Predictor'

    def __init__(self, name=name):
        """
        Base constructor. The input training is expected to be preprocessed and contain
        features extracted for each sample along with the true values

        :param params: a dictionary of named model parameters
        :param name: Optional model name, used for logging
        """
        self.name = name

    def __str__(self):
        return self.name

    def fit(self, train_x, train_y):
        """
        A function that fits the predictor to the provided dataset

        :param train_x: train data, a pd.DataFrame of features to fit on.
        :param train_y: The labels for the training data.
        """
        self.model.fit(train_x, train_y)

    def predict(self, test_x):
        """
        Predicts the target for the given input

        :param test_x: a pd.DataFrame of features to be used for predictions
        :return: The predicted labels
        """
        return self.model.predict(test_x)

    def predict_proba(self, test_x):
        """
        Predicts the probability of the label for the given input

        :param test_x: a pd.DataFrame of features to be used for predictions
        :return: The predicted probabilities
        """
        return self.model.predict_proba(test_x)

    @timing
    def evaluate(self, x, y, method="split", nfolds=3, val_size=0.3):
        """
        Evaluate performance of the predictor. The `CV` method is a lot more robust, however it is also a lot slower
        since it goes through `nfolds` iterations. The default `split` method is based on a train-test split which makes it a lot faster.

        :param x: Input features to be used for fitting
        :param y: Target values
        :param method: String denoting the evaluation method. Acceptable values are 'CV' for cross validation and 'split' for train-test split
        :param nfolds: Number of folds in case CV is the evaluation method. Ignored otherwise
        :param val_size: Ratio of the training set to be used as validation in case split is the evaluation method. Ignored otherwise
        :return: The RMSE of the predictions
        """
        if method == 'CV':
            scorer = make_scorer(mean_squared_error)
            scores = cross_val_score(self.model, x, y, cv=nfolds, scoring=scorer)
            return np.mean(scores ** (1 / 2))
        if method == 'split':
            train_x, val_x, train_y, val_y = train_test_split(x, y, test_size=val_size, random_state=RANDOM_STATE)
            self.fit(train_x, train_y)
            predictions = self.predict(val_x)
            return mean_squared_error(val_y, predictions) ** (1 / 2)

        raise ValueError("Method must be either 'CV' or 'split', not {}".format(method))
# File: benchmark/test.py (repo: franklx/misaka, license: MIT)

# -*- coding: utf-8 -*-
from misaka import Markdown, BaseRenderer, HtmlRenderer, \
    SmartyPants, \
    EXT_FENCED_CODE, EXT_TABLES, EXT_AUTOLINK, EXT_STRIKETHROUGH, \
    EXT_SUPERSCRIPT, HTML_USE_XHTML, \
    TABLE_ALIGN_L, TABLE_ALIGN_R, TABLE_ALIGN_C, \
    TABLE_ALIGNMASK, TABLE_HEADER


class BleepRenderer(HtmlRenderer, SmartyPants):
    def block_code(self, text, lang):
        if lang:
            lang = ' class="%s"' % lang
        else:
            lang = ''
        return '\n<pre%s><code>%s</code></pre>\n' % (lang, text)

    def block_quote(self, text):
        return '\n<blockquote>%s</blockquote>\n' % text

    def block_html(self, text):
        return '\n%s' % text

    def header(self, text, level):
        return '\n<h%d>%s</h%d>\n' % (level, text, level)

    def hrule(self):
        if self.flags & HTML_USE_XHTML:
            return '\n<hr/>\n'
        else:
            return '\n<hr>\n'

    def list(self, text, is_ordered):
        if is_ordered:
            return '\n<ol>%s</ol>\n' % text
        else:
            return '\n<ul>%s</ul>\n' % text

    def list_item(self, text, is_ordered):
        return '<li>%s</li>\n' % text

    def paragraph(self, text):
        # No hard wrapping yet. Maybe with:
        # http://docs.python.org/library/textwrap.html
        return '\n<p>%s</p>\n' % text

    def table(self, header, body):
        return '\n<table><thead>\n%s</thead><tbody>\n%s</tbody></table>\n' % \
            (header, body)

    def table_row(self, text):
        return '<tr>\n%s</tr>\n' % text

    def table_cell(self, text, flags):
        flags = flags & TABLE_ALIGNMASK
        if flags == TABLE_ALIGN_C:
            align = 'align="center"'
        elif flags == TABLE_ALIGN_L:
            align = 'align="left"'
        elif flags == TABLE_ALIGN_R:
            align = 'align="right"'
        else:
            align = ''
        if flags & TABLE_HEADER:
            return '<th%s>%s</th>\n' % (align, text)
        else:
            return '<td%s>%s</td>\n' % (align, text)

    def autolink(self, link, is_email):
        if is_email:
            return '<a href="mailto:%(link)s">%(link)s</a>' % {'link': link}
        else:
            return '<a href="%(link)s">%(link)s</a>' % {'link': link}

    def preprocess(self, text):
        return text.replace(' ', '_')


md = Markdown(BleepRenderer(),
              EXT_FENCED_CODE | EXT_TABLES | EXT_AUTOLINK |
              EXT_STRIKETHROUGH | EXT_SUPERSCRIPT)

print(md.render('''
Unordered

- One
- Two
- Three

And now ordered:

1. Three
2. Two
3. One

An email: example@example.com
And an URL: http://example.com
'''))
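`table_cell` above decodes alignment by masking the flags word with `TABLE_ALIGNMASK`. The same bitmask-dispatch pattern in isolation; the numeric flag values here are assumptions for the demo, misaka's actual constants may differ:

```python
# illustrative flag values, not misaka's real ones
TABLE_ALIGN_L, TABLE_ALIGN_R, TABLE_ALIGN_C = 1, 2, 3
TABLE_ALIGNMASK = 3   # low bits carry the alignment
TABLE_HEADER = 4      # a separate bit outside the mask

def cell_align(flags):
    """Extract just the alignment bits and map them to a CSS-ish keyword."""
    return {TABLE_ALIGN_L: "left", TABLE_ALIGN_R: "right",
            TABLE_ALIGN_C: "center"}.get(flags & TABLE_ALIGNMASK, "")

print(cell_align(TABLE_ALIGN_C | TABLE_HEADER))  # center
```

Note that once `flags` is overwritten with the masked value, any bit outside the mask (like the header bit) is lost, which is why the demo keeps the original word intact and masks only inside the helper.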
# File: tests/test_xunit_plugin.py (repo: omergertel/slash, license: BSD-3-Clause)

import os
import pytest
import slash
from lxml import etree


def test_xunit_plugin(results, xunit_filename):
    assert os.path.exists(xunit_filename), 'xunit file not created'
    schema_root = etree.XML(_XUNIT_XSD)
    schema = etree.XMLSchema(schema_root)
    parser = etree.XMLParser(schema=schema)
    with open(xunit_filename) as f:
        etree.parse(f, parser)


@pytest.fixture
def results(populated_suite, xunit_filename):
    populated_suite.run()


@pytest.fixture
def xunit_filename(tmpdir, request, config_override):
    xunit_filename = str(tmpdir.join('xunit.xml'))
    slash.plugins.manager.activate('xunit')
    slash.config.root.plugins.xunit.filename = xunit_filename

    @request.addfinalizer
    def deactivate():
        slash.plugins.manager.deactivate('xunit')
        assert 'xunit' not in slash.config['plugins']

    return xunit_filename
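The test validates the generated report against the xUnit XSD with lxml. Independent of schema validation, the basic shape of such a report can be checked with the standard library alone; this sample document is illustrative:

```python
import xml.etree.ElementTree as ET

sample = """<testsuite name="demo" tests="2" failures="1">
  <testcase classname="pkg.TestX" name="test_ok"/>
  <testcase classname="pkg.TestX" name="test_bad">
    <failure type="AssertionError" message="boom">trace</failure>
  </testcase>
</testsuite>"""

root = ET.fromstring(sample)
print(root.tag, root.get("tests"), len(root.findall("testcase")))  # testsuite 2 2
```

Full XSD validation (as in the test above) additionally enforces required attributes and the allowed child elements, which plain well-formedness parsing does not.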
# Taken from https://gist.github.com/jzelenkov/959290
_XUNIT_XSD = """<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
elementFormDefault="qualified"
attributeFormDefault="unqualified">
<xs:annotation>
<xs:documentation xml:lang="en">Jenkins xUnit test result schema.
</xs:documentation>
</xs:annotation>
<xs:element name="testsuite" type="testsuite"/>
<xs:simpleType name="ISO8601_DATETIME_PATTERN">
<xs:restriction base="xs:dateTime">
<xs:pattern value="[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}"/>
</xs:restriction>
</xs:simpleType>
<xs:element name="testsuites">
<xs:annotation>
<xs:documentation xml:lang="en">Contains an aggregation of testsuite results</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="testsuite" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:complexContent>
<xs:extension base="testsuite">
<xs:attribute name="package" type="xs:token" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Derived from testsuite/@name in the non-aggregated documents</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="id" type="xs:int" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Starts at '0' for the first testsuite and is incremented by 1 for each following testsuite</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:extension>
</xs:complexContent>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:complexType name="testsuite">
<xs:annotation>
<xs:documentation xml:lang="en">Contains the results of exexuting a testsuite</xs:documentation>
</xs:annotation>
<xs:sequence>
<xs:element name="testcase" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:choice minOccurs="0">
<xs:element name="error">
<xs:annotation>
<xs:documentation xml:lang="en">Indicates that the test errored. An errored test is one that had an unanticipated problem. e.g., an unchecked throwable; or a problem with the implementation of the test. Contains as a text node relevant data for the error, e.g., a stack trace</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:simpleContent>
<xs:extension base="pre-string">
<xs:attribute name="message" type="xs:string">
<xs:annotation>
<xs:documentation xml:lang="en">The error message. e.g., if a java exception is thrown, the return value of getMessage()</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="type" type="xs:string" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">The type of error that occurred. e.g., if a java exception is thrown, the full class name of the exception.</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
</xs:element>
<xs:element name="failure">
<xs:annotation>
<xs:documentation xml:lang="en">Indicates that the test failed. A failure is a test which the code has explicitly failed by using the mechanisms for that purpose. e.g., via an assertEquals. Contains as a text node relevant data for the failure, e.g., a stack trace</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:simpleContent>
<xs:extension base="pre-string">
<xs:attribute name="message" type="xs:string">
<xs:annotation>
<xs:documentation xml:lang="en">The message specified in the assert</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="type" type="xs:string" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">The type of the assert.</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
</xs:element>
<xs:element name="skipped">
<xs:annotation>
<xs:documentation xml:lang="en">Indicates that the test was skipped. A skipped test is a test which was ignored using framework mechanisms. e.g., @Ignore annotation.</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:simpleContent>
<xs:extension base="pre-string">
<xs:attribute name="type" type="xs:string" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Skip type.</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
</xs:element>
</xs:choice>
<xs:attribute name="name" type="xs:token" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Name of the test method</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="classname" type="xs:token" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Full class name for the class the test method is in.</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="time" type="xs:decimal" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Time taken (in seconds) to execute the test</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
</xs:element>
</xs:sequence>
<xs:attribute name="name" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Full class name of the test for non-aggregated testsuite documents. Class name without the package for aggregated testsuites documents</xs:documentation>
</xs:annotation>
<xs:simpleType>
<xs:restriction base="xs:token">
<xs:minLength value="1"/>
</xs:restriction>
</xs:simpleType>
</xs:attribute>
<xs:attribute name="timestamp" type="ISO8601_DATETIME_PATTERN" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">when the test was executed. Timezone may not be specified.</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="hostname" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Host on which the tests were executed. 'localhost' should be used if the hostname cannot be determined.</xs:documentation>
</xs:annotation>
<xs:simpleType>
<xs:restriction base="xs:token">
<xs:minLength value="1"/>
</xs:restriction>
</xs:simpleType>
</xs:attribute>
<xs:attribute name="tests" type="xs:int" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">The total number of tests in the suite</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="failures" type="xs:int" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">The total number of tests in the suite that failed. A failure is a test which the code has explicitly failed by using the mechanisms for that purpose. e.g., via an assertEquals</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="errors" type="xs:int" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">The total number of tests in the suite that errored. An errored test is one that had an unanticipated problem. e.g., an unchecked throwable; or a problem with the implementation of the test.</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="skipped" type="xs:int">
<xs:annotation>
<xs:documentation xml:lang="en">The total number of tests in the suite that skipped. A skipped test is a test which was ignored using framework mechanisms. e.g., @Ignore annotation.</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="time" type="xs:decimal" use="required">
<xs:annotation>
<xs:documentation xml:lang="en">Time taken (in seconds) to execute the tests in the suite</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:simpleType name="pre-string">
<xs:restriction base="xs:string">
<xs:whiteSpace value="preserve"/>
</xs:restriction>
</xs:simpleType>
</xs:schema>
"""
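As a quick illustration of the schema above (not part of the dataset row itself), the sketch below builds a minimal conforming `<testsuite>` document with Python's standard library and checks the attributes the schema marks as required. The class, host, and test names are made up.

```python
import xml.etree.ElementTree as ET

# Build a minimal <testsuite> carrying every attribute the schema requires.
suite = ET.Element("testsuite", {
    "name": "ExampleTest",
    "timestamp": "2020-01-01T00:00:00",  # matches ISO8601_DATETIME_PATTERN (no timezone)
    "hostname": "localhost",
    "tests": "1", "failures": "0", "errors": "0", "time": "0.250",
})
# One passing testcase with the three required attributes.
ET.SubElement(suite, "testcase", {
    "name": "test_ok", "classname": "com.example.ExampleTest", "time": "0.250",
})
xml_text = ET.tostring(suite, encoding="unicode")

required = ("name", "timestamp", "hostname", "tests", "failures", "errors", "time")
assert all(attr in suite.attrib for attr in required)
print(xml_text)
```

The optional `skipped` attribute and the `error`/`failure`/`skipped` children are omitted here since the schema allows a testcase with no child element.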
872538f9e004a1966a45af99b31962b33d5a2ea0 | 372 | py | Python | bin/award_ebadge_declare.py | ervikey/SA-ctf_scoreboard | 00b631e9ed2c075f96f660583656ae68eb4b17e0 | [
"CC0-1.0"
] | 106 | 2018-03-09T13:03:05.000Z | 2022-03-10T11:01:48.000Z | bin/award_ebadge_declare.py | ervikey/SA-ctf_scoreboard | 00b631e9ed2c075f96f660583656ae68eb4b17e0 | [
"CC0-1.0"
] | 17 | 2018-05-11T00:53:47.000Z | 2020-05-07T10:14:40.000Z | bin/award_ebadge_declare.py | ervikey/SA-ctf_scoreboard | 00b631e9ed2c075f96f660583656ae68eb4b17e0 | [
"CC0-1.0"
] | 33 | 2018-04-23T20:18:11.000Z | 2022-03-27T16:41:03.000Z | # -*- coding: utf-8 -*-
import os
import sys
import re
ta_name = 'SA-ctf_scoreboard'
ta_lib_name = 'sa_ctf_scoreboard'
pattern = re.compile(r"[\\/]etc[\\/]apps[\\/][^\\/]+[\\/]bin[\\/]?$")
new_paths = [path for path in sys.path if not pattern.search(path) or ta_name in path]
new_paths.insert(0, os.path.sep.join([os.path.dirname(__file__), ta_lib_name]))
sys.path = new_paths
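The path filter in the snippet above hinges on one regular expression. The sketch below (sample paths are made up) shows which sys.path entries it would match — matched paths are dropped unless they contain the add-on's name:

```python
import re

# Same pattern as the script above: matches Splunk-style .../etc/apps/<app>/bin paths.
pattern = re.compile(r"[\\/]etc[\\/]apps[\\/][^\\/]+[\\/]bin[\\/]?$")

paths = [
    "/opt/splunk/etc/apps/other_app/bin",           # matched -> would be filtered out
    "/opt/splunk/etc/apps/SA-ctf_scoreboard/bin",   # matched, but kept by the ta_name check
    "/usr/lib/python3/dist-packages",               # not matched -> kept
]
matches = [bool(pattern.search(p)) for p in paths]
print(matches)  # [True, True, False]
```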
87259cd2c6b021df4f9006bba7f609c83a89593f | 503 | py | Python | books/migrations/0009_library_waitlist_items.py | rodbv/kamu | f390d91f7d7755b49176cf5d504648e3fe572237 | [
"MIT"
] | 70 | 2018-05-23T16:44:44.000Z | 2021-12-05T21:48:10.000Z | books/migrations/0009_library_waitlist_items.py | rodbv/kamu | f390d91f7d7755b49176cf5d504648e3fe572237 | [
"MIT"
] | 122 | 2018-10-06T21:31:24.000Z | 2020-11-09T15:04:56.000Z | books/migrations/0009_library_waitlist_items.py | rodbv/kamu | f390d91f7d7755b49176cf5d504648e3fe572237 | [
"MIT"
] | 50 | 2018-05-23T05:49:10.000Z | 2021-11-22T07:53:42.000Z | # Generated by Django 2.0.1 on 2019-01-07 14:45
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('waitlist', '0001_initial'),
('books', '0008_bookcopy_borrow_date'),
]
operations = [
migrations.AddField(
model_name='library',
name='waitlist_items',
field=models.ManyToManyField(related_name='waitlist_items', through='waitlist.WaitlistItem', to='books.Book'),
),
]
872757654f8085ced49101ead759a3fbe2d7c7bc | 7,695 | py | Python | SubredditScraper/scrape.py | tomhennessey/Subreddit-Scrape | 224c76fb179b6171634e7dd7739f73015c206d62 | [
"MIT"
] | null | null | null | SubredditScraper/scrape.py | tomhennessey/Subreddit-Scrape | 224c76fb179b6171634e7dd7739f73015c206d62 | [
"MIT"
] | null | null | null | SubredditScraper/scrape.py | tomhennessey/Subreddit-Scrape | 224c76fb179b6171634e7dd7739f73015c206d62 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""
A simple subreddit scraper
"""
import datetime as dt
import logging
import time
import os
import sys
import getopt
import praw
from psaw import PushshiftAPI
if __package__ is None:
import db
else:
from . import db
def init_log():
"""
Initiates a logger for psaw to be used with '-v' cli option.
"""
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
logger = logging.getLogger('psaw')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
def utc_to_local(utc_dt):
"""
Converts unix utc time format to human readable form for output
Returns
-------
string
A date and time in string format
"""
return dt.datetime.fromtimestamp(utc_dt).strftime('%Y-%m-%d %I:%M:%S%p')
def epoch_generate(month_num, year):
"""
Generates start and end epochs to be used in
generate_submissions_psaw()
Parameters
----------
month_num : int
The month number (1-12) that is being requested in scrape
year: int
The four-digit year the requested month falls in
Returns
-------
tuple
A tuple containing a start and end date in linux utc format
"""
start_time = int(dt.datetime(year, month_num, 1).timestamp())
end_time = int(dt.datetime(year, month_num + 1, 1).timestamp())  # NOTE: month_num=12 would need a year rollover here
return (start_time, end_time)
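As an illustrative sanity check of epoch_generate's contract (not part of the original script), the same computation for January 2020 spans exactly the month's 31 days:

```python
import datetime as dt

# Mirror epoch_generate(1, 2020): local-midnight epochs for Jan 1 and Feb 1.
start_time = int(dt.datetime(2020, 1, 1).timestamp())
end_time = int(dt.datetime(2020, 2, 1).timestamp())
print(end_time - start_time)  # 2678400 seconds, i.e. 31 days
```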
def generate_submissions_psaw(month_num, subreddit):
"""
Gets submissions between start/end epochs for requested
subreddit
Parameters
----------
month_num: int
The month number to be passed to epoch_generate()
subreddit: string
The name of the subreddit to be scraped
Returns
-------
generator
A generator object that will be used to loop through
submissions
"""
# init api
api = PushshiftAPI()
epoch_tuple = epoch_generate(month_num, 2020)  # year is currently hardcoded to 2020
start_epoch = epoch_tuple[0]
end_epoch = epoch_tuple[1]
return api.search_submissions(after=start_epoch, before=end_epoch,
subreddit=subreddit, size=1000)
def generate_comments(reddit, submission_id):
"""
Take a PRAW reddit object and finds comments for a given
submissions_id
Parameters
----------
reddit: praw.Reddit
A PRAW Reddit API instance
submission_id: int
The id of the subreddit submission whose comments we want
Returns
-------
submission.comments: praw.models.comment_forest.CommentForest
A Reddit CommentForest that can be iterated through
"""
# get submission from praw via submission_id from psaw
submission = reddit.submission(id=submission_id)
# should load all folded comments
return submission.comments
def praw_timer(reddit):
"""
A timer that counts down remaining PRAW Api requests and
shortly halts and retries when there are less than 10.
Parameters
----------
reddit: praw.Reddit
A PRAW Reddit API instance
"""
if reddit.auth.limits['remaining'] < 10:
print("Waiting for PRAW API limit to reset...", end="\r")
time.sleep(4)
def init_db(db_name):
"""
Creates a SQLite DB connection to put scraped content into
Returns
-------
conn: SQLite DB instance
"""
conn = db.create_connection(db_name)
db.create_table_submissions(conn)
db.create_table_comments(conn)
print("DB Init Success")
return conn
def clear_screen():
"""
Clears the terminal screen depending on OS detected
"""
os.system('cls' if os.name == 'nt' else 'clear')
def get_args():
"""
Retrieve CLI arguments
"""
return getopt.getopt(sys.argv[1:], 'vc')
def iterate_comments(state, submission, conn):
"""
Fetches all comments for a submission via PRAW and inserts
them into the comments table, updating scrape state as it goes.
"""
comments = generate_comments(state.reddit, submission.id)
praw_timer(state.reddit)
for j in list(comments):
try:
comment = (str(j.author), str(utc_to_local(j.created_utc)),
str(j.id), str(j.body), str(submission.id))
db.insert_comment(conn, comment)
except AttributeError as err:
print(err)
continue
state.update_praw()
state.inc_comment()
#print("PRAW requests remaining: ", end="")
#print(reddit.auth.limits['remaining'], end="\r")
def update_display(state_obj):
"""
Prints a one-line progress display: remaining PRAW requests,
submission/comment counters, and the current DB file size in MB.
"""
filesize = 0
if os.path.isfile(state_obj.db_name):
filesize = (int(os.stat(state_obj.db_name).st_size)) / 1048576
output = ' PRAW Requests Remaining: {} '\
'|Submission Request #{} '\
'|Comment Request #{} ' \
'|Filesize {} MB'
print(output.format(state_obj.praw_requests,
state_obj.submission_idx, state_obj.comment_idx,
filesize), end=" \r", flush=True)
def usage():
"""
Prints usage instructions (OS-specific invocation) and exits.
"""
if os.name == 'nt':
output = """Usage: python3 scrape.py [subreddit] [output file]
Options: -v: verbose logging
-c: comments on"""
else:
output = """Usage: ./scrape.py [subreddit] [output file]
Options: -v: verbose logging
-c: comments on"""
print(output)
exit()
class StateObj:
"""
Mutable scrape state: the PRAW instance, submission/comment counters,
remaining API requests, and the output DB filename.
"""
reddit = []
submission_idx = 0
comment_idx = 0
praw_requests = 0
corpus_size = 0
db_name = "./corpus.db"
def __init__(self):
self.submission_idx = 0
self.comment_idx = 0
self.praw_requests = 0
def init_reddit(self):
self.reddit = praw.Reddit("bot1")
def inc_sub(self):
# increment idx
self.submission_idx += 1
def reset_comment(self):
self.comment_idx = 0
def inc_comment(self):
self.comment_idx += 1
def update_praw(self):
self.praw_requests = self.reddit.auth.limits['remaining']
def main():
"""
Parses CLI arguments, initializes the DB (and PRAW when comments
are requested), then scrapes submissions month by month.
"""
if len(sys.argv) < 2:  # sys.argv[0] is the script name, so require at least a subreddit argument
usage()
opts, args = get_args()
subreddit = sys.argv[1]
comment_flag = False
for arg in args:
if arg == '-v':
print("Verbose logging")
init_log()
if arg == '-c':
comment_flag = True
print("Comments on ")
if arg in [('-h'), ('-u')]:
usage()
exit()
state = StateObj()
if comment_flag:
state.init_reddit()
for arg in args:
print(arg)
if ".db" in arg:
state.db_name = arg
conn = init_db(state.db_name)
for month in range(1, 2):
gen = generate_submissions_psaw(month, subreddit)
for i in list(gen):
state.inc_sub()
update_display(state)
# only get submission that are self posts
if hasattr(i, 'selftext'):
if hasattr(i, 'author'):
submission = (i.author, utc_to_local(i.created_utc), i.title,
i.selftext, i.id, i.is_self, utc_to_local(i.retrieved_on),
i.num_comments, i.permalink)
else:
submission = ('deleted', utc_to_local(i.created_utc), i.title,
i.selftext, i.id, i.is_self, utc_to_local(i.retrieved_on),
i.num_comments, i.permalink)
db.insert_submission(conn, submission)
if comment_flag:
iterate_comments(state, i, conn)
if __name__ == "__main__":
main()
874f63a84b2700710fd965dced8d15b1927c1840 | 503 | py | Python | enumerate_and_map_and_reduce.py | aslange/Python_Basics | 53856d53b970026da7aa26b8bc468c03352d97b7 | [
"Apache-2.0"
] | null | null | null | enumerate_and_map_and_reduce.py | aslange/Python_Basics | 53856d53b970026da7aa26b8bc468c03352d97b7 | [
"Apache-2.0"
] | null | null | null | enumerate_and_map_and_reduce.py | aslange/Python_Basics | 53856d53b970026da7aa26b8bc468c03352d97b7 | [
"Apache-2.0"
] | null | null | null | # função enumerate
lista = ['abacate', 'bola', 'cachorro'] # lista
for i in range(len(lista)):
print(i, lista[i])
for i, nome in enumerate(lista):
print(i, nome)
# map function
def dobro(x):
return x * 2
valor = [1, 2, 3, 4, 5]
print(dobro(valor))  # on a list, * 2 repeats it: [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
valor_dobrado = map(dobro, valor)
valor_dobrado = list(valor_dobrado)
print(valor_dobrado)
# reduce function
from functools import reduce
def soma(x, y):
return x + y
lista = [1, 2, 3, 4, 5]
soma = reduce(soma, lista)  # note: this rebinds 'soma' from the function to its result
print(soma)  # 15
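The same doubling and summing can also be written with lambdas, which is how map and reduce are often used in practice; a small standalone sketch:

```python
from functools import reduce

valores = [1, 2, 3, 4, 5]
dobrados = list(map(lambda x: x * 2, valores))  # same result as the dobro() version
total = reduce(lambda x, y: x + y, valores)     # same result as the soma() version
print(dobrados, total)  # [2, 4, 6, 8, 10] 15
```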
87599bd37b5c1ce4791f85dbb86aca71c590ae91 | 6,095 | py | Python | src/cascade_at/settings/base_case.py | ihmeuw/cascade-at | a5b1b5da1698163fd3bbafc6288968dd9c398096 | [
"MIT"
] | 1 | 2019-10-14T23:18:04.000Z | 2019-10-14T23:18:04.000Z | src/cascade_at/settings/base_case.py | ihmeuw/cascade | a5b1b5da1698163fd3bbafc6288968dd9c398096 | [
"MIT"
] | 35 | 2018-07-17T18:37:33.000Z | 2020-03-06T13:31:35.000Z | src/cascade_at/settings/base_case.py | ihmeuw/cascade | a5b1b5da1698163fd3bbafc6288968dd9c398096 | [
"MIT"
] | 4 | 2018-07-13T00:01:35.000Z | 2019-09-02T23:56:11.000Z | BASE_CASE = {
"model": {
"random_seed": 495279142,
"default_age_grid": "0 0.01917808 0.07671233 1 5 10 20 30 40 50 60 70 80 90 100",
"default_time_grid": "1990 1995 2000 2005 2010 2015 2016",
"add_calc_emr": "from_both",
"birth_prev": 0,
"ode_step_size": 5,
"minimum_meas_cv": 0.2,
"rate_case": "iota_pos_rho_zero",
"data_density": "log_gaussian",
"constrain_omega": 1,
"modelable_entity_id": 2005,
"decomp_step_id": 3,
"research_area": 2,
"drill": "drill",
"drill_location_start": 70,
"bundle_id": 173,
"crosswalk_version_id": 5699,
"split_sex": "most_detailed",
"add_csmr_cause": 587,
"drill_sex": 2,
"model_version_id": 472515,
"title": "test diabetes australasia marlena -- 2",
"relabel_incidence": 2,
"description": "<p>diabetes<\/p>",
"addl_ode_stpes": "0.01917808 0.07671233 1.0",
"zero_sum_random": [
"iota"
],
"bound_frac_fixed": 1.0e-8,
"drill_location_end": [72],
"quasi_fixed": 0
},
"max_num_iter": {
"fixed": 200,
"random": 100
},
"print_level": {
"fixed": 5,
"random": 0
},
"accept_after_max_steps": {
"fixed": 5,
"random": 5
},
"students_dof": {
"priors": 5,
"data": 5
},
"log_students_dof": {
"priors": 5,
"data": 5
},
"eta": {
"priors": 1.0e-5,
"data": 1.0e-5
},
"config_version": "mnorwood",
"rate": [
{
"age_time_specific": 1,
"default": {
"value": {
"density": "gaussian",
"min": 1.0e-6,
"mean": 0.00015,
"max": 0.01,
"std": 1.5,
"eta": 1.0e-6
},
"dage": {
"density": "gaussian",
"min": -1,
"mean": 0,
"max": 1,
"std": 0.01
},
"dtime": {
"density": "gaussian",
"min": -1,
"mean": 0,
"max": 1,
"std": 0.01
}
},
"rate": "iota",
"age_grid": "0 5 10 50 100"
},
{
"age_time_specific": 1,
"default": {
"value": {
"density": "gaussian",
"min": 1.0e-6,
"mean": 0.0004,
"max": 0.01,
"std": 0.2
},
"dage": {
"density": "gaussian",
"min": -1,
"mean": 0,
"max": 1,
"std": 0.01
},
"dtime": {
"density": "gaussian",
"min": -1,
"mean": 0,
"max": 1,
"std": 0.01
}
},
"rate": "chi"
},
{
"age_time_specific": 0,
"default": {
"value": {
"density": "log_gaussian",
"min": 0,
"mean": 0.1,
"max": 0.2,
"std": 1,
"eta": 1.0e-6
},
"dage": {
"density": "uniform",
"min": -1,
"mean": 0,
"max": 1
},
"dtime": {
"density": "uniform",
"min": -1,
"mean": 0,
"max": 1
}
},
"rate": "pini"
}
],
"random_effect": [
{
"age_time_specific": 0,
"default": {
"value": {
"density": "gaussian",
"mean": 0,
"std": 1
},
"dage": {
"mean": 0,
"std": 1,
"density": "uniform"
},
"dtime": {
"mean": 0,
"std": 1,
"density": "uniform"
}
},
"rate": "iota"
}
],
"study_covariate": [
{
"age_time_specific": 0,
"mulcov_type": "rate_value",
"default": {
"value": {
"density": "uniform",
"min": -1,
"mean": 0,
"max": 1
},
"dage": {
"density": "uniform",
"min": -1,
"mean": 0,
"max": 1
},
"dtime": {
"density": "uniform",
"min": -1,
"mean": 0,
"max": 1
}
},
"study_covariate_id": 0,
"transformation": 0,
"measure_id": 41
}
],
"country_covariate": [
{
"age_time_specific": 0,
"mulcov_type": "rate_value",
"measure_id": 41,
"country_covariate_id": 28,
"transformation": 0,
"default": {
"value": {
"density": "uniform",
"min": -1,
"mean": 0,
"max": 1,
"eta": 1.0e-5
}
}
}
],
"gbd_round_id": 6,
"csmr_cod_output_version_id": 84,
"csmr_mortality_output_version_id": 8003,
"location_set_version_id": 684,
"tolerance": {
"fixed": 1.0e-6,
"random": 1.0e-6
}
}
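Consumers of a settings blob shaped like BASE_CASE typically index the rate blocks by name before building priors. The sketch below redefines a tiny excerpt of the dictionary so it runs standalone; the access pattern, not the exact keys, is the point.

```python
# A small excerpt of the settings above, redefined here so the sketch is self-contained.
settings = {
    "model": {"ode_step_size": 5, "zero_sum_random": ["iota"]},
    "rate": [
        {"rate": "iota", "default": {"value": {"density": "gaussian", "mean": 0.00015}}},
        {"rate": "chi", "default": {"value": {"density": "gaussian", "mean": 0.0004}}},
    ],
}

# Index rate blocks by name, as a consumer of this config might.
priors_by_rate = {r["rate"]: r["default"]["value"] for r in settings["rate"]}
print(priors_by_rate["iota"]["mean"])  # 0.00015
```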
875e13669b90d566535e678aeb9626f67f4c6d30 | 3,101 | py | Python | PyBlokusTools/pyblokustools/testserver/Message.py | HenrikThoroe/SWC-2021 | 8e7eee25e3a6fda7e863591b05fa161d8a2ebc78 | [
"BSD-2-Clause",
"MIT"
] | null | null | null | PyBlokusTools/pyblokustools/testserver/Message.py | HenrikThoroe/SWC-2021 | 8e7eee25e3a6fda7e863591b05fa161d8a2ebc78 | [
"BSD-2-Clause",
"MIT"
] | null | null | null | PyBlokusTools/pyblokustools/testserver/Message.py | HenrikThoroe/SWC-2021 | 8e7eee25e3a6fda7e863591b05fa161d8a2ebc78 | [
"BSD-2-Clause",
"MIT"
] | null | null | null | from typing import Any, Final, Tuple, List
from enum import IntEnum
from .MsgType import MsgType
class Message():
"""Base Message
Arguments:
type {MsgType} -- Type of message
payload {Any} -- Payload used for later handling
"""
def __init__(self, type: MsgType, payload: Any) -> None:
self.type: Final[MsgType] = type
self.payload: Final[Any] = payload
def __repr__(self) -> str:
"""Represent object instance
Returns:
str -- Representation of instance
"""
return f"Message({self.type}, {self.payload})"
class MementoMsg():
"""Message that holds a Gamestate
Arguments:
currentTurn {int} -- Current turn
"""
def __init__(self, currentTurn: int) -> None:
self.currentTurn: Final[int] = currentTurn
def __repr__(self) -> str:
"""Represent object instance
Returns:
str -- Representation of instance
"""
return f"MementoMsg({self.currentTurn})"
class ResultCause(IntEnum):
"""An enum of different game end causes.
"""
REGULAR = 0
LEFT = 1
RULE_VIOLATION = 2
SOFT_TIMEOUT = 3
HARD_TIMEOUT = 4
class ResultEnd(IntEnum):
"""An enum of different game endings.
"""
LOSE = 0
DRAW = 1
WIN = 2
class ResultMsg():
"""Message that holds a game's result
Arguments:
score {List[int, int]} -- GameScore both players reached
end {List[ResultEnd, ResultEnd]} -- TournamentPoints both players earned
cause {List[ResultCause, ResultCause]} -- Game-ending causes for both players
"""
def __init__(self, score: List[int], end: List[ResultEnd], cause: List[ResultCause]) -> None:
self.score: Final[List[int]] = score
self.end: Final[List[ResultEnd]] = end
self.cause: Final[List[ResultCause]] = cause
def swap(self) -> None:
"""Swap Player1 & Player2 in place
"""
self.score.reverse()
self.end.reverse()
self.cause.reverse()
def __repr__(self) -> str:
"""Represent object instance
Returns:
str -- Representation of instance
"""
return f"ResultMsg(({self.score[0]}, {self.score[1]}), ({self.end[0]}, {self.end[1]}), ({self.cause[0]}, {self.cause[1]}))"
class PreparedMsg():
"""Message that holds info on a prepared game
Arguments:
roomId {str} -- RoomId of newly prepared game
reservations {Tuple[str]} -- Reservation codes for clients associated with the game
"""
def __init__(self, roomId: str, reservations: Tuple[str, str]) -> None:
self.roomId: Final[str] = roomId
self.reservations: Final[Tuple[str, str]] = reservations
def __repr__(self) -> str:
"""Represent object instance
Returns:
str -- Representation of instance
"""
return f"PreparedMsg({self.roomId}, ({self.reservations[0]}, {self.reservations[1]}))"
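ResultMsg.swap above simply reverses each per-player list in place. The standalone sketch below mirrors that behaviour with a stripped-down stand-in class (names and scores are made up):

```python
class MiniResultMsg:
    """Stripped-down stand-in for ResultMsg, just enough to show swap()."""
    def __init__(self, score, end, cause):
        self.score, self.end, self.cause = score, end, cause

    def swap(self):
        # Reverse every per-player list in place, exactly like ResultMsg.swap.
        self.score.reverse()
        self.end.reverse()
        self.cause.reverse()

msg = MiniResultMsg([17, 4], [2, 0], [0, 0])  # player1 won 17:4
msg.swap()
print(msg.score, msg.end)  # [4, 17] [0, 2]
```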
875f3dc2268c5b69947a406b55babc4f4af29f54 | 2,984 | py | Python | celo_sdk/tests/stable_token_tests.py | rcroessmann/celo-sdk-py | 8826adaa6bbcb53374e7c26f0638a7fc973a9dd9 | [
"Apache-2.0"
] | 7 | 2021-02-09T20:44:41.000Z | 2022-03-30T10:56:06.000Z | celo_sdk/tests/stable_token_tests.py | rcroessmann/celo-sdk-py | 8826adaa6bbcb53374e7c26f0638a7fc973a9dd9 | [
"Apache-2.0"
] | 4 | 2020-11-04T07:39:10.000Z | 2022-02-19T00:06:46.000Z | celo_sdk/tests/stable_token_tests.py | rcroessmann/celo-sdk-py | 8826adaa6bbcb53374e7c26f0638a7fc973a9dd9 | [
"Apache-2.0"
] | 8 | 2020-11-03T14:45:26.000Z | 2022-02-23T12:41:05.000Z | import time
import unittest
from web3 import Web3
from celo_sdk.kit import Kit
from celo_sdk.tests import test_data
class TestStableTokenWrapper(unittest.TestCase):
@classmethod
def setUpClass(self):
self.kit = Kit('http://localhost:8544')
self.stable_token_wrapper = self.kit.base_wrapper.create_and_get_contract_by_name(
'StableToken')
self.kit.wallet.sign_with_provider = True
self.accounts = self.kit.w3.eth.accounts
for _, v in test_data.deriv_pks.items():
self.kit.wallet_add_new_key = v
self.kit.w3.eth.defaultAccount = self.accounts[0]
self.kit.wallet_change_account = self.accounts[0]
def test_name(self):
name = self.stable_token_wrapper.name()
self.assertEqual(name, 'Celo Dollar')
def test_symbol(self):
symbol = self.stable_token_wrapper.symbol()
self.assertEqual(symbol, 'cUSD')
def test_decimals(self):
decimals = self.stable_token_wrapper.decimals()
self.assertEqual(decimals, 18)
def test_total_supply(self):
total_supply = self.stable_token_wrapper.total_supply()
self.assertEqual(type(total_supply), int)
def test_balance_of(self):
balance = self.stable_token_wrapper.balance_of(self.accounts[0])
self.assertEqual(type(balance), int)
def test_owner(self):
owner = self.stable_token_wrapper.owner()
self.assertEqual(self.kit.w3.isAddress(owner), True)
def test_get_inflation_parameters(self):
infl_params = self.stable_token_wrapper.get_inflation_parameters()
self.assertEqual(type(infl_params), dict)
def test_transfer(self):
initial_balance_2 = self.stable_token_wrapper.balance_of(
self.accounts[1])
tx_hash = self.stable_token_wrapper.transfer(
self.accounts[1], self.kit.w3.toWei(1, 'ether'))
self.assertEqual(type(tx_hash), str)
time.sleep(5) # wait until transaction finalized
final_balance_2 = self.stable_token_wrapper.balance_of(
self.accounts[1])
self.assertEqual(final_balance_2, initial_balance_2 +
self.kit.w3.toWei(1, 'ether'))
def test_transfer_from(self):
tx_hash = self.stable_token_wrapper.increase_allowance(self.accounts[1], self.kit.w3.toWei(1, 'ether'))
self.assertEqual(type(tx_hash), str)
self.kit.w3.eth.defaultAccount = self.accounts[1]
self.kit.wallet_change_account = self.accounts[1]
initial_balance_3 = self.stable_token_wrapper.balance_of(
self.accounts[2])
tx_hash = self.stable_token_wrapper.transfer_from(self.accounts[0], self.accounts[2], self.kit.w3.toWei(1, 'ether'))
time.sleep(5)
final_balance_3 = self.stable_token_wrapper.balance_of(
self.accounts[2])
self.assertEqual(final_balance_3, initial_balance_3 + self.kit.w3.toWei(1, 'ether'))
87678f18706219a75f4b57fd79110313c3f75dec | 1,232 | py | Python | vsts/vsts/work/v4_1/models/parent_child_wIMap.py | kenkuo/azure-devops-python-api | 9e920bd25e938fa89ff7f60153e5b9e113ca839d | [
"MIT"
] | null | null | null | vsts/vsts/work/v4_1/models/parent_child_wIMap.py | kenkuo/azure-devops-python-api | 9e920bd25e938fa89ff7f60153e5b9e113ca839d | [
"MIT"
] | null | null | null | vsts/vsts/work/v4_1/models/parent_child_wIMap.py | kenkuo/azure-devops-python-api | 9e920bd25e938fa89ff7f60153e5b9e113ca839d | [
"MIT"
] | null | null | null | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from msrest.serialization import Model
class ParentChildWIMap(Model):
"""ParentChildWIMap.
:param child_work_item_ids:
:type child_work_item_ids: list of int
:param id:
:type id: int
:param title:
:type title: str
"""
_attribute_map = {
'child_work_item_ids': {'key': 'childWorkItemIds', 'type': '[int]'},
'id': {'key': 'id', 'type': 'int'},
'title': {'key': 'title', 'type': 'str'}
}
def __init__(self, child_work_item_ids=None, id=None, title=None):
super(ParentChildWIMap, self).__init__()
self.child_work_item_ids = child_work_item_ids
self.id = id
self.title = title
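The _attribute_map above tells msrest how to rename Python attribute names to their wire-format keys. The sketch below imitates that renaming with plain dictionaries; it is a simplified assumption for illustration, not msrest's actual serializer:

```python
# Same mapping as ParentChildWIMap._attribute_map, reused standalone.
attribute_map = {
    'child_work_item_ids': {'key': 'childWorkItemIds', 'type': '[int]'},
    'id': {'key': 'id', 'type': 'int'},
    'title': {'key': 'title', 'type': 'str'},
}

def serialize(obj_dict):
    # Rename Python attribute names to their wire-format keys.
    return {attribute_map[k]['key']: v for k, v in obj_dict.items() if k in attribute_map}

wire = serialize({'child_work_item_ids': [2, 3], 'id': 1, 'title': 'Epic'})
print(wire)  # {'childWorkItemIds': [2, 3], 'id': 1, 'title': 'Epic'}
```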