hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
64806f65878c18d62b19689145b457942a25bb91 | 1,991 | py | Python | server/src/weaverbird/pipeline/steps/aggregate.py | JeremyJacquemont/weaverbird | e04ab6f9c8381986ab71078e5199ece7a875e743 | [
"BSD-3-Clause"
] | 54 | 2019-11-20T15:07:39.000Z | 2022-03-24T22:13:51.000Z | server/src/weaverbird/pipeline/steps/aggregate.py | JeremyJacquemont/weaverbird | e04ab6f9c8381986ab71078e5199ece7a875e743 | [
"BSD-3-Clause"
] | 786 | 2019-10-20T11:48:37.000Z | 2022-03-23T08:58:18.000Z | server/src/weaverbird/pipeline/steps/aggregate.py | JeremyJacquemont/weaverbird | e04ab6f9c8381986ab71078e5199ece7a875e743 | [
"BSD-3-Clause"
] | 10 | 2019-11-21T10:16:16.000Z | 2022-03-21T10:34:06.000Z | from typing import List, Literal, Optional, Sequence
from pydantic import Field, root_validator, validator
from pydantic.main import BaseModel
from weaverbird.pipeline.steps.utils.base import BaseStep
from weaverbird.pipeline.steps.utils.render_variables import StepWithVariablesMixin
from weaverbird.pipeline.steps.utils.validation import validate_unique_columns
from weaverbird.pipeline.types import ColumnName, PopulatedWithFieldnames, TemplatedVariable
AggregateFn = Literal[
'avg',
'sum',
'min',
'max',
'count',
'count distinct',
'first',
'last',
'count distinct including empty',
]
class Aggregation(BaseModel):
class Config(PopulatedWithFieldnames):
...
new_columns: List[ColumnName] = Field(alias='newcolumns')
agg_function: AggregateFn = Field(alias='aggfunction')
columns: List[ColumnName]
@validator('columns', pre=True)
def validate_unique_columns(cls, value):
return validate_unique_columns(value)
@root_validator(pre=True)
def handle_legacy_syntax(cls, values):
if 'column' in values:
values['columns'] = [values.pop('column')]
if 'newcolumn' in values:
values['new_columns'] = [values.pop('newcolumn')]
return values
class AggregateStep(BaseStep):
name = Field('aggregate', const=True)
on: List[ColumnName] = []
aggregations: Sequence[Aggregation]
keep_original_granularity: Optional[bool] = Field(
default=False, alias='keepOriginalGranularity'
)
class Config(PopulatedWithFieldnames):
...
class AggregationWithVariables(Aggregation):
class Config(PopulatedWithFieldnames):
...
new_columns: List[TemplatedVariable] = Field(alias='newcolumns')
agg_function: TemplatedVariable = Field(alias='aggfunction')
columns: List[TemplatedVariable]
class AggregateStepWithVariables(AggregateStep, StepWithVariablesMixin):
aggregations: Sequence[AggregationWithVariables]
| 29.279412 | 92 | 0.721748 | 193 | 1,991 | 7.352332 | 0.414508 | 0.039464 | 0.062016 | 0.057082 | 0.224101 | 0.067653 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176796 | 1,991 | 67 | 93 | 29.716418 | 0.865772 | 0 | 0 | 0.117647 | 0 | 0 | 0.09995 | 0.011552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0 | 0.137255 | 0.019608 | 0.568627 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
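The `handle_legacy_syntax` root validator in the aggregate step above folds the deprecated singular `column` / `newcolumn` keys into their list-based replacements before the rest of validation runs. A minimal dependency-free sketch of that normalization (the dict-rewriting function is an illustration of the pattern, not the library's API):

```python
from typing import Any, Dict

def handle_legacy_syntax(values: Dict[str, Any]) -> Dict[str, Any]:
    """Rewrite legacy singular keys into their one-element list forms."""
    values = dict(values)  # work on a copy, never mutate the caller's payload
    if 'column' in values:
        values['columns'] = [values.pop('column')]
    if 'newcolumn' in values:
        values['new_columns'] = [values.pop('newcolumn')]
    return values

normalized = handle_legacy_syntax({'column': 'price', 'newcolumn': 'total'})
```

In the pydantic model this hook runs with `pre=True`, so downstream validators such as `validate_unique_columns` only ever see the list form.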
6485fec3abdcd71b449dc07dfd6e24085b4ec88d | 19,281 | py | Python | tlmshop/settings.py | LegionMarket/django-cms-base | 1b6fc3423e3d0b2165552cc980432befb496f3e0 | [
"BSD-3-Clause"
] | null | null | null | tlmshop/settings.py | LegionMarket/django-cms-base | 1b6fc3423e3d0b2165552cc980432befb496f3e0 | [
"BSD-3-Clause"
] | null | null | null | tlmshop/settings.py | LegionMarket/django-cms-base | 1b6fc3423e3d0b2165552cc980432befb496f3e0 | [
"BSD-3-Clause"
] | null | null | null | """
Django settings for this project.
Generated by 'django-admin startproject' using Django 1.10.7.
For more information on this file, see
https://docs.djangoproject.com/en/1.10/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.10/ref/settings/
"""
import os
from django.utils.translation import ugettext_lazy as _
from cmsplugin_cascade.utils import format_lazy
from django.core.urlresolvers import reverse_lazy
from decimal import Decimal
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(__file__)
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'y&+f+)tw5sqkcy$@vwh8cy%y^9lwytqtn*y=lv7f9t39b(cufx'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['*']
SITE_ID = 1
APP_LABEL = 'tlmshop'
# Enable this to additionally show the debug toolbar
INTERNAL_IPS = ['localhost', '127.0.0.1', '192.168.1.69']
# Root directory for this django project
PROJECT_ROOT = os.path.abspath(os.path.join(BASE_DIR, os.path.pardir))
# Directory where working files, such as media and databases are kept
WORK_DIR = os.environ.get('DJANGO_WORKDIR', os.path.abspath(os.path.join(PROJECT_ROOT, 'LegionMarket')))
if not os.path.exists(WORK_DIR):
os.makedirs(WORK_DIR)
# Application definition
DJANGO_APPS_JET = (
    # todo: Fix bug: jet does not allow adding a new page when "create page" is clicked on start
# 'jet_ole.dashboard',
# 'jet_ole',
'jet.dashboard',
'jet',
)
DJANGO_APPS_ADMIN_INTERFACE = (
    # todo: Fix bug: jet does not allow adding a new page when "create page" is clicked on start
'admin_interface',
'flat_responsive',
'colorfield',
)
DJANGO_APPS_MATERIAL = (
# material apps
'material',
# 'material.frontend',
'material.admin',
)
DJANGO_APPS = (
# djangocms_admin_style needs to be before django.contrib.admin!
# https://django-cms.readthedocs.org/en/develop/how_to/install.html#configuring-your-project-for-django-cms
'djangocms_admin_style',
'django.contrib.admin',
# django defaults
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.sites',
'django.contrib.staticfiles',
)
DJANGO_CMS = (
'cms',
'menus',
'treebeard',
'sekizai',
# 'reversion',
# requirements for django-filer
'filer',
'easy_thumbnails',
'easy_thumbnails.optimize',
'mptt',
# core addons
'djangocms_text_ckeditor',
'djangocms_link',
'djangocms_picture',
'djangocms_snippet',
'djangocms_style',
'djangocms_googlemap',
'djangocms_audio',
)
DJANGO_CMS_ADDONS = (
    # Cascade
'cmsplugin_cascade',
'cmsplugin_cascade.clipboard',
'cmsplugin_cascade.sharable',
'cmsplugin_cascade.extra_fields',
'cmsplugin_cascade.icon',
'cmsplugin_cascade.segmentation',
)
THIRD_PARTY_APPS = (
'embed_video',
'crispy_forms',
)
SHOP = (
'django_select2',
'cms_bootstrap3',
'adminsortable2',
'django_fsm',
'fsm_admin',
'djng',
'compressor',
'sass_processor',
'django_filters',
'post_office',
'haystack',
'shop',
'shop_stripe',
)
SHOP_TOO = (
'email_auth',
'polymorphic',
'rest_framework',
'rest_framework.authtoken',
'rest_auth',
)
LOCAL_APPS = (
'video_back',
'videojs',
# 'background',
'tlmshop',
)
DEV_APP = (
'django.contrib.flatpages',
'django.contrib.redirects',
)
INSTALLED_APPS = DJANGO_APPS_ADMIN_INTERFACE + DJANGO_APPS + SHOP_TOO + \
DJANGO_CMS + DJANGO_CMS_ADDONS + THIRD_PARTY_APPS + \
LOCAL_APPS + SHOP
#######
# DJANGO_APPS_JET + \
# DJANGO_APPS_MATERIAL + \
# DJANGO_APPS_ADMIN_INTERFACE + \
##
MIDDLEWARE_CLASSES = (
'djng.middleware.AngularUrlMiddleware',
    # it's recommended to place this as high as possible to enable apphooks
    # to reload the page without loading unnecessary middlewares
'cms.middleware.utils.ApphookReloadMiddleware',
# django defaults
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'shop.middleware.CustomerMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
# django CMS additions
'django.middleware.locale.LocaleMiddleware',
'cms.middleware.user.CurrentUserMiddleware',
'cms.middleware.page.CurrentPageMiddleware',
'cms.middleware.toolbar.ToolbarMiddleware',
'cms.middleware.language.LanguageCookieMiddleware',
)
ROOT_URLCONF = 'tlmshop.urls'
WSGI_APPLICATION = 'tlmshop.wsgi.application'
# Templates
# https://docs.djangoproject.com/en/1.8/ref/settings/#templates
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(PROJECT_ROOT, 'templates'),
],
'APP_DIRS': False,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
# django CMS additions
'cms.context_processors.cms_settings',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
# additional context processors for local development
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
# django CMS additions
'cms.context_processors.cms_settings',
'sekizai.context_processors.sekizai',
# Shop
'shop.context_processors.customer',
'shop.context_processors.ng_model_options',
'shop_stripe.context_processors.public_keys',
],
'loaders': [
'django.template.loaders.filesystem.Loader',
# django CMS additions
'django.template.loaders.eggs.Loader',
],
'debug': DEBUG,
},
},
]
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
# we use os.getenv to be able to override the default database settings for the docker setup
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(WORK_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en'
TIME_ZONE = 'America/New_York'
USE_I18N = False
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_ROOT = os.path.join(WORK_DIR, 'static')
STATIC_URL = '/static/'
# print(STATIC_ROOT)
# we need to add additional configuration for filer etc.
MEDIA_ROOT = os.path.join(WORK_DIR, 'media')
MEDIA_URL = '/media/'
# Checking to see if directories are there
if not os.path.exists(STATIC_ROOT):
os.makedirs(STATIC_ROOT)
if not os.path.exists(MEDIA_ROOT):
os.makedirs(MEDIA_ROOT)
STATICFILES_FINDERS = [
# 'tlmshop.finders.FileSystemFinder', # or
# 'tlmshop.finders.AppDirectoriesFinder', # or
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
'sass_processor.finders.CssFinder',
'compressor.finders.CompressorFinder',
]
# we need to add additional configuration for filer etc.
NODE = os.path.join(PROJECT_ROOT, 'node_modules')
if not os.path.exists(NODE):
os.makedirs(NODE)
STATICFILES_DIRS = [
('static', os.path.join(PROJECT_ROOT, 'static')),
('node_modules', os.path.join(PROJECT_ROOT, 'node_modules')),
('templates', os.path.join(PROJECT_ROOT, 'templates')),
]
# print(STATICFILES_DIRS)
NODE_MODULES_URL = STATIC_URL + 'node_modules/'
# print(STATICFILES_DIRS)
SASS_PROCESSOR_INCLUDE_DIRS = [
os.path.join(PROJECT_ROOT, 'node_modules'),
]
# print(SASS_PROCESSOR_INCLUDE_DIRS)
COERCE_DECIMAL_TO_STRING = True
FSM_ADMIN_FORCE_PERMIT = True
ROBOTS_META_TAGS = ('noindex', 'nofollow')
# django CMS settings
# http://docs.django-cms.org/en/latest/
# #########################################
# Static Templates Files
CMS_PERMISSION = True
CMS_PLACEHOLDER_CONF = {
}
CMS_PAGE_WIZARD_CONTENT_PLACEHOLDER = 'content'
# django CMS internationalization
# http://docs.django-cms.org/en/latest/topics/i18n.html
# LANGUAGES = (
# ('en', _('English')),
# )
# django CMS templates
# http://docs.django-cms.org/en/latest/how_to/templates.html
CMS_TEMPLATES = (
('content.html', 'Content'),
('t458_lavish/index.html', 'TLM-Lavish')
)
# CUSTOM
# Filer
THUMBNAIL_PRESERVE_EXTENSIONS = True
THUMBNAIL_PROCESSORS = (
'easy_thumbnails.processors.colorspace',
'easy_thumbnails.processors.autocrop',
'filer.thumbnail_processors.scale_and_crop_with_subject_location',
'easy_thumbnails.processors.filters',
)
# CKEditor
# DOCS: https://github.com/divio/djangocms-text-ckeditor
# CKEDITOR_SETTINGS = {
# 'stylesSet': 'default:/static/js/addons/ckeditor.wysiwyg.js',
# 'contentsCss': ['/static/css/base.css'],
# }
CKEDITOR_SETTINGS = {
'language': '{{ language }}',
'skin': 'moono',
'toolbar': 'CMS',
'toolbar_HTMLField': [
['Undo', 'Redo'],
['cmsplugins', '-', 'ShowBlocks'],
['Format', 'Styles'],
['TextColor', 'BGColor', '-', 'PasteText', 'PasteFromWord'],
['Maximize', ''],
'/',
['Bold', 'Italic', 'Underline', '-', 'Subscript', 'Superscript', '-', 'RemoveFormat'],
['JustifyLeft', 'JustifyCenter', 'JustifyRight'],
['HorizontalRule'],
['NumberedList', 'BulletedList', '-', 'Outdent', 'Indent', '-', 'Table'],
['Source']
],
'stylesSet': format_lazy('default:{}', reverse_lazy('admin:cascade_texticon_wysiwig_config')),
}
CKEDITOR_SETTINGS_CAPTION = {
'language': '{{ language }}',
'skin': 'moono',
'height': 70,
'toolbar_HTMLField': [
['Undo', 'Redo'],
['Format', 'Styles'],
['Bold', 'Italic', 'Underline', '-', 'Subscript', 'Superscript', '-', 'RemoveFormat'],
['Source']
],
}
CKEDITOR_SETTINGS_DESCRIPTION = {
'language': '{{ language }}',
'skin': 'moono',
'height': 250,
'toolbar_HTMLField': [
['Undo', 'Redo'],
['cmsplugins', '-', 'ShowBlocks'],
['Format', 'Styles'],
['TextColor', 'BGColor', '-', 'PasteText', 'PasteFromWord'],
['Maximize', ''],
'/',
['Bold', 'Italic', 'Underline', '-', 'Subscript', 'Superscript', '-', 'RemoveFormat'],
['JustifyLeft', 'JustifyCenter', 'JustifyRight'],
['HorizontalRule'],
['NumberedList', 'BulletedList', '-', 'Outdent', 'Indent', '-', 'Table'],
['Source']
],
}
SELECT2_CSS = 'node_modules/select2/dist/css/select2.min.css'
SELECT2_JS = 'node_modules/select2/dist/js/select2.min.js'
# Embed Video
APPEND_SLASH = True
###################################################################################
#
# Shop Settings
#
###################################################################################
SHOP_APP_LABEL = 'tlmshop'
AUTH_USER_MODEL = 'email_auth.User'
SHOP_TYPE = 'smartcard'
AUTHENTICATION_BACKENDS = [
'django.contrib.auth.backends.ModelBackend',
'allauth.account.auth_backends.AuthenticationBackend',
]
MIGRATION_MODULES = {
'tlmshop': 'tlmshop.migrations.{}'.format(SHOP_TYPE)
}
############################################
# settings for sending mail
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'no-reply@example.com'
EMAIL_HOST_PASSWORD = 'smtp-secret-password'
EMAIL_USE_TLS = True
DEFAULT_FROM_EMAIL = 'My Shop <no-reply@example.com>'
EMAIL_REPLY_TO = 'info@example.com'
EMAIL_BACKEND = 'post_office.EmailBackend'
############################################
# settings for third party Django apps
SERIALIZATION_MODULES = {'json': str('shop.money.serializers')}
############################################
# settings for django-restframework and plugins
REST_FRAMEWORK = {
'DEFAULT_RENDERER_CLASSES': (
'shop.rest.money.JSONRenderer',
'rest_framework.renderers.BrowsableAPIRenderer', # can be disabled for production environments
),
# 'DEFAULT_AUTHENTICATION_CLASSES': (
# 'rest_framework.authentication.TokenAuthentication',
# ),
'DEFAULT_FILTER_BACKENDS': ('rest_framework.filters.DjangoFilterBackend',),
'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
'PAGE_SIZE': 12,
}
############################################
# settings for storing session data
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
SESSION_SAVE_EVERY_REQUEST = True
###########################################################
# Files
SHOP_TYPE = 'smartcard'
# 'commodity', 'i18n_commodity', 'smartcard', 'i18n_smartcard', 'i18n_polymorphic', 'polymorphic'
##############################################################
#
#
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'filters': {'require_debug_false': {'()': 'django.utils.log.RequireDebugFalse'}},
'formatters': {
'simple': {
'format': '[%(asctime)s %(module)s] %(levelname)s: %(message)s'
},
},
'handlers': {
'console': {
'level': 'INFO',
'class': 'logging.StreamHandler',
'formatter': 'simple',
},
},
'loggers': {
'django': {
'handlers': ['console'],
'level': 'INFO',
'propagate': True,
},
'post_office': {
'handlers': ['console'],
'level': 'WARNING',
'propagate': True,
},
},
}
SILENCED_SYSTEM_CHECKS = ['auth.W004']
FIXTURE_DIRS = [os.path.join(WORK_DIR, SHOP_TYPE, 'fixtures')]
############################################
# settings for django-cms and its plugins
CMS_CACHE_DURATIONS = {
'content': 600,
'menus': 3600,
'permissions': 86400,
}
cascade_workarea_glossary = {
'breakpoints': ['xs', 'sm', 'md', 'lg'],
'container_max_widths': {'xs': 750, 'sm': 750, 'md': 970, 'lg': 1170},
'fluid': True,
'media_queries': {
'xs': ['(max-width: 768px)'],
'sm': ['(min-width: 768px)', '(max-width: 992px)'],
'md': ['(min-width: 992px)', '(max-width: 1200px)'],
'lg': ['(min-width: 1200px)'],
},
}
CMSPLUGIN_CASCADE_PLUGINS = [
'cmsplugin_cascade.segmentation',
'cmsplugin_cascade.generic',
'cmsplugin_cascade.icon',
'cmsplugin_cascade.link',
'shop.cascade',
'cmsplugin_cascade.bootstrap3',
]
CMSPLUGIN_CASCADE = {
'link_plugin_classes': [
'shop.cascade.plugin_base.CatalogLinkPluginBase',
'cmsplugin_cascade.link.plugin_base.LinkElementMixin',
'shop.cascade.plugin_base.CatalogLinkForm',
],
'alien_plugins': ['TextPlugin', 'TextLinkPlugin', 'AcceptConditionPlugin'],
'bootstrap3': {
'template_basedir': 'angular-ui',
},
'plugins_with_extra_render_templates': {
'CustomSnippetPlugin': [
('shop/catalog/product-heading.html', _("Product Heading")),
('tlmshop/catalog/manufacturer-filter.html', _("Manufacturer Filter")),
],
},
'plugins_with_sharables': {
'BootstrapImagePlugin': ['image_shapes', 'image_width_responsive', 'image_width_fixed',
'image_height', 'resize_options'],
'BootstrapPicturePlugin': ['image_shapes', 'responsive_heights', 'image_size', 'resize_options'],
},
'bookmark_prefix': '/',
'segmentation_mixins': [
('shop.cascade.segmentation.EmulateCustomerModelMixin', 'shop.cascade.segmentation.EmulateCustomerAdminMixin'),
],
'allow_plugin_hiding': True,
}
#############################################
# settings for full index text search (Haystack)
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
'URL': 'http://localhost:9200/',
'INDEX_NAME': 'tlmshop-{}-en'.format(SHOP_TYPE),
},
}
if USE_I18N:
HAYSTACK_CONNECTIONS['de'] = {
'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
'URL': 'http://localhost:9200/',
'INDEX_NAME': 'tlmshop-{}-de'.format(SHOP_TYPE),
}
HAYSTACK_ROUTERS = [
'shop.search.routers.LanguageRouter',
]
#####################################################################################
############################################
# settings for django-shop and its plugins
SHOP_VALUE_ADDED_TAX = Decimal(19)
SHOP_DEFAULT_CURRENCY = 'USD'
SHOP_PRODUCT_SUMMARY_SERIALIZER = 'tlmshop.serializers.ProductSummarySerializer'
if SHOP_TYPE in ['i18n_polymorphic', 'polymorphic']:
SHOP_CART_MODIFIERS = ['tlmshop.polymorphic_modifiers.tlmshopCartModifier']
else:
SHOP_CART_MODIFIERS = ['shop.modifiers.defaults.DefaultCartModifier']
SHOP_CART_MODIFIERS.extend([
'shop.modifiers.taxes.CartExcludedTaxModifier',
'tlmshop.modifiers.PostalShippingModifier',
'tlmshop.modifiers.CustomerPickupModifier',
'shop.modifiers.defaults.PayInAdvanceModifier',
])
if 'shop_stripe' in INSTALLED_APPS:
SHOP_CART_MODIFIERS.append('tlmshop.modifiers.StripePaymentModifier')
SHOP_EDITCART_NG_MODEL_OPTIONS = "{updateOn: 'default blur', debounce: {'default': 2500, 'blur': 0}}"
SHOP_ORDER_WORKFLOWS = [
'shop.payment.defaults.PayInAdvanceWorkflowMixin',
'shop.payment.defaults.CancelOrderWorkflowMixin',
'shop_stripe.payment.OrderWorkflowMixin',
]
if SHOP_TYPE in ['i18n_polymorphic', 'polymorphic']:
SHOP_ORDER_WORKFLOWS.append('shop.shipping.delivery.PartialDeliveryWorkflowMixin')
else:
SHOP_ORDER_WORKFLOWS.append('shop.shipping.defaults.CommissionGoodsWorkflowMixin')
SHOP_STRIPE = {
'PUBKEY': 'pk_test_HlEp5oZyPonE21svenqowhXp',
'APIKEY': 'sk_test_xUdHLeFasmOUDvmke4DHGRDP',
'PURCHASE_DESCRIPTION': _("Thanks for purchasing at tlmshop"),
}
try:
from .private_settings import * # NOQA
except ImportError:
pass
| 29.213636 | 119 | 0.64478 | 1,931 | 19,281 | 6.239254 | 0.323666 | 0.024817 | 0.01079 | 0.0166 | 0.196298 | 0.16841 | 0.130312 | 0.112965 | 0.092131 | 0.071215 | 0 | 0.010386 | 0.181007 | 19,281 | 659 | 120 | 29.257967 | 0.752581 | 0.186246 | 0 | 0.155963 | 1 | 0 | 0.495657 | 0.299508 | 0 | 0 | 0 | 0.001517 | 0 | 1 | 0 | false | 0.016055 | 0.016055 | 0 | 0.016055 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
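The `LOGGING` block in the settings above is plain `dictConfig` schema, so the same structure works outside Django as well — Django simply passes `settings.LOGGING` to `logging.config.dictConfig`. A stripped-down sketch (the `demo` logger name is made up for illustration):

```python
import logging
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '[%(asctime)s %(module)s] %(levelname)s: %(message)s'},
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
    },
    'loggers': {
        'demo': {'handlers': ['console'], 'level': 'INFO', 'propagate': False},
    },
}

logging.config.dictConfig(LOGGING)  # the same call Django makes on startup
logging.getLogger('demo').info('configured')
```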
64867655df53ae53f42648b7a2d27b6c674ce9c1 | 272 | py | Python | examples/02-hello_kml.py | JMSchietekat/polycircles | 26f46bb77c234ac0aec756131f599f1651a559da | [
"MIT"
] | 9 | 2016-07-04T08:57:57.000Z | 2021-04-30T16:02:12.000Z | examples/02-hello_kml.py | JMSchietekat/polycircles | 26f46bb77c234ac0aec756131f599f1651a559da | [
"MIT"
] | 11 | 2016-06-30T19:36:24.000Z | 2021-12-04T21:20:23.000Z | examples/02-hello_kml.py | JMSchietekat/polycircles | 26f46bb77c234ac0aec756131f599f1651a559da | [
"MIT"
] | 7 | 2015-11-15T02:38:38.000Z | 2021-12-04T09:16:49.000Z | import os
import simplekml
from polycircles.polycircles import Polycircle
polycircle = Polycircle(latitude=31.611878, longitude=34.505351, radius=100)
kml = simplekml.Kml()
pol = kml.newpolygon(name="Polycircle", outerboundaryis=polycircle.to_kml())
kml.save('02.kml') | 27.2 | 77 | 0.794118 | 36 | 272 | 5.972222 | 0.638889 | 0.186047 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084337 | 0.084559 | 272 | 10 | 78 | 27.2 | 0.779116 | 0 | 0 | 0 | 0 | 0 | 0.058608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
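`Polycircle` renders the circle as a ring of latitude/longitude vertices that `to_kml()` then serializes. The geometry can be sketched with the standard library alone; this uses a small-radius flat-earth approximation, which is an assumption — the library itself works from geodesic formulas:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def circle_vertices(lat, lon, radius_m, n=36):
    """Approximate a circle around (lat, lon) as n (lat, lon) vertex pairs."""
    d = radius_m / EARTH_RADIUS_M  # angular radius in radians
    verts = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        dlat = d * math.cos(theta)
        # a degree of longitude shrinks with latitude, hence the cos correction
        dlon = d * math.sin(theta) / math.cos(math.radians(lat))
        verts.append((lat + math.degrees(dlat), lon + math.degrees(dlon)))
    return verts

ring = circle_vertices(31.611878, 34.505351, 100)
```

For a 100 m radius every vertex stays within a couple of thousandths of a degree of the centre, which is why the flat-earth shortcut is acceptable here.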
6486e28543122cc731938867a4ab44ae1ac8a42a | 5,579 | py | Python | amazon_main_xgboost.py | twankim/ensemble_amazon | 9019d8dcdfa3651b374e0216cc310255c2d660aa | [
"Apache-2.0"
] | 236 | 2016-04-08T01:49:46.000Z | 2021-08-16T21:27:34.000Z | amazon_main_xgboost.py | twankim/ensemble_amazon | 9019d8dcdfa3651b374e0216cc310255c2d660aa | [
"Apache-2.0"
] | 1 | 2017-07-09T10:35:01.000Z | 2017-07-09T10:55:19.000Z | amazon_main_xgboost.py | kaz-Anova/ensemble_amazon | 9019d8dcdfa3651b374e0216cc310255c2d660aa | [
"Apache-2.0"
] | 87 | 2016-04-08T05:13:44.000Z | 2022-02-02T14:46:51.000Z | """ Amazon Access Challenge Code for ensemble
Marios Michaildis script for Amazon .
xgboost on input data
based on Paul Duan's Script.
"""
from __future__ import division
import numpy as np
from sklearn import preprocessing
from sklearn.metrics import roc_auc_score
import XGBoostClassifier as xg
from sklearn.cross_validation import StratifiedKFold
SEED = 42 # always use a seed for randomized procedures
def load_data(filename, use_labels=True):
"""
Load data from CSV files and return them as numpy arrays
The use_labels parameter indicates whether one should
read the first column (containing class labels). If false,
return all 0s.
"""
# load column 1 to 8 (ignore last one)
data = np.loadtxt(open( filename), delimiter=',',
usecols=range(1, 9), skiprows=1)
if use_labels:
labels = np.loadtxt(open( filename), delimiter=',',
usecols=[0], skiprows=1)
else:
labels = np.zeros(data.shape[0])
return labels, data
def save_results(predictions, filename):
"""Given a vector of predictions, save results in CSV format."""
with open(filename, 'w') as f:
f.write("id,ACTION\n")
for i, pred in enumerate(predictions):
f.write("%d,%f\n" % (i + 1, pred))
def bagged_set(X_t,y_c,model, seed, estimators, xt, update_seed=True):
# create array object to hold predictions
baggedpred=[ 0.0 for d in range(0, (xt.shape[0]))]
#loop for as many times as we want bags
for n in range (0, estimators):
        #shuffle first, aids in increasing variance and forces different results
#X_t,y_c=shuffle(Xs,ys, random_state=seed+n)
if update_seed: # update seed if requested, to give a slightly different model
model.set_params(random_state=seed + n)
        model.fit(X_t,y_c) # fit model
preds=model.predict_proba(xt)[:,1] # predict probabilities
# update bag's array
for j in range (0, (xt.shape[0])):
baggedpred[j]+=preds[j]
# divide with number of bags to create an average estimate
for j in range (0, len(baggedpred)):
baggedpred[j]/=float(estimators)
# return probabilities
return np.array(baggedpred)
# using numpy to print results
def printfilcsve(X, filename):
np.savetxt(filename,X)
def main():
"""
Fit models and make predictions.
We'll use one-hot encoding to transform our categorical features
into binary features.
y and X will be numpy array objects.
"""
    filename="main_xgboost" # name prefix for output files
#model = linear_model.LogisticRegression(C=3) # the classifier we'll use
model=xg.XGBoostClassifier(num_round=1000 ,nthread=25, eta=0.12, gamma=0.01,max_depth=12, min_child_weight=0.01, subsample=0.6,
colsample_bytree=0.7,objective='binary:logistic',seed=1)
# === load data in memory === #
print "loading data"
y, X = load_data('train.csv')
y_test, X_test = load_data('test.csv', use_labels=False)
# === one-hot encoding === #
# we want to encode the category IDs encountered both in
# the training and the test set, so we fit the encoder on both
encoder = preprocessing.OneHotEncoder()
encoder.fit(np.vstack((X, X_test)))
X = encoder.transform(X) # Returns a sparse matrix (see numpy.sparse)
X_test = encoder.transform(X_test)
# if you want to create new features, you'll need to compute them
# before the encoding, and append them to your dataset after
#create arrays to hold cv an dtest predictions
train_stacker=[ 0.0 for k in range (0,(X.shape[0])) ]
# === training & metrics === #
mean_auc = 0.0
bagging=20 # number of models trained with different seeds
n = 5 # number of folds in strattified cv
kfolder=StratifiedKFold(y, n_folds= n,shuffle=True, random_state=SEED)
i=0
for train_index, test_index in kfolder: # for each train and test pair of indices in the kfolder object
# creaning and validation sets
X_train, X_cv = X[train_index], X[test_index]
y_train, y_cv = np.array(y)[train_index], np.array(y)[test_index]
#print (" train size: %d. test size: %d, cols: %d " % ((X_train.shape[0]) ,(X_cv.shape[0]) ,(X_train.shape[1]) ))
# if you want to perform feature selection / hyperparameter
# optimization, this is where you want to do it
# train model and make predictions
preds=bagged_set(X_train,y_train,model, SEED , bagging, X_cv, update_seed=True)
# compute AUC metric for this CV fold
roc_auc = roc_auc_score(y_cv, preds)
print "AUC (fold %d/%d): %f" % (i + 1, n, roc_auc)
mean_auc += roc_auc
no=0
for real_index in test_index:
train_stacker[real_index]=(preds[no])
no+=1
i+=1
mean_auc/=n
print (" Average AUC: %f" % (mean_auc) )
print (" printing train datasets ")
printfilcsve(np.array(train_stacker), filename + ".train.csv")
# === Predictions === #
# When making predictions, retrain the model on the whole training set
preds=bagged_set(X, y,model, SEED, bagging, X_test, update_seed=True)
#create submission file
printfilcsve(np.array(preds), filename+ ".test.csv")
#save_results(preds, filename+"_submission_" +str(mean_auc) + ".csv")
if __name__ == '__main__':
main()
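The out-of-fold stacking pattern used in `main` above (fill `train_stacker` with each fold's validation predictions, then retrain on the whole training set for the test predictions) can be sketched in a self-contained way. Note this uses the current scikit-learn `model_selection` API rather than the older `cross_validation` one in the script, and the dataset and estimator are placeholders, not the script's XGBoost setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# toy data standing in for the loaded train.csv
X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=1000)

oof = np.zeros(len(y))  # out-of-fold predictions, same role as train_stacker
for train_idx, valid_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                            random_state=1).split(X, y):
    model.fit(X[train_idx], y[train_idx])
    # each sample gets a prediction from the fold where it was held out
    oof[valid_idx] = model.predict_proba(X[valid_idx])[:, 1]

print("OOF AUC: %.3f" % roc_auc_score(y, oof))
```

Because every training sample is predicted only by a model that never saw it, `oof` can be fed to a second-level model without leaking labels.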

# ---- File: kmeans/dbscan.py (repo: kravtsun/au-ml, license: MIT) ----
#!/bin/python
import argparse

import numpy as np

from cluster import read_csv, plot_clusters, distance, print_cluster_distribution


def kmeans_chosen(data, centers):
    def best_cluster(p):
        distances = [distance(p, c) for c in centers]
        return np.argmin(distances)
    return np.apply_along_axis(best_cluster, 1, data)
    # return [best_cluster(p) for p in data]


def dbscan(data, eps, m):
    n = data.shape[0]
    cur_cluster = 0
    worked = [False] * n
    marked = [-1] * n

    def propagate(i, cur_cluster):
        if worked[i]:
            return []
        worked[i] = True
        directly_reachable = [j for j in range(n) if i != j and distance(data[i, :], data[j, :]) < eps]
        if len(directly_reachable) < m:
            return []
        marked[i] = cur_cluster
        next_work = filter(lambda j: marked[j] == -1, directly_reachable)
        for j in filter(lambda j: marked[j] != cur_cluster, directly_reachable):
            marked[j] = cur_cluster
        return next_work

    for i in range(n):
        if worked[i]:
            continue
        next_work = [i]
        while len(next_work) > 0:
            next_next_work = []
            for j in next_work:
                next_next_work += propagate(j, cur_cluster)
            next_work = next_next_work
        if marked[i] != -1:
            cur_cluster += 1
    result = np.array(marked)
    assert result.shape == (n,)
    return result


def sklearn_bruteforce(data, clusters):
    from sklearn import cluster
    for m in range(1, 21):
        for eps in np.arange(0.01, 0.5, 0.01):
            core_samples, result = cluster.dbscan(data, min_samples=m, eps=eps)
            if len(set(result)) == clusters + 1:
                print m, eps, np.count_nonzero(result == 0), np.count_nonzero(result == -1)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="run DBSCAN clusterization with given arguments")
    parser.add_argument("-f", dest="filename", type=str, required=True)
    parser.add_argument("-e", dest="eps", type=float, required=True)
    parser.add_argument("-m", dest="m", type=int, default=10)
    args = parser.parse_args()
    data = read_csv(args.filename)
    result = dbscan(data, args.eps, args.m)
    print_cluster_distribution(result)
    plot_clusters(data, result)
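As a sanity check for the hand-rolled `dbscan` above, the same density-based idea can be exercised through scikit-learn's implementation on toy data; the blob geometry and the `eps`/`min_samples` values below are illustrative only:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# two well-separated blobs plus one far-away outlier
data = np.vstack([rng.normal(0.0, 0.05, (40, 2)),
                  rng.normal(1.0, 0.05, (40, 2)),
                  [[5.0, 5.0]]])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(data)
print(sorted(set(labels)))  # -> [-1, 0, 1]; noise is labelled -1
```

A correct implementation on the same input should find two clusters and flag the isolated point as noise, just like scikit-learn does here.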

# ---- File: tests/data.py (repo: biologic/stylus, license: Apache-2.0) ----
# The following is a list of gene-plan combinations which should
# not be run
BLACKLIST = [
    ('8C58', 'performance'),  # performance.xml makes specific references to 52DC
    ('7DDA', 'performance')   # performance.xml makes specific references to 52DC
]

IGNORE = {
    'history': ['uuid', 'creationTool', 'creationDate'],
    'genome': ['uuid', 'creationTool', 'creationDate'],
    # the following two are ignored because they contain line numbers
    'attempt': ['description'],
    'compared': ['description']
}

# ---- File: prm/relations/migrations/0010_delete_mood.py (repo: justaname94/innovathon2019, license: MIT) ----
# Generated by Django 2.2.6 on 2019-10-03 23:37
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('relations', '0009_auto_20191003_2258'),
    ]

    operations = [
        migrations.DeleteModel(
            name='Mood',
        ),
    ]

# ---- File: rdr_server/model/site.py (repo: robabram/raw-data-repository-v2, license: BSD-3-Clause) ----
from rdr_server.common.enums import SiteStatus, EnrollingStatus, DigitalSchedulingStatus, ObsoleteStatus
from sqlalchemy import Column, Integer, String, Date, Float, ForeignKey, UnicodeText

from rdr_server.model.base_model import BaseModel, ModelMixin, ModelEnum


class Site(ModelMixin, BaseModel):
    __tablename__ = 'site'

    siteId = Column('site_id', Integer, unique=True)
    siteName = Column('site_name', String(255), nullable=False)
    # The Google group for the site; this is a unique key used externally.
    googleGroup = Column('google_group', String(255), nullable=False, unique=True)
    mayolinkClientNumber = Column('mayolink_client_number', Integer)
    organizationId = Column('organization_id', Integer,
                            ForeignKey('organization.organization_id'))
    # Deprecated; this is being replaced by organizationId.
    hpoId = Column('hpo_id', Integer, ForeignKey('hpo.hpo_id'))
    siteStatus = Column('site_status', ModelEnum(SiteStatus))
    enrollingStatus = Column('enrolling_status', ModelEnum(EnrollingStatus))
    digitalSchedulingStatus = Column('digital_scheduling_status', ModelEnum(DigitalSchedulingStatus))
    scheduleInstructions = Column('schedule_instructions', String(2048))
    scheduleInstructions_ES = Column('schedule_instructions_es', String(2048))
    launchDate = Column('launch_date', Date)
    notes = Column('notes', UnicodeText)
    notes_ES = Column('notes_es', UnicodeText)
    latitude = Column('latitude', Float)
    longitude = Column('longitude', Float)
    timeZoneId = Column('time_zone_id', String(1024))
    directions = Column('directions', UnicodeText)
    physicalLocationName = Column('physical_location_name', String(1024))
    address1 = Column('address_1', String(1024))
    address2 = Column('address_2', String(1024))
    city = Column('city', String(255))
    state = Column('state', String(2))
    zipCode = Column('zip_code', String(10))
    phoneNumber = Column('phone_number', String(80))
    adminEmails = Column('admin_emails', String(4096))
    link = Column('link', String(255))
    isObsolete = Column('is_obsolete', ModelEnum(ObsoleteStatus))

# ---- File: passwd_validate/utils.py (repo: clamytoe/Password-Validate, license: MIT) ----
# -*- coding: utf-8 -*-
"""
password-validate.utils
-----------------------
This module provides utility functions that are used within password_validate
that are also useful for external consumption.
"""
import hashlib
from os.path import abspath, dirname, join
DICTIONARY_LOC = "dictionary_files"
DICTIONARY = "dictionary.txt"
PHPBB = "phpbb.txt"
ROCKYOU = "rockyou.txt"
DICTS = [
DICTIONARY,
PHPBB,
]
def hashit(password):
"""
Hashes any string sent to it with sha512.
:param password: String to hash
:return: String with a hexdigest of the hashed string.
"""
hash_object = hashlib.sha512()
hash_object.update(password.encode("utf-8"))
return hash_object.hexdigest()
def not_in_dict(password):
"""
Parses several dictionary files to see if the provided password is included
within them.
If the dictionary file contains any words that are under five characters in
length, they are skipped. If the string is found, this is considered to be
a failed check and therefore not a valid password.
:param password: String to check
:return: Boolean, True if not found, False if it is
"""
for passwd_file in DICTS:
dict_words = read_file(passwd_file)
for word in dict_words:
if "dictionary.txt" in passwd_file and len(word) < 5:
# skip common words under 5 characters long
continue
if password == word:
return False
return True
def read_file(filename):
"""
Helper function that simple iterates over the dictionary files.
:param filename: String with the path and filename of the dictionary
:return: String generator with each line of the dictionary
"""
file_loc = dirname(abspath(__file__))
data_loc = join(file_loc, DICTIONARY_LOC, filename)
with open(data_loc, "rb") as file:
for line in file:
try:
yield line.decode("utf-8").rstrip()
except UnicodeDecodeError:
# LOL, like my hack around this one??
continue
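A short usage sketch of the `hashit` helper above, re-declared inline so the example is self-contained; the sample passphrase is arbitrary:

```python
import hashlib

def hashit(password):
    # same logic as passwd_validate.utils.hashit: SHA-512 hexdigest of the input
    h = hashlib.sha512()
    h.update(password.encode("utf-8"))
    return h.hexdigest()

digest = hashit("correct horse battery staple")
print(len(digest))  # -> 128: SHA-512 produces 64 bytes = 128 hex characters
print(digest == hashit("correct horse battery staple"))  # -> True: deterministic
```

Hashing is deterministic, which is what lets the caller compare a stored digest against a freshly computed one.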

# ---- File: bookwyrm/migrations/0145_sitesettings_version.py (repo: mouse-reeve/fedireads, license: CC0-1.0) ----
# Generated by Django 3.2.12 on 2022-03-16 18:10
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ("bookwyrm", "0144_alter_announcement_display_type"),
    ]

    operations = [
        migrations.AddField(
            model_name="sitesettings",
            name="version",
            field=models.CharField(blank=True, max_length=10, null=True),
        ),
    ]

# ---- File: scripts/sample_script.py (repo: TheConfused/LinkedIn, license: MIT) ----
from simplelinkedin import LinkedIn


def run_script(settings):
    with LinkedIn(
        username=settings.get("LINKEDIN_USER"),
        password=settings.get("LINKEDIN_PASSWORD"),
        browser=settings.get("LINKEDIN_BROWSER"),
        driver_path=settings.get("LINKEDIN_BROWSER_DRIVER"),
        headless=bool(settings.get("LINKEDIN_BROWSER_HEADLESS")),
    ) as ln:
        # all the steps manually
        ln.login()
        # ln.remove_sent_invitations(older_than_days=14)

        ln.send_invitations(
            max_invitation=max(ln.WEEKLY_MAX_INVITATION - ln.invitations_sent_last_week, 0),
            min_mutual=10,
            max_mutual=450,
            preferred_users=["Quant"],
            not_preferred_users=["Sportsman"],
            view_profile=True,
        )

        ln.accept_invitations()

        # OR
        # run the smart follow-unfollow method (without setting cron jobs), which
        # essentially does the same thing as all the steps above
        ln.smart_follow_unfollow(
            users_preferred=settings.get("LINKEDIN_PREFERRED_USER") or [],
            users_not_preferred=settings.get("LINKEDIN_NOT_PREFERRED_USER") or [],
        )

        # setting and un-setting cron

        # set cron jobs
        ln.set_smart_cron(settings)

        # remove existing cron jobs
        ln.remove_cron_jobs(settings=settings)


if __name__ == "__main__":
    import os

    sett = {
        "LINKEDIN_USER": os.getenv("LINKEDIN_USER"),
        "LINKEDIN_PASSWORD": os.getenv("LINKEDIN_PASSWORD"),
        "LINKEDIN_BROWSER": "Chrome",
        "LINKEDIN_BROWSER_DRIVER": "/Users/dayhatt/workspace/drivers/chromedriver",
        "LINKEDIN_BROWSER_HEADLESS": 0,
        "LINKEDIN_BROWSER_CRON": 0,
        "LINKEDIN_CRON_USER": "dayhatt",
        "LINKEDIN_PREFERRED_USER": "./data/user_preferred.txt",
        "LINKEDIN_NOT_PREFERRED_USER": "./data/user_not_preferred.txt",
    }

    run_script(settings=sett)

# ---- File: models/image/greenFluorescenceQuantifier.py (repo: marinarobin/uwaterloo-igem-2018, license: MIT) ----
# Max Reed
# August 22, 2018
# A program designed for the UW iGEM Robots Subsubteam within the Math Subteam. It is meant to
# help quantify the amount of green fluorescence visible in an image. We have a very bright blue
# LED and a band-pass filter that blocks blue light but lets through green light (and also red
# light, I think). If you put the filter in front of a camera while shining the blue LED on bacterial
# samples, you can get pictures of the "green fluorescence" of your samples. Visually, it is
# possible to distinguish between pictures of "high fluorescence" samples and "low fluorescence"
# samples, but it is good to have a program to make the analysis quantitative.

from scipy import misc

# this python program should be put in the same directory as whatever images you want to analyze.
# you then enter the names of your images here.
fileNames = ["11", "12", "13", "14", "21", "22", "23", "24"]

for name in fileNames:
    currentImage = misc.imread(name + '.png')  # the image gets read in as a 3D array.
    # the 3rd dimension (the innermost array) has the values:
    # [red value, green value, blue value, 255]
    # the trailing 255 is the PNG's alpha (opacity) channel.

    # flatten the array so it has the dimensions:
    # (height in pixels * width in pixels) x (4)
    rgb_flat_list = [item for sublist in currentImage for item in sublist]

    # the rest of this just finds the average rgb value for each image and outputs it.
    # the average blue value is probably useless. the average green value might actually get too much
    # bleed-over from blue light, meaning the average red value actually gives the best idea of how
    # much green fluorescence there is. this is just a guess though (supported by a single experiment
    # that I performed on August 21st, 2018).
    totIntensity = [0, 0, 0]
    for j in range(len(rgb_flat_list)):
        for k in range(3):
            totIntensity[k] = totIntensity[k] + rgb_flat_list[j][k]
    for k in range(3):
        totIntensity[k] = totIntensity[k] / (1.0 * len(rgb_flat_list))
    print "Average (r,g,b) for {} : ({}/255, {}/255, {}/255)".format(
        name,
        round(totIntensity[0], 1),
        round(totIntensity[1], 1),
        round(totIntensity[2], 1)
    )

# Output from the first experiment on August 21st:
# Average (r,g,b) for 11: (77.1/255, 224.9/255, 191.9/255)
# Average (r,g,b) for 12: (121.5/255, 232.5/255, 198.1/255)
# Average (r,g,b) for 13: (59.4/255, 216.8/255, 183.9/255)
# Average (r,g,b) for 14: (118.1/255, 233.4/255, 200.9/255)
# Average (r,g,b) for 21: (136.8/255, 240.7/255, 220.0/255)
# Average (r,g,b) for 22: (114.3/255, 227.6/255, 200.5/255)
# Average (r,g,b) for 23: (61.0/255, 207.2/255, 178.7/255)
# Average (r,g,b) for 24: (66.6/255, 209.0/255, 179.7/255)
# 12, 14, 21, and 22 were the high fluorescence samples. The major flaw in this experiment was that I didn't normalize
# for optical density, though all samples should've had an OD of about 1.

# ---- File: renlabs/sudoku/cli.py (repo: swork/sudoku, license: MIT) ----
#!/usr/bin/env python3
import logging
import os
import sys

from . import main

logger = logging.getLogger(__name__ if __name__ != '__main__' else os.path.basename(__file__))

if __name__ == '__main__':
    logging.basicConfig()
    sys.exit(main())

# ---- File: Python 基础教程/1.5.1 列表.py (repo: shao1chuan/pythonbook, license: MulanPSL-1.0) ----
# Insert
print('Insert ' * 15)
x = [1, 2, 3]
print(x)
x = x + [4]
x.append(5)
print(x)
x.insert(3, 'w')
x.extend(['a', 'b'])
print(x * 3)

# Delete
print("Delete " * 15)
y = ["a", "b", "c", "d", 'e', 'f']
del y[2]
print(y)
y.pop(0)
print(y)
y.remove('f')
print(y)

# Accessing and counting list elements
print("Access and count " * 5)
x = [1, 2, 3, 3, 4, 5]
print(x.count(3), x.index(2))

# Sorting a list
print("Sort " * 10)
x = [1, 2, 4, 5, 6, 34, 22, 55, 22, 11, 24, 56, 78]
import random as r
r.shuffle(x)
print(x)
x.reverse()
print("reverse", x)
x.sort(reverse=True)
print('sort ', x)
# Use the built-in function sorted to sort a list and return a new list,
# without modifying the original list in any way.
sorted(x)
reversed(x)

# Packing
print("Zip " * 10)
a = [1, 2, 3]
b = [4, 5, 6]
print(list(zip(a, b)))

# Enumerate
print("Enumerate " * 10)
for item in enumerate('abcdef'):
    print(item)

# Three ways to iterate over a list
print("Three ways to iterate " * 10)
a = ['a', 'b', 'c', 'd', 'e', 'f']
for i in a:
    print(i)
for i in range(len(a)):
    print(i, a[i])
for i, ele in enumerate(a):
    print(i, ele)
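A natural companion to the list operations above is the list comprehension; this extra demo follows the tutorial's banner-print style:

```python
# List comprehensions
print("List comprehensions " * 5)
squares = [n * n for n in range(1, 6)]
print(squares)  # -> [1, 4, 9, 16, 25]
# a comprehension can also filter while it builds
evens = [n for n in squares if n % 2 == 0]
print(evens)  # -> [4, 16]
```

A comprehension builds the whole list in one expression, replacing the append-inside-a-loop pattern shown earlier.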

# ---- File: pulse_lib/segments/data_classes/data_HVI_variables.py (repo: NicoHendrickx/pulse_lib, license: MIT) ----
"""
data class for markers.
"""
from pulse_lib.segments.data_classes.data_generic import parent_data
import copy


class marker_HVI_variable(parent_data):
    def __init__(self):
        """
        init marker object
        """
        super().__init__()
        self.my_time_data = dict()
        self.my_amp_data = dict()
        self.end_time = 0

    @property
    def HVI_markers(self):
        return {**self.my_time_data, **self.my_amp_data}

    def __getitem__(self, *item):
        try:
            return self.my_time_data[item[0]]
        except KeyError:
            pass
        try:
            return self.my_amp_data[item[0]]
        except KeyError:
            pass
        raise ValueError("Asking for HVI variable {}. But this variable is not present in the current data set.".format(item[0]))

    def add_HVI_marker(self, name, amplitude, time):
        """
        add a marker

        Args:
            name (str) : variable name for the HVI marker
            amplitude (float) : amplitude of the marker (in case of a time, the unit is ns, else mV)
            time (bool) : True if the marker needs to be interpreted as a time.
        """
        if time == True:
            self.my_time_data[name] = amplitude
        else:
            self.my_amp_data[name] = amplitude

    def reset_time(self, time=None, extend_only=False):
        """
        reset the effective start time. See the online manual in the pulse building
        instructions to understand this command.

        Args:
            time (double) : new time that will become time zero
        """
        self.start_time = self.total_time
        if time is not None:
            self.start_time = time

        if self.start_time > self.end_time:
            self.end_time = self.start_time

    def wait(self, time):
        """
        Wait after the marker for x ns.

        Args:
            time (double) : time in ns to wait
        """
        self.end_time += time

    @property
    def total_time(self):
        '''
        get the total time of this segment.
        '''
        return self.end_time

    def slice_time(self, start, end):
        """
        apply a slice operation on this marker.

        Args:
            start (double) : start time of the marker
            end (double) : stop time of the marker
        """
        for key in self.my_time_data.keys():
            self.my_time_data[key] -= start

    def get_vmin(self, sample_rate=1e9):
        return 0

    def get_vmax(self, sample_rate=1e9):
        return 0

    def integrate_waveform(self, sample_rate):
        """
        as markers are connected to matched inputs, we do not need to compensate,
        hence no integration of waveforms is needed.
        """
        return 0

    def append(self, other, time=None):
        '''
        Append two segments to each other, where the other segment is placed after the
        first segment. Time is the total time of the first segment.

        Args:
            other (marker_HVI_variable) : other pulse data object to be appended
            time (double/None) : length that the first segment should be.

        ** what to do with start time argument?
        '''
        end_time = self.total_time
        if time is not None:
            end_time = time
            self.slice_time(0, end_time)

        other_shifted = other._shift_all_time(end_time)
        self.my_time_data.update(other_shifted.my_time_data)
        self.my_amp_data.update(other.my_amp_data)

    def __copy__(self):
        """
        make a copy of this marker.
        """
        my_copy = marker_HVI_variable()
        my_copy.my_amp_data = copy.copy(self.my_amp_data)
        my_copy.my_time_data = copy.copy(self.my_time_data)
        my_copy.start_time = copy.copy(self.start_time)
        my_copy.end_time = copy.copy(self.end_time)
        return my_copy

    def _shift_all_time(self, time_shift):
        '''
        Make a copy of all the data and shift all the times.

        Args:
            time_shift (double) : shift the time

        Returns:
            data_copy_shifted (pulse_data) : copy of own data
        '''
        if time_shift < 0:
            raise ValueError("when shifting time, you cannot make negative times. Apply a positive shift.")
        data_copy_shifted = copy.copy(self)
        for key in data_copy_shifted.my_time_data.keys():
            data_copy_shifted.my_time_data[key] += time_shift
        return data_copy_shifted

    def __add__(self, other):
        """
        add another marker to this one

        Args:
            other (marker_HVI_variable) : other marker object you want to add
        """
        if not isinstance(other, marker_HVI_variable):
            raise ValueError("only HVI markers can be added to HVI markers. No other types allowed.")

        new_data = marker_HVI_variable()
        new_data.my_time_data = {**self.my_time_data, **other.my_time_data}
        new_data.my_amp_data = {**self.my_amp_data, **other.my_amp_data}
        new_data.start_time = self.start_time
        new_data.end_time = self.end_time
        if other.total_time > self.total_time:
            new_data.end_time = other.end_time

        return new_data

    def __mul__(self, other):
        raise ValueError("No multiplication support for markers ...")

    def __repr__(self):
        return ("=== raw data in HVI variable object ===\n\namplitude data ::\n"
                + str(self.my_amp_data) + "\ntime dep data ::\n" + str(self.my_time_data))

    def _render(self, sample_rate, ref_channel_states):
        '''
        make a full rendering of the waveform at a predetermined sample rate.
        '''
        raise ValueError("Rendering of HVI marker is currently not supported.")

# ---- File: AutoBonsai/AutoBonsai.py (repo: WolfgangAxel/Random-Projects, license: MIT) ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# AutoBonsai.py
#
# Copyright 2016 keaton <keaton@MissionControl>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.
#
#
## Variables
sleepTime = 30*60 # Check the schedule twice an hour
wateringSleep = 6*60*60 # Check the water levels 4 times a day
lightOnTime = 10*60*60 # Turn the light on for 10 hours a day
## Definitions
def addEvent(date,commands):
"""
adds an event to the schedule
'date' should be a time in seconds since epoch
'commands' will be passed through exec when 'date' is past (can be string or array of strings)
"""
global Schedule
if type(commands) is not list:
commands=[commands]
Schedule.append([date,commands])
# sort the array by epoch time
Schedule=sorted(Schedule,key=lambda i: i[0])
saveArray(MYDIR+"/MyFiles/Schedule.list",Schedule)
def checkEvent():
"""
Checks to see if an event needs to be triggered
"""
global Schedule
if Schedule != []:
if time() >= Schedule[0][0]:
for command in Schedule[0][1]:
exec(command)
Schedule=Schedule[1:]
saveArray(MYDIR+"/MyFiles/Schedule.list",Schedule)
checkEvent()
def notifyMe(title,message):
post('https://api.simplepush.io/send',data={'key':'36h2Me', 'title':str(title), 'msg':str(message)})
## Modules
from time import time,sleep
from os import path
from requests import post
from sys import path as moduleDir
MYDIR = path.dirname(path.realpath(__file__))
moduleDir.append(MYDIR+"/Modules")
from MyMods import *
from PlantUtils import *
Schedule = []
try:
    # load from the same path that saveArray writes to
    Schedule = loadArray(MYDIR+"/MyFiles/Schedule.list")
except:
    print("Schedule not found.")
if __name__ == '__main__':
    print("hi!")
    if Schedule == []:
        print("Schedule not found or empty. Initiating and scheduling all care commands")
        beginCaring()
    while True:
        sleep(sleepTime)
        checkEvent()
| 28.454545 | 101 | 0.717652 | 366 | 2,504 | 4.877049 | 0.519126 | 0.018487 | 0.021849 | 0.031933 | 0.091877 | 0.077311 | 0 | 0 | 0 | 0 | 0 | 0.02137 | 0.177716 | 2,504 | 87 | 102 | 28.781609 | 0.845556 | 0.388578 | 0 | 0.142857 | 0 | 0 | 0.170906 | 0.034976 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.142857 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
64c3b70bd4a11694d8806b5762ea9c6780b09649 | 1,858 | py | Python | oauth2/serializers.py | tinyms/bopress | 6182c8940ebeb1f7a26c0e1aa62528b9f090b2d9 | [
"Apache-2.0"
] | null | null | null | oauth2/serializers.py | tinyms/bopress | 6182c8940ebeb1f7a26c0e1aa62528b9f090b2d9 | [
"Apache-2.0"
] | null | null | null | oauth2/serializers.py | tinyms/bopress | 6182c8940ebeb1f7a26c0e1aa62528b9f090b2d9 | [
"Apache-2.0"
] | null | null | null | from rest_framework import serializers
from oauth2.models import UserInfo, CompanyInfo, CompanyEmployee, AppCommerceLicense, AppGrantAuthorization
class OAuth2VerifySerializer(serializers.Serializer):
    def create(self, validated_data):
        return None

    def update(self, instance, validated_data):
        return instance

    code = serializers.CharField(max_length=255, required=True)


class UserInfoSerializer(serializers.HyperlinkedModelSerializer):
    user = serializers.ReadOnlyField(source='user.username')

    class Meta:
        model = UserInfo
        fields = ('union_id', 'nick_name', 'gender', 'city', 'province', 'country', 'avatar_url', 'app_id',
                  'mobile', 'name', 'address', 'identity_card', 'timestamp', 'user')


class CompanyInfoSerializer(serializers.HyperlinkedModelSerializer):
    owner = serializers.ReadOnlyField(source='owner.name')

    class Meta:
        model = CompanyInfo
        fields = ('company_name', 'name', 'mobile', 'logo', 'address', 'phone', 'summary', 'description', 'owner')


class CompanyEmployeeSerializer(serializers.HyperlinkedModelSerializer):
    company_name = serializers.ReadOnlyField(source='company.company_name')
    employee_name = serializers.ReadOnlyField(source='employee.name')

    class Meta:
        model = CompanyEmployee
        fields = ('company', 'employee', 'level')


class AppCommerceLicenseSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = AppCommerceLicense
        fields = ('level', 'name', 'price', 'position')


class AppGrantAuthorizationSerializer(serializers.HyperlinkedModelSerializer):
    company_name = serializers.ReadOnlyField(source='company.company_name')

    class Meta:
        model = AppGrantAuthorization
        fields = ('app_id', 'app_name', 'start_time', 'end_time', 'level', 'company')
| 33.781818 | 114 | 0.717976 | 167 | 1,858 | 7.874252 | 0.437126 | 0.140684 | 0.114068 | 0.041065 | 0.146008 | 0.146008 | 0.146008 | 0.146008 | 0.146008 | 0.146008 | 0 | 0.003232 | 0.167384 | 1,858 | 54 | 115 | 34.407407 | 0.8468 | 0 | 0 | 0.205882 | 0 | 0 | 0.174381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0.058824 | 0.676471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
64c9216d8fa2a1253a7570d4c6809f00d17bb600 | 4,665 | py | Python | src/pspnet/run.py | jefequien/PSPNet-Keras | bad76c4c397b127c1d82bff31cb8ada39d39a230 | [
"MIT"
] | 4 | 2019-09-29T06:13:17.000Z | 2020-06-06T10:21:49.000Z | src/pspnet/run.py | jefequien/PSPNet-Keras | bad76c4c397b127c1d82bff31cb8ada39d39a230 | [
"MIT"
] | null | null | null | src/pspnet/run.py | jefequien/PSPNet-Keras | bad76c4c397b127c1d82bff31cb8ada39d39a230 | [
"MIT"
] | 1 | 2020-12-22T08:30:25.000Z | 2020-12-22T08:30:25.000Z | import os
from os.path import join, isfile, isdir, dirname, basename
from os import environ, makedirs
import sys
import argparse
import random
import numpy as np
import h5py
from scipy import misc
from keras import backend as K
import tensorflow as tf
from pspnet import PSPNet50
import utils
from utils import image_utils
from utils.datasource import DataSource
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('-p', '--project', type=str, required=True, help="Project name")
    parser.add_argument('-r', '--randomize', action='store_true', default=False, help="Randomize image list")
    parser.add_argument('-c', '--checkpoint', type=str, help='Checkpoint to use')
    parser.add_argument('-s', '--scale', type=str, default='normal',
                        help='Scale to use',
                        choices=['normal',
                                 'medium',
                                 'big',
                                 'single'])
    parser.add_argument('--start', type=int, default=0)
    parser.add_argument('--end', type=int, default=None)
    parser.add_argument('--id', default="0")
    args = parser.parse_args()

    environ["CUDA_VISIBLE_DEVICES"] = args.id

    config = utils.get_config(args.project)
    datasource = DataSource(config)
    im_list = utils.open_im_list(config["im_list"])
    im_list = im_list[args.start:args.end]
    if args.randomize:
        random.seed(3)
        random.shuffle(im_list)

    # Output directory
    root_result = "../predictions/softmax_default/{}".format(args.scale)
    if args.checkpoint is not None:
        model = basename(dirname(args.checkpoint))
        version = basename(args.checkpoint).split('-')[0]
        root_result = "predictions/{}/{}/{}".format(model, version, args.scale)
    print("Outputting to", root_result)
    root_mask = os.path.join(root_result, 'category_mask')
    root_prob = os.path.join(root_result, 'prob_mask')
    root_maxprob = os.path.join(root_result, 'max_prob')
    root_allprob = os.path.join(root_result, 'all_prob')

    sess = tf.Session()
    K.set_session(sess)
    with sess.as_default():
        print(args)
        pspnet = PSPNet50(checkpoint=args.checkpoint)
        for im in im_list:
            print(im)
            fn_maxprob = os.path.join(root_maxprob, im.replace('.jpg', '.h5'))
            fn_mask = os.path.join(root_mask, im.replace('.jpg', '.png'))
            fn_prob = os.path.join(root_prob, im)
            fn_allprob = os.path.join(root_allprob, im.replace('.jpg', '.h5'))
            if os.path.exists(fn_allprob):
                print("Already done.")
                continue

            # make paths if they do not exist
            if not os.path.exists(dirname(fn_maxprob)):
                os.makedirs(dirname(fn_maxprob))
            if not os.path.exists(dirname(fn_mask)):
                os.makedirs(dirname(fn_mask))
            if not os.path.exists(dirname(fn_prob)):
                os.makedirs(dirname(fn_prob))
            if not os.path.exists(dirname(fn_allprob)):
                os.makedirs(dirname(fn_allprob))

            img, _ = datasource.get_image(im)

            probs = None
            if args.scale == "single":
                probs = pspnet.predict(img)
            elif args.scale == "normal":
                img_s = image_utils.scale_maxside(img, maxside=512)
                probs_s = pspnet.predict_sliding(img_s)
                probs = image_utils.scale(probs_s, img.shape)
            elif args.scale == "medium":
                img_s = image_utils.scale_maxside(img, maxside=1028)
                probs_s = pspnet.predict_sliding(img_s)
                probs = image_utils.scale(probs_s, img.shape)
            elif args.scale == "big":
                img_s = image_utils.scale_maxside(img, maxside=2048)
                probs_s = pspnet.predict_sliding(img_s)
                probs = image_utils.scale(probs_s, img.shape)

            # probs is 150 x h x w
            probs = np.transpose(probs, (2, 0, 1))

            # Write output
            pred_mask = np.array(np.argmax(probs, axis=0) + 1, dtype='uint8')
            prob_mask = np.array(np.max(probs, axis=0) * 255, dtype='uint8')
            max_prob = np.max(probs, axis=(1, 2))
            all_prob = np.array(probs * 255 + 0.5, dtype='uint8')

            # write to file
            misc.imsave(fn_mask, pred_mask)
            misc.imsave(fn_prob, prob_mask)
            with h5py.File(fn_maxprob, 'w') as f:
                f.create_dataset('maxprob', data=max_prob)
            with h5py.File(fn_allprob, 'w') as f:
                f.create_dataset('allprob', data=all_prob)
| 37.926829 | 109 | 0.591854 | 596 | 4,665 | 4.458054 | 0.253356 | 0.031615 | 0.030109 | 0.042153 | 0.243131 | 0.175386 | 0.161837 | 0.122695 | 0.082047 | 0.082047 | 0 | 0.013777 | 0.284244 | 4,665 | 122 | 110 | 38.237705 | 0.781971 | 0.018864 | 0 | 0.0625 | 0 | 0 | 0.085377 | 0.007224 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.145833 | null | null | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
64dbc268e94b0e86f76dd10af1901fafdc87b38b | 2,000 | py | Python | gocardless_pro/services/billing_request_flows_service.py | gocardless/gocardless-pro-python | e6763fba5326ff56f4ba417ddd7828c03e059be5 | [
"MIT"
] | 30 | 2015-07-08T21:10:10.000Z | 2022-02-17T10:08:55.000Z | gocardless_pro/services/billing_request_flows_service.py | gocardless/gocardless-pro-python | e6763fba5326ff56f4ba417ddd7828c03e059be5 | [
"MIT"
] | 21 | 2015-12-14T02:24:52.000Z | 2022-02-05T15:56:00.000Z | gocardless_pro/services/billing_request_flows_service.py | gocardless/gocardless-pro-python | e6763fba5326ff56f4ba417ddd7828c03e059be5 | [
"MIT"
] | 19 | 2016-02-10T15:57:42.000Z | 2022-02-05T10:21:05.000Z | # WARNING: Do not edit by hand, this file was generated by Crank:
#
# https://github.com/gocardless/crank
#
from . import base_service
from .. import resources
from ..paginator import Paginator
from .. import errors
class BillingRequestFlowsService(base_service.BaseService):
"""Service class that provides access to the billing_request_flows
endpoints of the GoCardless Pro API.
"""
RESOURCE_CLASS = resources.BillingRequestFlow
RESOURCE_NAME = 'billing_request_flows'
def create(self,params=None, headers=None):
"""Create a billing request flow.
Creates a new billing request flow.
Args:
params (dict, optional): Request body.
Returns:
ListResponse of BillingRequestFlow instances
"""
path = '/billing_request_flows'
if params is not None:
params = {self._envelope_key(): params}
response = self._perform_request('POST', path, params, headers,
retry_failures=True)
return self._resource_for(response)
def initialise(self,identity,params=None, headers=None):
"""Initialise a billing request flow.
Returns the flow having generated a fresh session token which can be
used to power
integrations that manipulate the flow.
Args:
identity (string): Unique identifier, beginning with "BRQ".
params (dict, optional): Request body.
Returns:
ListResponse of BillingRequestFlow instances
"""
path = self._sub_url_params('/billing_request_flows/:identity/actions/initialise', {
'identity': identity,
})
if params is not None:
params = {'data': params}
response = self._perform_request('POST', path, params, headers,
retry_failures=False)
return self._resource_for(response)
| 30.30303 | 92 | 0.618 | 208 | 2,000 | 5.8125 | 0.451923 | 0.081059 | 0.062862 | 0.034739 | 0.329198 | 0.281224 | 0.243176 | 0.243176 | 0.243176 | 0.243176 | 0 | 0 | 0.3055 | 2,000 | 65 | 93 | 30.769231 | 0.87041 | 0.3605 | 0 | 0.26087 | 1 | 0 | 0.100974 | 0.08326 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.173913 | 0 | 0.478261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b37c632709a6f52e9270bc4114337fa833a78ab7 | 232 | py | Python | Uche Clare/Phase 1/Python Basic 2/Day 17/Task 2.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Uche Clare/Phase 1/Python Basic 2/Day 17/Task 2.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Uche Clare/Phase 1/Python Basic 2/Day 17/Task 2.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | #Write a Python program to create all possible strings by using 'a', 'e', 'i', 'o', 'u'. Use the characters exactly once.
import random
vowels = ['a', 'e', 'i', 'o', 'u']
random.shuffle(vowels)
char = "".join(vowels)
print(char)
| 23.2 | 121 | 0.642241 | 37 | 232 | 4.027027 | 0.72973 | 0.026846 | 0.040268 | 0.053691 | 0.067114 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163793 | 232 | 9 | 122 | 25.777778 | 0.768041 | 0.517241 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b383c1f2704d2ae2a9e5fdb01fe6f31ca5410042 | 374 | py | Python | app/services/mail/events.py | maxzhenzhera/my_vocab_backend | 2e9f968374e0bc2fcc0ae40830ca40f3cf5754d1 | [
"MIT"
] | null | null | null | app/services/mail/events.py | maxzhenzhera/my_vocab_backend | 2e9f968374e0bc2fcc0ae40830ca40f3cf5754d1 | [
"MIT"
] | null | null | null | app/services/mail/events.py | maxzhenzhera/my_vocab_backend | 2e9f968374e0bc2fcc0ae40830ca40f3cf5754d1 | [
"MIT"
] | null | null | null | import logging
from fastapi import FastAPI
from fastapi_mail import ConnectionConfig as MailConnectionSettings
from .state import MailState
__all__ = ['init_mail']
logger = logging.getLogger(__name__)
def init_mail(app: FastAPI, settings: MailConnectionSettings) -> None:
    app.state.mail = MailState(settings)
    logger.info('Mail state (sender) has been set.')
| 20.777778 | 70 | 0.772727 | 45 | 374 | 6.177778 | 0.533333 | 0.079137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144385 | 374 | 17 | 71 | 22 | 0.86875 | 0 | 0 | 0 | 0 | 0 | 0.112299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.444444 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b388d8673e156c073bc0e747b23586e086fb3fd0 | 2,906 | py | Python | sorteio.py | laurocjs/criptomigo | 871bcd051376f5ffe7fe64cae80401da69ae171f | [
"MIT"
] | null | null | null | sorteio.py | laurocjs/criptomigo | 871bcd051376f5ffe7fe64cae80401da69ae171f | [
"MIT"
] | null | null | null | sorteio.py | laurocjs/criptomigo | 871bcd051376f5ffe7fe64cae80401da69ae171f | [
"MIT"
] | null | null | null | # coding= utf-8
import random
from Crypto.PublicKey import RSA
# Function to draw the pairs
def sorteiaPares(listaDeParticipantes):  # Receives a list with the participants' names;
    # the value is that person's public key
    dictSorteado = {}  # Dict to be returned
    numeroDeParticipantes = len(listaDeParticipantes)  # Just to keep the code cleaner and more readable
    if numeroDeParticipantes < 2:
        print("You must have at least two participants!!")
        return
    # Generate a list of N random numbers from 0 to N-1, N being the number of participants.
    # To avoid problems in the distribution, the first number cannot be 0;
    # if it is, swap it with some other number in the list.
    sorteio = random.sample(xrange(numeroDeParticipantes), numeroDeParticipantes)
    if sorteio[0] == 0:
        rand = random.randint(1, numeroDeParticipantes-1)
        sorteio[0] = sorteio[rand]
        sorteio[rand] = 0
    # Perform a distribution in which each participant receives another random participant
    iterator = 0
    for numero in sorteio:
        if iterator == numero:  # The person drew themselves
            # In that case, swap with the person right before them in the list
            dictSorteado[listaDeParticipantes[iterator]] = dictSorteado[listaDeParticipantes[iterator-1]]
            dictSorteado[listaDeParticipantes[iterator-1]] = listaDeParticipantes[numero]
        else:
            dictSorteado[listaDeParticipantes[iterator]] = listaDeParticipantes[numero]
        iterator += 1
    return dictSorteado
# Function to encrypt the draw results
def criptografaSorteio(dictDeChaves, dictSorteado):  # Receives dicts Gifter -> Key and Gifter -> Giftee
    dictCriptografado = {}
    for participante in dictDeChaves:
        pubKeyObj = RSA.importKey(dictDeChaves[participante])  # Get the participant's public key
        msg = dictSorteado[participante]  # Get the giftee drawn for this participant
        emsg = pubKeyObj.encrypt(msg, 'x')[0]  # Encrypt the person's name
        caminho = "sorteio/" + participante
        with open(caminho, "w") as text_file:
            text_file.write(emsg)
# Start of the program:
# Build your list of participants however you prefer.
# The most basic way is:
listaDeParticipantes = []  # A list of participants
# Reading them from a file or directory is also an option.
dictDeParticipantes = {}  # An empty dict
# For each participant, read their key and map Participant -> Public Key
for participante in listaDeParticipantes:
    with open("chaves/pubKey" + participante, mode='r') as file:
        key = file.read()
        dictDeParticipantes[participante] = key
dictSorteado = sorteiaPares(listaDeParticipantes)  # Gets the dictionary mapping gifter -> giftee
criptografaSorteio(dictDeParticipantes, dictSorteado)
| 44.030303 | 117 | 0.712319 | 330 | 2,906 | 6.266667 | 0.463636 | 0.061896 | 0.077369 | 0.039652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007083 | 0.222643 | 2,906 | 65 | 118 | 44.707692 | 0.908367 | 0.379904 | 0 | 0 | 0 | 0 | 0.038851 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b38ea18fde8a979df788072d40b24291c6dbc34a | 1,632 | py | Python | src/ds_algs/list_binary_tree.py | E1mir/PySandbox | 44b39b98a41add433f0815cd3cde4d7554629eea | [
"MIT"
] | null | null | null | src/ds_algs/list_binary_tree.py | E1mir/PySandbox | 44b39b98a41add433f0815cd3cde4d7554629eea | [
"MIT"
] | null | null | null | src/ds_algs/list_binary_tree.py | E1mir/PySandbox | 44b39b98a41add433f0815cd3cde4d7554629eea | [
"MIT"
] | null | null | null | def binary_tree(r):
"""
:param r: This is root node
:return: returns tree
"""
return [r, [], []]
def insert_left(root, new_branch):
"""
:param root: current root of the tree
:param new_branch: new branch for a tree
:return: updated root of the tree
"""
t = root.pop(1)
if len(t) > 1:
root.insert(1, [new_branch, t, []])
else:
root.insert(1, [new_branch, [], []])
return root
def insert_right(root, new_branch):
"""
:param root: current root of the tree
:param new_branch: new branch for a tree
:return: updated root of the tree
"""
t = root.pop(2)
if len(t) > 1:
root.insert(2, [new_branch, [], t])
else:
root.insert(2, [new_branch, [], []])
return root
def get_root_val(root):
"""
:param root: current tree root
:return: current tree root value
"""
return root[0]
def set_root_val(root, new_val):
"""
:param root: current tree root
:param new_val: new value for root to update it
:return: updated tree root
"""
root[0] = new_val
def get_left_child(root):
"""
:param root: current root
:return: Left child of selected root
"""
return root[1]
def get_right_child(root):
"""
:param root: current root
:return: Right child of selected root
"""
return root[2]
if __name__ == '__main__':
r = binary_tree(3)
print(insert_left(r, 5))
print(insert_left(r, 6))
print(insert_right(r, 7))
print(insert_right(r, 8))
l = get_left_child(r)
print(l)
rg = get_right_child(r)
print(rg)
| 19.2 | 51 | 0.583333 | 238 | 1,632 | 3.836134 | 0.205882 | 0.098576 | 0.105148 | 0.087623 | 0.569551 | 0.464403 | 0.311062 | 0.234392 | 0.234392 | 0.234392 | 0 | 0.014542 | 0.283701 | 1,632 | 84 | 52 | 19.428571 | 0.766467 | 0.35049 | 0 | 0.176471 | 0 | 0 | 0.008753 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0 | 0 | 0.382353 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b390b46232762e8aa8342658d28e1d8f4b83336f | 446 | py | Python | tests/network_speedtest.py | cloud-mon/server-test | 1175ec5426eed6455600ef45a5217e5145e6d203 | [
"MIT"
] | null | null | null | tests/network_speedtest.py | cloud-mon/server-test | 1175ec5426eed6455600ef45a5217e5145e6d203 | [
"MIT"
] | 1 | 2020-07-02T06:42:26.000Z | 2020-07-02T06:42:26.000Z | tests/network_speedtest.py | cloud-mon/server-test | 1175ec5426eed6455600ef45a5217e5145e6d203 | [
"MIT"
] | null | null | null | import speedtest
def perform_test():
    s = speedtest.Speedtest()
    best_server = s.get_best_server()
    print('Best server: ')
    print(best_server['name'])
    print('Perform upload app:')
    result = s.upload()
    print('Done:' + str(result / 1024 / 1024) + ' MBit/s')
    print('Perform download app:')
    result = s.download()
    print('Done:' + str(result / 1024 / 1024) + ' MBit/s')
    print(s.results)
    return s.results
| 22.3 | 58 | 0.609865 | 57 | 446 | 4.684211 | 0.368421 | 0.149813 | 0.11236 | 0.142322 | 0.419476 | 0.269663 | 0.269663 | 0.269663 | 0.269663 | 0 | 0 | 0.046512 | 0.2287 | 446 | 19 | 59 | 23.473684 | 0.729651 | 0 | 0 | 0.142857 | 0 | 0 | 0.181614 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0 | 0.214286 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
b39336da5d14f41b5e11922d81cebc45de1a648c | 588 | py | Python | data_scripts/weeds.py | bayerhealth/bayerhealth | c860dc105494bab3a00798322476c3ab034cceb9 | [
"MIT"
] | null | null | null | data_scripts/weeds.py | bayerhealth/bayerhealth | c860dc105494bab3a00798322476c3ab034cceb9 | [
"MIT"
] | null | null | null | data_scripts/weeds.py | bayerhealth/bayerhealth | c860dc105494bab3a00798322476c3ab034cceb9 | [
"MIT"
] | 1 | 2021-11-24T12:45:03.000Z | 2021-11-24T12:45:03.000Z |
import os
import pandas as pd
import shutil
os.chdir("../Downloads/DeepWeeds_Images_256")
try:
    os.mkdir("train")
    os.mkdir("val")
except:
    pass

train = pd.read_csv("../train_set_labels.csv")
val = pd.read_csv("../test_set_labels.csv")
print(train)

for j, i in train.iterrows():
    try:
        os.mkdir("train/"+str(i.Species))
    except:
        pass
    shutil.copyfile(i.Label, "train/"+i.Species+"/"+i.Label)

for j, i in val.iterrows():
    try:
        os.mkdir("val/"+str(i.Species))
    except:
        pass
shutil.copyfile(i.Label, "val/"+i.Species+"/"+i.Label) | 21 | 60 | 0.622449 | 89 | 588 | 4.022472 | 0.359551 | 0.078212 | 0.083799 | 0.083799 | 0.22905 | 0.22905 | 0.22905 | 0.22905 | 0.22905 | 0 | 0 | 0.006316 | 0.192177 | 588 | 28 | 61 | 21 | 0.747368 | 0 | 0 | 0.375 | 0 | 0 | 0.183362 | 0.132428 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.125 | 0.125 | 0 | 0.125 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b393bdceee5141732cb897079744e23870c97773 | 491 | py | Python | auctionbot/users/migrations/0006_auto_20180218_1311.py | netvigator/auctions | f88bcce800b60083a5d1a6f272c51bb540b8342a | [
"MIT"
] | null | null | null | auctionbot/users/migrations/0006_auto_20180218_1311.py | netvigator/auctions | f88bcce800b60083a5d1a6f272c51bb540b8342a | [
"MIT"
] | 13 | 2019-12-12T03:07:55.000Z | 2022-03-07T12:59:27.000Z | auctionbot/users/migrations/0006_auto_20180218_1311.py | netvigator/auctions | f88bcce800b60083a5d1a6f272c51bb540b8342a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-02-18 06:11
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('users', '0005_auto_20180107_2152'),
    ]

    operations = [
        migrations.AlterField(
            model_name='user',
            name='iMarket',
            field=models.PositiveIntegerField(default=1, verbose_name='ebay market (default)'),
        ),
    ]
| 23.380952 | 95 | 0.631365 | 53 | 491 | 5.660377 | 0.773585 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089674 | 0.250509 | 491 | 20 | 96 | 24.55 | 0.725543 | 0.13442 | 0 | 0 | 1 | 0 | 0.14218 | 0.054502 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b39ac8a96e45d8646cdf0980fdb5a20cae752964 | 434 | py | Python | test4.py | JarkJiao/Python_learning_TestCase | cc77a7a20b01e230e0edd818532570a7d8853b03 | [
"MIT"
] | null | null | null | test4.py | JarkJiao/Python_learning_TestCase | cc77a7a20b01e230e0edd818532570a7d8853b03 | [
"MIT"
] | null | null | null | test4.py | JarkJiao/Python_learning_TestCase | cc77a7a20b01e230e0edd818532570a7d8853b03 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: UTF-8 -*-
year = int(raw_input('year:\n'))
month = int(raw_input('month:\n'))
day = int(raw_input('day:\n'))
months = (0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334)
if 0 < month <= 12:
    sum = months[month - 1]
else:
    print 'invalid date'
    exit()
sum += day
leap = 0
# a leap year is divisible by 400, or by 4 but not by 100
if (year % 400 == 0) or ((year % 4 == 0) and (year % 100 != 0)):
    leap += 1
if (leap == 1) and (month > 2):
    sum += 1
print 'it is the %dth day' % sum
| 18.083333 | 60 | 0.569124 | 81 | 434 | 3.012346 | 0.555556 | 0.07377 | 0.135246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145714 | 0.193548 | 434 | 23 | 61 | 18.869565 | 0.551429 | 0.087558 | 0 | 0 | 0 | 0 | 0.111959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b39c9e2bf74b1b8f308baab389017a73fc014199 | 1,829 | py | Python | task3/test_openbrewerydb.py | rokimaru/rest_api_autotests | d4009b813b064681250671ec515646dc3554bc5d | [
"Apache-2.0"
] | null | null | null | task3/test_openbrewerydb.py | rokimaru/rest_api_autotests | d4009b813b064681250671ec515646dc3554bc5d | [
"Apache-2.0"
] | null | null | null | task3/test_openbrewerydb.py | rokimaru/rest_api_autotests | d4009b813b064681250671ec515646dc3554bc5d | [
"Apache-2.0"
] | null | null | null | """ API tests for https://www.openbrewerydb.org/ """
import pytest
import requests
from task3.methods import BrewerySiteMethods
class TestOpenBreweryDb:
    """ API tests for https://www.openbrewerydb.org/ """

    def test_openbrewerydb_1(self):
        """ Check that the total number of breweries on one page is 20 """
        response = requests.get('https://api.openbrewerydb.org/breweries').json()
        assert len(response) == 20

    def test_openbrewerydb_2(self, brewery_type):
        """ Check filtering by brewery_type """
        response = requests.get('https://api.openbrewerydb.org/breweries?by_type=' + brewery_type)
        assert response.status_code == 200
        assert response.json()[0]["brewery_type"] == brewery_type

    def test_openbrewerydb_3(self, brewery_name):
        """ Check filtering of breweries by name """
        response = requests.get("https://api.openbrewerydb.org/breweries?by_name=" + brewery_name)
        assert response.status_code == 200
        assert brewery_name in response.json()[0]["name"]

    def test_openbrewerydb_4(self, brewery_id):
        """ Check that information about any brewery can be fetched by id.
        Test via a fixture."""
        response = BrewerySiteMethods.request_brewery_information(brewery_id)
        assert response.status_code == 200
        assert response.json()["id"] == brewery_id

    @pytest.mark.parametrize('ids_param', BrewerySiteMethods.get_all_breweries_id())
    def test_openbrewerydb_5(self, ids_param):
        """ Check that information about any brewery can be fetched by id.
        Test via the pytest parametrize marker."""
        response = requests.get('https://api.openbrewerydb.org/breweries/' + str(ids_param))
        assert response.status_code == 200
        assert response.json()["id"] == ids_param
| 43.547619 | 98 | 0.686714 | 217 | 1,829 | 5.62212 | 0.317972 | 0.080328 | 0.081967 | 0.078689 | 0.468852 | 0.468852 | 0.441803 | 0.381148 | 0.259016 | 0.093443 | 0 | 0.017018 | 0.196829 | 1,829 | 41 | 99 | 44.609756 | 0.813479 | 0.217059 | 0 | 0.166667 | 0 | 0 | 0.14956 | 0 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0.208333 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b39d2f478b8d05032480b473417744f719f3f2ee | 1,966 | py | Python | apps/finance_request/models.py | MegaSoftVision/finance_credit | a4350f525cd45b1e40d0a8be0ca1f3d77fdbe039 | [
"MIT"
] | null | null | null | apps/finance_request/models.py | MegaSoftVision/finance_credit | a4350f525cd45b1e40d0a8be0ca1f3d77fdbe039 | [
"MIT"
] | null | null | null | apps/finance_request/models.py | MegaSoftVision/finance_credit | a4350f525cd45b1e40d0a8be0ca1f3d77fdbe039 | [
"MIT"
] | null | null | null | from django.db import models
from apps.users.models import User
from django.core.validators import MaxValueValidator, MinValueValidator
class Client(models.Model):
    GOOD = 'gd'
    REGULAR = 'rg'
    BAD = 'bd'
    NULL = 'nl'
    DEBT_SCORE_CHOICES = [
        (GOOD, 'Bueno'),
        (REGULAR, 'Regular'),
        (BAD, 'Malo'),
        (NULL, 'Nulo'),
    ]

    id = models.AutoField(primary_key=True, unique=True)
    first_name = models.CharField('Nombre Completo', max_length=30)
    last_name = models.CharField('Apellidos', max_length=30)
    email = models.EmailField('Correo Electronico', unique=True)
    debt_mount = models.IntegerField(
        'Monto de la deuda',
        default=0,
        validators=[MinValueValidator(0)]
    )
    debt_score = models.CharField(
        'Puntuación de la deuda',
        max_length=2,
        choices=DEBT_SCORE_CHOICES,
        default=NULL
    )
    artificial_indicator = models.IntegerField(
        'Indicador Artificial',
        default=1,
        validators=[MaxValueValidator(10), MinValueValidator(1)]
    )

    class Meta:
        verbose_name = 'Cliente'
        verbose_name_plural = 'Clientes'

    REQUIRED_FIELDS = ['__all__']

    def __str__(self):
        return "%s %s" % (self.first_name, self.last_name)


class RequestCredit(models.Model):
    id = models.AutoField(primary_key=True)
    client = models.ForeignKey(Client, related_name='requests_credits', on_delete=models.CASCADE)
    request_mount = models.IntegerField(
        'Monto de la solicitud',
        default=1,
        validators=[MaxValueValidator(50000), MinValueValidator(1)]
    )
    is_approved = models.BooleanField('Aprobado?', default=False)

    class Meta:
        verbose_name = 'Solicitud'
        verbose_name_plural = 'Solicitudes'

    REQUIRED_FIELDS = ['__all__']

    def __str__(self):
        return f'Cliente: {self.client.first_name} {self.client.last_name} - Solicitud de: {self.request_mount}'
| 29.343284 | 112 | 0.65412 | 215 | 1,966 | 5.75814 | 0.44186 | 0.035541 | 0.025848 | 0.038772 | 0.155089 | 0.155089 | 0.053312 | 0 | 0 | 0 | 0 | 0.011913 | 0.231434 | 1,966 | 66 | 113 | 29.787879 | 0.807412 | 0 | 0 | 0.145455 | 0 | 0.018182 | 0.164377 | 0.023919 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.054545 | 0.036364 | 0.527273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b39fb66c9264ab0daee2f5939dd75f3e42854c9f | 938 | py | Python | Code/mreset.py | andrewwhipple/MeowgicMatt | 189c2c90a75eeb9c53d3be03c40f6b5792ceb548 | [
"MIT"
] | null | null | null | Code/mreset.py | andrewwhipple/MeowgicMatt | 189c2c90a75eeb9c53d3be03c40f6b5792ceb548 | [
"MIT"
] | null | null | null | Code/mreset.py | andrewwhipple/MeowgicMatt | 189c2c90a75eeb9c53d3be03c40f6b5792ceb548 | [
"MIT"
] | null | null | null | #Meowgic Matt reset module
import os
import json
#Debug function to call out that the module successfully loaded.
def meow():
    print("mreset loaded")
#Resets Meowgic Matt to factory conditions, DELETING EVERYTHING CURRENTLY IN SUBFOLDERS.
def reset():
    os.system("rm -rf ../Edited/")
    os.system("rm -rf ../Published/")
    os.system("rm -rf ../Queue/")
    os.system("rm -rf ../Raw/")
    os.system("rm -rf ../RSS/")
    os.system("rm -rf ../Data/")
    with open("dataStore.json", "r") as dataReadFile:
        data = json.load(dataReadFile)
        podcasts = data["podcasts"]
        for cast in podcasts:
            os.system("rm -rf " + cast + "/")
        dataReadFile.close()
    with open("dataStore.json", "w") as dataWriteFile:
        resetDataString = '{"Setup Required":"true","podcasts":[]}'
        dataWriteFile.write(resetDataString)
        dataWriteFile.close()
print "\nMeowgic Matt Reset!\n" | 33.5 | 91 | 0.621535 | 113 | 938 | 5.159292 | 0.504425 | 0.096055 | 0.120069 | 0.144082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223881 | 938 | 28 | 92 | 33.5 | 0.800824 | 0.186567 | 0 | 0 | 0 | 0 | 0.28628 | 0.040897 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b3ae79059719c022e0deda9fca2855194972e39f | 70,728 | py | Python | DatabaseStocks.py | VaseSimion/Finance | 63d04a3bb03177e0f959c0c79fa922aecb50ae11 | [
"MIT"
] | 1 | 2021-01-26T11:59:59.000Z | 2021-01-26T11:59:59.000Z | DatabaseStocks.py | VaseSimion/Finance | 63d04a3bb03177e0f959c0c79fa922aecb50ae11 | [
"MIT"
] | null | null | null | DatabaseStocks.py | VaseSimion/Finance | 63d04a3bb03177e0f959c0c79fa922aecb50ae11 | [
"MIT"
] | 1 | 2021-01-26T15:53:58.000Z | 2021-01-26T15:53:58.000Z | import random
list_of_technology = ["AAPL", "ACIW", "ACN", "ADBE", "ADI", "ADP", "ADSK", "AKAM", "AMD", "AMAT",
"ANET", "ANSS", "ARW", "ATVI", "AVGO", "AVT", "AZPN", "BA", "BB", "BLL",
"BLKB", "BR", "CDK", "CDNS", "CERN", "CHKP", "CIEN", "COMM", "COUP",
"CREE", "CRM", "CRUS", "CRWD", "CSCO", "CVLT",
"CY", "CYBR", "DBX", "DDD", "DDOG", "DLB", "DOCU", "DOX", "DXC", "EA", "EFX", "EQT",
"FCEL", "FDS", "FEYE", "FIT", "FTNT", "FVRR", "G", "GE", "GLW", "GPRO",
"GRMN", "GRPN", "HIMX", "HPE", "HPQ", "IAC", "IBM",
"INFO", "INTC", "IPGP", "IT", "JBL", "JCOM", "KEX", "LOGI",
"MCHP", "MSFT", "MCO", "MDB", "MDRX", "MOMO", "MSCI", "MSI", "MU", "NCR",
"NLOK", "NLSN", "NOW", "NUAN",
"NVDA", "NTAP", "NTGR", "NXPI", "OKTA", "ON", "PANW", "PAYX", "PBI",
"PCG", "PFPT", "PING", "PTC", "PINS", "QCOM", "ORCL", "QRVO", "OTEX", "SABR",
"SHOP", "SNAP", "SPCE", "SPGI", "SPLK", "SQ", "SSNC",
"STM", "STX", "SYY", "SWKS", "TEAM", "TER", "TEVA", "TLND",
"TSM", "TTWO", "TWLO", "VEEV", "VLO", "VMW", "VRSK", "VSAT",
"WDC", "WIX", "WORK", "ZM", "XRX", "ZBRA", "ZEN", "ZNGA", "ZS"]
list_of_materials = ["AA", "ATI", "BLL", "CCJ", "CENX", "CCK", "UFS", "EXP", "EGO", "FCX", "GPK",
"IP", "KGC", "LPX", "MLM", "NEM", "NUE", "OC", "OI", "PKG", "PAAS", "RS", "RGLD", "SON", "SCCO",
"TRQ", "X", "VALE", "VMC", "WPM", "AUY", "MMM", "AIV", "ALB", "APD", "ASH", "AVY", "CE", "CX",
"CF", "CTVA", "DOW", "DD", "EXP", "EMN", "ECL", "IFF", "FMC", "HUN", "ICL", "LYB", "MEOH", "MOS",
"NEU", "POL", "PPG", "RPM", "SHW", "SLGN", "SQM", "GRA", "WLK"]
list_of_communication_services = ["ACIA", "GOOG", "AMCX", "T", "BIDU", "CTL", "CHTR", "CHL", "CHU", "CHT",
"CMCSA", "DISCA", "DISH", "DIS", "EXPE", "FFIV", "FB", "FOXA", "FTR", "GDDY",
"GRUB", "IPG", "LBTYA", "LN", "LYFT", "MTCH", "NFLX", "OMC",
"PINS", "RCI", "ROKU", "SBAC", "SNAP", "SPOT", "S", "TMUS", "TU", "TTD", "TRIP",
"TWTR", "UBER", "VEON", "VZ", "VIAC", "WB", "YNDX", "Z"]
list_of_utilities_and_real_estate = ["AES", "AEE", "AEP", "AWK", "WTR", "ATO", "CMS", "ED", "DUK", "EIX", "EVRG",
"EXC", "FE", "MDU", "NFG", "NEE", "NI", "NRG", "OGE", "PPL", "PEG", "SRE", "SO",
"UGI", "XEL", "AGNC", "ARE", "AMH", "AVB", "BXP", "CPT", "CBL", "CBRE", "CB",
"CLNY", "CXP", "ESS", "FRT", "GLPI", "JLL", "PB", "SVC", "SITC"]
list_of_energy = ["LNT", "AR", "APA", "BKR", "COG", "CNP", "CHK", "CVX", "SNP", "CNX", "CXO", "COP",
"CLB", "DCP", "DVN", "DO", "D", "DRQ", "DTE", "ENS", "EPD", "EOG", "EQT", "XOM", "FSLR", "GPOR",
"HAL", "HP", "HES", "HFC", "KMI", "LPI", "MMP", "MRO", "MPC", "MUR", "NBR", "NOV",
"NS", "OAS", "OXY", "OII", "OKE", "PBR", "PSX", "PXD", "QEP", "RRC", "RES", "SSL", "SLB", "SM",
"SWN", "SPWR", "TRGP", "FTI", "VAL", "VLO", "VSLR", "WLL", "WMB", "WEC", "INT"]
list_of_industrials = ["AOS", "AYI", "AEIS", "ACM", "AER", "AGCO", "ALLE", "ALSN", "AME", "APH", "AXE", "BA", "CAT",
"CLH", "CGNX", "CFX", "CR", "CSX", "CMI", "DE", "DCI", "DOV", "ETN", "EMR", "FAST", "FDX",
"FLEX", "FLS", "FLR", "GD", "GE", "GWR", "GLNG", "GGG", "HXL", "HON", "HII", "IEX", "ITW",
"IR", "ITRI", "JEC", "JCI", "KSU", "KBR", "KMT", "KEYS", "KEX", "KNX", "LII", "LECO", "LFUS",
"LMT", "MIC", "MIDD", "MSM", "NDSN", "NOC", "NSC", "ODFL", "PH", "PNR", "PWR", "RTN", "RBC",
"RSG", "ROK", "ROP", "R", "SPR", "SPXC", "TDY", "TEX", "TXT", "TTC", "TDG", "TRMB", "TRN", "UAL",
"URI", "UTX", "UNP", "UPS", "VMI", "WAB", "WM", "WCC", "XPO", "XYL"]
list_of_consumer_discretionary = ["ANF", "ADNT", "ALK", "BABA", "AMZN", "AAL", "AEO", "APTV", "ASNA", "AN", "AZO",
"CAR",
"BBBY", "BBY", "BJ", "BLMN", "BWA", "BV", "EAT", "BC", "BURL", "CAL", "GOOS", "CPRI",
"KMX", "CRI", "CVNA", "CHWY", "CMG", "CHRW", "CNK", "CTAS", "COLM", "CPA", "CPRT",
"DHI", "DAN", "DLPN", "DAL", "DKS", "DDS", "DOL", "DNKN", "EBAY", "ELF", "ETSY",
"RACE", "FCAU", "FL", "F", "FBHS", "FOSL", "GME", "GPS", "GTX", "GPC", "GIL", "GM",
"GNC", "GT", "HRB", "HBI", "HOG", "HAS", "HD", "H", "IGT", "IRBT", "ITT",
"JPC", "SJM", "JD", "JBLU", "JMIA", "KAR", "KSS", "KTB", "LB", "LVS", "LEA", "LEG",
"LEN", "LEVI", "LYV", "LKQ", "LOW", "LULU", "M", "MANU", "MAN", "MAR", "MAS", "MAT",
"MCD", "MLCO", "MELI", "MGM", "MHK", "NWL", "NKE", "NIO", "JWN", "NVEE", "OSK",
"PTON", "PDD", "PII", "POOL", "PHM", "PVH", "RL", "RVLV", "RHI", "RCL", "SBH",
"SGMS", "SMG", "SEE", "SIX", "SNA", "LUV", "SAVE", "SWK", "SBUX", "TPR",
"TEN", "TSLA", "MSG", "REAL", "TJX", "THO", "TIF", "TOL", "TSCO", "TUP",
"ULTA", "UAA",
"URBN", "VFC", "VC", "W", "WEN", "WHR", "WSM", "WW", "WYND", "WYNN"]
list_of_consumer_staples = ["MO", "ADM", "BYND", "BRFS", "BG", "CPB", "CHD", "CLX", "KO", "CL", "CAG", "STZ", "COTY",
"DG", "ENR", "EL", "FLO", "GIS", "HLF", "HLT", "HRL", "INGR", "K", "KDP", "KMB", "KHC",
"KR", "MKC", "TAP", "MDLZ", "PEP", "PM", "RAD", "SPB", "SFM", "SYY", "TGT",
"HAIN", "TSN", "UNFI", "VFF", "WBA", "WMT", "YUM"]
list_of_healthcare = ["ABT", "ABBV", "ACAD", "ALC", "ALXN", "ALGN", "ALKS", "AGN", "ALNY", "ABC", "AMGN", "ANTM",
"ARNA", "AVTR", "BHC", "BAX", "BDX", "BIO", "BIIB", "BMRN", "BSX", "BMY", "BKD", "BRKR", "CARA",
"CAH", "CNC", "CI", "COO", "CRBP", "CRSP", "CVS", "DHR", "DVA", "EW", "LLY", "EHC", "ENDP",
"EXAS", "GILD", "GWPH", "HCA", "HUM", "IDXX", "ILMN", "INCY", "INVA", "ISRG", "NVTA", "IQV",
"JAZZ", "JNJ", "LH", "LVGO", "MCK", "MD", "MDT", "MRK", "MTD", "MYL", "NGM", "OPK", "PKI",
"PFE", "QGEN", "REGN", "SGEN", "SYK", "TDOC", "TFX", "THC", "TEVA", "TMO", "TLRY", "UNH",
"UHS", "VAR", "VRTX", "WAT", "ZBH", "ZTS"]
list_of_financials = ["AFC", "AIG", "ACC", "AXP", "AMT", "AMP", "NLY", "AON", "ACGL", "ARCC", "AJG", "AIZ", "AGO",
"AXS", "BAC", "BK", "BKU", "BLK", "BOKF", "BRO", "COF", "CBOE", "CBRE", "SCHW",
"CIM", "CINF", "CIT", "C", "CME", "CNO", "CMA", "CBSH", "CXW", "BAP", "CCI", "CWK", "DLR", "DFS",
"DEI", "DRE", "ETFC", "EWBC", "EQIX", "EQR", "RE", "EXR", "FII", "FIS", "FNF", "FITB", "FHN",
"FRC", "BEN", "CFR", "GNW", "GPN", "GS", "HBAN", "PEAK", "HIG", "HST", "HHC", "IEP", "ICE",
"IBN", "IVZ", "IRM", "ITUB", "JKHY", "JHG", "JEF", "JPM", "KEY", "KRC", "KIM", "KKR", "LAZ",
"LM", "LC", "TREE", "LNC", "L", "LPLA", "MTB", "MKL", "MMC", "MA", "MET", "MTG", "MS", "NDAQ",
"NTRS", "NYCB", "ORI", "PYPL", "PBCT", "PNC", "BPOP", "PFG", "PSEC", "PRU", "RDN", "RJF",
"RLGY", "REG", "RF", "RGA", "RNR", "SEIC", "SBNY", "SLM", "SQ", "STT", "SF", "STI", "SIVB",
"SNV", "TROW", "AMTD", "ALL", "BX", "PGR", "TD", "TRV", "TFC", "TWO", "USB", "UBS",
"UMPQ", "UNM", "V", "WRB", "WBS", "WFC", "WELL", "WU", "WEX", "WLTW", "WETF", "ZION"]
european_stocks = ["ADS.DE", "ALO.PA", "BAYN.DE", "BMW.DE", "IFX.DE", "LHA.DE", "MAERSK-B.CO", "NOVO-B.CO",
"NZYM-B.CO", "SU.PA", "VWS.CO"]
def get_lists():
    print(len(list_of_industrials + list_of_technology + list_of_communication_services + list_of_energy +
              list_of_utilities_and_real_estate + list_of_materials + list_of_consumer_discretionary +
              list_of_consumer_staples + list_of_healthcare + list_of_financials))
    list_CFD = list_of_industrials + list_of_technology + list_of_communication_services + list_of_energy + \
               list_of_utilities_and_real_estate + list_of_materials + list_of_consumer_discretionary + \
               list_of_consumer_staples + list_of_healthcare + list_of_financials
    random.shuffle(list_CFD)
    return list_CFD
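Some tickers appear in more than one sector list above (e.g. "BLL" is in both the technology and materials lists, "SQ" in both technology and financials), so the concatenation in get_lists returns duplicates. A deduplicating variant can be sketched as follows; dedupe_lists is a hypothetical helper, not part of the original module:

```python
import random

def dedupe_lists(*ticker_lists):
    # Flatten all sector lists into one sequence.
    combined = [ticker for lst in ticker_lists for ticker in lst]
    # dict.fromkeys drops duplicates while preserving first-seen order.
    unique = list(dict.fromkeys(combined))
    random.shuffle(unique)
    return unique
```

Calling it as `dedupe_lists(list_of_technology, list_of_materials, ...)` would yield each ticker at most once, which matters if the shuffled list is used to avoid fetching the same symbol twice.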
# investing side of Trading212
investing_list_of_energy = ["ADES", "AES", "ARPL", "AMRC", "AMSC", "APA", "ARCH", "AROC", "BKR", "BLDP", "BSM", "BCEI",
"COG", "CRC", "CPE", "CNQ", "CSIQ", "CQP", "CVX", "XEC", "CMS", "CRK", "CXO", "COP", "CEIX",
"CLR", "CZZ", "CVI", "DCP", "DVN", "DO", "FANG", "DRQ", "DTE", "ENB", "ET", "ENLC", "ENPH",
"ETR", "EPD", "EVA", "EOG", "EQT", "EQNR", "ES", "EXC", "EXTN",
"XOM", "FSLR", "FCEL", "GEOS", "HAL", "HP", "HES", "HEP", "JKS", "KMI", "KOS", "MMP", "MRO",
"MPC", "MPLX", "MUR",
"MUSA", "NC", "NOV", "NGS", "NEE", "DNOW", "OMP", "OXY", "OKE", "PBF", "BTU", "PBA",
"PBR", "PSX", "PSXP", "PXD", "PAA", "PLUG", "RRC",
"SLB", "SHLX", "SEDG", "SO", "SWN", "SPI", "SPWR", "RUN", "TRGP", "TRNX", "TRP", "FTI",
"TOT", "RIG", "VAL", "VLO", "VET", "VNOM", "VSLR", "VOC", "WES", "WMB", "WPX"]
investing_list_of_materials = ["MMM", "ASIX", "AEM", "AIV", "ADP", "ALB", "AA", "AMCR", "AU", "AVY", "BCPC", "BLL",
"GOLD", "BBL", "BHP", "BCC", "BREE.L", "CCJ", "CSL", "CRS", "CE", "CF", "CLF", "CDXS",
"BVN", "DOW", "DRD",
"DD", "EMN", "ESI", "UUUU", "EQX", "AG", "FMC", "FNV", "FCX", "GCP", "ICL", "IP", "IFF",
"LIN", "LAC", "LTHM", "LYB", "MLM", "MATW", "MDU", "NEM", "NXE", "NTIC", "NUE", "NTR",
"OI", "PAAS", "PPG", "RIO", "RGLD", "SAND", "SSL", "SCHN", "SEE", "SHW", "SBSW", "SMTS",
"SCOO", "STLD", "MOS", "TREX", "URG", "UEC", "VVV", "VRS", "VMC", "WRK", "WPM", "AUY"]
investing_list_of_industrials = ["AOS", "AIR", "ATU", "ADSW", "AEIS", "ACM", "AVAV", "ALG", "ALRM", "ALK", "ALGT",
"ALLE", "AMOT", "AME", "APH", "AGX", "ATRO", "AAXN", "BMI", "BDC", "BHE", "BEST",
"BLNK", "BE", "BA", "BGG", "CHRW", "WHD", "CAI", "CAMT", "CNI", "CARR", "CAT", "CFX",
"CTG", "CPA", "CVA", "CYRX", "CSX", "CMI", "CW", "CYBE", "DE",
"DAL", "DOV", "DYNT", "ETN", "ESLT", "EME", "EMR", "WATT", "ERII", "AQUA", "EXPD",
"EXPO", "FAST", "FDX", "FLS", "FLR", "FORR", "FTV", "FRG", "FELE", "FTDR", "GLOG",
"GLOP", "GD", "GE", "GFL", "EAF", "GHM", "HEES", "HDS", "HCCI",
"HON", "HWM", "HUBB", "ICHR", "ITW", "IR", "IVAC", "ITRI", "JBHT", "J", "JBLU", "JBT",
"JCI", "KAI", "KSU", "KEYS",
"KE", "LHX", "LSTR", "LTM", "LECO", "LFUS", "LMT", "MAGS", "MNTX", "MRCY", "MIDD",
"MTSC", "NSSC", "NATI", "LASR", "NAT", "NSC", "NOC", "NOVT", "ODFL", "OSIS", "OTIS",
"PCAR", "PH", "PNR", "POWL", "PWR", "RBC", "RSG", "RGP",
"RXN", "ROK", "ROP", "R", "SAIA", "SHIP", "SITE", "SNA", "LUV", "SAVE", "SPXC", "FLOW",
"SXI", "SRCL", "TEL", "TNC", "TTEK", "TXT", "GBX", "HCKT", "SHYF", "TKR", "TOPS",
"BLD", "TRNS", "TDG", "TRMB", "TWIN", "UNP", "UPS", "URI", "VSEC", "GWW", "WCN", "WM",
"WTS", "WLDN", "WWD", "WKHS", "WRTC", "XYL", "ZTO"]
investing_list_of_consumer_discretionary = ["FLWS", "TWOU", "AAN", "ANF", "ACTG", "ACCO", "AEY", "ADNT", "ATGE", "AAP",
"BABA", "AMZN", "AMC", "AAL", "APEI", "AMWD", "CRMT", "APTV", "ARC", "FUV",
"ARCO", "ASGN", "AN",
"AZO", "CAR", "AZEK", "BBSI", "BECN", "BBBY", "BBY", "BGSF", "BGFV", "BJRI",
"BLMN", "APRN", "BOOT", "BWA",
"BYD", "BRC", "BV", "BC", "BKE", "BLDR", "BURL", "CZR", "CAL", "CWH",
"GOOS", "CPRI", "KMX", "CCL",
"CVNA", "CSPR", "CATO", "FUN", "CHWY", "CMG", "CHH", "CHUY", "CNK", "CTAS",
"CPRT", "CTVA", "CRVL", "CROX", "DHI", "DRI", "PLAY", "TACO",
"DENN", "DKS", "DPZ", "DKNG", "EBAY", "ECL", "LOCO", "EEX", "ETSY",
"FTCH", "RACE", "FCAU", "FCFS", "FVRR", "FL", "F", "FRSX", "FOR", "FOSL",
"FOXF", "FRPT", "FNKO",
"GME", "GPS", "GTX", "GM", "GNTX", "GPC", "GDEN", "GT", "GHG", "GFF",
"GRWG", "GES", "HRB", "HBI", "HOG", "HAS",
"HSII", "MLHR", "HIBB", "HLT", "HMSY", "HD", "HUD", "NSP", "TILE", "IRBT",
"JD", "JMIA", "KELYA", "KEQU", "KFRC", "KBAL", "KSS", "KTB",
"KFY", "KRUS", "LB", "LAKE", "LAUR", "LEG", "LEN", "LEVI", "LQDT", "LAD",
"LYV", "LOW", "LULU", "M", "MANU",
"MAN", "HZO", "MAR", "MAS", "MAT", "MCD", "MELI", "MTH", "METX", "MGM",
"MOGU", "MCRI", "MNRO", "NRC", "EDU", "NWL", "NKE",
"NIO", "JWN", "NCLH", "ORLY", "OSW", "OSTK", "PTON", "PENN", "PETQ", "PETS",
"PDD", "PLNT", "PLYA", "PII", "RL", "PHM", "NEW", "PVH", "QRTEA", "RDIB",
"RCII", "QSR", "RVLV", "RHI", "ROST", "RCL", "RUHN", "RUTH", "SBH", "SGMS",
"SEAS", "SHAK", "SCVL", "SSTI", "SIG", "SIX", "SKY",
"SKYW", "SNBR", "SLM", "SPWH", "SWK", "SBUX", "SHOO", "SFIX", "STRA",
"TAL", "TPR", "TH", "TMHC", "TSLA", "TXRH", "CAKE", "PLCE", "MIK", "REAL",
"WEN", "THO",
"TIF", "TJX", "TOL", "TM", "TSCO", "TNET", "TBI", "ULTA", "UAA", "UNF",
"UAL", "URBN", "VFC", "MTN", "VVI", "VIPS", "SPCE", "VSTO", "VRM", "WTRH",
"WSG", "W", "WSTG", "WHR", "WING", "WINA", "WW", "WYND", "WYNN", "XSPA",
"YUMC", "YUM", "YJ", "ZUMZ"]
investing_list_of_consumer_staples = ["MO", "BUD", "ADM", "BGS", "BYND", "BIG", "BJ", "BTI", "BF-A", "BG", "CALM",
"CVGW", "CPB", "CELH", "CHD", "CLX", "KO", "CCEP", "CL", "CAG",
"STZ", "COST", "COTY", "CRON", "DAR", "DEO", "DG", "DLTR", "ELF", "EL", "GIS",
"GO", "HELE", "HLF",
"HRL", "IMKTA", "IPAR", "SJM", "K", "KDP", "KMB", "KHC", "KR", "MKC", "MGPI",
"TAP", "MDLZ", "MNST", "FIZZ", "OLLI", "PEP", "PAHC",
"PM", "PPG", "PG", "RAD", "SAFM", "SPTN", "SYY", "TGT", "SAM", "CHEF", "HSY",
"TSN", "UN", "UVV", "VFF", "WBA", "WMT", "WMK"]
investing_list_of_healthcare = ["TXG", "ABT", "ABBV", "ABMD", "ACIU", "ACHC", "ACAD", "AXDX", "XLRN", "ACOR", "AHCO",
"ADPT", "ADUS", "ADVM", "AERI", "AGEN", "AGRX", "A", "AGIO", "AIMT", "AKCA", "AKBA",
"AKRO", "AKUS", "ALBO",
"ALC", "ALEC", "ALXN", "ALGN", "ALKS", "ALLK", "AHPI", "ALLO", "ALNY", "AMRN", "AMED",
"AMS", "ABC", "AMGN", "FOLD", "AMN", "AMRX", "AMPH", "ANAB", "ANGO", "ANIP", "ANIK",
"ANPC", "ATRS", "ANTM", "APLS", "APHA", "AMEH", "APLT", "APRE", "APTO", "ARCT", "ARQT",
"ARDX",
"ARNA", "ARWR", "ARVN", "ASDN", "ASMB", "AZN", "ATRA", "ATNX", "ATHX", "BCEL", "ATRC",
"ATRI", "ACB", "AVDL", "AVNS", "AVTR", "AVGR", "AVRO", "AXNX", "AXGT", "AXSM",
"BXRX", "BHC", "BAX", "BEAM", "BDX", "BGNE", "BLCM", "BYSI", "TECH", "BASI", "BIOC",
"BCRX", "BDSI", "BIIB", "BLFS", "BMRN", "PHGE", "BNTX", "BSTC", "BEAT", "BTAI",
"BLUE", "BPMC", "BSX", "BBIO", "BMY", "BRKR", "BNR", "CABA", "CGC", "CARA",
"CRDF", "CAH", "CSII", "CDNA", "CSTL", "CPRX", "CNC", "CNTG", "CERS", "GIB", "CCXI",
"CBPO", "CPHI", "CI", "CLIN.L", "CLVS", "CODX", "COCP", "CHRS", "COLL", "CGEN", "CNST",
"CRBP", "CORT", "CRTX", "CVET", "CRNX", "CRSP",
"CCRN", "CUE", "CUTR", "CVS", "CBAY", "CYTK", "CTMX", "DHR", "DVA", "DCPH", "DNLI",
"XRAY", "DMTK", "DXCM", "DRNA", "DFFN", "DVAX", "EGRX", "EDIT", "EW", "LLY", "ENTA",
"ENDP", "EPZM", "ESPR", "ESTA", "EXAS", "EXEL", "FATE", "FGEN", "FMTX",
"FREQ", "FUSN", "GTHX", "GBIO", "GNPX", "GERN", "GILD", "GSK", "GBT", "GOSS", "GH",
"GHSI", "GWPH", "HALO", "HBIO", "HCA", "HQY", "HTBX", "HSIC", "HEPA", "HRTX", "HSKA",
"HEXO", "HOLX", "FIXX", "HZNP", "HUM", "IMAB", "IBIO", "ICLR", "ICUI", "IDXX", "IGMS",
"ILMN", "IMGN", "IMMU", "IMRN", "NARI", "INCY", "IFRX", "INMD", "INVA", "INGN", "INO",
"INSM", "PODD", "NTEC", "LOGM", "NTLA", "ICPT", "ISRG", "NVTA", "IONS", "IOVA", "IQV",
"IRTC", "IRWD", "JAZZ", "JNJ", "KALA", "KRTX", "KPTI", "KNSA", "KTOV", "KOD", "KRYS",
"KURA", "LH", "LTRN", "LNTH", "LMAT", "LHCG", "LGND", "LIVN", "LVGO", "LMNX", "MGNX",
"MDGL", "MGLN", "MNK", "MNKD", "MASI",
"MCK", "MEDP", "MDT", "MEIP", "MGTX", "MRK", "VIVO", "MMSI", "MRSN", "MRUS", "MTP",
"MRTX", "MIRM", "MRNA", "MTEM", "MNTA", "MORF", "MYL", "MYOK", "MYGN", "NSTG", "NH",
"NK", "NTRA", "NTUS", "NKTR", "NEOG", "NEO", "NBIX", "NXTC", "NGM", "NVS", "NVAX",
"NVO", "NVCR", "NUVA", "OCGN", "ODT", "OMER", "OTRK", "OPK", "OPCH", "OGEN", "OSUR",
"ORTX", "OGI", "OFIX", "KIDS", "OYST", "PACB", "PCRX", "PRTK", "PASG", "PDCO", "PAVM",
"PDLI", "PKI", "PRGO", "PFE", "PHAT", "PLRX", "PYPD", "PPD", "PRAH", "PGEN", "PRPO",
"DTIL", "PINC", "PRVL", "PRNB", "PROG", "PGNY", "PRTA", "PRVB", "PTCT", "PULM", "PLSE",
"PBYI", "QLGN", "QTRX", "DGX", "QDEL", "QTNT", "RDUS", "RDNT", "RAPT", "RETA", "RDHL",
"REGN", "RGNX", "RLMD", "RPTX", "RGEN", "REPL", "KRMD", "RTRX", "RVNC", "RVMD", "RYTM",
"RCKT", "RMTI", "RPRX", "RUBY", "SAGE", "SNY", "SRPT", "SRRK", "SGEN", "SNCA", "SWAV",
"SIBN", "SIGA", "SILK", "SINT", "SDC", "SOLY", "SRNE", "SWTX", "STAA", "STOK", "SYK",
"SNSS", "SUPN", "SGRY", "SRDX", "SNDX", "SYNH", "SNV", "SYRS", "TCMD", "TNDM", "TARO",
"TDOC", "TFX", "THC", "TEVA", "TGTX", "COO", "ENSG", "PRSC", "TXMD", "TBPH", "TMO",
"TLRY", "TMDI", "TTNP", "TVTY", "TBIO", "TMDX",
"TCDA", "TRIL", "TRIB", "GTS", "TPTX", "TWST", "RARE", "QURE", "UNH", "UTHR", "UHS",
"URGN", "VNDA", "VREX", "VAR", "VXRT", "VBIV", "VCYT", "VRTX", "VIE", "VMD", "VKTX",
"VIR", "VYGR", "WAT", "WST", "WMGI", "XBIT", "XNCR", "XENE", "YMAB", "ZLAB", "ZNTL",
"ZBH", "ZIOP", "ZTS", "ZGNX", "ZYXI"]
investing_list_of_financials = ["SRCE", "QFIN", "JFU", "AER", "AMG", "AFL", "AGMH", "AGNC", "AIG", "AL", "ALEX", "ADS",
"AB", "ALL", "ALLY", "AMBC", "AXP", "AMP", "ABCB", "AMSF", "NLY",
"AON", "ARI", "APO", "ACGL", "ARCC", "ARES", "AROW", "AJG", "APAM", "ASB", "AC", "AIZ",
"AGO", "AUB", "AXS", "BANF", "BSMX", "BAC", "BOH", "BMO", "BK", "BNS", "OZK", "BKU",
"BANR", "BBDC", "BCBP", "BRK-B",
"BLK", "BCOR", "BOKF", "BDGE", "BHF", "BRLIU", "BYFC", "BAM", "BPYU", "BRO", "CADE",
"CM", "COF", "CPTA", "CFFN", "CATM", "CATY", "CBTX", "SCHW", "CIM", "CB", "CCXX",
"CINF", "C", "CHCO", "CME", "CCH", "COLB", "CMA", "CBU",
"CODI", "COWN", "BAP", "CACC", "CRT", "CVBF", "DLR", "DFS", "DX", "ETFC", "EGBN",
"EHTH", "EFC", "ECPG", "ESGR", "EPR", "ERIE", "ESNT", "EVR", "FANH", "FIS", "FITB",
"BUSE", "FCNCA", "FFIN", "FHB", "FHN",
"FISV", "FLT", "FEAC", "WPF", "FMCI", "BEN", "FRHC", "FSK", "FULT", "FUSE", "FUTU",
"GATX", "GFN", "GNW", "GBCI", "GOOD", "GAIN", "GPN", "GL", "GS", "GSHD", "AJX", "GSKY",
"GDYN", "HLNE", "HASI", "HIG", "HDB", "HTLF", "HOPE", "HRZN", "HLI", "HSBC", "HBAN",
"IBKR", "ICE", "IVZ", "IVR", "ISBC", "ITUB",
"JKHY", "JRVR", "JHG", "JEF", "JFIN", "JPM", "KMPR", "KW", "KCAC", "KEY", "KNSL", "KKR",
"LADR", "LKFN", "LCA", "LAZ", "LGC", "LM", "TREE", "LX", "LNC", "L", "MTB", "MAIN",
"MFC", "MKL", "MMC",
"MA", "MCY", "MET", "MFA", "MC", "MGI", "MS", "COOP", "NDAQ", "NAVI", "NNI", "NRZ",
"NYMT", "NREF", "NKLA", "NMIH", "NTRS", "NWBI", "OCFC", "ONB", "ORI", "OXLC", "PPBI",
"PACW", "PLMR", "PKBK", "PAYC", "PYPL", "PAYS", "PBCT", "PNFP", "PNC",
"BPOP", "PFC", "PRA", "PGR", "PSEC", "PRU", "QIWI", "QD", "RDN", "RWT", "RF", "RNST",
"RPAY", "RY", "SAFT", "SASR", "SLCT", "SIGI", "SLQT", "FOUR", "SUNS", "SBSI", "SPAQ",
"SQ", "STWD", "STFC", "STT", "SCM", "STNE", "SLF", "SIVB", "SYF", "TROW",
"AMTD", "TCBI", "TFSL", "BX", "THG", "TRV", "TPRE", "TSBK", "TD", "SHLL", "TOWN.L",
"TSC",
"TFC", "TRUP", "TWO", "USB", "UMBF", "UMPQ", "UBSI", "UIHC", "UVE", "UNM", "VLY", "VEL",
"VCTR", "VBFC", "VIRT", "V", "WAFD", "WSBF", "WSBC", "WFC", "WAL", "WNEB", "WU", "WHG",
"WLTW", "WTFC", "XP", "ZION"]
investing_list_of_technology = ["ONEM", "DDD", "ATEN", "ACN", "ACIW", "ATVI", "ADBE", "ADTN", "AMD", "AGYS", "API",
"AIRG", "AKAM", "KERN", "AKTS", "MDRX", "AOSL", "AYX",
"AMBA", "AMSWA", "AMKR", "ASYS", "ADI", "PLAN", "ANSS", "ATEX", "APPF", "APPN", "AAPL",
"APDN", "AMAT", "AAOI", "ANET", "ARLO", "ASML",
"AZPN", "ASUR", "TEAM", "ATOM", "AUDC", "AEYE", "ADSK", "ADP", "AVNW", "AVID", "AVT",
"AWRE", "ACLS", "AXTI", "BAND", "BZUN",
"BNFT", "BILI", "BILL", "BLKB", "BB", "BL", "BAH", "BRQS", "EPAY", "BOX", "BCOV",
"AVGO", "BRKS", "CCMP", "CACI", "CDNS",
"CAMP", "CASA", "CDW", "CRNT", "CRNC", "CERN", "CEVA", "CHNG", "ECOM", "CHKP", "IMOS",
"CRUS", "CSCO", "CTXS", "CLFD", "CLDR", "NET", "CTSH",
"COHR", "COMM", "CPSI", "CNDT", "CLGX", "CSOD", "GLW", "CSGP", "COUP", "CREE", "CRWD",
"CSGS",
"CTS", "CUB", "CYBR", "DJCO", "DAKT", "DDOG", "DELL", "DSGX", "DGII", "DMRC", "APPS",
"DIOD", "DOCU", "DOMO", "DOYU", "DBX", "DSPG", "DXC", "EBON", "EBIX", "EGAN", "ESTC",
"EA",
"EMKR", "EMIS.L", "DAVA", "EIGI", "ENV", "EPAM", "PLUS", "EFX", "ERIC", "EVBG", "MRAM",
"EVH", "EXTR", "FFIV", "FDS", "FICO",
"FSLY", "FEYE", "FIT", "FIVN", "FLEX", "FLIR", "FORM", "FTNT", "FEIM", "GRMN", "IT",
"GNUS", "GILT", "GSB", "GLOB",
"GLUU", "GPRO", "GRVY", "GSIT", "GSX", "GTYH", "GWRE", "HLIT", "HPE", "HPQ", "HIMX",
"HUBS", "HUYA", "IBM", "IDEA.L", "IDEX", "INVE", "IDOX.L", "INFO", "IIVI", "IMMR",
"INFN",
"INOV", "IPHI", "INPX", "INSG", "NSIT", "INSE", "INTC", "IDCC", "INTU", "IPGP", "IQE.L",
"JCOM", "JBL", "JNPR", "KLAC",
"KOPN", "KLIC", "LRCX", "LTRX", "LSCC", "LDOS", "LLNW", "LPSN", "RAMP", "LIZI", "LOGI",
"LOGM", "LITE", "LUNA", "MTSI", "MGIC", "MANH", "MANT", "MKTX",
"MRVL", "MTLS", "MAXR", "MXIM", "MXL", "MCHP", "MU", "MSFT", "MSTR", "MIME", "MITK",
"MIXT", "MOBL",
"MODN", "MDB", "MPWR", "MCO", "MSI", "NPTN", "NTAP", "NTES", "NTGR", "NTCT",
"NEWR", "EGOV", "NICE", "NLSN", "NOK", "NLOK", "NVMI", "NUAN", "NTNX", "NVEC", "NVDA",
"NXPI", "OKTA", "OMCL",
"ON", "OCFT", "OSPN", "ORCL", "PD", "PANW", "PCYG", "PKE", "PAYX", "PCTY", "PCTI",
"PDFS", "PEGA", "PRFT", "PERI", "PFSW",
"PLAB", "PING", "PBI", "PXLW", "PLXS", "POWI", "PRGS", "PFPT", "PRO", "PTC", "QADA",
"QADB", "QRVO", "QCOM", "QLYS", "QMCO", "QTT", "RDCM", "RMBS", "RPD", "RTX", "RNWK",
"RP", "RDVT", "RESN", "RBBN", "RMNI", "RNG", "RIOT", "RST", "SBGI",
"SABR", "CRM", "SANM", "SAP", "SPNS", "SCSC", "SDGR", "SCPL", "SE", "SEAC", "STX",
"SCWX", "SMTC", "NOW", "SREV", "SWIR", "SILC", "SLAB", "SIMO", "SLP", "SITM", "SWKS",
"WORK", "SGH", "SMAR", "SMSI", "SMTX", "SONO", "SNE", "SPLK", "SPOT", "SPT", "SPSC",
"SSNC", "SRT", "SSYS",
"SMCI", "SVMK", "SYKE", "SYNC", "SYNA", "SNCR", "SNX", "SNPS", "TRHC", "TSM", "TTWO",
"TLND", "TM17.L", "TENB", "TDC",
"TER", "TXN", "TSEM", "TRU", "TTEC", "TTMI", "TWLO", "TYL", "UI", "UCTT", "UIS",
"UMC", "OLED", "UPLD", "UTSI",
"VRNS", "VECO", "VEEV", "VRNT", "VRSK", "VERI", "VSAT", "VIAV", "VICR", "VRTU", "VISL",
"VMW", "VCRA", "VOXX", "VUZI", "WAND.L", "WDC", "WIT", "WNS",
"WDAY", "WK", "XRX", "XLNX", "XPER", "XNET", "YEXT", "ZBRA", "ZEN", "ZUO", "ZNGA", "ZS"]
investing_list_of_communication_services = ["VNET", "EGHT", "ACIA", "ALLT", "GOOGL", "GOOG", "ANGI", "T", "ATNI",
"ATHM", "BIDU", "BCE", "WIFI", "BKNG", "BOMN", "CABO", "CDLX", "CARG",
"LUMN",
"CHTR", "CHL", "CHU", "CHT", "CIDM", "CCOI", "CMCSA", "CVLT", "CMTL",
"CNSL", "CRTO", "DADA", "DESP", "DISCA", "DISCK", "DISH", "SSP", "SATS",
"EB",
"EXPE", "FB", "FOXA", "FTR", "GCI", "GDDY", "GOGO", "GRPN", "GRUB", "HSTM",
"IHRT", "IAC", "INAP", "IPG", "IQ",
"IRDM", "KVHI", "LBRDA", "LBTYA", "LILA", "LTRPA", "LGF-B", "LORL",
"LYFT", "MMYT", "MCHX", "MTCH", "MDP", "MOMO", "NCMI", "NFLX", "NWS", "OMC",
"OPRA", "PTNR", "PT", "PINS", "QNST", "QRTEA", "MARK", "RCI", "ROKU", "SJR",
"SHOP", "SSTK", "SINA", "SBGI", "SIRI", "SKM", "SNAP", "SOGO", "SOHU",
"STMP", "TMUS", "TTGT", "TGNA", "TEF", "TU", "TME", "TTD",
"TZOO", "TPCO", "TCOM", "TRIP", "TRVG", "TRUE", "TCX", "TWTR", "UBER",
"UCL", "USM", "UONE", "UXIN", "VEON", "VRSN", "VZ", "VIAC",
"VOD", "VG", "DIS", "WMG", "WB", "WIX", "WWE", "YNDX", "Z", "ZIXI", "ZM"]
investing_list_of_utilities_and_real_estate = ["AQN", "ALE", "LNT", "AEE", "AEP", "AWR", "AWK", "ATO", "AGR", "AVA",
"AZRE", "BKH", "BIP", "BEP", "CIG", "CNP", "EBR", "ED", "CWCO", "D",
"DUK", "EIX", "ENIA", "ENIC", "EVRG", "FE", "FTS", "GWRS", "HNP", "ITCI",
"KEP", "MGEE", "NEP", "NI", "NRG", "OGS", "ORA", "PCG", "PNW", "POR",
"PPL", "PEG", "RGCO", "SRE", "SWX", "SR", "SPH", "UGI", "UTL", "VST",
"WEC", "XEL", "ADC", "ALEX", "ALX", "ARE", "ACC", "AFIN", "AMT", "COLD",
"AHT", "AVB", "BXP", "BRX",
"BPY", "CTRE", "CBL", "CBRE", "CDR", "CLDT", "CIO", "CLNY", "CXW", "COR",
"CUZ", "CRESY", "CCI", "CUBE", "CONE", "DLR", "DEI", "DEA", "EGP",
"ESRT", "EQIX", "ELS", "EQR", "ESS", "EXPI", "EXR", "FPI-PB",
"FRT", "FPH", "FCPT", "GOOD", "LAND", "GNL", "HASI", "HTA", "PEAK", "HT",
"HST", "HPP", "IIPR", "INVH", "IRM", "MAYS", "KIM", "LMRK", "LTC", "MAC",
"CLI", "MGRC", "MPW", "MAA", "MNR", "NNN", "NMRK", "OHI", "PK", "DOC",
"PLYM", "APTS", "PLD", "PSB", "PSA", "RYN", "O", "RVI", "REXR", "RLJ",
"RMR", "SBRA", "SAFE", "BFS",
"SBAC", "SPG", "SLG", "SRC", "STAG", "STOR", "SUI", "SHO", "SKT", "TRNO",
"GEO", "UMH", "VTR", "VER", "VICI", "VNO", "WPC",
"WELL", "WY", "WSR"]
investing_list_of_oct_nov = ["TCEHY", "NSRGY", "RHHBY", "LVMUY", "PNGAY", "LRLCY", "MPNGY", "PROSY", "CIHKY", "SFTBY",
"AAGIY", "DCMYY", "SIEGY", "HESAY", "CSLLY", "ENLAY", "CMWAY", "IDEXY", "CHDRY", "NTTYY",
"PPRUY", "NPSNY", "IBDRY", "ALIZY", "DTEGY", "FRCOY", "AIQUY", "SBGSY", "NTDOY", "RCRUY",
"CHGCY", "RBGLY", "ADDYY", "DNNGY", "KDDIY", "EADSY", "NJDCY", "DMLRY", "HKXCY", "ESLOY",
"SHECY", "DPSGY", "SOBKY", "BASFY", "ADYEY", "HEINY", "DSNKY", "DKILY", "ATLKY", "ATLCY",
"BYDDY", "OLCLY", "ZURVY", "VCISY", "VWAGY", "BAYRY", "BNPQY", "SMMNY", "MRAAY", "SAFRY",
"LZAGY", "DASTY", "NABZY", "PDRDY", "MTHRY", "BMWYY", "KNYJY", "HENKY", "HENOY", "WMMVY",
"NTOIY", "HOCPY", "TOELY", "AXAHY", "FANUY", "VLVLY", "DANOY", "IFNNY", "NILSY", "ANZBY",
"DBSDY", "AHCHY", "GVDNY", "LNSTY", "CFRUY", "ITOCY", "JAPAY", "WXXWY", "WFAFY", "VONOY",
"DSDVY", "DNZOY", "SUHJY", "ECIFY", "SMCAY", "FSUGY", "SXYAY", "VIVHY", "EXPGY", "MQBKY",
"KAOOY", "MURGY", "HTHIY", "JMHLY", "VWAPY", "TKOMY", "LGFRY", "VWDRY", "NRDBY", "YAHOY",
"ENGIY", "SMICY", "CLLNY", "NGLOY", "ADRNY", "GXYYY", "JPPHY", "AMKBY", "OPYGY", "CLPBY",
"ANPDY", "DBOEY", "WEGZY", "BDRFY", "RDSMY", "ELEZY", "EONGY", "MITSY", "HSNGY", "UNICY",
"SVNDY", "HNNMY", "OVCHY", "TRUMY", "MIELY", "HCMLY", "HXGBY", "TSCDY", "SCMWY", "SHZHY",
"CJPRY", "FJTSY", "FUJIY", "RWEOY", "OCPNY", "ALPMY", "PDYPY", "CMPGY", "SSDOY", "ASAZY",
"LBRDB", "TTNDY", "ZLNDY", "CRARY", "KHNGY", "UOVEY", "BRDCY", "JSHLY", "SDVKY", "AMADY",
"HKHHY", "CKHUY", "CLPHY", "ESALY", "NEXOY", "CTTAY", "KMTUY", "SSREY", "FERGY", "TELNY",
"KRYAY", "DNHBY", "NCLTY", "SAXPY", "AONNY", "KYOCY", "KUBTY", "FSNUY", "WTKWY", "ARZGY",
"KBCSY", "OCDDY", "OEZVY", "CODYY", "GBERY", "OTSKY", "SZKMY", "PCRFY", "LGRDY", "MITEY",
"TYIDY", "UCBJY", "EJPRY", "EDPFY", "AFTPY", "ANGPY", "CABGY", "MGDDY", "SOMLY", "MKKGY",
"GCTAY", "GASNY", "CGEMY", "SSMXY", "CRHKY", "SGSOY", "KNRRY", "BAESY", "SWDBY", "AKZOY",
"DWAHY", "EPOKY", "SSEZY", "DTCWY", "HVRRY", "SOTGY", "SYIEY", "UNCRY", "NRILY", "GIKLY",
"TLPFY", "FOJCY", "NCMGY", "ASBFY", "FRRVY", "NVZMY", "SMNNY", "GZPFY", "POAHY", "SAUHY",
"PUGOY", "TKAYY", "KNBWY", "NTDTY", "SVNLY", "ASHTY", "MTSFY", "AVIFY", "BDORY", "MSADY",
"KGSPY", "SCBFY", "SONVY", "NCBDY", "UPMMY", "IMBBY", "ERRFY", "OPHLY", "OMRNY", "THLLY",
"TTDKY", "FUJHY", "SGIOY", "SSUMY", "WRDLY", "PUMSY", "RKUNY", "RNECY", "GBLBY", "VDMCY",
"TEZNY", "EVVTY", "TSGTY", "ATASY", "CGXYY", "SMPNY", "DQJCY", "CHYHY", "CRRFY", "RTOKY",
"IKTSY", "XNGSY", "TGOPY", "JPXGY", "LNNGY", "GBOOY", "SCVPY", "DFKCY", "SCGLY", "AHKSY",
"WOPEY", "SWMAY", "MKTAY", "DNKEY", "NNGRY", "MONOY", "ILIAY", "CZMWY", "SKHHY", "HALMY",
"HDELY", "TOSYY", "HPGLY", "EDNMY", "SEOAY", "SGBLY", "SZLMY", "UBSFY", "SKHSY", "SZSAY",
"SWGAY", "NDEKY", "VEOEY", "FNMA", "STBFY", "FYRTY", "ASXFY", "HGKGY", "NXGPY", "BZLFY",
"RMYHY", "REPYY", "GNNDY", "AJINY", "BXBLY", "COIHY", "AUCOY", "JDSPY", "PSMMY", "JRONY",
"ALSMY", "YASKY", "CHEOY", "AMIGY", "KKOYY", "GJNSY", "JBAXY", "SNYFY", "SDXAY", "ATEYY",
"SUTNY", "RANJY", "NPSCY", "RDEIY", "YATRY", "MTUAY", "TKGSY", "HRGLY", "BNTGY", "UMICY",
"KIGRY", "MONDY", "KIROY", "JBSAY", "UHID", "MARUY", "COVTY", "YARIY", "SBSNY", "DSCSY",
"ORKLY", "SKFRY", "GNHAY", "KIKOY", "AYALY", "SMFKY", "SOLVY", "INGIY", "CCHGY", "TMVWY",
"SMMYY", "ASEKY", "SGPYY", "PUBGY", "QBIEY", "REMYY", "YAMCY", "EBKDY", "OTGLY", "SMTOY",
"IFJPY", "MTLHY", "AHEXY", "ALFVY", "DNTUY", "WJRYY", "MHGVY", "SKBSY", "KAEPY", "CUYTY",
"CNPAY", "ITTOY", "KSRYY", "LSRCY", "AGESY", "TLTZY", "ROHCY", "OMVKY", "KGFHY", "GLIBB",
"AAVMY", "LZRFY", "YKLTY", "RNLSY", "TMSNY", "SKSUY", "SNMCY", "PANDY", "SVCBY", "WILYY",
"HEGIY", "BDNNY", "CYGIY", "AEXAY", "TMICY", "SRTTY", "IMPUY", "GMVHY", "EFGSY", "HKMPY",
"UUGRY", "ARKAY", "STRNY", "ALNPY", "MNBEY", "TOTDY", "BKHYY", "ACSAY", "VLEEY", "SQNNY",
"KOTMY", "PRYMY", "ACOPY", "NNCHY", "CCOEY", "BURBY", "GLPEY", "IPSEY", "RTMVY", "SNPHY",
"ATDRY", "SMGZY", "TOKUY", "TISCY", "ISUZY", "ELUXY", "ACCYY", "SPXCY", "BTDPY", "BLHEY",
"ASGLY", "MCARY", "MAGOY", "VOPKY", "MDIBY", "TDHOY", "AMBBY", "BKGFY", "CRZBY", "AACAY",
"MAEOY", "FUPBY", "HLTOY", "GEHDY", "JSGRY", "DIFTY", "JAPSY", "PSZKY", "AMSSY", "EGIEY",
"SHCAY", "DNPLY", "BGAOY", "RGLXY", "CCLAY", "BMRRY", "HTCMY", "GEAGY", "LLESY", "IDKOY",
"ORINY", "RYKKY", "OUKPY", "TMRAY", "HSQVY", "CLZNY", "CKHGY", "NHYDY", "ENGGY", "WTBDY",
"JSAIY", "FMCC", "ASOMY", "HLUYY", "KAJMY", "RSNAY", "IESFY", "AGLXY", "JMPLY", "SALRY",
"SKLTY", "KNMCY", "LNEGY", "LRENY", "PEGRY", "DLAKY", "HLLGY", "CIBEY", "SHMUY", "PGENY",
"GTMEY", "YZCAY", "OSAGY", "MITUY", "WEGRY", "NPPNY", "CAKFY", "JCYGY", "LDSCY", "TSUKY",
"DCYHY", "YAMHY", "GNGBY", "HWDJY", "DMZPY", "JSCPY", "HKUOY", "TAIPY", "BNCDY", "AGRPY",
"PHPPY", "WRTBY", "SRGHY", "SCRYY", "AZIHY", "VLPNY", "RICOY", "RAIFY", "TYOYY", "SLOIY",
"CMSQY", "RYHTY", "BLSFY", "AVHNY", "TOPPY", "FELTY", "FCNCB", "NPSKY", "TSRYY", "TLGHY",
"EKTAY", "OCLDY", "TKAGY", "CDEVY", "EVTCY", "ASMVY", "CWLDY", "SUOPY", "BZZUY", "MAHLY",
"HINOY", "NKRKY", "NIPMY", "CHBAY", "VEMLY", "TEPCY", "ABCZY", "YOKEY", "FLGZY", "SHZUY",
"ROSYY", "MAURY", "FOVSY", "KYSEY", "BRTHY", "SEKEY", "TPRKY", "CLBEY", "GULRY", "CSXXY",
"CSIOY", "IMIAY", "TATYY", "JTTRY", "AUOTY", "FPRUY", "FMOCY", "THUPY", "WYGPY", "CLCGY",
"RBSFY", "IGGHY", "MCHOY", "ANSLY", "NGKSY", "MTGGY", "TKGBY", "RXEEY", "MZDAY", "THKLY",
"RNMBY", "ADRZY", "TPDKY", "KZMYY", "NCHEY", "EBRPY", "MNHFY", "AKABY", "KURRY", "SIETY",
"THYCY", "SGAMY", "ITJTY", "NDBKY", "HYPMY", "MALRY", "FINN", "KWPCY", "SKLKY", "SBFFY",
"NPNYY", "TKAMY", "AKBTY", "VNRFY", "DUFRY", "WBRBY", "TINLY", "GOFPY", "APELY", "CCOJY",
"ESYJY", "FINMY", "PBSFY", "EVNVY", "TGOSY", "BDVSY", "APNHY", "ELPVY", "STWRY", "CSNVY",
"AWCMY", "GLAPY", "NIFCY", "AIAGY", "GWPRF", "NHNKY", "EOCCY", "CGUSY", "MSLOY", "PUODY",
"JTEKY", "SOHVY", "BTVCY", "ZNKKY", "TKCBY", "SHWDY", "EBCOY", "DURYY", "ASCCY", "ISSDY",
"YORUY", "SREDY", "NTNTY", "BSEFY", "SFRGY", "CHRYY", "BCNAY", "KNCRY", "TBLMY", "TCCPY",
"MAKSY", "NINOY", "JGCCY", "PKCOY", "ETCMY", "MRPLY", "ANIOY", "MAUSY", "NPSHY", "BCUCY",
"FJTNY", "CBGPY", "BICEY", "SUBCY", "INCPY", "WETG", "ENGH", "FHNIY", "KWHIY", "IIJIY",
"JUMSY", "EFGXY", "IHICY", "FJTCY", "AOZOY", "TROLB", "KRNTY", "UBEOY", "BOSSY", "VIAAY",
"BPOSY", "BVNRY", "TRHFD", "BKUH", "OIBRQ", "FUWAY", "KUKAY", "ILKAY", "VTKLY", "TREAY",
"NPKYY", "THDDY", "CSVI", "TKYMY", "COGNY", "HAWPY", "SBLUY", "MMSMY", "RSTAY", "TGSGY",
"KPLUY", "FLIDY", "ASOZY", "HHULY", "SMSMY", "YITYY", "JPSWY", "WACLY", "TYOBY", "RKAGY",
"ISMAY", "MOHCY", "DNIYY", "KAIKY", "WACMY", "SYANY", "TCLRY", "SZGPY", "BOZTY", "BFLBY",
"WTBFA", "WTBFB", "CYBQY", "TDPAY", "PLFRY", "LTMAQ", "NVGI", "BIOGY", "FMBL", "ERMAY",
"TOGL", "CCGGY", "MDXG", "HWAL", "ADPXY", "VSBC", "HXOH", "FBAK", "HBIA", "FMCB", "USAT",
"TMOAY", "GANS", "NPACY", "NASB", "SBOEY", "BERK", "EMIS", "SPHRY", "RWWI", "LICT", "RTTO",
"SMLR", "SBNC", "KLDI", "MCHB", "KCLI", "CNND", "OTCM", "VADP", "ARTNB", "BHRB", "BWMYD",
"RCAR", "DGRLY", "CRCW", "SEMUF", "THVB", "FNGR", "AIXN", "FRMO", "EVSBY", "SCPJ", "MCCK",
"GFASY", "MGOM", "MCEM", "EOSS", "ATROB", "CUSI", "PKIN", "ORXOY", "AYAG", "FIZN", "EXSR",
"VLOWY", "TTSH", "MRTI", "CSHX", "SMTI", "ATCN", "FKWL", "LNNNY", "BVHBB", "WNDW", "NODB",
"HONT", "FNBT", "AMBZ", "CZFS", "JMSB", "GFKSY", "CFNB", "RCBC", "TGRF", "VNJA", "MLGF",
"VULC", "WMPN", "BKUT", "FCUV", "HLAN", "RZLTD", "WFTLF", "WNRP", "RVRF", "TRCY", "NECB",
"FFMH", "GDRZF", "ISBA", "EVOA", "CNBW", "CYFL", "WINSF", "TRUX", "NLST", "PURE", "TPRP",
"NWIN", "TYFG", "POWW", "PTGEF", "AKOM", "CWGL", "HMLN", "FKYS", "MDWT", "SCZC", "FNRN",
"GMGI", "BNCC", "LGIQ", "ENBP", "TYCB", "KSHB", "BVFL", "SBKK", "CHBH", "XTEG", "LQMT",
"LSYN", "QNBC", "PDER", "MHGU", "AERO", "KTYB", "RVRA", "BKUTK", "ICTSF", "EMYB", "MCBK",
"EACO", "CPKF", "MCBI", "JUVF", "CSBB", "CPTP", "PSIX", "CTGO", "NVOS", "BBBK", "OCBI",
"BAYK", "NUVR", "UTGN", "PSBQ", "LYBC", "NOBH", "EFSI", "CCFN", "KEWL", "BKGM", "JDVB",
"FGFH", "SCBH", "RRTS", "DMKBA", "MSBC", "DIMC", "ROFO", "HNFSA", "HNFSB", "APTL", "CVLBD",
"INTEQ", "HARL", "PFLC", "HSBI", "FABP", "NEFB", "SFDL", "LIXT", "FBTT", "EMPK", "WBBW",
"ELAMF", "PLTYF", "HWIN", "BMBN", "AMNF", "CMTV", "FFDF", "AMSIY", "SCTY", "CZBC", "ARHN",
"TYBT", "SOBS", "QEPC", "LWLG", "BHWB", "CNRD", "JFBC", "MODD", "BCAL", "CNBB", "KEGX",
"AMBK", "SABK", "PBAM", "MMMB", "ADOCY", "GWOX", "CURR", "ZMTP", "SOMC", "PRED", "CWBK",
"SILXY", "AVBH", "LBTI", "VABK", "WEBC", "REDW", "FETM", "CFCX", "NWYF", "BIOQ", "KTHN",
"GRRB", "FMBM", "PFOH", "UBNC", "SOTK", "INLB", "TRVR", "CLWY", "ELLH", "CCEL", "VLLX",
"SQCF", "MYBF", "CULL", "REEMF", "JCPNQ", "WAYN", "CZBT", "ELTP", "FACO", "TWCF", "CRSS",
"VKSC", "BURCA", "COSM", "WCRS", "ARBV", "CRAWA", "LINK", "IFHI", "SEBC", "LFGP", "KRPI",
"FISB", "CFST", "PKKW", "NACB", "DYNE", "MHPC", "TRNLY", "RVCB", "AVHOQ", "PFBX", "CNIG",
"KANP", "CHUC", "CBKM", "MCRAA", "MCRAB", "RSKIA", "SHWZ", "DWNX", "INVU", "HBSI", "PYYX",
"WLFDY", "SBBI", "UNIB", "UBAB", "CVSI", "CZNL", "WFCF", "PGTK", "FMFP", "TUESQ", "MFON",
"SCSG", "CNAF", "PMHG", "SMAL", "CIBY", "PKDC", "HLFN", "YDVL", "YRKB", "EPGNY", "FTMR",
"FFWC", "NIDB", "BTTR", "SHRG", "WLMS", "TETAA", "TETAB", "CFIN", "BELP", "NWPP", "MRMD",
"SBKO", "NUBC", "ECRP", "FMFG", "BOREF", "ACMTA", "ACMT", "BSHI", "IEHC", "UWHR", "FBVA",
"BSFC", "SRYB", "NJMC", "WEIN", "SLNG", "BEBE", "CTAM", "CUII", "SMID", "SUME", "EMGCQ",
"CEFC", "SRRE", "PPHI", "TLCC", "TTLO", "PGCG", "SBBG", "STBI", "ORBN", "WDFN", "FLFG",
"HFBA", "BUKS", "FOTB", "PCLB", "BCTF", "VKIN", "TPCS", "EQFN", "IOFB", "MUEL", "ATGN",
"NTRB", "FDVA", "PTBS", "OTTW", "PRSI", "VTDRF", "IVST", "SIMA", "CNCG", "GNRV", "CTUY",
"TORW", "STMH", "MDVT", "LSFG", "BORT", "KSBI", "PBSV", "TLRS", "MPAD", "BSPA", "ONVC",
"RYFL", "LEAT", "HRST", "PPBN", "BSCA", "OSBK", "FCOB", "PMTS", "HCBC", "FSRL", "TGEN",
"FBPA", "AFAP", "FBPI", "MEEC", "BEOB", "BKOR", "CCUR", "PGNN", "LOGN", "PWBO", "SLRK",
"HFBK", "ALMC", "CRMZ", "BMMJ", "GLGI", "EFBI", "OXBC", "BLHK", "UGRO", "SOFO", "OMQS",
"NANX", "PEGX", "SYCRF", "GVYB", "SFBK", "CIWV", "CFOK", "MACE", "MRGO", "BRBW", "PRKA",
"CNBZ", "SCTC", "PNBI", "CBCZ", "HLIX", "TMAK", "CNBX", "GOVB", "CHHE", "SCND", "SDRLF",
"HCBN", "DVCR", "JCDAF", "MAAL", "QNTO", "RSRV", "TRNF", "RWCB", "CZBS", "PBNK", "INIS",
"UNTN", "GTPS", "SCAY", "DTST", "AYSI", "SLGD", "STLY", "MVLY", "MNBO", "OTIVF", "GTMAY",
"EMMS", "FALC", "FBSI", "SCYT", "FRFC", "APLO", "SCXLB", "CBFC", "HCGI", "PEBC", "PHCG",
"CIBH", "PBCO", "IBWC", "NWBB", "CYTR", "SVBL", "GLXZ", "SPGZ", "WAKE", "OPXS", "ORBT",
"ADOM", "MKTY", "RYES", "WCFB", "TDCB", "CUEN", "PAYD", "CRSB", "HLSPY", "CCBC", "FRSB",
"FIDS", "OPST", "HRGG", "ELMA", "PIAC", "AGOL", "LXRP", "AMEN", "SSBP", "CTYP", "RAFI",
"PEYE", "JKRO", "TLSS", "GVFF", "DYNR", "FTLF", "CLOK", "DYSL", "EGDW", "MOBQ", "CCFC",
"TBBA", "SVIN", "CAWW", "PVBK", "HVLM", "DAFL", "FNFI", "FHLB", "SADL", "SNNF", "INBP",
"ERKH", "TRTC", "SECI", "IDWM", "TBTC", "SPND", "ABCP", "IVFH", "PPSF", "CNBA", "HWEN",
"NNUP", "HRTH", "GRMM", "PFHO", "AMFC", "RGBD", "CRZY", "NUKK", "FTDL", "AWSMD", "BXLC",
"DTRL", "TIKK", "ALTN", "ACAN", "KBPH", "ESOA", "OHPB", "NSYC", "GIGA", "ENZN", "TSSI",
"LPBC", "SUWN", "VFRM", "TRKX", "HWIS", "SURG", "BRRE", "FAME", "HCMC", "BMNM", "MAJJ",
"FCCT", "ZXAIY", "HOOB", "JSDA", "SPCO", "SMDM", "SYTE", "SNNY", "EFSH", "SENR", "NROM",
"CCOM", "VERF", "TOOD", "TOFB", "AMCT", "MXMTY", "OART", "IAIC", "SPRS", "ADMT", "AQSP",
"CIBN", "NAUH", "ANFC", "BKFG", "EUSP", "INRD", "HEWA", "QMCI", "CSTI", "ITEX", "JANL",
"LUVU", "DTRK", "IBAL", "VIDE", "CBKC", "MCVT", "ABLT", "ALPP", "SORT", "SUGR", "ALBY",
"AMMX", "MICR", "TCNB", "FSCR", "WELX", "VVUSQ", "RHDGF", "NEBLQ", "YEWB", "GNCIQ", "MGTI",
"FMBN", "PGNT", "XOGAQ", "CRVW", "YRIV", "EDHD", "ECIA", "SCIA", "IONI", "PALT", "SKTP",
"LCTC", "SRNA", "CLWD", "DWOG", "SRNN", "UNIR", "ABBB", "AAON", "ABB", "ACNB", "ABM",
"AGCO", "ACMR", "AHC", "ALJJ", "AMAG", "AMCX", "HKIB", "AMRK", "APG", "AACG", "ABIO",
"LIFE", "ASX", "AZZ", "ABEO", "AXAS", "ACAM", "ACEL", "ARAY", "ACER", "ACRX", "ACRS",
"ACU", "ATV", "AYI", "GOLF", "ADMS", "AE", "ADAP", "ADXN", "ADIL", "ACET", "ADRO", "AEHR",
"ADXS", "AEGN", "AMTX", "ACY", "AGLE", "AJRD", "WMS", "AEMD", "AIH", "ARPO", "AGFS",
"ALRN", "AIM", "AIRI", "AIRT", "ATSG", "ANTE", "AKTX", "AKER", "ALSK", "AIN", "ALDX",
"ALRS", "ALCO", "ALIM", "WTER", "Y", "ATI", "ABTX", "ALNA", "AESE", "ALSN", "ATEC", "ALPN",
"ALTG", "ALTA", "ALTR", "ALT", "ATHE", "ATUS", "AIMC", "ALTM", "ACH", "AMAL", "ABEV",
"AMBO", "DIT", "AMTB", "AMTBB", "UHAL", "AMRH", "AMX", "AMOV", "AXL", "AEO", "AEL", "AFG",
"ANAT", "AMNB", "AOUT", "ARL", "ARA", "AREC", "AMRB", "AVD", "AVCT", "ASRV", "ATLO",
"AMPE", "AMHC", "AMPY", "AXR", "AMYT", "AMRS", "AVXL", "ANCN", "ANDE", "ANIX", "ANVS",
"AR", "APEX", "APOG", "AINV", "APEN", "AIT", "AGTC", "AMTI", "ATR", "APVO", "APTX", "APYX",
"AQMS", "ARMK", "ARAV", "RKDA", "ARCB", "ACA", "ARNC", "RCUS", "ARGX", "ARDS", "ARKR",
"ARMP", "AFI", "AWI", "ARW", "ARTL", "ARTNA", "ARTW", "ABG", "AINC", "ASH", "ASLN", "ASPN",
"ASPU", "AWH", "ASRT", "AMK", "ASFI", "ASTE", "ALOT", "ASTC", "ATKR", "AAME", "ACBI",
"ATLC", "AAWW", "ATCX", "ATOS", "AUBN", "JG", "ALV", "AUTL", "AUTO", "AWX", "AVYA", "AVEO",
"ATXI", "CDMO", "AVNT", "RCEL", "AXLA", "AX", "AYLA", "AYTU", "AZYO", "BBX", "AZUL",
"AZRX", "BGCP", "BBQ", "RILY", "BKTI", "BRP", "BMCH", "BWXT", "BW", "BCSF", "BTN", "BBAR",
"BBD", "BBDO", "BBVA", "BCH", "BMA", "SAN", "BSAC", "BSBR", "CIB", "TBBK", "BXS", "BANC",
"BFC", "BMRC", "BOCH", "BPRN", "BKSC", "BFIN", "BSVN", "BWFG", "BHB", "BCS", "BNED", "B",
"BRN", "BSET", "BATL", "BCML", "BBGI", "BZH", "BELFA", "BELFB", "BLPH", "BRBR", "BNTC",
"WRB", "BRK-A", "BHLB", "BERY", "BRY", "XAIR", "BCYC", "BRPA", "BH", "BH-A", "BIO-B",
"BIO", "BPTH", "BCDA", "BMRA", "BLRX", "BSGM", "BVXV", "BHTG", "BIVI", "BFRA", "BITA",
"BKI", "BKCC", "TCPC", "BCAT", "BLBD", "BRBS", "BLCT", "BXC", "BXG", "BSBK", "BIMI",
"BPFH", "BWL-A", "BCLI", "BWAY", "BRFS", "BAK", "LND", "BBI", "BLIN", "BWB", "BRID",
"MNRL", "BFAM", "BEDU", "BSIG", "BCO", "VTOL", "BWEN", "BKD", "BRKL", "BMTC", "BSQR",
"BBW", "BFST", "BY", "CFFI", "CBFV", "CBZ", "YCBD", "CBMB", "CBOE", "CDK", "CECE", "CFBK",
"CHFS", "CIT", "CHPM", "CKX", "CVU", "CNA", "CCNE", "CEO", "CRAI", "CPSH", "CNO", "CRH",
"CSPI", "CSWI", "CTIC", "CNX", "CVV", "CBT", "CDZI", "CLBS", "CALB", "CWT", "CALA", "ELY",
"CALT", "CLXT", "CEI", "CATC", "CAC", "CANF", "CGIX", "CANG", "CNNE", "CAJ", "CMD", "CPHC",
"CCBG", "CBNK", "CSU", "CSWC", "CPST", "CAPR", "CSTR", "CG", "CUK", "CSV", "PRTS", "TAST",
"CARS", "CARE", "CWST", "CASY", "CASI", "CASS", "SAVA", "CSLT", "CATB", "CTLT", "CBIO",
"CVCO", "CVM", "CELC", "APOP", "CLDX", "CLRB", "CLLS", "CLSN", "CBMG", "CYAD", "CPAC",
"CX", "CETX", "CDEV", "EBR", "CENT", "CENTA", "CPF", "CEPU", "CVCY", "CENX", "CNBKA",
"LEU", "CNTY", "CCS", "CERC", "CDAY", "CSBR", "CHX", "CHRA", "CHAQ", "CTHR", "CRL", "GTLS",
"CCF", "CKPT", "CMCM", "CEMI", "CHE", "CC", "CHMG", "LNG", "CPK", "CHMA", "CVR", "CSSE",
"CHS", "CREG", "CMRX", "CAAS", "JRJC", "CEA", "LFC", "ZNH", "SNP", "CHA", "CGA", "DL",
"CXDC", "HGSH", "CJJD", "CNET", "COE", "CIH", "COFS", "CDXC", "CHDN", "CDTX", "CBB",
"CNNB", "CIR", "CZNC", "CTRN", "CTXR", "CFG", "CIZN", "CIA", "CZWI", "CIVB", "CLAR", "CLH",
"CLSK", "CCO", "CLSD", "CLIR", "CLRO", "CLPT", "CLW", "CWEN-A", "CWEN", "CBLI", "CNSP",
"CNF", "CCB", "COKE", "KOF", "CODA", "CCNC", "CVLY", "CDE", "JVA", "CGNX", "CNS", "CWBR",
"COHN", "CLCT", "CGRO", "CLGN", "CBAN", "CLBK", "COLM", "CMCO", "CBSH", "CMC", "CVGI",
"ESXB", "CYH", "TCFC", "CFBI", "JCS", "CTBI", "CWBC", "CIG-C", "CBD", "SID", "SBS", "ELP",
"CCU", "CMP", "CIX", "SCOR", "CHCI", "LODE", "CNCE", "CCM", "BBCP", "CNFR", "CNMD", "CNOB",
"CONN", "STZ-B", "ROAD", "CPSS", "TCS", "MCF", "CFRX", "VLRS", "CTRA", "CPS", "CTB", "CTK",
"CORE", "CMT", "CRMD", "CNR", "CLDB", "CRVS", "ICBK", "CPAH", "CVLG", "CBRL", "BREW", "CR",
"CRD-B", "CRD-A", "CRTD", "CREX", "CS", "CCAP", "CXDO", "CFB", "CRWS", "CCK", "CRY", "CTO",
"CFR", "CULP", "CPIX", "CRIS", "CURO", "CUBI", "CYAN", "CYCC", "CYCN", "CTEK", "CTSO",
"BOOM", "DHX", "DLHC", "DXPE", "DFPH", "DARE", "DQ", "DRIO", "DSKE", "DAIO", "DTSS",
"DWSN", "DXR", "DECK", "DCTH", "DK", "DLA", "DEN", "DLX", "DBI", "DXLG", "DHIL", "DBD",
"DRAD", "DGLY", "DCOM", "DMS", "DIN", "DISCB", "DXYN", "RDY", "DLB", "UFS", "DCI", "DGICA",
"DGICB", "RRD", "DFIN", "DORM", "PLOW", "DVD", "DPW", "DS", "DCO", "DLTH", "DUOT", "DXF",
"DRRX", "DYAI", "DY", "DT", "DZSI", "EDAP", "EH", "E", "EVOP", "EVI", "EBMT", "FCRD",
"EXP", "ESTE", "EWBC", "EML", "EAST", "ECHO", "MOHO", "EC", "EPC", "EDNT", "EDUC", "EIGR",
"BCOW", "ETNB", "EKSO", "ELAN", "ELSE", "ELMD", "ELVT", "ESBK", "ELOX", "EMAN", "AKO-A",
"AKO-B", "EMCF", "ERJ", "EBS", "MSN", "EIG", "EDN", "WIRE", "EHC", "EFOI", "ENR", "NDRA",
"ENS", "EPAC", "ENG", "EBF", "ENOB", "NPO", "ENSV", "ETTX", "ENTG", "ETM", "EBTC", "EFSC",
"EVC", "ELA", "ENZ", "NVST", "EQH", "ETRN", "EQBK", "EQS", "ERYP", "ESCA", "ESE", "ESP",
"ESSA", "ESQ", "GMBL", "WTRG", "ETH", "EEFT", "EVBN", "EVLO", "EVK", "EVER", "EPM", "EVOK",
"EVOL", "EOLS", "XGN", "XCUR", "EXLS", "XONE", "EXPC", "EXPR", "EZPW", "EYEG", "EYEN",
"FFG", "FNB", "FNCB", "FBK", "FFBW", "FSBW", "FTSI", "FRPH", "FCN", "FLMN", "SFUN", "DUO",
"FARM", "FMAO", "FMNB", "FARO", "FBSS", "AGM-A", "AGM", "FSS", "FHI", "FNHC", "FOE",
"FDBC", "FNF", "FDUS", "FRGI", "JOBS", "FISI", "FINV", "FAF", "FNLC", "FBNC", "FBMS",
"FRBA", "FBIZ", "FCAP", "FCBP", "FCF", "FCCO", "FCBC", "FCCY", "FFBC", "THFF", "FFNW",
"FFWM", "FGBI", "INBK", "FIBK", "FLIC", "FRME", "FMBH", "FMBI", "FXNC", "FNWB", "FSFG",
"FSEA", "FUNC", "FUSB", "MYFW", "SVVC", "FBC", "FIVE", "WBAI", "FPRX", "FVE", "BDL",
"FLXS", "FLXN", "FPAY", "FND", "FTK", "FLO", "FLNT", "FFIC", "FLUX", "FLY", "FOCS", "FMX",
"FONR", "FORTY", "FRTA", "FBRX", "FBHS", "FET", "FWRD", "FWP", "FIII", "FSTR", "FBM",
"FEDU", "FOX", "FRAN", "FC", "FRAF", "RAIL", "FMS", "FRD", "FTEK", "FSKR", "FUBO", "FULC",
"FLGT", "FLL", "FUL", "FF", "FTFT", "FVCB", "GBL", "GLIBA", "JOB", "GDS", "GWGH", "GPX",
"GVP", "GIII", "GTT", "GXGX", "GMS", "GAIA", "GLPG", "GALT", "GRTX", "GARS", "GENC",
"GNSS", "GNRC", "GMO", "GCO", "GENE", "GEN", "GNFT", "GNE", "GMAB", "GNMK", "GNCA", "THRM",
"GOVX", "GGB", "GABC", "ROCK", "GLAD", "GLT", "GKOS", "GLBZ", "GSAT", "GMED", "GBLI",
"GLYC", "GOL", "GFI", "GORO", "AUMN", "GV", "GSBD", "GBDC", "GTIM", "GDP", "GRSV", "GHIV",
"GRC", "GRA", "GGG", "GHC", "GRAM", "GTE", "LOPE", "GVA", "GPK", "GTN", "GTN-A", "GECC",
"GEC", "GLDD", "GSBC", "GWB", "GRBK", "GDOT", "GPRE", "GCBC", "GHL", "GNLN", "GNRS",
"GRNQ", "GEF", "GEF-B", "GSUM", "GRIF", "GRFS", "GRTS", "GPI", "GGAL", "SIM", "TV", "OMAB",
"PAC", "ASR", "AVAL", "SUPV", "GSH", "GNTY", "GFED", "GIFI", "GURE", "GPOR", "GYRO", "HBT",
"HCHC", "HCI", "HFFG", "HMNF", "HNI", "HTGM", "HVBC", "HAE", "HAIN", "HLG", "HNRG", "HOFV",
"HALL", "HBB", "HWC", "HJLI", "HNGR", "HAFC", "HONE", "HMY", "HARP", "HROW", "HSC", "HCAP",
"HVT", "HVT-A", "HE", "HA", "HWKN", "HWBK", "HAYN", "HCSG", "HHR", "HCAT", "HTLD", "HL",
"HEI-A", "HLIO", "HSDT", "HLX", "HMTV", "HNNA", "HTBK", "HRI", "HTGC", "HFWA", "HRTG",
"HESM", "HXL", "HX", "HPR", "HPK", "HIL", "HI", "HGV", "HIFS", "HQI", "HSTO", "HOL", "HFC",
"HOMB", "HBCP", "HFBL", "HMST", "HTBI", "HMC", "HOFT", "HOOK", "HMN", "HBNC", "HZN",
"TWNK", "HOTH", "HWCC", "HOV", "HBMD", "HHC", "HUBG", "HTHT", "HEC", "HSON", "HDSN",
"HUIZ", "HGEN", "HII", "HUN", "HURC", "HURN", "HCM", "HYMC", "IDT", "HYRE", "HY", "IAA",
"ICFI", "ICCH", "ICAD", "IEC", "IROQ", "IESC", "IFMK", "IMAC", "IRS", "ITCB", "ITT", "IBN",
"ICON", "IEP", "IDA", "ICLK", "IPWR", "IDYA", "IEX", "IDRA", "IKNX", "ISNS", "IMBI",
"IMRA", "ICCC", "IMNPQ", "IMH", "IMMP", "IMVT", "IMUX", "PI", "ICD", "IHC", "INDB", "IBCP",
"IBTX", "IBA", "INFI", "III", "INFY", "ING", "INFU", "IEA", "NGVT", "INGR", "INOD", "ISIG",
"IOSP", "ISSC", "INSP", "INWK", "IIIN", "NSPR", "IBP", "IPHA", "INMB", "IHT", "INS", "IDN",
"ITGR", "IHG", "INTG", "IBOC", "IMXI", "IDXG", "IPV", "XENT", "ICMB", "INTT", "IVC", "IIN",
"IPI", "INUV", "ISTR", "ITIC", "NVIV", "IO", "IRMD", "IRIX", "IRCP", "ISR", "ISDR", "ITP",
"ITMR", "ITI", "IIIV", "ISEE", "YY", "JJSF", "JAX", "JILL", "JACK", "BOTJ", "JHX", "JAN",
"JELD", "JRSH", "JT", "JYNT", "JLL", "JNCE", "JP", "JIH", "KAR", "KB", "KBR", "KLXE", "KT",
"KDMN", "KALU", "KLDO", "KLR", "KALV", "KAMN", "KSPN", "KBH", "KRNY", "KELYB", "KMPH",
"KMT", "KFFB", "KROS", "KTCC", "KZR", "KIN", "KC", "KINS", "KFS", "KTRA", "KEX", "KIRK",
"KNL", "KNX", "KN", "KOP", "KOSS", "KTOS", "KRA", "KRO", "LCNB", "LGL", "LPL", "LGIH",
"LKQ", "LCII", "DFNS", "LMFA", "LPLA", "LXU", "LYTS", "LJPC", "LZB", "LAIX", "LSBK",
"LBAI", "LW", "LANC", "LNDC", "LARK", "LE", "LCI", "LPI", "LRMR", "LAWS", "LAZY", "LEAF",
"LEA", "LPTX", "LEE", "LEGH", "LEGN", "LACQ", "LC", "LEN-B", "LII", "LEVL", "LBRT", "LWAY",
"LFVN", "LCUT", "LTBR", "LPTH", "LITB", "LSAC", "LMST", "LMB", "LMNR", "LINC", "LIND",
"LNN", "LCTX", "LN", "LINX", "LGHL", "LIQT", "LQDA", "LOAK", "LIVE", "LIVX", "LXEH", "LYG",
"LMPX", "LOGC", "LOMA", "LONE", "LGVW", "LOOP", "LPX", "LOVE", "LUB", "LL", "LBC", "LDL",
"LYRA", "MBI", "MDC", "MTG", "MHO", "MKSI", "MMAC", "MRC", "MSA", "MSM", "MSGN", "MTBC",
"MVBF", "MVC", "MYRG", "MCBC", "MFNC", "MIC", "MSGS", "MSGE", "MGTA", "MX", "MGY", "MGYR",
"MNSB", "MBUU", "MLVF", "TUSK", "MTW", "MTEX", "MN", "MMI", "MCS", "MRIN", "MPX", "MRNS",
"MRLN", "MBII", "MRTN", "MTZ", "MHH", "MCFT", "MTDR", "MTRN", "MTNB", "MTRX", "MATX",
"MLP", "MMS", "MEC", "MKC-V", "MUX", "MTL-P", "MTL", "MFIN", "MDLA", "MDIA", "MED", "MDGS",
"MD", "MCC", "MDLY", "MLCO", "MBWM", "MERC", "MBIN", "MFH", "MREO", "MCMJ", "MRBK", "EBSB",
"MTOR", "MACK", "MESA", "MLAB", "CASH", "MEI", "MCBS", "MCB", "MXC", "MFGP", "MVIS",
"MBOT", "MPB", "MSVB", "MBCN", "MSEX", "MOFG", "MLSS", "MLND", "MLR", "MIND", "MTX",
"NERV", "MGEN", "MSON", "MG", "MUFG", "MFG", "MBT", "MOD", "MWK", "MKD", "MBRX", "MOH",
"TAP-A", "MKGI", "MNPR", "MRCC", "MR", "MOG-A", "MOG-B", "MORN", "MOR", "MOSY", "MPAA",
"MOTS", "MOV", "MOXC", "MLI", "MWA", "GRIL", "MBIO", "MYE", "MYO", "MYOS", "NBTB", "NCR",
"NCSM", "NL", "NNBR", "NVEE", "NNVC", "NNDM", "NAOV", "NATH", "NBHC", "NKSH", "NHC", "NFG",
"NGHC", "NGG", "NHLD", "NPK", "NSEC", "EYE", "NWLI", "NAII", "NTCO", "NHTC", "NGVC",
"NATR", "NWG", "NTZ", "NAV", "NAVB", "NP", "NMRD", "NLTX", "NEON", "NEOS", "NEPH", "NSCO",
"UEPS", "NTWK", "NTIP", "NURO", "NTRP", "STIM", "NBSE", "NRBO", "NVRO", "GBR", "NFE",
"NWHM", "NJR", "NMFC", "NPA", "NYC", "NYCB", "NYT", "NBEV", "NWGI", "NEU", "NR", "NEWT",
"NEX", "NXST", "NEXT", "NODK", "NXGN", "NCBS", "NINE", "NOAH", "NMR", "NDLS", "NDSN",
"NSYS", "NBN", "NOG", "NFBK", "NRIM", "NWN", "NWPX", "NWE", "NWFL", "NVFY", "NVUS", "NUS",
"NCNA", "NUZE", "OGE", "OLB", "NXTD", "NES", "OFS", "OIIM", "OPBK", "OVLY", "OCSL", "OCSI",
"OAS", "OBLN", "OBLG", "OBCI", "OPTT", "OII", "OFED", "OCN", "OCUL", "OMEX", "OVBC", "ODC",
"OIS", "OPOF", "OSBC", "OLN", "ZEUS", "OFLX", "ONDK", "ONCS", "OCX", "ONCT", "PIH", "YI",
"OMF", "ONEW", "ONTO", "OOMA", "LPRO", "OPNT", "OPRT", "OPY", "OCC", "OPHC", "OPRX",
"ORMP", "OPTN", "ORAN", "ORBC", "OEG", "ORGS", "ONVO", "ORGO", "OBNK", "ORIC", "OESX",
"ORN", "IX", "ORRF", "OSK", "OSN", "OTEL", "OTIC", "OTTR", "OTLK", "OSG", "OVID", "OVV",
"OC", "ORCC", "OXM", "PFIN", "PDLB", "PAE", "PTSI", "CNXN", "PCB", "PCSB", "PDCE", "PICO",
"PGTI", "PJT", "PHI", "PKX", "PNM", "PRAA", "PRGX", "PUYI", "PMBC", "PKG", "PTN", "PAM",
"PHX", "PZZA", "PAR", "PARR", "PZG", "TEUM", "PRK", "PKOH", "PSN", "PTRS", "PBHC", "PATK",
"PNBK", "PATI", "PTEN", "PDSB", "PGC", "PSO", "PED", "PVAC", "PNTG", "PNNT", "PFLT",
"PWOD", "PAG", "PEN", "PEBO", "PEBK", "PFIS", "PRCP", "PRDO", "PFGC", "PFMT", "PESI",
"PPIH", "PRSP", "PSNL", "TLK", "PTR", "PBR-A", "PFNX", "PHAS", "FENG", "DNK", "PHR",
"PHUN", "PIRS", "PBFS", "PPSI", "PIPR", "PAGP", "PLAG", "PLT", "AGS", "PLBC", "PSTI",
"PSTV", "PLXP", "PTE", "PTMN", "POST", "PBPB", "PWFL", "PQG", "PFBC", "POAI", "PLPC",
"PFBI", "PBH", "PSMT", "PNRG", "PRIM", "PFG", "PDEX", "PRTH", "IPDN", "PFHD", "PFIE",
"PRPH", "PUMP", "PROS", "PB", "PLX", "TARA", "PTGX", "PTVCA", "PTVCB", "PTI", "PVBC",
"PROV", "PFS", "PBIP", "PUK", "PMD", "PCYO", "PSTG", "PRPL", "QCRH", "QUAD", "KWR", "PZN",
"QEP", "QTWO", "QK", "NX", "QRHC", "QUIK", "QUMU", "QUOT", "RBB", "RMED", "RICK", "RCMT",
"RCM", "REVG", "RFIL", "RLI", "RMG", "RES", "RH", "RPM", "RYB", "RLGT", "RFL", "RAND",
"RNDB", "RNGR", "PACK", "RAVE", "RAVN", "RJF", "RYAM", "ROLL", "RMAX", "RDI", "RLGY",
"REPH", "RLH", "RRBI", "RRGB", "RRR", "REED", "RGS", "RGLS", "RGA", "REKR", "RS", "RELV",
"RELX", "RBNC", "SOL", "RENN", "RBCAA", "FRBK", "REFR", "RSSS", "REZI", "RVP", "REV",
"REX", "REXN", "REYN", "RBKB", "RIBT", "RELL", "RMBI", "RNET", "REI", "REDU", "RVSB",
"RIVE", "RCKY", "RMCF", "ROG", "ROCH", "RDS-B", "RDS-A", "RBCN", "RMBL", "RUSHA", "RUSHB",
"RYAAY", "RYI", "STBA", "WORX", "SBFG", "SEIC", "SMHI", "SGBX", "SJW", "SM", "SP", "SRAX",
"SGRP", "SWKH", "SANW", "SFET", "SFE", "SGA", "JOE", "SLRX", "SALM", "SAL", "SD", "JBSS",
"SGMO", "SC", "SAR", "STSA", "SVRA", "SMIT", "SNDR", "SCHL", "SWM", "SAIC", "SMG", "SCPH",
"SCU", "SCYX", "SEB", "SBCF", "CKH", "SPNE", "EYES", "SECO", "SNFCA", "SEEL", "SIC",
"WTTR", "SEM", "SELB", "SLS", "LEDS", "SENEB", "SENEA", "SNES", "AIHS", "SXT", "SENS",
"SRTS", "SQNS", "SQBG", "SCI", "SERV", "SFBS", "SVT", "SESN", "SVBI", "SMED", "SHSP",
"SHEN", "TYHT", "SHG", "SHBI", "SIEB", "BSRR", "SIEN", "SRRA", "SIF", "SIFY", "SGLB",
"SGMA", "SBNY", "SLN", "SLGN", "SAMG", "SBOW", "SI", "SSNT", "SFNC", "SSD", "SHI", "SINO",
"TSLX", "SKX", "SKYS", "SWBI", "SNN", "SMBK", "SND", "SY", "SQM", "SCKT", "SWI", "SOI",
"SLNO", "SNGX", "SLDB", "XPL", "SAH", "SONM", "SON", "SNOA", "SOS", "SFBC", "SJI", "SPFI",
"SSB", "SFST", "SMBC", "SONA", "SPKE", "LOV", "SPPI", "SPB", "SPRO", "STXB", "SPOK",
"SBPH", "SFM", "SRAC", "STAF", "STND", "SMP", "SGU", "SCX", "MITO", "STCN", "SCS", "SCL",
"STXS", "STL", "SBT", "STRL", "STC", "SF", "SYBT", "BANX", "SRI", "STON", "SNEX", "SSKN",
"STRT", "STRS", "STRM", "MSC", "RGR", "SMFG", "SUMR", "SMMF", "SUM", "SSBI", "SMMT",
"WISA", "SXC", "SNDE", "SSY", "STG", "NOVA", "SCON", "SLGG", "SDPI", "SUP", "SGC", "SPRT",
"SURF", "SRGA", "SSSS", "STRO", "SUZ", "SWCH", "SYNL", "SYN", "SYPR", "SYBX", "SYX",
"CGBD", "TCF", "TELA", "TESS", "TFFP", "GLG", "TPIC", "TSRI", "TAIT", "TLC", "TAK", "TKAT",
"TALO", "TEDU", "TTM", "TAYD", "TCRR", "TISI", "TCCO", "TRC", "TEO", "TDY", "VIV", "TDS",
"TNAV", "TLGT", "TPX", "TS", "TENX", "TGC", "TEN", "TX", "TBNK", "TTI", "ODP", "NCTY",
"STKS", "THR", "KRKR", "TDW", "TLYS", "TSU", "TMBR", "TMST", "TWI", "TITN", "TLSA", "TMP",
"TR", "TRCH", "TTC", "TBLT", "TSQ", "TCON", "TW", "TACT", "TCI", "TRXC", "TGS", "TA",
"TREC", "TG", "THS", "TRVI", "TCBK", "TRS", "TRN", "TPHS", "TRT", "TPVG", "TBK", "TGI",
"TRST", "TRMK", "MEDS", "TTOO", "TC", "TOUR", "TKC", "TPB", "THCB", "THCA", "TPC", "XXII",
"TRWH", "TYME", "UFPT", "USAU", "USAK", "GROW", "UMRX", "USNA", "USCR", "USPH", "SLCA",
"ULBI", "UGP", "UA", "UNAM", "UFI", "UL", "UNB", "UFAB", "UBOH", "UCBI", "UBCP", "UFCS",
"UG", "UNFI", "UAMY", "USEG", "USLM", "USFD", "USWS", "UNTY", "UNVR", "UEIC", "UUU",
"USAP", "ULH", "UTI", "UVSP", "TIGR", "UONEK", "USIO", "ECOL", "UTMD", "VTVT", "EGY",
"VCNX", "VHI", "VALE", "VMI", "VALU", "VAPO", "VGR", "VEC", "VEDL", "PCVX", "VERO", "VRA",
"VNE", "VSTM", "VERB", "VBTX", "VRTV", "VCEL", "VRME", "VERY", "VRNA", "VRRM", "VRCA",
"VTNR", "VERU", "VRT", "VIACA", "VRAY", "VLGEA", "VNCE", "VIOT", "VIRC", "VHC", "VTSI",
"VRTS", "VSH", "VPG", "VIST", "VC", "VTGN", "VMAC", "VIVE", "VOLT", "VOYA", "VJET", "WTI",
"WDFC", "WSFS", "WVFC", "WPP", "VYNE", "WNC", "WAB", "WDR", "WD", "HCC", "WASH", "WSO-B",
"WSO", "WEI", "WBT", "WERN", "WSBC", "WCC", "WTBA", "WABC", "WSTL", "WLK", "WBK", "WWR",
"WEX", "WEYS", "WHF", "WLL", "FREE", "WOW", "WYY", "JW-A", "JW-B", "WHLM", "WVVI", "WSM",
"WLFC", "WSC", "WINT", "WGO", "WTT", "WETF", "WKEY", "WWW", "WF", "WRLD", "WOR", "XYF",
"XPO", "XPEL", "XTLB", "XELB", "XBIO", "XIN", "XOMA", "XTNT", "XFOR", "YPF", "YRCW",
"YELP", "YTEN", "YRD", "YIN", "YETI", "YORW", "DAO", "YGYI", "CTIB", "ZAGG", "ZEAL",
"ZDGE", "ZG", "ZSAN", "ZVO", "ZYNE", "BNSO", "DSWL", "ATIF", "BHVN", "BRLI", "CHNR",
"CCCL", "CCRC", "SXTC", "DOGZ", "CLWT", "GTEC", "HEBT", "HIHO", "HOLI", "HUSN", "LLIT",
"LKCO", "MTC", "NESR", "NTP", "NEWA", "NOMD", "SEED", "RETO", "SJ", "TANH", "PETZ", "MYT",
"WAFU", "ZKIN", "AAMC"]
def get_investing_lists():
    investment_list = investing_list_of_industrials + investing_list_of_technology + \
        investing_list_of_communication_services + investing_list_of_energy + \
        investing_list_of_utilities_and_real_estate + investing_list_of_materials + \
        investing_list_of_consumer_discretionary + investing_list_of_consumer_staples + \
        investing_list_of_healthcare + investing_list_of_financials + investing_list_of_oct_nov
    random.shuffle(investment_list)
    print(len(investment_list))
    return investment_list
# print(get_investing_lists())
| 104.472674 | 120 | 0.364226 | 6,745 | 70,728 | 3.793921 | 0.838695 | 0.012192 | 0.012896 | 0.005276 | 0.032161 | 0.020399 | 0.018132 | 0.01524 | 0.01524 | 0.01524 | 0 | 0.000111 | 0.360465 | 70,728 | 676 | 121 | 104.627219 | 0.565627 | 0.000806 | 0 | 0 | 0 | 0 | 0.34263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003106 | false | 0 | 0.001553 | 0 | 0.007764 | 0.003106 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b3b7971054166b3e113cd28da11c99033b9339be | 248 | py | Python | todolist/api/urls.py | yusukhobok/TodoAPI | 9f9d93b54cfe3c841dbe2de91798c5e6a28c7edf | [
"MIT"
] | null | null | null | todolist/api/urls.py | yusukhobok/TodoAPI | 9f9d93b54cfe3c841dbe2de91798c5e6a28c7edf | [
"MIT"
] | 6 | 2020-06-05T20:06:44.000Z | 2021-09-22T18:08:06.000Z | todolist/api/urls.py | Yus27/TodoAPI | 1308aeac356b2168cec02045aa6774b16443d17b | [
"MIT"
] | null | null | null | from django.urls import path
from .views import TodoViewSet
from rest_framework_bulk.routes import BulkRouter
router = BulkRouter()
router.register(r'', TodoViewSet, basename='todos')
urlpatterns = router.urls | 22.545455 | 51 | 0.814516 | 32 | 248 | 6.21875 | 0.59375 | 0.080402 | 0.170854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 248 | 11 | 52 | 22.545455 | 0.904545 | 0 | 0 | 0 | 0 | 0 | 0.02008 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b3ba429b584473910459cd9abd6680855878d8b8 | 406 | py | Python | whileIF.py | JulianConneely/Python-Problem-Sets | 315a43b00c98bff0a9889c730ae13ca963d5fa6c | [
"Apache-2.0"
] | null | null | null | whileIF.py | JulianConneely/Python-Problem-Sets | 315a43b00c98bff0a9889c730ae13ca963d5fa6c | [
"Apache-2.0"
] | null | null | null | whileIF.py | JulianConneely/Python-Problem-Sets | 315a43b00c98bff0a9889c730ae13ca963d5fa6c | [
"Apache-2.0"
] | null | null | null | #Julian Conneely, 21/03/18
#While loop with a decrement and a break condition
#first, 5 is printed
#then i is decreased to 4
#since i<=2 is not satisfied, the loop continues
#then 4 is printed
#then i is decreased to 3
#since i<=2 is not satisfied, the loop continues
#then 3 is printed
#then i is decreased to 2
#now i<=2 is satisfied, so the loop breaks
i = 5
while True:
    print(i)
    i = i - 1
    if i <= 2:
        break
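The trace in the comments above can be reproduced without the explicit `break` by moving the exit condition into the loop header. A small self-contained sketch (the `countdown` helper and its arguments are illustrative, not part of the original exercise):

```python
def countdown(start, floor):
    """Collect the values the loop prints: start, start-1, ..., floor+1."""
    values = []
    i = start
    while i > floor:  # equivalent exit condition to `if i <= floor: break`
        values.append(i)
        i = i - 1
    return values

print(countdown(5, 2))  # prints [5, 4, 3], matching the trace above
```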
| 19.333333 | 55 | 0.684729 | 82 | 406 | 3.390244 | 0.463415 | 0.071942 | 0.140288 | 0.161871 | 0.546763 | 0.546763 | 0.546763 | 0.244604 | 0.244604 | 0.244604 | 0 | 0.058065 | 0.236453 | 406 | 20 | 56 | 20.3 | 0.83871 | 0.76601 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b3c169da3d4b81dde59b6bb476806e478d3ea83d | 3,493 | py | Python | lib/python/batch_sim/gcloud_fakes.py | leozz37/makani | c94d5c2b600b98002f932e80a313a06b9285cc1b | [
"Apache-2.0"
] | 1,178 | 2020-09-10T17:15:42.000Z | 2022-03-31T14:59:35.000Z | lib/python/batch_sim/gcloud_fakes.py | leozz37/makani | c94d5c2b600b98002f932e80a313a06b9285cc1b | [
"Apache-2.0"
] | 1 | 2020-05-22T05:22:35.000Z | 2020-05-22T05:22:35.000Z | lib/python/batch_sim/gcloud_fakes.py | leozz37/makani | c94d5c2b600b98002f932e80a313a06b9285cc1b | [
"Apache-2.0"
] | 107 | 2020-09-10T17:29:30.000Z | 2022-03-18T09:00:14.000Z | # Copyright 2020 Makani Technologies LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Fake gcloud utils for testing without cloud access."""
from makani.lib.python.batch_sim import gcloud_util
class FakeFilesystem(object):
"""A fake filesystem.
A FakeFilesystem instance is simply a dictionary of file names to file
contents, with Save() and Load() methods to make access look a bit more
file-like.
The class itself also contains LOCAL and CLOUD variables intended to store
references to particular FakeFilesystem instances. These are initialized to
None and intended to be defined as needed via mock.patch. For example:
with mock.patch('makani.batch_sim.gcloud_fakes.FakeFilesystem.LOCAL',
FakeFilesystem()) as local_fs:
<Do something with local files>
with mock.patch('makani.batch_sim.gcloud_fakes.FakeFilesystem.CLOUD',
FakeFilesystem()) as remote_fs:
<Do something with remote files>
In particular, many of the fakes in this module use FakeFilesystem.LOCAL and
FakeFilesystem.CLOUD to simulate actual storage patterns.
"""
LOCAL = None
CLOUD = None
def __init__(self):
self.files = {}
def Save(self, filename, descriptor):
self.files[filename] = descriptor
def Load(self, filename):
return self.files[filename]
class FakeCloudStorageApi(object):
"""A fake of gcloud_util.CloudStorageApi.
This performs simple transfers between FakeFilesystem.LOCAL and
FakeFilesystem.CLOUD.
To simulate working with different local filesystems, FakeFilesystem.LOCAL
may be patched before instantiating the FakeCloudStorageApi.
"""
def __init__(self, bucket=None):
self._local_fs = FakeFilesystem.LOCAL
self._cloud_fs = FakeFilesystem.CLOUD
self._bucket = bucket
def _RemoveBucketFromCloudName(self, cloud_name):
cloud_name = cloud_name.strip()
if cloud_name.startswith('gs://'):
_, cloud_name = gcloud_util.ParseBucketAndPath(cloud_name, None)
return cloud_name
def DownloadFile(self, cloud_name, stream):
cloud_name = self._RemoveBucketFromCloudName(cloud_name)
stream.write(self._cloud_fs.Load(cloud_name))
def UploadFile(self, local_name, cloud_name):
cloud_name = self._RemoveBucketFromCloudName(cloud_name)
self._cloud_fs.Save(cloud_name, self._local_fs.Load(local_name))
def UploadStream(self, stream, cloud_name):
cloud_name = self._RemoveBucketFromCloudName(cloud_name)
self._cloud_fs.Save(cloud_name, stream.getvalue())
def DeletePrefix(self, prefix):
for filename in self.List(prefix):
if filename.startswith(prefix):
self._cloud_fs.files.pop(filename)
def DeleteFile(self, cloud_name):
cloud_name = self._RemoveBucketFromCloudName(cloud_name)
self._cloud_fs.files.pop(cloud_name)
def List(self, prefix):
prefix = self._RemoveBucketFromCloudName(prefix)
return [name for name in self._cloud_fs.files if name.startswith(prefix)]
| 34.93 | 78 | 0.744918 | 462 | 3,493 | 5.482684 | 0.354978 | 0.081721 | 0.041058 | 0.035531 | 0.215555 | 0.189499 | 0.170944 | 0.130675 | 0.130675 | 0.089617 | 0 | 0.00278 | 0.176066 | 3,493 | 99 | 79 | 35.282828 | 0.877345 | 0.499857 | 0 | 0.102564 | 0 | 0 | 0.002983 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.282051 | false | 0 | 0.025641 | 0.025641 | 0.487179 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b3d08a0120015a60c620ea1e806103c54c2cbf58 | 196 | py | Python | LeetCode/python3/136.py | ZintrulCre/LeetCode_Archiver | de23e16ead29336b5ee7aa1898a392a5d6463d27 | [
"MIT"
] | 279 | 2019-02-19T16:00:32.000Z | 2022-03-23T12:16:30.000Z | LeetCode/python3/136.py | ZintrulCre/LeetCode_Archiver | de23e16ead29336b5ee7aa1898a392a5d6463d27 | [
"MIT"
] | 2 | 2019-03-31T08:03:06.000Z | 2021-03-07T04:54:32.000Z | LeetCode/python3/136.py | ZintrulCre/LeetCode_Crawler | de23e16ead29336b5ee7aa1898a392a5d6463d27 | [
"MIT"
] | 12 | 2019-01-29T11:45:32.000Z | 2019-02-04T16:31:46.000Z | class Solution:
def singleNumber(self, nums):
"""
:type nums: List[int]
:rtype: int
"""
        k = 0
        for n in nums:
            k ^= n  # x ^ x == 0, so paired values cancel and the singleton remains
        return k
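The method above folds the list with XOR: since `x ^ x == 0` and `x ^ 0 == x`, paired values cancel and only the unpaired element survives. A standalone functional restatement (the `single_number` helper is illustrative, not part of the LeetCode class):

```python
from functools import reduce
from operator import xor

def single_number(nums):
    """XOR-fold the list; pairs cancel, the singleton remains."""
    return reduce(xor, nums, 0)

print(single_number([4, 1, 2, 1, 2]))  # prints 4
```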
b3d2818801e85a29eccfd608b62683f33282a6d9 | 2,071 | py | Python | src/genie/libs/parser/iosxr/tests/ShowVrrpSummary/cli/equal/golden_output_1_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/iosxr/tests/ShowVrrpSummary/cli/equal/golden_output_1_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/iosxr/tests/ShowVrrpSummary/cli/equal/golden_output_1_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | expected_output = {
"address_family":{
"ipv4":{
"bfd_sessions_down":0,
"bfd_sessions_inactive":1,
"bfd_sessions_up":0,
"intf_down":1,
"intf_up":1,
"num_bfd_sessions":1,
"num_intf":2,
"state":{
"all":{
"sessions":2,
"slaves":0,
"total":2
},
"backup":{
"sessions":1,
"slaves":0,
"total":1
},
"init":{
"sessions":1,
"slaves":0,
"total":1
},
"master":{
"sessions":0,
"slaves":0,
"total":0
},
"master(owner)":{
"sessions":0,
"slaves":0,
"total":0
}
},
"virtual_addresses_active":0,
"virtual_addresses_inactive":2,
"vritual_addresses_total":2
},
"ipv6":{
"bfd_sessions_down":0,
"bfd_sessions_inactive":0,
"bfd_sessions_up":0,
"intf_down":0,
"intf_up":1,
"num_bfd_sessions":0,
"num_intf":1,
"state":{
"all":{
"sessions":1,
"slaves":0,
"total":1
},
"backup":{
"sessions":0,
"slaves":0,
"total":0
},
"init":{
"sessions":0,
"slaves":0,
"total":0
},
"master":{
"sessions":1,
"slaves":0,
"total":1
},
"master(owner)":{
"sessions":0,
"slaves":0,
"total":0
}
},
"virtual_addresses_active":1,
"virtual_addresses_inactive":0,
"vritual_addresses_total":1
},
"num_tracked_objects":2,
"tracked_objects_down":2,
"tracked_objects_up":0
}
}
| 24.081395 | 40 | 0.356832 | 166 | 2,071 | 4.198795 | 0.180723 | 0.10043 | 0.172166 | 0.114778 | 0.619799 | 0.619799 | 0.401722 | 0.157819 | 0.157819 | 0.157819 | 0 | 0.053295 | 0.50169 | 2,071 | 85 | 41 | 24.364706 | 0.622093 | 0 | 0 | 0.541176 | 0 | 0 | 0.32593 | 0.090777 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b3df745ca42527733326bd472f3dcaf4891e8d24 | 2,789 | py | Python | ETLPipeline (1).py | mudigosa/Disaster-Pipeline | 3a4a8cb78202def522f131d15e5f3938c15f1be5 | [
"CC-BY-4.0"
] | null | null | null | ETLPipeline (1).py | mudigosa/Disaster-Pipeline | 3a4a8cb78202def522f131d15e5f3938c15f1be5 | [
"CC-BY-4.0"
] | null | null | null | ETLPipeline (1).py | mudigosa/Disaster-Pipeline | 3a4a8cb78202def522f131d15e5f3938c15f1be5 | [
"CC-BY-4.0"
] | null | null | null | import sys
import pandas as pd
from sqlalchemy import create_engine
def load_data(messages_filepath, categories_filepath):
'''
input:
messages_filepath: The path of messages dataset.
categories_filepath: The path of categories dataset.
output:
df: The merged dataset
'''
    # load messages dataset
    disastermessages = pd.read_csv(messages_filepath)
    # load categories dataset
    disastercategories = pd.read_csv(categories_filepath)
df = pd.merge(disastermessages, disastercategories, left_on='id', right_on='id', how='outer')
return df
def clean_data(df):
'''
input:
df: The merged dataset in previous step.
output:
df: Dataset after cleaning.
'''
disastercategories = df.categories.str.split(';', expand = True)
# select the first row of the categories dataframe
row = disastercategories.iloc[0,:]
# use this row to extract a list of new column names for categories.
# one way is to apply a lambda function that takes everything
# up to the second to last character of each string with slicing
disastercategory_colnames = row.apply(lambda x:x[:-2])
print(disastercategory_colnames)
    disastercategories.columns = disastercategory_colnames
for column in disastercategories:
# set each value to be the last character of the string
disastercategories[column] = disastercategories[column].str[-1]
# convert column from string to numeric
        disastercategories[column] = disastercategories[column].astype(int)
disastercategories.head()
    # drop the original categories column from `df` and append the new columns
    df.drop('categories', axis=1, inplace=True)
    df = pd.concat([df, disastercategories], axis=1)
df.head()
# check number of duplicates
print('Number of duplicated rows: {} out of {} samples'.format(df.duplicated().sum(),df.shape[0]))
df.drop_duplicates(subset = 'id', inplace = True)
return df
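The category-splitting trick used in `clean_data` (split on `;`, take everything up to the second-to-last character as the column name, keep the last character as the value) can be shown on a single cell; the sample string below is made up, but follows the format the categories column uses:

```python
# A made-up sample cell in the categories-column format.
row = 'related-1;request-0;offer-0'
parts = row.split(';')
# Column names: everything up to the second-to-last character.
colnames = [p[:-2] for p in parts]
# Values: the last character of each entry, converted to an integer.
values = [int(p[-1]) for p in parts]
print(colnames)  # ['related', 'request', 'offer']
print(values)    # [1, 0, 0]
```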
def save_data(df, database_filepath):
    '''Save the cleaned dataset to an SQLite database.'''
    engine = create_engine('sqlite:///{}'.format(database_filepath))
    df.to_sql('df', engine, index=False, if_exists='replace')
def main():
    if len(sys.argv) == 4:
        messages_filepath, categories_filepath, database_filepath = sys.argv[1:]
        print('Loading data...')
        df = load_data(messages_filepath, categories_filepath)
        print('Cleaning data...')
        df = clean_data(df)
        save_data(df, database_filepath)
        print('Cleaned data saved to database!')
else:
print('Please provide the filepaths of the messages and categories '\
'datasets as the first and second argument respectively, as '\
'well as the filepath of the database to save the cleaned data '\
'to as the third argument. \n\nExample: python process_data.py '\
'disaster_messages.csv disaster_categories.csv '\
'DisasterResponse.db')
if __name__ == '__main__':
main()

# ---- fst_lookup/fallback_data.py (eddieantonio/fst-lookup, MIT) ----
"""
Fallback data types, implemented in Python, for platforms that cannot build
the C extension.
"""
from .symbol import Symbol
from .typedefs import StateID
class Arc:
"""
An arc (transition) in the FST.
"""
__slots__ = ("_state", "_upper", "_lower", "_destination")
def __init__(
self, state: StateID, upper: Symbol, lower: Symbol, destination: StateID
) -> None:
self._state = state
self._upper = upper
self._lower = lower
self._destination = destination
@property
def state(self) -> int:
return self._state
@property
def upper(self) -> Symbol:
return self._upper
@property
def lower(self) -> Symbol:
return self._lower
@property
def destination(self) -> int:
return self._destination
def __eq__(self, other) -> bool:
if not isinstance(other, Arc):
return False
return (
self._state == other._state
and self._upper == other._upper
and self._lower == other._lower
and self._destination == other._destination
)
def __hash__(self) -> int:
return self._state + (hash(self._upper) ^ hash(self._lower))
def __str__(self) -> str:
if self._upper == self._lower:
label = str(self._upper)
else:
label = str(self._upper) + ":" + str(self._lower)
return "{:d} -{:s}-> {:d}".format(self._state, label, self._destination)

# ---- indicators.py (abulte/python-influxdb-alerts, MIT) ----
"""Indicators to monitor"""
import click
from configparser import NoOptionError
from query import Query
from config import CONFIG
class BaseIndicator(object):
client = Query()
# name of the metric in influxdb
name = 'base_indicator'
# unit (displays in alerts)
unit = ''
# alert when value gt (>) than threshold or lt (<)
comparison = 'gt'
# timeframe to compute mean of indicator values on
timeframe = '10m'
# some filters to pass to the influx db query (where clause)
filters = None
# divide the raw value from influx by this (eg convert bytes to Mb, Gb...)
divider = 1
def __init__(self):
try:
self.threshold = float(CONFIG.get('thresholds', self.name))
except NoOptionError:
raise click.ClickException('No threshold configured for indicator %s' % self.name)
def get_value(self, host):
"""Get the value from influx for this indicator"""
value = self.client.query_last_mean(
self.name,
host,
timeframe=self.timeframe,
filters=self.filters
)
if value:
return value / self.divider
def is_alert(self, host, value=None):
"""Is this indicator in alert state?"""
value = self.get_value(host) if not value else value
if self.comparison == 'gt' and value > self.threshold:
return True
elif self.comparison == 'lt' and value < self.threshold:
return True
else:
return False
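The comparison in `is_alert` above boils down to a small pure function; this stand-alone sketch (`crosses_threshold` and all values here are invented for illustration) shows both alert directions:

```python
# Stand-alone sketch of the gt/lt threshold logic in BaseIndicator.is_alert.
def crosses_threshold(value, threshold, comparison='gt'):
    # 'gt' indicators alert when the value rises above the threshold (e.g. load);
    # 'lt' indicators alert when it falls below it (e.g. free RAM or disk).
    if comparison == 'gt':
        return value > threshold
    return value < threshold

print(crosses_threshold(2.5, 2.0, 'gt'))       # True: load above its limit
print(crosses_threshold(500.0, 1000.0, 'lt'))  # True: free memory below its limit
```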
class LoadIndicator(BaseIndicator):
name = 'load_longterm'
unit = ''
comparison = 'gt'
timeframe = '10m'
filters = None
divider = 1
class FreeRAMIndicator(BaseIndicator):
name = 'memory_value'
unit = 'Mb'
comparison = 'lt'
timeframe = '10m'
filters = {
'type_instance': 'free'
}
divider = 1000000
class FreeDiskIndicator(BaseIndicator):
name = 'df_value'
unit = 'Mb'
comparison = 'lt'
timeframe = '10m'
filters = {
'type_instance': 'free'
}
divider = 1000000

# ---- openbook_posts/migrations/0022_auto_20190311_1432.py (TamaraAbells/okuna-api, MIT) ----
# Generated by Django 2.2b1 on 2019-03-11 13:32
from django.db import migrations
import imagekit.models.fields
import openbook_posts.helpers
class Migration(migrations.Migration):
dependencies = [
('openbook_posts', '0021_auto_20190309_1532'),
]
operations = [
migrations.AlterField(
model_name='postimage',
name='image',
field=imagekit.models.fields.ProcessedImageField(height_field='height', null=True, upload_to=openbook_posts.helpers.upload_to_post_image_directory, verbose_name='image', width_field='width'),
),
]

# ---- setup.py (AndersenLab/liftover-utils, MIT) ----
from setuptools import setup
import glob
setup(name='liftover.py',
version='0.1',
packages=['liftover'],
description='C. elegans liftover utility',
url='https://github.com/AndersenLab/liftover-utils',
author='Daniel Cook',
author_email='danielecook@gmail.com',
license='MIT',
entry_points="""
[console_scripts]
liftover = liftover.liftover:main
""",
install_requires=["docopt"],
data_files=[('CHROMOSOME_DIFFERENCES', glob.glob("data/CHROMOSOME_DIFFERENCES/sequence*")),
'remap_gff_between_releases.pl'],
      zip_safe=False)

# ---- tests/test_atom.py (mkowiel/restraintlib, BSD-3-Clause) ----
# -*- coding: utf-8 -*-
import math
from unittest import TestCase
from restraintlib.atom import Atom
class AtomTestCase(TestCase):
def setUp(self):
self.atom = Atom('A', '100', 'DT', 'C2', ' ', (1.0, 0.2, 0.3), 1)
self.atom2 = Atom('A', '100', 'DT', 'C3', ' ', (1.0, 1.0, 1.0), 2)
def test_str(self):
expected = "chain: A res: 100 monomer: DT atom: C2 alt loc: xyz: (1.0, 0.2, 0.3)"
self.assertEqual(expected, str(self.atom))
def test_cross(self):
cross = Atom.cross(self.atom.atom_xyz, self.atom2.atom_xyz)
self.assertEqual((0.2 - 0.3, 0.3 - 1.0, 1.0 - 0.2), cross)
def test_sub(self):
sub = Atom.sub(self.atom.atom_xyz, self.atom2.atom_xyz)
self.assertEqual((0.0, -0.8, -0.7), sub)
def test_dot(self):
dot = Atom.dot(self.atom.atom_xyz, self.atom2.atom_xyz)
self.assertEqual(1.5, dot)
def test_lenght(self):
length = Atom.lenght(self.atom.atom_xyz)
self.assertAlmostEqual(math.sqrt(1.13), length, 5)
length = Atom.lenght(self.atom2.atom_xyz)
self.assertAlmostEqual(math.sqrt(3.0), length, 5)
def test_mul_sca(self):
vec = Atom.mul_sca(3, self.atom2.atom_xyz)
self.assertEqual((3.0, 3.0, 3.0), vec)
def test_normalize(self):
vec = Atom.normalize(self.atom2.atom_xyz)
self.assertEqual(1.0, Atom.lenght(vec))
def test_det(self):
det = Atom.det(self.atom.atom_xyz, self.atom2.atom_xyz, self.atom2.atom_xyz)
self.assertAlmostEqual(0, det, 6)
det = Atom.det(self.atom.atom_xyz, self.atom2.atom_xyz, (1, 1, 0.5))
self.assertAlmostEqual(-0.4, det, 6)
def test_dist(self):
dist = self.atom.dist(self.atom2)
self.assertAlmostEqual(math.sqrt(1.13), dist, 6)
def test_angle(self):
zero = Atom('', '1', '', '', '', (0.0, 0.0, 0.0), 1)
one = Atom('', '2', '', '', '', (1.0, 0.0, 0.0), 2)
diag = Atom('', '3', '', '', '', (1.0, 1.0, 0.0), 3)
one_one_one = Atom('', '3', '', '', '', (1.0, 1.0, 1.0), 4)
angle = one.angle(zero, diag)
self.assertAlmostEqual(45, angle, 6)
angle = diag.angle(zero, one_one_one)
self.assertAlmostEqual(35.26, angle, 2)
def test_torsion(self):
a1 = Atom('', '1', '', '', '', (-1.0, -1.0, 0.0), 1)
a2 = Atom('', '2', '', '', '', (-1.0, 0.0, 0.0), 2)
a3 = Atom('', '3', '', '', '', (1.0, 0.0, 0.0), 3)
a4 = Atom('', '3', '', '', '', (1.0, 1.0, 0.0), 4)
torsion = a1.torsion(a2, a3, a4)
self.assertAlmostEqual(180, torsion, 6)

# ---- tests/test_settings.py (wegostudio/wegq, Apache-2.0) ----
import unittest
from wework import settings, wechat
class TestSettings(unittest.TestCase):
def test_init(self):
t = settings.init(
CROP_ID='a',
PROVIDER_SECRET='a',
REGISTER_URL='www.quseit.com/',
HELPER='wegq.DjangoHelper'
)
self.assertTrue(isinstance(t, wechat.WorkWechatApi))
def test_error(self):
with self.assertRaises(settings.InitError):
settings.init(
CROP_ID='a',
PROVIDER_SECRET='a',
HELPER='wegq.DjangoHelper'
)
with self.assertRaises(settings.InitError):
settings.init(
CROP_ID='a',
PROVIDER_SECRET='a',
REGISTER_URL='www.quseit.com',
HELPER='wegq.DjangoHelper'
)
with self.assertRaises(settings.InitError):
settings.init(
CROP_ID='a',
PROVIDER_SECRET='a',
REGISTER_URL='www.quseit.com',
HELPER=type('MyHelper', (object, ), {}),
            )

# ---- setup.py (mchalek/timetools, MIT) ----
from distutils.core import setup
setup(
name = 'timetools',
version = '1.0.0',
description = 'CL tools for timestamps',
author = 'Kevin McHale',
author_email = 'mchalek@gmail.com',
url = 'https://github.com/mchalek/timetools',
scripts = [
'bin/now',
'bin/when',
'bin/ts',
'bin/daysago',
'bin/hoursago',
])

# ---- home/migrations/0020_remove_comment_content_type.py (yys534640040/blog, MIT) ----
# Generated by Django 3.0.5 on 2020-07-03 20:33
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('home', '0019_comment_content_type'),
]
operations = [
migrations.RemoveField(
model_name='comment',
name='content_type',
),
]

# ---- maldives/bot/models/price.py (filipecn/maldives, MIT) ----
from datetime import datetime
class Price:
date: datetime = datetime(1, 1, 1)
currency: str = 'BRL'
symbol: str = ''
current: float = 0
open: float = 0
close: float = 0
low: float = 0
high: float = 0
volume: float = 0
interval: str = ''
def __init__(self, **kwargs):
for key, value in kwargs.items():
setattr(self, key, value)

# ---- filesystems/click.py (Julian/Filesystems, MIT) ----
"""
Click support for `filesystems.Path`.
"""
from __future__ import absolute_import
import click
import filesystems
class Path(click.ParamType):
name = "path"
def convert(self, value, param, context):
if not isinstance(value, str):
return value
return filesystems.Path.from_string(value)
PATH = Path()

# ---- radiation/python/scripts/record_random_walk.py (dfridovi/exploration, BSD-3-Clause) ----
"""
Copyright (c) 2015, The Regents of the University of California (Regents).
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS AS IS
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Please contact the author(s) of this library if you have any questions.
Authors: David Fridovich-Keil ( dfk@eecs.berkeley.edu )
"""
###########################################################################
#
# Record a dataset of map, successive poses, and corresponding measurements.
# The dataset will contain many such groups.
#
###########################################################################
from grid_pose_2d import GridPose2D
from source_2d import Source2D
from sensor_2d import Sensor2D
from encoding import *
import numpy as np
import math
# File to save to.
maps_file = "maps.csv"
trajectories_file = "trajectories.csv"
measurements_file = "measurements.csv"
# Define hyperparameters.
kNumSimulations = 100000
kNumRows = 5
kNumCols = 5
kNumSources = 3
kNumSteps = 10
kNumAngles = 8
kAngularStep = 2.0 * math.pi / float(kNumAngles)
kSensorParams = {"x" : kNumRows/2,
"y" : kNumCols/2,
"angle" : 0.0,
"fov" : 0.5 * math.pi}
delta_xs = [-1, 0, 1]
delta_ys = [-1, 0, 1]
delta_as = [-kAngularStep, 0, kAngularStep]
# Run the specified number of simulations.
maps = np.zeros(kNumSimulations)
trajectories = np.zeros((kNumSimulations, kNumSteps))
measurements = np.zeros((kNumSimulations, kNumSteps))
for ii in range(kNumSimulations):
# Generate random sources on the grid.
sources = []
for jj in range(kNumSources):
x = float(np.random.random_integers(0, kNumRows-1)) + 0.5
y = float(np.random.random_integers(0, kNumCols-1)) + 0.5
sources.append(Source2D(x, y))
sensor = Sensor2D(kSensorParams, sources)
maps[ii] = EncodeMap(kNumRows, kNumCols, sources)
# Generate a valid trajectory of the given length.
step_counter = 0
current_pose = GridPose2D(kNumRows, kNumCols,
int(np.random.uniform(0.0, kNumRows)) + 0.5,
int(np.random.uniform(0.0, kNumCols)) + 0.5,
np.random.uniform(0.0, 2.0 * math.pi))
while step_counter < kNumSteps:
dx = np.random.choice(delta_xs)
dy = np.random.choice(delta_ys)
da = np.random.choice(delta_as)
next_pose = GridPose2D.Copy(current_pose)
if next_pose.MoveBy(dx, dy, da):
# If a valid move, append to list.
trajectories[ii, step_counter] = (int(next_pose.x_) +
int(next_pose.y_) * kNumRows +
(int(next_pose.angle_ / kAngularStep)
% kNumAngles) * kNumRows * kNumAngles)
current_pose = next_pose
# Get a measurement.
sensor.ResetPose(current_pose)
measurements[ii, step_counter] = sensor.Sense()
step_counter += 1
# Save to disk.
np.savetxt(maps_file, maps, delimiter=",")
np.savetxt(trajectories_file, trajectories, delimiter=",")
np.savetxt(measurements_file, measurements, delimiter=",")
print "Successfully saved to disk."

# ---- Test/test.py (ed-chin-git/Utils_edchin, MIT) ----
# ======================================================
# Packaging Python Projects Tutorial : https://packaging.python.org/tutorials/packaging-projects/
#
# Installation : # pip install -i https://test.pypi.org/simple/ utils-edchin
#
#
# ===================================================================
# Importing
if __name__ == "__main__":
''' -- Run tests --
'''
import numpy as np
    from utils_edchin.pyxlib import pyxlib as edc_xlib  # import the class (name assumed to match the module)
    pyx = edc_xlib()  # create instance
num_list = [-10, 2, 5, 3, 8, 4, 7, 5, 10, 99, 1000]
qSet = pyx.quartileSet(num_list)
print('\nList :', num_list)
print('Quartile limit-Lower:', qSet[0])
print(' Upper:',qSet[1])
print(' Outliers :', pyx.listOutliers(num_list))
print('w/o Outliers :', pyx.removeOutliers(num_list))
print(' my Var :', pyx.variance_edc(pyx.removeOutliers(num_list)))
print(' Numpy.Var :', np.var(pyx.removeOutliers(num_list)))
str_ing = 'Pennsylvania'
print('\n', str_ing,'string reversed =', pyx.str_reverse(str_ing))
import pandas
    from utils_edchin.DataProcessor import DataProcessor as edc_dp  # import the class (name assumed to match the module)
    dp = edc_dp()  # create instance
df_in = pandas.DataFrame({"zip":[45763, 73627, 78632, 22374, 31455], "abbrev": ["OH", "MI", "SD", "PR", "PA"]})
df_out = dp.add_state_names(df_in) # use it
print('\nInput:\n', df_in.head())
print('Output:\n', df_out.head())

# ---- reqlog/__main__.py (JFF-Bohdan/reqlog, MIT) ----
import os
import sys
import bottle
base_module_dir = os.path.dirname(sys.modules[__name__].__file__)
try:
import reqlog # noqa: F401 # need to check import possibility
except ImportError:
path = base_module_dir
path = os.path.join(path, "..")
sys.path.insert(0, path)
import reqlog # noqa # testing that we able to import package
from reqlog.support.bottle_tools import log_all_routes # noqa
config = reqlog.get_config()
reqlog.setup_app(config)
log_all_routes(reqlog.logger, reqlog.application)
bottle.run(
app=reqlog.application,
host=config.get("main", "host"),
port=config.getint("main", "port"),
debug=config.getboolean("main", "debug"),
reloader=config.getboolean("main", "reloader"),
interval=config.getint("main", "reloader_interval")
)
| 25.96875 | 67 | 0.690734 | 108 | 831 | 5.12963 | 0.472222 | 0.036101 | 0.046931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005917 | 0.186522 | 831 | 31 | 68 | 26.806452 | 0.813609 | 0.113117 | 0 | 0.086957 | 0 | 0 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.304348 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b6086a825ed85a12a4788f76ef961e39f102218a | 43,116 | py | Python | djangobbs/uploads/exif.py | JuanbingTeam/djangobbs | 2d52d83b80758e153b0604e71fb0cef4e6528275 | [
"Apache-2.0"
] | null | null | null | djangobbs/uploads/exif.py | JuanbingTeam/djangobbs | 2d52d83b80758e153b0604e71fb0cef4e6528275 | [
"Apache-2.0"
] | null | null | null | djangobbs/uploads/exif.py | JuanbingTeam/djangobbs | 2d52d83b80758e153b0604e71fb0cef4e6528275 | [
"Apache-2.0"
] | null | null | null | # Library to extract EXIF information in digital camera image files
#
# To use this library call with:
# f=open(path_name, 'rb')
# tags=EXIF.process_file(f)
# tags will now be a dictionary mapping names of EXIF tags to their
# values in the file named by path_name. You can process the tags
# as you wish. In particular, you can iterate through all the tags with:
# for tag in tags.keys():
# if tag not in ('JPEGThumbnail', 'TIFFThumbnail', 'Filename',
# 'EXIF MakerNote'):
# print "Key: %s, value %s" % (tag, tags[tag])
# (This code uses the if statement to avoid printing out a few of the
# tags that tend to be long or boring.)
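The iteration pattern described above can be exercised without a camera file by standing in a plain dict for `process_file`'s result; the tag names and values below are fabricated examples, not output from a real image:

```python
# Fabricated stand-in for the dict that EXIF.process_file(f) returns.
tags = {
    'Image Make': 'Canon',
    'EXIF DateTimeOriginal': '2002:01:20 12:00:00',
    'JPEGThumbnail': b'\xff\xd8',  # binary blob: skipped below
    'EXIF MakerNote': [1, 2, 3],   # long/boring: skipped below
}
skipped = ('JPEGThumbnail', 'TIFFThumbnail', 'Filename', 'EXIF MakerNote')
shown = []
for tag in sorted(tags):
    if tag not in skipped:
        shown.append("Key: %s, value %s" % (tag, tags[tag]))
print('\n'.join(shown))
```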
#
# The tags dictionary will include keys for all of the usual EXIF
# tags, and will also include keys for Makernotes used by some
# cameras, for which we have a good specification.
#
# Contains code from "exifdump.py" originally written by Thierry Bousch
# <bousch@topo.math.u-psud.fr> and released into the public domain.
#
# Updated and turned into general-purpose library by Gene Cash
#
# This copyright license is intended to be similar to the FreeBSD license.
#
# Copyright 2002 Gene Cash All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the
# distribution.
#
# THIS SOFTWARE IS PROVIDED BY GENE CASH ``AS IS'' AND ANY EXPRESS OR
# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# This means you may do anything you want with this code, except claim you
# wrote it. Also, if it breaks you get to keep both pieces.
#
# Patch Contributors:
# * Simon J. Gerraty <sjg@crufty.net>
# s2n fix & orientation decode
# * John T. Riedl <riedl@cs.umn.edu>
# Added support for newer Nikon type 3 Makernote format for D70 and some
# other Nikon cameras.
# * Joerg Schaefer <schaeferj@gmx.net>
# Fixed subtle bug when faking an EXIF header, which affected maker notes
# using relative offsets, and a fix for Nikon D100.
#
# 21-AUG-99 TB Last update by Thierry Bousch to his code.
# 17-JAN-02 CEC Discovered code on web.
# Commented everything.
# Made small code improvements.
# Reformatted for readability.
# 19-JAN-02 CEC Added ability to read TIFFs and JFIF-format JPEGs.
# Added ability to extract JPEG formatted thumbnail.
# Added ability to read GPS IFD (not tested).
# Converted IFD data structure to dictionaries indexed by
# tag name.
# Factored into library returning dictionary of IFDs plus
# thumbnail, if any.
# 20-JAN-02 CEC Added MakerNote processing logic.
# Added Olympus MakerNote.
# Converted data structure to single-level dictionary, avoiding
# tag name collisions by prefixing with IFD name. This makes
# it much easier to use.
# 23-JAN-02 CEC Trimmed nulls from end of string values.
# 25-JAN-02 CEC Discovered JPEG thumbnail in Olympus TIFF MakerNote.
# 26-JAN-02 CEC Added ability to extract TIFF thumbnails.
# Added Nikon, Fujifilm, Casio MakerNotes.
# 30-NOV-03 CEC Fixed problem with canon_decode_tag() not creating an
# IFD_Tag() object.
# 15-FEB-04 CEC Finally fixed bit shift warning by converting Y to 0L.
#
# field type descriptions as (length, abbreviation, full name) tuples
FIELD_TYPES=(
(0, 'X', 'Proprietary'), # no such type
(1, 'B', 'Byte'),
(1, 'A', 'ASCII'),
(2, 'S', 'Short'),
(4, 'L', 'Long'),
(8, 'R', 'Ratio'),
(1, 'SB', 'Signed Byte'),
(1, 'U', 'Undefined'),
(2, 'SS', 'Signed Short'),
(4, 'SL', 'Signed Long'),
(8, 'SR', 'Signed Ratio')
)
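# A field's data fits in an IFD entry's 4-byte inline value slot only when
# count*typelen <= 4; otherwise the slot holds an offset to the data. A minimal
# Python 3 sketch of that size arithmetic (the helper name is illustrative,
# not part of this library):

```python
# Per-item byte lengths, mirroring the first element of each FIELD_TYPES tuple.
FIELD_TYPES = (
    (0, 'X', 'Proprietary'),
    (1, 'B', 'Byte'),
    (1, 'A', 'ASCII'),
    (2, 'S', 'Short'),
    (4, 'L', 'Long'),
    (8, 'R', 'Ratio'),
    (1, 'SB', 'Signed Byte'),
    (1, 'U', 'Undefined'),
    (2, 'SS', 'Signed Short'),
    (4, 'SL', 'Signed Long'),
    (8, 'SR', 'Signed Ratio'),
)

def field_byte_size(field_type, count):
    # total bytes occupied by a field: per-item length times item count
    return FIELD_TYPES[field_type][0] * count

# A 'Ratio' (type 5) occupies 8 bytes per item, so even a single ratio does
# not fit in the 4-byte inline slot and must be stored via an offset.
print(field_byte_size(5, 1))  # 8
print(field_byte_size(3, 2))  # 4
```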
# dictionary of main EXIF tag names
# first element of tuple is tag name, optional second element is
# another dictionary giving names to values
EXIF_TAGS={
0x0100: ('ImageWidth', ),
0x0101: ('ImageLength', ),
0x0102: ('BitsPerSample', ),
0x0103: ('Compression',
{1: 'Uncompressed TIFF',
6: 'JPEG Compressed'}),
0x0106: ('PhotometricInterpretation', ),
0x010A: ('FillOrder', ),
0x010D: ('DocumentName', ),
0x010E: ('ImageDescription', ),
0x010F: ('Make', ),
0x0110: ('Model', ),
0x0111: ('StripOffsets', ),
0x0112: ('Orientation',
{1: 'Horizontal (normal)',
2: 'Mirrored horizontal',
3: 'Rotated 180',
4: 'Mirrored vertical',
5: 'Mirrored horizontal then rotated 90 CCW',
6: 'Rotated 90 CW',
7: 'Mirrored horizontal then rotated 90 CW',
8: 'Rotated 90 CCW'}),
0x0115: ('SamplesPerPixel', ),
0x0116: ('RowsPerStrip', ),
0x0117: ('StripByteCounts', ),
0x011A: ('XResolution', ),
0x011B: ('YResolution', ),
0x011C: ('PlanarConfiguration', ),
0x0128: ('ResolutionUnit',
{1: 'Not Absolute',
2: 'Pixels/Inch',
3: 'Pixels/Centimeter'}),
0x012D: ('TransferFunction', ),
0x0131: ('Software', ),
0x0132: ('DateTime', ),
0x013B: ('Artist', ),
0x013E: ('WhitePoint', ),
0x013F: ('PrimaryChromaticities', ),
0x0156: ('TransferRange', ),
0x0200: ('JPEGProc', ),
0x0201: ('JPEGInterchangeFormat', ),
0x0202: ('JPEGInterchangeFormatLength', ),
0x0211: ('YCbCrCoefficients', ),
0x0212: ('YCbCrSubSampling', ),
0x0213: ('YCbCrPositioning', ),
0x0214: ('ReferenceBlackWhite', ),
0x828D: ('CFARepeatPatternDim', ),
0x828E: ('CFAPattern', ),
0x828F: ('BatteryLevel', ),
0x8298: ('Copyright', ),
0x829A: ('ExposureTime', ),
0x829D: ('FNumber', ),
0x83BB: ('IPTC/NAA', ),
0x8769: ('ExifOffset', ),
0x8773: ('InterColorProfile', ),
0x8822: ('ExposureProgram',
{0: 'Unidentified',
1: 'Manual',
2: 'Program Normal',
3: 'Aperture Priority',
4: 'Shutter Priority',
5: 'Program Creative',
6: 'Program Action',
7: 'Portrait Mode',
8: 'Landscape Mode'}),
0x8824: ('SpectralSensitivity', ),
0x8825: ('GPSInfo', ),
0x8827: ('ISOSpeedRatings', ),
0x8828: ('OECF', ),
# print as string
0x9000: ('ExifVersion', lambda x: ''.join(map(chr, x))),
0x9003: ('DateTimeOriginal', ),
0x9004: ('DateTimeDigitized', ),
0x9101: ('ComponentsConfiguration',
{0: '',
1: 'Y',
2: 'Cb',
3: 'Cr',
4: 'Red',
5: 'Green',
6: 'Blue'}),
0x9102: ('CompressedBitsPerPixel', ),
0x9201: ('ShutterSpeedValue', ),
0x9202: ('ApertureValue', ),
0x9203: ('BrightnessValue', ),
0x9204: ('ExposureBiasValue', ),
0x9205: ('MaxApertureValue', ),
0x9206: ('SubjectDistance', ),
0x9207: ('MeteringMode',
{0: 'Unidentified',
1: 'Average',
2: 'CenterWeightedAverage',
3: 'Spot',
4: 'MultiSpot'}),
0x9208: ('LightSource',
{0: 'Unknown',
1: 'Daylight',
2: 'Fluorescent',
3: 'Tungsten',
10: 'Flash',
17: 'Standard Light A',
18: 'Standard Light B',
19: 'Standard Light C',
20: 'D55',
21: 'D65',
22: 'D75',
255: 'Other'}),
0x9209: ('Flash', {0: 'No',
1: 'Fired',
5: 'Fired (?)', # no return sensed
7: 'Fired (!)', # return sensed
9: 'Fill Fired',
13: 'Fill Fired (?)',
15: 'Fill Fired (!)',
16: 'Off',
24: 'Auto Off',
25: 'Auto Fired',
29: 'Auto Fired (?)',
31: 'Auto Fired (!)',
32: 'Not Available'}),
0x920A: ('FocalLength', ),
0x927C: ('MakerNote', ),
# print as string
0x9286: ('UserComment', lambda x: ''.join(map(chr, x))),
0x9290: ('SubSecTime', ),
0x9291: ('SubSecTimeOriginal', ),
0x9292: ('SubSecTimeDigitized', ),
# print as string
0xA000: ('FlashPixVersion', lambda x: ''.join(map(chr, x))),
0xA001: ('ColorSpace', ),
0xA002: ('ExifImageWidth', ),
0xA003: ('ExifImageLength', ),
0xA005: ('InteroperabilityOffset', ),
0xA20B: ('FlashEnergy', ), # 0x920B in TIFF/EP
0xA20C: ('SpatialFrequencyResponse', ), # 0x920C - -
0xA20E: ('FocalPlaneXResolution', ), # 0x920E - -
0xA20F: ('FocalPlaneYResolution', ), # 0x920F - -
0xA210: ('FocalPlaneResolutionUnit', ), # 0x9210 - -
0xA214: ('SubjectLocation', ), # 0x9214 - -
0xA215: ('ExposureIndex', ), # 0x9215 - -
0xA217: ('SensingMethod', ), # 0x9217 - -
0xA300: ('FileSource',
{3: 'Digital Camera'}),
0xA301: ('SceneType',
{1: 'Directly Photographed'}),
0xA302: ('CVAPattern',),
}
# interoperability tags
INTR_TAGS={
0x0001: ('InteroperabilityIndex', ),
0x0002: ('InteroperabilityVersion', ),
0x1000: ('RelatedImageFileFormat', ),
0x1001: ('RelatedImageWidth', ),
0x1002: ('RelatedImageLength', ),
}
# GPS tags (not used yet, haven't seen camera with GPS)
GPS_TAGS={
0x0000: ('GPSVersionID', ),
0x0001: ('GPSLatitudeRef', ),
0x0002: ('GPSLatitude', ),
0x0003: ('GPSLongitudeRef', ),
0x0004: ('GPSLongitude', ),
0x0005: ('GPSAltitudeRef', ),
0x0006: ('GPSAltitude', ),
0x0007: ('GPSTimeStamp', ),
0x0008: ('GPSSatellites', ),
0x0009: ('GPSStatus', ),
0x000A: ('GPSMeasureMode', ),
0x000B: ('GPSDOP', ),
0x000C: ('GPSSpeedRef', ),
0x000D: ('GPSSpeed', ),
0x000E: ('GPSTrackRef', ),
0x000F: ('GPSTrack', ),
0x0010: ('GPSImgDirectionRef', ),
0x0011: ('GPSImgDirection', ),
0x0012: ('GPSMapDatum', ),
0x0013: ('GPSDestLatitudeRef', ),
0x0014: ('GPSDestLatitude', ),
0x0015: ('GPSDestLongitudeRef', ),
0x0016: ('GPSDestLongitude', ),
0x0017: ('GPSDestBearingRef', ),
0x0018: ('GPSDestBearing', ),
0x0019: ('GPSDestDistanceRef', ),
0x001A: ('GPSDestDistance', )
}
# Nikon E99x MakerNote Tags
# http://members.tripod.com/~tawba/990exif.htm
MAKERNOTE_NIKON_NEWER_TAGS={
0x0002: ('ISOSetting', ),
0x0003: ('ColorMode', ),
0x0004: ('Quality', ),
0x0005: ('Whitebalance', ),
0x0006: ('ImageSharpening', ),
0x0007: ('FocusMode', ),
0x0008: ('FlashSetting', ),
0x0009: ('AutoFlashMode', ),
0x000B: ('WhiteBalanceBias', ),
0x000C: ('WhiteBalanceRBCoeff', ),
0x000F: ('ISOSelection', ),
0x0012: ('FlashCompensation', ),
0x0013: ('ISOSpeedRequested', ),
0x0016: ('PhotoCornerCoordinates', ),
0x0018: ('FlashBracketCompensationApplied', ),
0x0019: ('AEBracketCompensationApplied', ),
0x0080: ('ImageAdjustment', ),
0x0081: ('ToneCompensation', ),
0x0082: ('AuxiliaryLens', ),
0x0083: ('LensType', ),
0x0084: ('LensMinMaxFocalMaxAperture', ),
0x0085: ('ManualFocusDistance', ),
0x0086: ('DigitalZoomFactor', ),
0x0088: ('AFFocusPosition',
{0x0000: 'Center',
0x0100: 'Top',
0x0200: 'Bottom',
0x0300: 'Left',
0x0400: 'Right'}),
0x0089: ('BracketingMode',
{0x00: 'Single frame, no bracketing',
0x01: 'Continuous, no bracketing',
0x02: 'Timer, no bracketing',
0x10: 'Single frame, exposure bracketing',
0x11: 'Continuous, exposure bracketing',
0x12: 'Timer, exposure bracketing',
0x40: 'Single frame, white balance bracketing',
0x41: 'Continuous, white balance bracketing',
0x42: 'Timer, white balance bracketing'}),
0x008D: ('ColorMode', ),
0x008F: ('SceneMode?', ),
0x0090: ('LightingType', ),
0x0092: ('HueAdjustment', ),
0x0094: ('Saturation',
{-3: 'B&W',
-2: '-2',
-1: '-1',
0: '0',
1: '1',
2: '2'}),
0x0095: ('NoiseReduction', ),
0x00A7: ('TotalShutterReleases', ),
0x00A9: ('ImageOptimization', ),
0x00AA: ('Saturation', ),
0x00AB: ('DigitalVariProgram', ),
0x0010: ('DataDump', )
}
MAKERNOTE_NIKON_OLDER_TAGS={
0x0003: ('Quality',
{1: 'VGA Basic',
2: 'VGA Normal',
3: 'VGA Fine',
4: 'SXGA Basic',
5: 'SXGA Normal',
6: 'SXGA Fine'}),
0x0004: ('ColorMode',
{1: 'Color',
2: 'Monochrome'}),
0x0005: ('ImageAdjustment',
{0: 'Normal',
1: 'Bright+',
2: 'Bright-',
3: 'Contrast+',
4: 'Contrast-'}),
0x0006: ('CCDSpeed',
{0: 'ISO 80',
2: 'ISO 160',
4: 'ISO 320',
5: 'ISO 100'}),
0x0007: ('WhiteBalance',
{0: 'Auto',
1: 'Preset',
2: 'Daylight',
3: 'Incandescent',
4: 'Fluorescent',
5: 'Cloudy',
6: 'Speed Light'})
}
# decode Olympus SpecialMode tag in MakerNote
def olympus_special_mode(v):
a={
0: 'Normal',
1: 'Unknown',
2: 'Fast',
3: 'Panorama'}
b={
0: 'Non-panoramic',
1: 'Left to right',
2: 'Right to left',
3: 'Bottom to top',
4: 'Top to bottom'}
return '%s - sequence %d - %s' % (a[v[0]], v[1], b[v[2]])
MAKERNOTE_OLYMPUS_TAGS={
# ah HAH! those sneeeeeaky bastids! this is how they get past the fact
# that a JPEG thumbnail is not allowed in an uncompressed TIFF file
0x0100: ('JPEGThumbnail', ),
0x0200: ('SpecialMode', olympus_special_mode),
0x0201: ('JPEGQual',
{1: 'SQ',
2: 'HQ',
3: 'SHQ'}),
0x0202: ('Macro',
{0: 'Normal',
1: 'Macro'}),
0x0204: ('DigitalZoom', ),
0x0207: ('SoftwareRelease', ),
0x0208: ('PictureInfo', ),
# print as string
0x0209: ('CameraID', lambda x: ''.join(map(chr, x))),
0x0F00: ('DataDump', )
}
MAKERNOTE_CASIO_TAGS={
0x0001: ('RecordingMode',
{1: 'Single Shutter',
2: 'Panorama',
3: 'Night Scene',
4: 'Portrait',
5: 'Landscape'}),
0x0002: ('Quality',
{1: 'Economy',
2: 'Normal',
3: 'Fine'}),
0x0003: ('FocusingMode',
{2: 'Macro',
3: 'Auto Focus',
4: 'Manual Focus',
5: 'Infinity'}),
0x0004: ('FlashMode',
{1: 'Auto',
2: 'On',
3: 'Off',
4: 'Red Eye Reduction'}),
0x0005: ('FlashIntensity',
{11: 'Weak',
13: 'Normal',
15: 'Strong'}),
0x0006: ('Object Distance', ),
0x0007: ('WhiteBalance',
{1: 'Auto',
2: 'Tungsten',
3: 'Daylight',
4: 'Fluorescent',
5: 'Shade',
129: 'Manual'}),
0x000B: ('Sharpness',
{0: 'Normal',
1: 'Soft',
2: 'Hard'}),
0x000C: ('Contrast',
{0: 'Normal',
1: 'Low',
2: 'High'}),
0x000D: ('Saturation',
{0: 'Normal',
1: 'Low',
2: 'High'}),
0x0014: ('CCDSpeed',
{64: 'Normal',
80: 'Normal',
100: 'High',
125: '+1.0',
244: '+3.0',
250: '+2.0',})
}
MAKERNOTE_FUJIFILM_TAGS={
0x0000: ('NoteVersion', lambda x: ''.join(map(chr, x))),
0x1000: ('Quality', ),
0x1001: ('Sharpness',
{1: 'Soft',
2: 'Soft',
3: 'Normal',
4: 'Hard',
5: 'Hard'}),
0x1002: ('WhiteBalance',
{0: 'Auto',
256: 'Daylight',
512: 'Cloudy',
768: 'DaylightColor-Fluorescent',
769: 'DaywhiteColor-Fluorescent',
770: 'White-Fluorescent',
1024: 'Incandescent',
3840: 'Custom'}),
0x1003: ('Color',
{0: 'Normal',
256: 'High',
512: 'Low'}),
0x1004: ('Tone',
{0: 'Normal',
256: 'High',
512: 'Low'}),
0x1010: ('FlashMode',
{0: 'Auto',
1: 'On',
2: 'Off',
3: 'Red Eye Reduction'}),
0x1011: ('FlashStrength', ),
0x1020: ('Macro',
{0: 'Off',
1: 'On'}),
0x1021: ('FocusMode',
{0: 'Auto',
1: 'Manual'}),
0x1030: ('SlowSync',
{0: 'Off',
1: 'On'}),
0x1031: ('PictureMode',
{0: 'Auto',
1: 'Portrait',
2: 'Landscape',
4: 'Sports',
5: 'Night',
6: 'Program AE',
256: 'Aperture Priority AE',
512: 'Shutter Priority AE',
768: 'Manual Exposure'}),
0x1100: ('MotorOrBracket',
{0: 'Off',
1: 'On'}),
0x1300: ('BlurWarning',
{0: 'Off',
1: 'On'}),
0x1301: ('FocusWarning',
{0: 'Off',
1: 'On'}),
0x1302: ('AEWarning',
{0: 'Off',
1: 'On'})
}
MAKERNOTE_CANON_TAGS={
0x0006: ('ImageType', ),
0x0007: ('FirmwareVersion', ),
0x0008: ('ImageNumber', ),
0x0009: ('OwnerName', )
}
# see http://www.burren.cx/david/canon.html by David Burren
# this is in element offset, name, optional value dictionary format
MAKERNOTE_CANON_TAG_0x001={
1: ('Macromode',
{1: 'Macro',
2: 'Normal'}),
2: ('SelfTimer', ),
3: ('Quality',
{2: 'Normal',
3: 'Fine',
5: 'Superfine'}),
4: ('FlashMode',
{0: 'Flash Not Fired',
1: 'Auto',
2: 'On',
3: 'Red-Eye Reduction',
4: 'Slow Synchro',
5: 'Auto + Red-Eye Reduction',
6: 'On + Red-Eye Reduction',
16: 'external flash'}),
5: ('ContinuousDriveMode',
{0: 'Single Or Timer',
1: 'Continuous'}),
7: ('FocusMode',
{0: 'One-Shot',
1: 'AI Servo',
2: 'AI Focus',
3: 'MF',
4: 'Single',
5: 'Continuous',
6: 'MF'}),
10: ('ImageSize',
{0: 'Large',
1: 'Medium',
2: 'Small'}),
11: ('EasyShootingMode',
{0: 'Full Auto',
1: 'Manual',
2: 'Landscape',
3: 'Fast Shutter',
4: 'Slow Shutter',
5: 'Night',
6: 'B&W',
7: 'Sepia',
8: 'Portrait',
9: 'Sports',
10: 'Macro/Close-Up',
11: 'Pan Focus'}),
12: ('DigitalZoom',
{0: 'None',
1: '2x',
2: '4x'}),
13: ('Contrast',
{0xFFFF: 'Low',
0: 'Normal',
1: 'High'}),
14: ('Saturation',
{0xFFFF: 'Low',
0: 'Normal',
1: 'High'}),
15: ('Sharpness',
{0xFFFF: 'Low',
0: 'Normal',
1: 'High'}),
16: ('ISO',
{0: 'See ISOSpeedRatings Tag',
15: 'Auto',
16: '50',
17: '100',
18: '200',
19: '400'}),
17: ('MeteringMode',
{3: 'Evaluative',
4: 'Partial',
5: 'Center-weighted'}),
18: ('FocusType',
{0: 'Manual',
1: 'Auto',
3: 'Close-Up (Macro)',
8: 'Locked (Pan Mode)'}),
19: ('AFPointSelected',
{0x3000: 'None (MF)',
0x3001: 'Auto-Selected',
0x3002: 'Right',
0x3003: 'Center',
0x3004: 'Left'}),
20: ('ExposureMode',
{0: 'Easy Shooting',
1: 'Program',
2: 'Tv-priority',
3: 'Av-priority',
4: 'Manual',
5: 'A-DEP'}),
23: ('LongFocalLengthOfLensInFocalUnits', ),
24: ('ShortFocalLengthOfLensInFocalUnits', ),
25: ('FocalUnitsPerMM', ),
28: ('FlashActivity',
{0: 'Did Not Fire',
1: 'Fired'}),
29: ('FlashDetails',
{14: 'External E-TTL',
13: 'Internal Flash',
11: 'FP Sync Used',
7: '2nd("Rear")-Curtain Sync Used',
4: 'FP Sync Enabled'}),
32: ('FocusMode',
{0: 'Single',
1: 'Continuous'})
}
MAKERNOTE_CANON_TAG_0x004={
7: ('WhiteBalance',
{0: 'Auto',
1: 'Sunny',
2: 'Cloudy',
3: 'Tungsten',
4: 'Fluorescent',
5: 'Flash',
6: 'Custom'}),
9: ('SequenceNumber', ),
14: ('AFPointUsed', ),
15: ('FlashBias',
{0XFFC0: '-2 EV',
0XFFCC: '-1.67 EV',
0XFFD0: '-1.50 EV',
0XFFD4: '-1.33 EV',
0XFFE0: '-1 EV',
0XFFEC: '-0.67 EV',
0XFFF0: '-0.50 EV',
0XFFF4: '-0.33 EV',
0X0000: '0 EV',
0X000C: '0.33 EV',
0X0010: '0.50 EV',
0X0014: '0.67 EV',
0X0020: '1 EV',
0X002C: '1.33 EV',
0X0030: '1.50 EV',
0X0034: '1.67 EV',
0X0040: '2 EV'}),
19: ('SubjectDistance', )
}
# extract multibyte integer in Motorola format (big endian)
def s2n_motorola(str):
x=0
for c in str:
x=(x << 8) | ord(c)
return x
# extract multibyte integer in Intel format (little endian)
def s2n_intel(str):
x=0
y=0L
for c in str:
x=x | (ord(c) << y)
y=y+8
return x
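# Python 3 equivalents of the two shift loops above: int.from_bytes does the
# same conversion directly. These helpers are an illustrative sketch, taking
# bytes rather than the Python 2 str the original functions expect.

```python
def s2n_motorola(data: bytes) -> int:
    # Motorola byte order is big-endian: most significant byte first
    return int.from_bytes(data, 'big')

def s2n_intel(data: bytes) -> int:
    # Intel byte order is little-endian: least significant byte first
    return int.from_bytes(data, 'little')

print(s2n_motorola(b'\x01\x02'))  # 258 (0x0102)
print(s2n_intel(b'\x01\x02'))     # 513 (0x0201)
```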
# ratio object that eventually will be able to reduce itself to lowest
# common denominator for printing
def gcd(a, b):
if b == 0:
return a
else:
return gcd(b, a % b)
class Ratio:
def __init__(self, num, den):
self.num=num
self.den=den
def __repr__(self):
self.reduce()
if self.den == 1:
return str(self.num)
return '%d/%d' % (self.num, self.den)
def reduce(self):
div=gcd(self.num, self.den)
if div > 1:
self.num=self.num/div
self.den=self.den/div
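# Sketch: the Ratio class above reduces num/den by their GCD before printing.
# Python 3's fractions module provides the same reduction out of the box;
# this standalone helper (not part of the library) reproduces __repr__:

```python
from fractions import Fraction

def ratio_repr(num: int, den: int) -> str:
    f = Fraction(num, den)       # reduced to lowest terms automatically
    if f.denominator == 1:
        return str(f.numerator)  # whole number: print without the slash
    return '%d/%d' % (f.numerator, f.denominator)

print(ratio_repr(10, 20))  # 1/2
print(ratio_repr(8, 4))    # 2
```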
# for ease of dealing with tags
class IFD_Tag:
def __init__(self, printable, tag, field_type, values, field_offset,
field_length):
# printable version of data
self.printable=printable
# tag ID number
self.tag=tag
# field type as index into FIELD_TYPES
self.field_type=field_type
# offset of start of field in bytes from beginning of IFD
self.field_offset=field_offset
# length of data field in bytes
self.field_length=field_length
# either a string or array of data items
self.values=values
def __str__(self):
return self.printable
def __repr__(self):
return '(0x%04X) %s=%s @ %d' % (self.tag,
FIELD_TYPES[self.field_type][2],
self.printable,
self.field_offset)
# class that handles an EXIF header
class EXIF_header:
def __init__(self, file, endian, offset, fake_exif, debug=0):
self.file=file
self.endian=endian
self.offset=offset
self.fake_exif=fake_exif
self.debug=debug
self.tags={}
# convert slice to integer, based on sign and endian flags
# usually this offset is assumed to be relative to the beginning of the
# start of the EXIF information. For some cameras that use relative tags,
# this offset may be relative to some other starting point.
def s2n(self, offset, length, signed=0):
self.file.seek(self.offset+offset)
slice=self.file.read(length)
if self.endian == 'I':
val=s2n_intel(slice)
else:
val=s2n_motorola(slice)
# Sign extension ?
if signed:
msb=1L << (8*length-1)
if val & msb:
val=val-(msb << 1)
return val
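# The sign-extension step above in isolation: if the most significant bit of
# an N-byte unsigned value is set, subtracting 2**(8*N) yields the negative
# two's-complement value. A self-contained Python 3 sketch (helper name is
# illustrative):

```python
def sign_extend(val: int, length: int) -> int:
    # msb is the sign bit for a value stored in `length` bytes
    msb = 1 << (8 * length - 1)
    if val & msb:
        val -= msb << 1  # subtract 2**(8*length)
    return val

print(sign_extend(0xFF, 1))    # -1
print(sign_extend(0x7F, 1))    # 127
print(sign_extend(0xFFC0, 2))  # -64
```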
# convert offset to string
def n2s(self, offset, length):
s=''
for i in range(length):
if self.endian == 'I':
s=s+chr(offset & 0xFF)
else:
s=chr(offset & 0xFF)+s
offset=offset >> 8
return s
# return first IFD
def first_IFD(self):
return self.s2n(4, 4)
# return pointer to next IFD
def next_IFD(self, ifd):
entries=self.s2n(ifd, 2)
return self.s2n(ifd+2+12*entries, 4)
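# Sketch of the TIFF IFD layout that first_IFD/next_IFD rely on: an IFD is a
# 2-byte entry count, then 12 bytes per entry, then a 4-byte pointer to the
# next IFD (0 terminates the chain). This standalone helper just shows the
# offset arithmetic:

```python
def next_ifd_pointer_offset(ifd: int, entries: int) -> int:
    # 2-byte count + entries * 12-byte entries, then the next-IFD pointer
    return ifd + 2 + 12 * entries

# An IFD at offset 8 with 10 entries keeps its next-IFD pointer at 8+2+120.
print(next_ifd_pointer_offset(8, 10))  # 130
```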
# return list of IFDs in header
def list_IFDs(self):
i=self.first_IFD()
a=[]
while i:
a.append(i)
i=self.next_IFD(i)
return a
# return list of entries in this IFD
def dump_IFD(self, ifd, ifd_name, dict=EXIF_TAGS, relative=0):
entries=self.s2n(ifd, 2)
for i in range(entries):
# entry is index of start of this IFD entry in the file
entry=ifd+2+12*i
tag=self.s2n(entry, 2)
# get tag name. We do it early to make debugging easier
tag_entry=dict.get(tag)
if tag_entry:
tag_name=tag_entry[0]
else:
tag_name='Tag 0x%04X' % tag
field_type=self.s2n(entry+2, 2)
if not 0 < field_type < len(FIELD_TYPES):
# unknown field type
raise ValueError, \
'unknown type %d in tag 0x%04X' % (field_type, tag)
typelen=FIELD_TYPES[field_type][0]
count=self.s2n(entry+4, 4)
offset=entry+8
if count*typelen > 4:
# offset is not the value; it's a pointer to the value
# if relative we set things up so s2n will seek to the right
# place when it adds self.offset. Note that this 'relative'
# is for the Nikon type 3 makernote. Other cameras may use
# other relative offsets, which would have to be computed here
# slightly differently.
if relative:
tmp_offset=self.s2n(offset, 4)
offset=tmp_offset+ifd-self.offset+4
if self.fake_exif:
offset=offset+18
else:
offset=self.s2n(offset, 4)
field_offset=offset
if field_type == 2:
# special case: null-terminated ASCII string
if count != 0:
self.file.seek(self.offset+offset)
values=self.file.read(count)
values=values.strip().replace('\x00','')
else:
values=''
else:
values=[]
signed=(field_type in [6, 8, 9, 10])
for j in range(count):
if field_type in (5, 10):
# a ratio
value_j=Ratio(self.s2n(offset, 4, signed),
self.s2n(offset+4, 4, signed))
else:
value_j=self.s2n(offset, typelen, signed)
values.append(value_j)
offset=offset+typelen
# now "values" is either a string or an array
if count == 1 and field_type != 2:
printable=str(values[0])
else:
printable=str(values)
# compute printable version of values
if tag_entry:
if len(tag_entry) != 1:
# optional 2nd tag element is present
if callable(tag_entry[1]):
# call mapping function
printable=tag_entry[1](values)
else:
printable=''
for value in values:
# use lookup table for this tag
printable+=tag_entry[1].get(value, repr(value))
self.tags[ifd_name+' '+tag_name]=IFD_Tag(printable, tag,
field_type,
values, field_offset,
count*typelen)
if self.debug:
print ' debug: %s: %s' % (tag_name,
repr(self.tags[ifd_name+' '+tag_name]))
# extract uncompressed TIFF thumbnail (like pulling teeth)
# we take advantage of the pre-existing layout in the thumbnail IFD as
# much as possible
def extract_TIFF_thumbnail(self, thumb_ifd):
entries=self.s2n(thumb_ifd, 2)
# this is header plus offset to IFD ...
if self.endian == 'M':
tiff='MM\x00*\x00\x00\x00\x08'
else:
tiff='II*\x00\x08\x00\x00\x00'
# ... plus thumbnail IFD data plus a null "next IFD" pointer
self.file.seek(self.offset+thumb_ifd)
tiff+=self.file.read(entries*12+2)+'\x00\x00\x00\x00'
# fix up large value offset pointers into data area
for i in range(entries):
entry=thumb_ifd+2+12*i
tag=self.s2n(entry, 2)
field_type=self.s2n(entry+2, 2)
typelen=FIELD_TYPES[field_type][0]
count=self.s2n(entry+4, 4)
oldoff=self.s2n(entry+8, 4)
# start of the 4-byte pointer area in entry
ptr=i*12+18
# remember strip offsets location
if tag == 0x0111:
strip_off=ptr
strip_len=count*typelen
# is it in the data area?
if count*typelen > 4:
# update offset pointer (nasty "strings are immutable" crap)
# should be able to say "tiff[ptr:ptr+4]=newoff"
newoff=len(tiff)
tiff=tiff[:ptr]+self.n2s(newoff, 4)+tiff[ptr+4:]
# remember strip offsets location
if tag == 0x0111:
strip_off=newoff
strip_len=4
# get original data and store it
self.file.seek(self.offset+oldoff)
tiff+=self.file.read(count*typelen)
# add pixel strips and update strip offset info
old_offsets=self.tags['Thumbnail StripOffsets'].values
old_counts=self.tags['Thumbnail StripByteCounts'].values
for i in range(len(old_offsets)):
# update offset pointer (more nasty "strings are immutable" crap)
offset=self.n2s(len(tiff), strip_len)
tiff=tiff[:strip_off]+offset+tiff[strip_off+strip_len:]
strip_off+=strip_len
# add pixel strip to end
self.file.seek(self.offset+old_offsets[i])
tiff+=self.file.read(old_counts[i])
self.tags['TIFFThumbnail']=tiff
# decode all the camera-specific MakerNote formats
# Note is the data that comprises this MakerNote. The MakerNote will
# likely have pointers in it that point to other parts of the file. We'll
# use self.offset as the starting point for most of those pointers, since
# they are relative to the beginning of the file.
#
# If the MakerNote is in a newer format, it may use relative addressing
# within the MakerNote. In that case we'll use relative addresses for the
# pointers.
#
# As an aside: it's not just to be annoying that the manufacturers use
# relative offsets. It's so that if the makernote has to be moved by the
# picture software all of the offsets don't have to be adjusted. Overall,
# this is probably the right strategy for makernotes, though the spec is
# ambiguous. (The spec does not appear to imagine that makernotes would
# follow EXIF format internally. Once they did, it's ambiguous whether
# the offsets should be from the header at the start of all the EXIF info,
# or from the header at the start of the makernote.)
def decode_maker_note(self):
note=self.tags['EXIF MakerNote']
make=self.tags['Image Make'].printable
model=self.tags['Image Model'].printable
# Nikon
# The maker note usually starts with the word Nikon, followed by the
# type of the makernote (1 or 2, as a short). If the word Nikon is
# not at the start of the makernote, it's probably type 2, since some
# cameras work that way.
if make in ('NIKON', 'NIKON CORPORATION'):
if note.values[0:7] == [78, 105, 107, 111, 110, 00, 01]:
if self.debug:
print "Looks like a type 1 Nikon MakerNote."
self.dump_IFD(note.field_offset+8, 'MakerNote',
dict=MAKERNOTE_NIKON_OLDER_TAGS)
elif note.values[0:7] == [78, 105, 107, 111, 110, 00, 02]:
if self.debug:
print "Looks like a labeled type 2 Nikon MakerNote"
if note.values[12:14] != [0, 42] and note.values[12:14] != [42L, 0L]:
raise ValueError, "Missing marker tag '42' in MakerNote."
# skip the Makernote label and the TIFF header
self.dump_IFD(note.field_offset+10+8, 'MakerNote',
dict=MAKERNOTE_NIKON_NEWER_TAGS, relative=1)
else:
# E99x or D1
if self.debug:
print "Looks like an unlabeled type 2 Nikon MakerNote"
self.dump_IFD(note.field_offset, 'MakerNote',
dict=MAKERNOTE_NIKON_NEWER_TAGS)
return
# Olympus
if make[:7] == 'OLYMPUS':
self.dump_IFD(note.field_offset+8, 'MakerNote',
dict=MAKERNOTE_OLYMPUS_TAGS)
return
# Casio
if make == 'Casio':
self.dump_IFD(note.field_offset, 'MakerNote',
dict=MAKERNOTE_CASIO_TAGS)
return
# Fujifilm
if make == 'FUJIFILM':
# bug: everything else is "Motorola" endian, but the MakerNote
# is "Intel" endian
endian=self.endian
self.endian='I'
# bug: IFD offsets are from beginning of MakerNote, not
# beginning of file header
offset=self.offset
self.offset+=note.field_offset
# process note with bogus values (note is actually at offset 12)
self.dump_IFD(12, 'MakerNote', dict=MAKERNOTE_FUJIFILM_TAGS)
# reset to correct values
self.endian=endian
self.offset=offset
return
# Canon
if make == 'Canon':
self.dump_IFD(note.field_offset, 'MakerNote',
dict=MAKERNOTE_CANON_TAGS)
for i in (('MakerNote Tag 0x0001', MAKERNOTE_CANON_TAG_0x001),
('MakerNote Tag 0x0004', MAKERNOTE_CANON_TAG_0x004)):
self.canon_decode_tag(self.tags[i[0]].values, i[1])
return
# decode Canon MakerNote tag based on offset within tag
# see http://www.burren.cx/david/canon.html by David Burren
def canon_decode_tag(self, value, dict):
for i in range(1, len(value)):
x=dict.get(i, ('Unknown', ))
if self.debug:
print i, x
name=x[0]
if len(x) > 1:
val=x[1].get(value[i], 'Unknown')
else:
val=value[i]
# it's not a real IFD Tag but we fake one to make everybody
# happy. this will have a "proprietary" type
self.tags['MakerNote '+name]=IFD_Tag(str(val), None, 0, None,
None, None)
# process an image file (expects an open file object)
# this is the function that has to deal with all the arbitrary nasty bits
# of the EXIF standard
def process_file(file, debug=0):
# determine whether it's a JPEG or TIFF
data=file.read(12)
if data[0:4] in ['II*\x00', 'MM\x00*']:
# it's a TIFF file
file.seek(0)
endian=file.read(1)
file.read(1)
offset=0
elif data[0:2] == '\xFF\xD8':
# it's a JPEG file
# skip JFIF style header(s)
fake_exif=0
while data[2] == '\xFF' and data[6:10] in ('JFIF', 'JFXX', 'OLYM'):
length=ord(data[4])*256+ord(data[5])
file.read(length-8)
# fake an EXIF beginning of file
data='\xFF\x00'+file.read(10)
fake_exif=1
if data[2] == '\xFF' and data[6:10] == 'Exif':
# detected EXIF header
offset=file.tell()
endian=file.read(1)
else:
# no EXIF information
return {}
else:
# file format not recognized
return {}
# deal with the EXIF info we found
if debug:
print {'I': 'Intel', 'M': 'Motorola'}[endian], 'format'
hdr=EXIF_header(file, endian, offset, fake_exif, debug)
ifd_list=hdr.list_IFDs()
ctr=0
for i in ifd_list:
if ctr == 0:
IFD_name='Image'
elif ctr == 1:
IFD_name='Thumbnail'
thumb_ifd=i
else:
IFD_name='IFD %d' % ctr
if debug:
print ' IFD %d (%s) at offset %d:' % (ctr, IFD_name, i)
hdr.dump_IFD(i, IFD_name)
# EXIF IFD
exif_off=hdr.tags.get(IFD_name+' ExifOffset')
if exif_off:
if debug:
print ' EXIF SubIFD at offset %d:' % exif_off.values[0]
hdr.dump_IFD(exif_off.values[0], 'EXIF')
# Interoperability IFD contained in EXIF IFD
intr_off=hdr.tags.get('EXIF SubIFD InteroperabilityOffset')
if intr_off:
if debug:
print ' EXIF Interoperability SubSubIFD at offset %d:' \
% intr_off.values[0]
hdr.dump_IFD(intr_off.values[0], 'EXIF Interoperability',
dict=INTR_TAGS)
# GPS IFD
gps_off=hdr.tags.get(IFD_name+' GPSInfo')
if gps_off:
if debug:
print ' GPS SubIFD at offset %d:' % gps_off.values[0]
hdr.dump_IFD(gps_off.values[0], 'GPS', dict=GPS_TAGS)
ctr+=1
# extract uncompressed TIFF thumbnail
thumb=hdr.tags.get('Thumbnail Compression')
if thumb and thumb.printable == 'Uncompressed TIFF':
hdr.extract_TIFF_thumbnail(thumb_ifd)
# JPEG thumbnail (thankfully the JPEG data is stored as a unit)
thumb_off=hdr.tags.get('Thumbnail JPEGInterchangeFormat')
if thumb_off:
file.seek(offset+thumb_off.values[0])
size=hdr.tags['Thumbnail JPEGInterchangeFormatLength'].values[0]
hdr.tags['JPEGThumbnail']=file.read(size)
# deal with MakerNote contained in EXIF IFD
if hdr.tags.has_key('EXIF MakerNote'):
hdr.decode_maker_note()
# Sometimes in a TIFF file, a JPEG thumbnail is hidden in the MakerNote
# since it's not allowed in a uncompressed TIFF IFD
if not hdr.tags.has_key('JPEGThumbnail'):
thumb_off=hdr.tags.get('MakerNote JPEGThumbnail')
if thumb_off:
file.seek(offset+thumb_off.values[0])
hdr.tags['JPEGThumbnail']=file.read(thumb_off.field_length)
return hdr.tags
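# The file-type sniffing at the top of process_file() in isolation: TIFF files
# start with 'II*\0' (Intel order) or 'MM\0*' (Motorola order), and JPEG files
# start with the bytes FF D8. A Python 3 sketch (function name is illustrative):

```python
def sniff(data: bytes) -> str:
    if data[0:4] in (b'II*\x00', b'MM\x00*'):
        return 'TIFF'
    if data[0:2] == b'\xff\xd8':
        return 'JPEG'
    return 'unknown'

print(sniff(b'MM\x00*\x00\x00\x00\x08'))  # TIFF
print(sniff(b'\xff\xd8\xff\xe1'))         # JPEG
print(sniff(b'\x89PNG'))                  # unknown
```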
# library test/debug function (dump given files)
if __name__ == '__main__':
import sys
if len(sys.argv) < 2:
print 'Usage: %s files...\n' % sys.argv[0]
sys.exit(0)
for filename in sys.argv[1:]:
try:
file=open(filename, 'rb')
except:
print filename, 'unreadable'
print
continue
print filename+':'
# data=process_file(file, 1) # with debug info
data=process_file(file)
if not data:
print 'No EXIF information found'
continue
x=data.keys()
x.sort()
for i in x:
if i in ('JPEGThumbnail', 'TIFFThumbnail'):
continue
try:
print ' %s (%s): %s' % \
(i, FIELD_TYPES[data[i].field_type][2], data[i].printable)
except:
print 'error', i, '"', data[i], '"'
if data.has_key('JPEGThumbnail'):
print 'File has JPEG thumbnail'
print
b6143b3550f49956ddea8c4bfd01709e885c15e6 | 703 | py | Python | CodesComplete/Alura/DesignPatterns/calculador_de_impostos.py | vinimmelo/python | ef1f4e0550773592d3b0a88a3213de2f522870a3 | [
"MIT"
] | 1 | 2020-03-03T22:34:13.000Z | 2020-03-03T22:34:13.000Z | class Calculador_de_impostos:
def realiza_calculo(self, orcamento, imposto):
imposto_calculado = imposto.calcula(orcamento)
print(imposto_calculado)
if __name__ == '__main__':
from orcamento import Orcamento, Item
from impostos import ISS, ICMS, ICPP, IKCV
orcamento = Orcamento()
orcamento.adiciona_item(Item('ITEM 1', 50))
orcamento.adiciona_item(Item('ITEM 2', 200))
orcamento.adiciona_item(Item('ITEM 3', 250))
calculador_de_impostos = Calculador_de_impostos()
print('ISS and ICMS')
calculador_de_impostos.realiza_calculo(orcamento, ICMS(ISS()))
print('ICPP and IKCV')
calculador_de_impostos.realiza_calculo(orcamento, IKCV(ICPP()))
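# Calculador_de_impostos delegates the computation to whatever tax object it
# receives — the Strategy pattern. A self-contained sketch of the idea; the
# class names and tax rates below are illustrative assumptions, since
# orcamento.py and impostos.py are not shown here:

```python
class Budget:
    def __init__(self, value):
        self.value = value

class ISS:
    def calculate(self, budget):
        return budget.value * 0.1   # assumed 10% rate

class ICMS:
    def calculate(self, budget):
        return budget.value * 0.06  # assumed 6% rate

class TaxCalculator:
    def calculate(self, budget, tax):
        # delegates to the injected strategy object
        return tax.calculate(budget)

calc = TaxCalculator()
print(calc.calculate(Budget(500), ISS()))   # 50.0
print(calc.calculate(Budget(500), ICMS()))  # 30.0
```

# Swapping tax rules then requires no change to TaxCalculator — only a new
# strategy class with a calculate() method.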
373624a9b643c4143bff91e625ea72076589eebb | 3,580 | py | Python | populate/populate.py | vascoalramos/roomie | 031aef815af910b259da0fef7cca5bec02459006 | [
"MIT"
] | 2 | 2021-06-16T07:12:35.000Z | 2021-06-16T22:47:46.000Z | from faker import Faker
import os, random, requests, json
fake = Faker()
BASE_URL = "http://localhost:8083/api"
cities = [
"Braga",
"Viseu",
"Porto",
"Lisboa",
"Guimarães",
"Leiria",
"Coimbra",
"Santarém",
"Guarda",
"Aveiro",
"Faro",
"Portimão",
"Beja",
"Évora",
]
def register_landlord():
profile = fake.simple_profile()
payload = {
"email": "l_" + str(fake.random_int(0, 100)) + profile["mail"],
"password": fake.password(),
"username": profile["username"],
"name": profile["name"],
"birthDate": "2021-04-14",
"sex": "male",
"nif": "111111111",
"address": "Av Test B1 2E, Viseu",
"phone": "9111111111",
}
response = requests.post(
f"{BASE_URL}/landlords",
data=payload,
files={
"file": open("./avatars/" + random.choice(os.listdir("./avatars")), "rb")
},
)
payload["id"] = response.json()["id"]
return payload
def register_tenant():
profile = fake.simple_profile()
payload = {
"email": "t_" + str(fake.random_int(0, 100)) + profile["mail"],
"password": fake.password(),
"username": profile["username"],
"name": profile["name"],
"birthDate": "2021-04-14",
"sex": "male",
"nif": "111111111",
"nationality": "PT",
"occupation": "Test occupation",
"phone": "9111111111",
}
response = requests.post(
f"{BASE_URL}/tenants",
data=payload,
files={
"file": open("./avatars/" + random.choice(os.listdir("./avatars")), "rb")
},
)
payload["id"] = response.json()["id"]
return payload
def login(email, password):
headers = {"content-type": "application/json"}
payload = {"email": email, "password": password}
response = requests.post(f"{BASE_URL}/auth/login", json=payload, headers=headers)
return response.json()["token"]
def post_house(token):
headers = {"Authorization": "Bearer " + token}
payload = {
"title": fake.text(),
"rooms": fake.random_int(0, 6),
"availableRooms": fake.random_int(0, 5),
"bathRooms": fake.random_int(0, 3),
"minPrice": 250,
"maxPrice": 300,
"description": fake.text(max_nb_chars=500).replace("\n", " "),
"features": "feat1,feat2,feat3,feat4",
"address": "Av Test B1 2E, " + random.choice(cities),
}
files = []
for _i in range(0, random.randint(1, 7)):
files.append(
("files", open("./houses/" + random.choice(os.listdir("./houses")), "rb"))
)
response = requests.post(
f"{BASE_URL}/houses",
data=payload,
files=files,
headers=headers,
)
return response.json()
def main():
landlords = []
tenants = []
houses = []
for _i in range(0, 50):
landlord = register_landlord()
landlord["token"] = login(landlord["email"], landlord["password"])
landlords.append(landlord)
for _i in range(0, 50):
tenant = register_tenant()
tenant["token"] = login(tenant["email"], tenant["password"])
tenants.append(tenant)
for _i in range(0, 500):
landlord = random.choice(landlords)
house = post_house(landlord["token"])
houses.append(house)
users = {"landlords": landlords, "tenants": tenants}
with open("./users.json", "w") as file:
json.dump(users, file, indent=4, sort_keys=True)
if __name__ == "__main__":
main()
| 25.755396 | 86 | 0.548603 | 378 | 3,580 | 5.103175 | 0.359788 | 0.018144 | 0.033696 | 0.036288 | 0.42198 | 0.358735 | 0.277864 | 0.277864 | 0.233281 | 0.233281 | 0 | 0.039418 | 0.270112 | 3,580 | 138 | 87 | 25.942029 | 0.698814 | 0 | 0 | 0.301724 | 0 | 0 | 0.218994 | 0.012291 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043103 | false | 0.051724 | 0.017241 | 0 | 0.094828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
373f83cec206a62336b452fd4c464d8bd69932f0 | 2,518 | py | Python | quex/engine/state_machine/algebra/TESTS/additional_laws/TEST/complement-relative.py | smmckay/quex-mirror | 7d75ed560e9f3a591935e59243188676eecb112a | [
"MIT"
] | null | null | null | quex/engine/state_machine/algebra/TESTS/additional_laws/TEST/complement-relative.py | smmckay/quex-mirror | 7d75ed560e9f3a591935e59243188676eecb112a | [
"MIT"
] | null | null | null | quex/engine/state_machine/algebra/TESTS/additional_laws/TEST/complement-relative.py | smmckay/quex-mirror | 7d75ed560e9f3a591935e59243188676eecb112a | [
"MIT"
] | null | null | null | import os
import sys
sys.path.insert(0, os.environ["QUEX_PATH"])
from quex.engine.state_machine.core import DFA
from quex.engine.state_machine.algebra.TESTS.helper import test2, test1, test3, union, \
intersection, \
identity, \
complement, \
difference, \
add_more_DFAs, sample_DFAs
if "--hwut-info" in sys.argv:
print "Complement: Relativity in difference operations;"
print "CHOICES: 1, 2, 3;"
print "HAPPY: [0-9]+;"
sys.exit()
count = 0
def one(A):
global count
assert identity(difference(A, A), DFA.Empty())
assert identity(difference(DFA.Empty(), A), DFA.Empty())
assert identity(difference(A, DFA.Empty()), A)
assert identity(difference(DFA.Universal(), A), complement(A))
assert identity(difference(A, DFA.Universal()), DFA.Empty())
count += 1
def two(A, B):
global count
assert identity(difference(B, A), intersection([complement(A), B]))
assert identity(complement(difference(B, A)), union([A, complement(B)]))
count += 1
def three(A, B, C):
global count
diff_C_B = difference(C.clone(), B.clone())
diff_C_A = difference(C.clone(), A.clone())
diff_B_A = difference(B.clone(), A.clone())
assert identity(difference(C.clone(), intersection([A.clone(), B.clone()])),
union([diff_C_A.clone(), diff_C_B.clone()]))
assert identity(difference(C.clone(), union([A.clone(), B.clone()])),
intersection([diff_C_A.clone(), diff_C_B.clone()]))
assert identity(difference(C.clone(), diff_B_A.clone()),
union([intersection([A.clone(), C.clone()]), diff_C_B.clone()]))
tmp = intersection([diff_B_A.clone(), C.clone()])
assert identity(tmp, difference(intersection([B.clone(), C.clone()]), A.clone()))
assert identity(tmp, intersection([B.clone(), diff_C_A.clone()]))
assert identity(union([diff_B_A.clone(), C.clone()]),
difference(union([B.clone(), C.clone()]), difference(A.clone(), C.clone())))
count += 1
if "1" in sys.argv:
add_more_DFAs()
test1(one)
elif "2" in sys.argv:
test2(two)
elif "3" in sys.argv:
sample_DFAs(3)
test3(three)
print "<terminated: %i>" % count
| 36.492754 | 96 | 0.554011 | 309 | 2,518 | 4.407767 | 0.200647 | 0.133627 | 0.15859 | 0.035242 | 0.340675 | 0.175477 | 0.076358 | 0.076358 | 0.076358 | 0.076358 | 0 | 0.011173 | 0.289118 | 2,518 | 68 | 97 | 37.029412 | 0.749721 | 0 | 0 | 0.109091 | 0 | 0 | 0.0469 | 0 | 0 | 0 | 0 | 0 | 0.236364 | 0 | null | null | 0 | 0.072727 | null | null | 0.072727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
374c7d9cbe16ddf4267d0363ddef0fd64684f962 | 5,451 | py | Python | Fund/main.py | livi2000/FundSpider | c79407241fe189b61afc54dd2e5b73c906aae0b5 | [
"MIT"
] | null | null | null | Fund/main.py | livi2000/FundSpider | c79407241fe189b61afc54dd2e5b73c906aae0b5 | [
"MIT"
] | null | null | null | Fund/main.py | livi2000/FundSpider | c79407241fe189b61afc54dd2e5b73c906aae0b5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from url_manager import *
from downloader import *
from parser import *
from collector import *
from url_manager import FundURLIndex
class FundMain(object):
def __init__(self):
self.url_manager = FundURLManager()
self.html_downloader = FundDownloader()
self.html_paser = FundParser()
self.collector = FundCollector()
    # Define the interface first, then implement. The home page gets special handling: a fund only publishes a quarterly report every three months, so unless the data structure changed, a full update is rarely necessary.
def crawl(self, homeurl, incremental=True):
        # Process the home page first
home_content = self.html_downloader.download(homeurl)
if home_content is None:
return
funds_info = self.html_paser.parse_home(home_content)
if funds_info is None:
return
count = 0
finished_count = [0]
for fund_info_code in funds_info:
            # Download only on a full update or when the fund is new
if not incremental or not self.collector.fundexist(fund_info_code):
self.url_manager.add_url(fund_info_code)
count += 1
print '共需爬取基金详情 ' + str(count) + " 个"
def inner_crawl(isretry=False):
if isretry:
self.url_manager.transfer_url()
while (not self.url_manager.is_empyt() and not self.url_manager.is_overflow()):
urls = self.url_manager.pop_url()
fundcode = urls[FundURLIndex.CODE.value]
try:
                    # Simplification: a fund only counts as done once all of its related pages downloaded successfully
print 'start parse ' + urls[FundURLIndex.MAIN.value]
basecontent = self.html_downloader.download(urls[FundURLIndex.BASE.value])
ratiocontent = self.html_downloader.download(urls[FundURLIndex.RATIO.value])
statisticcontent = self.html_downloader.download(urls[FundURLIndex.STATISTIC.value])
stockscontent = self.html_downloader.download(urls[FundURLIndex.STOCKS.value])
annualcontent = self.html_downloader.download(urls[FundURLIndex.ANNUAL.value])
                    # Retry everything if any single page fails; a few pages genuinely don't exist, but we don't bother handling that
if basecontent is None or len(basecontent) == 0 or ratiocontent is None or len(ratiocontent) == 0\
or statisticcontent is None or len(statisticcontent) == 0 or stockscontent is None or len(stockscontent) == 0 \
or annualcontent is None or len(annualcontent) == 0:
print 'download fund ' + fundcode + ' failed'
self.url_manager.fail_url(fundcode)
continue
self.url_manager.finish_url(fundcode)
result = self.html_paser.parse_fund(basecontent, ratiocontent, statisticcontent, stockscontent, annualcontent, urls[FundURLIndex.MAIN.value])
self.collector.addFund(result)
finished_count[0] += 1
print 'finish parse fund ' + fundcode + " " + str(finished_count[0]) + '/' + str(count)
except Exception as e:
print 'parse fund ' + fundcode + ' fail, cause ' + str(e)
self.url_manager.fail_url(fundcode)
        # Retry twice, since an immediate first retry is still likely to fail
inner_crawl()
inner_crawl(True)
inner_crawl(True)
print 'success finish parse url sum ' + str(finished_count[0])
print 'failed urls is'
self.url_manager.output_faileds()
if __name__ == "__main__":
icMain = FundMain()
icMain.crawl('http://fund.eastmoney.com/allfund.html', False)
# url_manager = SBURLManager()
# # http://m.zhcw.com/clienth5.do?lottery=FC_SSQ&kjissue=2005001&transactionType=300302&src=0000100001%7C6000003060
# for year in range(2005, 2018):
# for index in range(1, 160):
# url_manager.add_url("http://m.zhcw.com/clienth5.do?lottery=FC_SSQ&kjissue=" + str(year) + '{0:03}'.format(index) + "&transactionType=300302&src=0000100001%7C6000003060")
#
# import json
# downloader = SBDownloader()
# parse_count = 0
# areaDic = dict()
# while (not url_manager.is_empyt()):
# url = url_manager.pop_url()
# content = downloader.download(url)
    # # No retry logic here
# if content is not None and len(content) > 0:
# d = json.loads(content)
# l = d.get("dataList", None)
# if l is not None:
# parse_count += 1
# for info in l:
# area = info['dqname']
# ones = int(info["onez"])
# money = int(info['tzmoney'])
# sum = areaDic.get(area, None)
# if sum is None:
# areaDic[area] = (ones, money)
# else:
# areaDic[area] = (sum[0] + ones, sum[1] + money)
#
    # # Finally, output the results
# print "统计双色球地域特性共" + str(parse_count) + "期"
#
# areaResult = dict()
# for area in areaDic:
# count = areaDic[area][0]
# money = areaDic[area][1]
# if count > 0:
# average = money / count
# else :
# average = 10000000000
# # print area + '购买彩票共' + str(money) + '元, 共', str(count) + "人中头奖, 平均每花" + average + "出一个头奖嘻嘻"
# areaResult[area] = average
#
# print '按照平均花费中头奖金额排序:'
# for key, value in sorted(areaResult.iteritems(), key=lambda (k,v): (v,k)):
# print "%s每花%d万可出一个头奖" % (key, value/10000)
| 42.255814 | 183 | 0.571455 | 580 | 5,451 | 5.237931 | 0.305172 | 0.052666 | 0.046083 | 0.05135 | 0.157999 | 0.115207 | 0.026991 | 0.026991 | 0.026991 | 0.026991 | 0 | 0.031098 | 0.321592 | 5,451 | 128 | 184 | 42.585938 | 0.790427 | 0.332233 | 0 | 0.098361 | 0 | 0 | 0.049414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.081967 | null | null | 0.114754 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
37581aba7700b786b80fdf0d929cb3132078bc45 | 3,563 | py | Python | test/unit/report/test_table.py | colibri-coruscans/pyGSTi | da54f4abf668a28476030528f81afa46a1fbba33 | [
"Apache-2.0"
] | 73 | 2016-01-28T05:02:05.000Z | 2022-03-30T07:46:33.000Z | test/unit/report/test_table.py | colibri-coruscans/pyGSTi | da54f4abf668a28476030528f81afa46a1fbba33 | [
"Apache-2.0"
] | 113 | 2016-02-25T15:32:18.000Z | 2022-03-31T13:18:13.000Z | test/unit/report/test_table.py | colibri-coruscans/pyGSTi | da54f4abf668a28476030528f81afa46a1fbba33 | [
"Apache-2.0"
] | 41 | 2016-03-15T19:32:07.000Z | 2022-02-16T10:22:05.000Z | from pygsti.report.table import ReportTable
from ..util import BaseCase
class TableInstanceTester(BaseCase):
custom_headings = {
'html': 'test',
'python': 'test',
'latex': 'test'
}
def setUp(self):
self.table = ReportTable(self.custom_headings, ['Normal'] * 4) # Four formats
def test_element_accessors(self):
self.table.add_row(['1.0'], ['Normal'])
self.assertTrue('1.0' in self.table)
self.assertEqual(len(self.table), self.table.num_rows)
row_by_key = self.table.row(key=self.table.row_names[0])
row_by_idx = self.table.row(index=0)
self.assertEqual(row_by_key, row_by_idx)
col_by_key = self.table.col(key=self.table.col_names[0])
col_by_idx = self.table.col(index=0)
self.assertEqual(col_by_key, col_by_idx)
def test_to_string(self):
s = str(self.table)
# TODO assert correctness
def test_render_HTML(self):
self.table.add_row(['1.0'], ['Normal'])
self.table.add_row(['1.0'], ['Normal'])
render = self.table.render('html')
# TODO assert correctness
def test_render_LaTeX(self):
self.table.add_row(['1.0'], ['Normal'])
self.table.add_row(['1.0'], ['Normal'])
render = self.table.render('latex')
# TODO assert correctness
def test_finish(self):
self.table.add_row(['1.0'], ['Normal'])
self.table.finish()
# TODO assert correctness
def test_render_raises_on_unknown_format(self):
with self.assertRaises(NotImplementedError):
self.table.render('foobar')
def test_raise_on_invalid_accessor(self):
        # XXX are these necessary? EGN: maybe not - checks invalid inputs, which maybe shouldn't need testing?
with self.assertRaises(KeyError):
self.table['foobar']
with self.assertRaises(KeyError):
self.table.row(key='foobar') # invalid key
with self.assertRaises(ValueError):
self.table.row(index=100000) # out of bounds
with self.assertRaises(ValueError):
self.table.row() # must specify key or index
with self.assertRaises(ValueError):
self.table.row(key='foobar', index=1) # cannot specify key and index
with self.assertRaises(KeyError):
self.table.col(key='foobar') # invalid key
with self.assertRaises(ValueError):
self.table.col(index=100000) # out of bounds
with self.assertRaises(ValueError):
self.table.col() # must specify key or index
with self.assertRaises(ValueError):
self.table.col(key='foobar', index=1) # cannot specify key and index
class CustomHeadingTableTester(TableInstanceTester):
def setUp(self):
self.table = ReportTable([0.1], ['Normal'], self.custom_headings)
def test_labels(self):
self.table.add_row(['1.0'], ['Normal'])
self.assertTrue('1.0' in self.table)
rowLabels = list(self.table.keys())
self.assertEqual(rowLabels, self.table.row_names)
self.assertEqual(len(rowLabels), self.table.num_rows)
self.assertTrue(rowLabels[0] in self.table)
row1Data = self.table[rowLabels[0]]
colLabels = list(row1Data.keys())
self.assertEqual(colLabels, self.table.col_names)
self.assertEqual(len(colLabels), self.table.num_cols)
class CustomHeadingNoFormatTableTester(TableInstanceTester):
def setUp(self):
self.table = ReportTable(self.custom_headings, None)
| 35.989899 | 112 | 0.63963 | 446 | 3,563 | 4.991031 | 0.215247 | 0.165768 | 0.089847 | 0.04717 | 0.519766 | 0.504492 | 0.408805 | 0.369272 | 0.369272 | 0.289308 | 0 | 0.015734 | 0.23295 | 3,563 | 98 | 113 | 36.357143 | 0.798756 | 0.104687 | 0 | 0.291667 | 0 | 0 | 0.048189 | 0 | 0 | 0 | 0 | 0.010204 | 0.277778 | 1 | 0.152778 | false | 0 | 0.027778 | 0 | 0.236111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3758975e21f34ce7477e14ad7bcebc66b331327c | 364 | py | Python | pyFunc/pyFunc_12.py | pemtash/pyrevision2022 | c1a9510729b44f61575f406865eb823cb7cabd63 | [
"Apache-2.0"
] | null | null | null | pyFunc/pyFunc_12.py | pemtash/pyrevision2022 | c1a9510729b44f61575f406865eb823cb7cabd63 | [
"Apache-2.0"
] | null | null | null | pyFunc/pyFunc_12.py | pemtash/pyrevision2022 | c1a9510729b44f61575f406865eb823cb7cabd63 | [
"Apache-2.0"
] | null | null | null | def namedArgumentFunction(a, b, c):
print("the values are a: {}, b: {}, c: {}".format(a,b,c))
namedArgumentFunction(100, 200, 300) # positional arguments
namedArgumentFunction(c=3, a=1, b=2) # named arguments
#namedArgumentFunction(181, a=102, b=103) # mix of positional + named: TypeError, 'a' gets multiple values
namedArgumentFunction(101, b=102, c=103) # mix of positional + named arguments, no error
37598780f2fe29fd5a5224fb42181e382bdf6a7e | 1,435 | py | Python | views.py | ezl/hnofficehours | 3729eca064998bd2d0a9ba1b4fe7e56ccc57324b | [
"MIT"
] | 2 | 2015-11-05T13:47:44.000Z | 2020-07-20T19:57:45.000Z | views.py | ezl/hnofficehours | 3729eca064998bd2d0a9ba1b4fe7e56ccc57324b | [
"MIT"
] | null | null | null | views.py | ezl/hnofficehours | 3729eca064998bd2d0a9ba1b4fe7e56ccc57324b | [
"MIT"
] | null | null | null | from django.contrib.auth.models import User
from datetime import datetime, timedelta
from django.core.urlresolvers import reverse
from django.shortcuts import render_to_response, get_object_or_404
from django.http import HttpResponseRedirect
from django.template import RequestContext
from django.views.generic.simple import direct_to_template
from schedule.models import Event
from schedule.periods import Period
def site_index(request, template_name='index.html'):
# most future office hours to show
MAX_FUTURE_OFFICE_HOURS = 30
# furthest into the future to display office hours
MAX_FUTURE_DAYS = 30
users_available_now = User.objects.filter(profile__is_available=True)
events = Event.objects.all()
now = Period(events=events, start=datetime.now(),
end=datetime.now() + timedelta(minutes=1))
occurences = now.get_occurrences()
users_holding_office_hours_now = map(lambda x: x.event.creator, occurences)
users = set(list(users_available_now) + users_holding_office_hours_now)
future = Period(events=events, start=datetime.now(),
end=datetime.now() + timedelta(days=MAX_FUTURE_DAYS))
upcoming_office_hours = future.get_occurrences()
upcoming_office_hours = upcoming_office_hours[:MAX_FUTURE_OFFICE_HOURS]
return direct_to_template(request, template_name, locals())
def about(request):
return direct_to_template(request, 'about.html')
| 44.84375 | 79 | 0.772822 | 192 | 1,435 | 5.53125 | 0.401042 | 0.09322 | 0.045198 | 0.037665 | 0.210923 | 0.107345 | 0.107345 | 0.107345 | 0.107345 | 0.107345 | 0 | 0.006541 | 0.147735 | 1,435 | 31 | 80 | 46.290323 | 0.861815 | 0.056446 | 0 | 0 | 0 | 0 | 0.014804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.346154 | 0.038462 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3759a78356966487f78b8550100b9d77dd7fd966 | 712 | py | Python | declarative/properties/__init__.py | jrollins/python-declarative | ac3ba9bf56611adefb4b2673e50bd8067c024e6b | [
"Apache-2.0"
] | 6 | 2018-02-28T18:32:06.000Z | 2022-03-20T13:04:05.000Z | declarative/properties/__init__.py | jrollins/python-declarative | ac3ba9bf56611adefb4b2673e50bd8067c024e6b | [
"Apache-2.0"
] | 2 | 2021-02-22T17:18:59.000Z | 2021-03-03T16:39:22.000Z | declarative/properties/__init__.py | jrollins/python-declarative | ac3ba9bf56611adefb4b2673e50bd8067c024e6b | [
"Apache-2.0"
] | 1 | 2021-02-09T18:58:53.000Z | 2021-02-09T18:58:53.000Z | # -*- coding: utf-8 -*-
"""
"""
from __future__ import (
division,
print_function,
absolute_import,
)
from .bases import (
PropertyTransforming,
HasDeclaritiveAttributes,
InnerException,
PropertyAttributeError,
)
from .memoized import (
memoized_class_property,
mproperty,
dproperty,
mproperty_plain,
dproperty_plain,
mproperty_fns,
dproperty_fns,
mfunction,
)
from .memoized_adv import (
mproperty_adv,
dproperty_adv,
)
from .memoized_adv_group import (
dproperty_adv_group,
mproperty_adv_group,
group_mproperty,
group_dproperty,
)
#because this is the critical unique object
from ..utilities.unique import (
NOARG,
)
| 16.181818 | 43 | 0.696629 | 70 | 712 | 6.757143 | 0.485714 | 0.07611 | 0.063425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001808 | 0.223315 | 712 | 43 | 44 | 16.55814 | 0.853526 | 0.088483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.205882 | 0 | 0.205882 | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
375c83f1d331a617d630736291806350c6d98cad | 10,993 | py | Python | k8skiller.py | ech0png/k8skiller | 1f066a0c02acf2b71bb7805c18d08899ba7ac25f | [
"Apache-2.0"
] | null | null | null | k8skiller.py | ech0png/k8skiller | 1f066a0c02acf2b71bb7805c18d08899ba7ac25f | [
"Apache-2.0"
] | null | null | null | k8skiller.py | ech0png/k8skiller | 1f066a0c02acf2b71bb7805c18d08899ba7ac25f | [
"Apache-2.0"
] | null | null | null | import urllib3
from art import *
from terminaltables import AsciiTable
from vulnsVerify import *
from podTable import *
from listarPods import *
from menu import *
from shells import *
from podDeploy import *
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
tprint("K8SKILLER")
print("1 - Search for vulnerabilities in the host.")
print()
print("2 - Using Service Account.")
print()
opcao = int(input("Option: "))
print()
if opcao == 2:
host = input("Host: ")
sa = input("Service Account: ")
ns = input("Service Account Namespace: ")
print()
menu_service()
while True:
command = input("k8skiller: ")
print()
        # Option 1 - list pods
if command == "1":
listar_pods_service(host, sa, ns)
        # Option 2 - simple shell on the chosen pod
elif command == "2":
pod_name = input("Pod Name: ")
if pod_name == "exit":
pass
else:
shell_service(host, sa, ns, pod_name)
        # Option 3 - deploy a malicious pod
elif command == "3":
tabela = [["ID", "POD", "DESCRIPTION"], ["1", "Busybox Mount Node Filesystem", "Monta o filesystem do Node."], ["2", "Busybox RCE Node", "Obtem uma shell no Node."]]
tabela_ascii = AsciiTable(tabela)
print(tabela_ascii.table)
print()
malicioso = input("Option ID: ")
if malicioso == "exit":
pass
else:
pod_deploy_service(host, sa, ns, int(malicioso))
        # Option 4 - delete the malicious pod
elif command == "4":
pod_name = input("Pod Name: ")
if pod_name == "exit":
pass
else:
pod_delete_service(host, sa, ns, pod_name)
        # Option menu - show the menu options again
elif command == "menu":
menu_service()
        # Option exit - quit the tool
elif command == "exit":
break
elif opcao == 1:
host = input("Host: ")
print()
print("Searching Vulnerabilities...")
    # Check whether the vulnerabilities exist
kubelet, apiserver, hostFull = vuln_verify(host)
    # Cluster not vulnerable to either attack
    if kubelet == False and apiserver == False:
print("[-] Host not vulnerable to a Kubelet or API Server attack!")
    # Cluster may be vulnerable to the Kubelet attack
elif kubelet == True:
print()
print("[+] Host may be vulnerable to a Kubelet Attack!")
print()
menu_kubelet()
pod, namespace, container = listar_pods(hostFull)
while True:
command = input("k8skiller: ")
            # Option 1 - list pods
if command == "1":
pod, namespace, container = listar_pods(hostFull)
podTable = pod_table_kubelet(pod, namespace, container)
print(podTable)
print()
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option 2 - list secrets
elif command == "2":
id = 0
for i in range(len(pod)):
if "tiller" in pod[i]:
id = i
listar_secrets_kubelet(host, pod, container, id)
            # Option 3 - simple shell on the chosen pod
elif command == "3":
num = input("Pod ID: ")
if num == "exit":
pass
else:
id = int(num) - 1
while True:
print()
comando_exec = input(pod[id]+" # ")
shellPod = shell(comando_exec, hostFull, namespace, pod, container, id)
if shellPod == "exit":
break
else:
print(shell(comando_exec, hostFull, namespace, pod, container, id))
print()
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option 4 - deploy a malicious pod
elif command == "4":
tabela = [["ID", "POD MALICIOSO", "DESCRIÇÃO"], ["1", "Busybox Mount Node Filesystem", "Monta o filesystem do Node."], ["2", "Busybox RCE Node", "Obtem uma shell no Node."]]
tabela_ascii = AsciiTable(tabela)
print(tabela_ascii.table)
print()
malicioso = int(input("Option ID: "))
if malicioso == "exit":
pass
else:
                    # Get the id of the pod with pod-creation privileges, which is used for the RCE.
id = 0
pod, namespace, container = listar_pods(hostFull)
for i in range(len(pod)):
if "tiller" in pod[i]:
id = i + 1
pod_deploy(hostFull, pod, namespace, container, malicioso, id)
print()
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option 5 - delete the malicious pod
elif command == "5":
pod, namespace, container = listar_pods(hostFull)
pod_id = 0
pod_id_malicioso = ""
malicioso = 0
for i in range(len(pod)):
if pod[i] == "busybox-rce" or pod[i] == "busybox-filesystem":
pod_id_malicioso = pod[i]
for y in range(len(container)):
if container[y] == "tiller":
pod_id = y
else:
pass
print("*** POD BUSYBOX SPOTTED! ***")
print()
malicioso = int(input("Are you sure? (1 - YES / 0 - NO): "))
if malicioso == 1:
pod_delete(hostFull, pod, namespace, container, pod_id, pod_id_malicioso)
print()
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option 6 - get access to the host
elif command == "6":
hostShell(host, hostFull, namespace, pod, container)
print()
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option menu - show the menu options again
elif command == "menu":
menu_kubelet()
            # Option exit - quit the tool
elif command == "exit":
break
    # Cluster may be vulnerable to the API Server attack
if apiserver == True:
print("[+] Host may be vulnerable to an API Server attack!")
print()
menu_api()
while True:
command = input("k8skiller: ")
            # Option 1 - list secrets
if command == "1":
print()
print("--------------------------------------------------------------------------------------------------------")
print()
listar_secrets(hostFull)
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option 2 - list pods
elif command == "2":
print()
print("--------------------------------------------------------------------------------------------------------")
print()
listar_pods_api(hostFull)
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option 3 - shell in a pod
elif command == "3":
nome = input("Pod Name: ")
if nome == "exit":
pass
else:
namespace_str = input("Pod Namespace: ")
if namespace_str == "exit":
pass
else:
shell_api(hostFull, namespace_str, nome)
print()
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option 4 - deploy a malicious pod
elif command == "4":
tabela = [["ID", "POD MALICIOSO", "DESCRIÇÃO"], ["1", "Busybox Mount Node Filesystem", "Monta o filesystem do Node."], ["2", "Busybox RCE Node", "Obtem uma shell no Node."]]
tabela_ascii = AsciiTable(tabela)
print(tabela_ascii.table)
print()
malicioso = input("Option ID: ")
                # Get the id of the pod with pod-creation privileges, which is used for the RCE.
if malicioso == "exit":
pass
else:
pod_deploy_api(hostFull, int(malicioso))
print()
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option 5 - delete a pod
elif command == "5":
pod = input("Pod Name to delete: ")
if pod == "exit":
pass
else:
print()
ns = input("Pod Namespace: ")
if ns == "exit":
pass
else:
malicioso = int(input("Are you sure you want to delete the pod " +(pod)+ " in the namespace " + (ns) + " (1 - YES / 0 - NO): "))
if malicioso == 1:
pod_delete_api(hostFull, ns, pod)
print()
print("--------------------------------------------------------------------------------------------------------")
print()
            # Option menu - show the menu options again
elif command == "menu":
menu_api()
            # Option exit - quit the tool
elif command == "exit":
break | 38.844523 | 189 | 0.395888 | 916 | 10,993 | 4.68559 | 0.165939 | 0.058248 | 0.027959 | 0.037279 | 0.551491 | 0.523299 | 0.42987 | 0.360438 | 0.309646 | 0.243709 | 0 | 0.009028 | 0.405531 | 10,993 | 283 | 190 | 38.844523 | 0.647743 | 0.098699 | 0 | 0.661972 | 0 | 0 | 0.239348 | 0.126303 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.051643 | 0.042254 | 0 | 0.042254 | 0.29108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
375d646ba6e1a05a1beb8bd9fc2faa1d4c02305c | 5,216 | py | Python | tests/archive/test_archive_value.py | heikomuller/histore | d600052514a1c5f672137f76a6e1388184b17cd4 | [
"BSD-3-Clause"
] | 2 | 2020-09-05T23:27:41.000Z | 2021-08-08T20:46:54.000Z | tests/archive/test_archive_value.py | heikomuller/histore | d600052514a1c5f672137f76a6e1388184b17cd4 | [
"BSD-3-Clause"
] | 22 | 2020-05-22T01:38:08.000Z | 2021-04-28T12:41:46.000Z | tests/archive/test_archive_value.py | heikomuller/histore | d600052514a1c5f672137f76a6e1388184b17cd4 | [
"BSD-3-Clause"
] | 1 | 2021-08-08T20:46:58.000Z | 2021-08-08T20:46:58.000Z | # This file is part of the History Store (histore).
#
# Copyright (C) 2018-2021 New York University.
#
# The History Store (histore) is released under the Revised BSD License. See
# file LICENSE for full license details.
"""Unit test for archived cell values."""
import pytest
from histore.archive.value import MultiVersionValue, SingleVersionValue
from histore.archive.timestamp import SingleVersion, Timestamp, TimeInterval
def test_cell_history():
"""Test adding values to the history of a dataset row cell."""
cell = SingleVersionValue(value=1, timestamp=SingleVersion(version=1))
assert cell.at_version(version=1) == 1
assert cell.is_single_version()
assert not cell.is_multi_version()
with pytest.raises(ValueError):
cell.at_version(version=2)
assert cell.at_version(version=2, raise_error=False) is None
cell = cell.merge(value=1, version=2)
assert cell.at_version(version=1) == 1
assert cell.at_version(version=2) == 1
assert cell.diff(original_version=1, new_version=2) is None
assert cell.at_version(version=3, raise_error=False) is None
prov = cell.diff(original_version=2, new_version=3)
assert prov is not None
assert prov.old_value == 1
assert prov.new_value is None
cell = SingleVersionValue(value=1, timestamp=SingleVersion(version=1))
cell = cell.merge(value='1', version=2)
assert len(cell.values) == 2
assert cell.at_version(version=1) == 1
assert cell.at_version(version=2) == '1'
prov = cell.diff(original_version=1, new_version=2)
assert prov is not None
assert prov.old_value == 1
assert prov.new_value == '1'
with pytest.raises(ValueError):
cell.at_version(version=3)
cell = cell.merge(value=1, version=3)
assert len(cell.values) == 2
assert cell.at_version(version=1) == 1
assert cell.at_version(version=2) == '1'
assert cell.at_version(version=3) == 1
assert not cell.is_single_version()
assert cell.is_multi_version()
def test_extend_cell_value_timestamp():
"""Test extending the timestamp of a cell value."""
cell = SingleVersionValue(value=1, timestamp=SingleVersion(version=1))
cell = cell.extend(version=2, origin=1)
assert not cell.timestamp.contains(0)
assert cell.timestamp.contains(1)
assert cell.timestamp.contains(2)
assert not cell.timestamp.contains(3)
cell = cell.extend(version=4, origin=0)
assert not cell.timestamp.contains(0)
assert cell.timestamp.contains(1)
assert cell.timestamp.contains(2)
assert not cell.timestamp.contains(3)
assert not cell.timestamp.contains(4)
cell = cell.merge(value='1', version=3)
cell = cell.merge(value=1, version=4)
cell = cell.extend(version=5, origin=4)
cell = cell.extend(version=6, origin=3)
assert cell.at_version(1) == 1
assert cell.at_version(2) == 1
assert cell.at_version(3) == '1'
assert cell.at_version(4) == 1
assert cell.at_version(5) == 1
assert cell.at_version(6) == '1'
with pytest.raises(ValueError):
cell.at_version(0)
def test_rollback_multi_value():
"""Test rollback for single version values."""
value = MultiVersionValue([
SingleVersionValue(
value=1,
timestamp=Timestamp(intervals=[TimeInterval(start=2, end=3)])
),
SingleVersionValue(
value=2,
timestamp=Timestamp(intervals=[TimeInterval(start=4, end=5)])
)
])
value = value.rollback(4)
assert isinstance(value, MultiVersionValue)
assert len(value.values) == 2
assert value.at_version(3) == 1
assert value.at_version(4) == 2
value = value.rollback(2)
assert isinstance(value, SingleVersionValue)
assert value.value == 1
# -- Rollback to version that did not contain the value -------------------
value = MultiVersionValue([
SingleVersionValue(
value=1,
timestamp=Timestamp(intervals=[TimeInterval(start=2, end=3)])
),
SingleVersionValue(
value=2,
timestamp=Timestamp(intervals=[TimeInterval(start=4, end=5)])
)
])
assert value.rollback(1) is None
def test_rollback_single_value():
"""Test rollback for single version values."""
value = SingleVersionValue(
value=1,
timestamp=Timestamp(intervals=[TimeInterval(start=1, end=3)])
)
value = value.rollback(2)
assert value.value == 1
assert value.timestamp.contains(1)
assert value.timestamp.contains(2)
assert not value.timestamp.contains(3)
assert value.rollback(0) is None
def test_value_repr():
"""Test string representations for archive values."""
value = SingleVersionValue(
value=1,
timestamp=Timestamp(intervals=[TimeInterval(start=1, end=3)])
)
assert str(value) == '(1 [[1, 3]])'
value = MultiVersionValue([
SingleVersionValue(
value=1,
timestamp=Timestamp(intervals=[TimeInterval(start=2, end=3)])
),
SingleVersionValue(
value=2,
timestamp=Timestamp(intervals=[TimeInterval(start=4, end=5)])
)
])
assert str(value) == '((1 [[2, 3]]), (2 [[4, 5]]))'

# ---- jts/backend/event/migrations/0002_auto_20191009_1119.py (goupaz/babylon, MIT) ----
# Generated by Django 2.2 on 2019-10-09 18:19
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('event', '0001_initial'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('users', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='eventattendee',
name='user',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='event_attendee', to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='event',
name='event_type',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='event.EventType'),
),
migrations.AddField(
model_name='event',
name='host_user',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='event',
name='user_types',
field=models.ManyToManyField(to='users.UserType'),
),
]

# ---- ansiblemetrics/playbook/num_included_vars.py (radon-h2020/AnsibleMetrics, Apache-2.0) ----
import ansiblemetrics.utils as utils
from ansiblemetrics.ansible_metric import AnsibleMetric
class NumIncludedVars(AnsibleMetric):
""" This class measures the number of included variables in a playbook.
"""
def count(self):
"""Return the number of included variables.
Example
-------
.. highlight:: python
.. code-block:: python
from ansiblemetrics.general.num_included_vars import NumIncludedVars
playbook = '''
- name: Include a play after another play
include_vars: myvars.yaml
'''
NumIncludedVars(playbook).count()
>> 1
Returns
-------
int
number of included variables
"""
script = self.playbook
keys = utils.all_keys(script)
return sum(1 for i in keys if i == 'include_vars')
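The docstring example above can be checked without the ansiblemetrics package itself. The sketch below uses a hypothetical re-implementation of the `all_keys` helper (the real `ansiblemetrics.utils.all_keys` may behave differently) over an already-parsed playbook structure and applies the same counting logic:

```python
def all_keys(node):
    """Recursively collect every mapping key in a parsed YAML structure."""
    keys = []
    if isinstance(node, dict):
        for key, value in node.items():
            keys.append(key)
            keys.extend(all_keys(value))
    elif isinstance(node, list):
        for item in node:
            keys.extend(all_keys(item))
    return keys


# Parsed form of the playbook from the docstring example above.
playbook = [
    {'name': 'Include a play after another play', 'include_vars': 'myvars.yaml'}
]
print(sum(1 for k in all_keys(playbook) if k == 'include_vars'))  # -> 1
```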

# ---- predict.py (smacawi/tweet-classifier, MIT) ----
from allennlp.data.vocabulary import Vocabulary
from content_analyzer.models.rnn_classifier import RnnClassifier
from allennlp.data.tokenizers.word_tokenizer import WordTokenizer
from content_analyzer.data.dataset_readers.twitter import TwitterNLPDatasetReader
from allennlp.data.token_indexers import PretrainedBertIndexer
from allennlp.modules.token_embedders import PretrainedBertEmbedder
from allennlp.modules.text_field_embedders import BasicTextFieldEmbedder
from allennlp.modules.seq2vec_encoders import Seq2VecEncoder, PytorchSeq2VecWrapper
import torch
from allennlp.predictors import Predictor
from allennlp.predictors.text_classifier import TextClassifierPredictor
import overrides
from allennlp.common.util import JsonDict
indexer = PretrainedBertIndexer('bert-base-uncased')
wt = WordTokenizer()
tdr = TwitterNLPDatasetReader({"tokens": indexer}, wt)
GRU_args = {
"bidirectional": True,
"input_size": 768,
"hidden_size": 768,
"num_layers": 1,
}
print("vocab")
vocab = Vocabulary.from_files("out/flood_model/vocabulary")
print("embedder")
token_embedder = PretrainedBertEmbedder("bert-base-uncased")
text_embedder = BasicTextFieldEmbedder({"tokens": token_embedder}, allow_unmatched_keys = True)
print("encoder")
seq2vec = PytorchSeq2VecWrapper(torch.nn.GRU(batch_first=True, **GRU_args))
print("model")
model = RnnClassifier(vocab, text_embedder, seq2vec)
print("model state")
with open("out/flood_model/best.th", 'rb') as f:
state_dict = torch.load(f)
model.load_state_dict(state_dict)
predictor = TextClassifierPredictor(model, tdr)
prediction = predictor.predict("five people missing according to state police. if you have any information please contact us.")
print(prediction)

# ---- mdn_ik/test.py (uenian33/Franka_Panda_IK_Sensor, MIT) ----
import torch
a = torch.rand(3, 4)
#a = a.unsqueeze(0)
#print(a.reshape(3,4,1))
b = torch.rand(3, 4)
#b = b.unsqueeze(0)
print(b)
c = torch.stack([a, b, b, b, b], dim=1)
c = torch.rand(3, 20)
print(c)
c = c.reshape(3, 5, 4)
print(c.shape)
d = torch.rand(3, 5)
d = d.reshape(3,5,1)
print(d)
e = c*d
print(c*d)
print(torch.mean(e, axis=1))
print(torch.mean(e, axis=1).reshape(6,2))
f = torch.mean(e, axis=1).reshape(6,2)
print(f)
#f = f.reshape(f.shape[0],1,f.shape[1])
#print(f)
f = torch.stack([f,f,f], dim=1)
print(f)
f = f.reshape(f.shape[0]*f.shape[1], f.shape[2])
print(f)
"""
a = torch.rand(1, 3, 4)
print(a.shape)
b = torch.rand(3, 4)
print(b.shape)
b = b.unsqueeze(0)
print(b.shape)
c = torch.cat([a, b], dim=0)
print(c.shape)
""" | 15.87234 | 48 | 0.601877 | 169 | 746 | 2.656805 | 0.153846 | 0.03118 | 0.111359 | 0.073497 | 0.394209 | 0.340757 | 0.2049 | 0.2049 | 0 | 0 | 0 | 0.065421 | 0.13941 | 746 | 47 | 49 | 15.87234 | 0.633956 | 0.140751 | 0 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.045455 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
378ca6c4004eb7f05493e60723c6ea0ea4a59fbb | 5,136 | py | Python | server/src/sdistance.py | bepnye/brat | 28acfb2d3cce20bd4d4ff1a67690e271675841f2 | [
"CC-BY-3.0"
] | 20 | 2015-01-26T01:39:44.000Z | 2020-05-30T19:04:14.000Z | server/src/sdistance.py | bepnye/brat | 28acfb2d3cce20bd4d4ff1a67690e271675841f2 | [
"CC-BY-3.0"
] | 7 | 2015-04-11T12:57:42.000Z | 2016-04-08T13:43:44.000Z | server/src/sdistance.py | bepnye/brat | 28acfb2d3cce20bd4d4ff1a67690e271675841f2 | [
"CC-BY-3.0"
] | 13 | 2015-01-26T01:39:45.000Z | 2022-03-09T16:45:09.000Z | #!/usr/bin/env python
'''
Various string distance measures.
Author: Pontus Stenetorp <pontus stenetorp se>
Version: 2011-08-09
'''
from string import digits, lowercase
from sys import maxint
DIGITS = set(digits)
LOWERCASE = set(lowercase)
TSURUOKA_2004_INS_CHEAP = set((' ', '-', ))
TSURUOKA_2004_DEL_CHEAP = TSURUOKA_2004_INS_CHEAP
TSURUOKA_2004_REPL_CHEAP = set([(a, b) for a in DIGITS for b in DIGITS] +
[(a, a.upper()) for a in LOWERCASE] +
[(a.upper(), a) for a in LOWERCASE] +
[(' ', '-'), ('-', '_')])
# Testing; not sure number replacements should be cheap.
NONNUM_T2004_REPL_CHEAP = set([(a, a.upper()) for a in LOWERCASE] +
[(a.upper(), a) for a in LOWERCASE] +
[(' ', '-'), ('-', '_')])
TSURUOKA_INS = dict([(c, 10) for c in TSURUOKA_2004_INS_CHEAP])
TSURUOKA_DEL = dict([(c, 10) for c in TSURUOKA_2004_DEL_CHEAP])
#TSURUOKA_REPL = dict([(c, 10) for c in TSURUOKA_2004_REPL_CHEAP])
TSURUOKA_REPL = dict([(c, 10) for c in NONNUM_T2004_REPL_CHEAP])
def tsuruoka(a, b):
# Special case for empties
if len(a) == 0 or len(b) == 0:
return 100*max(len(a),len(b))
# Initialise the first column
prev_min_col = [0]
for b_c in b:
prev_min_col.append(prev_min_col[-1] + TSURUOKA_INS.get(b_c, 100))
curr_min_col = prev_min_col
for a_c in a:
curr_min_col = [prev_min_col[0] + TSURUOKA_DEL.get(a_c, 100)]
for b_i, b_c in enumerate(b):
if b_c == a_c:
curr_min_col.append(prev_min_col[b_i])
else:
curr_min_col.append(min(
prev_min_col[b_i + 1] + TSURUOKA_DEL.get(a_c, 100),
curr_min_col[-1] + TSURUOKA_INS.get(b_c, 100),
prev_min_col[b_i] + TSURUOKA_REPL.get((a_c, b_c), 50)
))
prev_min_col = curr_min_col
return curr_min_col[-1]
def tsuruoka_local(a, b, edge_insert_cost=1, max_cost=maxint):
# Variant of the tsuruoka metric for local (substring) alignment:
# penalizes initial or final insertion for a by a different
# (normally small or zero) cost than middle insertion.
# If the current cost at any point exceeds max_cost, returns
# max_cost, which may allow early return.
# Special cases for empties
if len(a) == 0:
return len(b)*edge_insert_cost
if len(b) == 0:
return 100*len(b)
# Shortcut: strict containment
if a in b:
cost = (len(b)-len(a)) * edge_insert_cost
return cost if cost < max_cost else max_cost
# Initialise the first column. Any sequence of initial inserts
# have edge_insert_cost.
prev_min_col = [0]
for b_c in b:
prev_min_col.append(prev_min_col[-1] + edge_insert_cost)
curr_min_col = prev_min_col
for a_c in a:
curr_min_col = [prev_min_col[0] + TSURUOKA_DEL.get(a_c, 100)]
for b_i, b_c in enumerate(b):
if b_c == a_c:
curr_min_col.append(prev_min_col[b_i])
else:
curr_min_col.append(min(
prev_min_col[b_i + 1] + TSURUOKA_DEL.get(a_c, 100),
curr_min_col[-1] + TSURUOKA_INS.get(b_c, 100),
prev_min_col[b_i] + TSURUOKA_REPL.get((a_c, b_c), 50)
))
# early return
if min(curr_min_col) >= max_cost:
return max_cost
prev_min_col = curr_min_col
# Any number of trailing inserts have edge_insert_cost
min_cost = curr_min_col[-1]
for i in range(len(curr_min_col)):
cost = curr_min_col[i] + edge_insert_cost * (len(curr_min_col)-i-1)
min_cost = min(min_cost, cost)
if min_cost < max_cost:
return min_cost
else:
return max_cost
def tsuruoka_norm(a, b):
return 1 - (tsuruoka(a,b) / (max(len(a),len(b)) * 100.))
def levenshtein(a, b):
# Special case for empties
if len(a) == 0 or len(b) == 0:
return max(len(a),len(b))
# Initialise the first column
prev_min_col = [0]
for b_c in b:
prev_min_col.append(prev_min_col[-1] + 1)
curr_min_col = prev_min_col
for a_c in a:
curr_min_col = [prev_min_col[0] + 1]
for b_i, b_c in enumerate(b):
if b_c == a_c:
curr_min_col.append(prev_min_col[b_i])
else:
curr_min_col.append(min(
prev_min_col[b_i + 1] + 1,
curr_min_col[-1] + 1,
prev_min_col[b_i] + 1
))
prev_min_col = curr_min_col
return curr_min_col[-1]
if __name__ == '__main__':
for a, b in (('kitten', 'sitting'), ('Saturday', 'Sunday'), ('Caps', 'caps'), ('', 'bar'), ('dog', 'dog'), ('dog', '___dog__'), ('dog', '__d_o_g__')):
print 'levenshtein', a, b, levenshtein(a,b)
print 'tsuruoka', a, b, tsuruoka(a,b)
print 'tsuruoka_local', a, b, tsuruoka_local(a,b)
print 'tsuruoka_norm', a, b, tsuruoka_norm(a,b)

# ---- expressmanage/customers/views.py (abbas133/expressmanage-free, MIT) ----
from django.views import generic
from django.urls import reverse_lazy
from django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin
from .forms import CustomerForm
from .models import Customer
from .helper import CustomerSummary
class Customer_IndexView(LoginRequiredMixin, generic.ListView):
template_name = 'customers/index.html'
def get_queryset(self):
return Customer.objects.all()
class Customer_DetailView(LoginRequiredMixin, PermissionRequiredMixin, generic.DetailView):
raise_exception = True
permission_required = ('customers.view_customer')
model = Customer
template_name = 'customers/detail.html'
object = None
def get(self, request, *args, **kwargs):
self.object = self.get_object()
recent_invoices = CustomerSummary.get_recent_invoices(self.object)[:3]
active_lots = CustomerSummary.get_active_lots(self.object)
active_invoices = CustomerSummary.get_active_invoices(self.object)
pending_amount = CustomerSummary.get_pending_amount(self.object)
return self.render_to_response(
self.get_context_data(
recent_invoices=recent_invoices,
active_lots=active_lots,
active_invoices=active_invoices,
pending_amount=pending_amount
)
)
class Customer_CreateView(LoginRequiredMixin, PermissionRequiredMixin, generic.CreateView):
raise_exception = True
permission_required = ('customers.add_customer')
model = Customer
form_class = CustomerForm
template_name = 'customers/edit.html'
def get_success_url(self):
return reverse_lazy('customers:customer_detail', kwargs={'pk': self.object.pk})
class Customer_UpdateView(LoginRequiredMixin, PermissionRequiredMixin, generic.UpdateView):
raise_exception = True
permission_required = ('customers.change_customer')
model = Customer
form_class = CustomerForm
template_name = 'customers/edit.html'
def get_success_url(self):
return reverse_lazy('customers:customer_detail', kwargs={'pk': self.object.pk})
class Customer_DeleteView(LoginRequiredMixin, PermissionRequiredMixin, generic.DeleteView):
raise_exception = True
permission_required = ('customers.delete_customer')
model = Customer
template_name = 'customers/delete.html'
    success_url = reverse_lazy('customers:customer_index')

# ---- tests/test_image_upload.py (ephes/django-cast, BSD-3-Clause) ----
import pytest
from django.urls import reverse
class TestImageUpload:
@pytest.mark.django_db
def test_upload_image_not_authenticated(self, client, small_jpeg_io):
upload_url = reverse("cast:api:upload_image")
small_jpeg_io.seek(0)
r = client.post(upload_url, {"original": small_jpeg_io})
# redirect to login
assert r.status_code == 302
@pytest.mark.django_db
def test_upload_image_authenticated(self, client, user, small_jpeg_io):
# login
r = client.login(username=user.username, password=user._password)
# upload
upload_url = reverse("cast:api:upload_image")
small_jpeg_io.seek(0)
r = client.post(upload_url, {"original": small_jpeg_io})
assert r.status_code == 201
assert int(r.content.decode("utf-8")) > 0

# ---- Sorting Algorithms/quick_sort.py (Divyamop/Python-DSA, MIT) ----
"""
Quick sort is a divide and conquer algorithm.
Steps:
1. We first select an element which we call the pivot element. We can choose any element
   as the pivot, but for consistency and performance purposes we select the middle element
   of the array.
2. Then we move all the elements lower than the pivot to its left and all the elements
   higher than the pivot to its right.
3. Then we recursively apply the above two steps separately to each of the sub-arrays of
   elements smaller and larger than the last pivot.
"""
def quick_sort(arr, low=0, high=None):
    """Sort arr in place using quick sort with the middle element as pivot."""
    if high is None:
        high = len(arr) - 1
    if low >= high:
        return
    pivot = arr[(low + high) // 2]
    i, j = low, high
    # Partition: move elements lower than the pivot to the left,
    # higher than the pivot to the right.
    while i <= j:
        while arr[i] < pivot:
            i += 1
        while arr[j] > pivot:
            j -= 1
        if i <= j:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
            j -= 1
    # Recursively sort the two sub-arrays around the last pivot.
    quick_sort(arr, low, j)
    quick_sort(arr, i, high)


if __name__ == '__main__':
    numbers = [10, 7, 8, 9, 1, 5]
    quick_sort(numbers)
    print(numbers)  # [1, 5, 7, 8, 9, 10]

# ---- perfkitbenchmarker/providers/ibmcloud/flags.py (Nowasky/PerfKitBenchmarker, Apache-2.0) ----
# Copyright 2020 PerfKitBenchmarker Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module containing flags applicable across benchmark run on IBM Cloud."""
from absl import flags
flags.DEFINE_string('ibmcloud_azone', None,
'IBMCloud internal DC name')
flags.DEFINE_integer('ibmcloud_volume_iops', 20000,
'Desired volume IOPS.')
flags.DEFINE_integer('ibmcloud_volume_bandwidth', None,
'Desired volume bandwidth in Mbps.')
flags.DEFINE_boolean('ibmcloud_volume_encrypted', False,
'Enable encryption on volume creates.')
flags.DEFINE_string('ibmcloud_image_username', 'root',
'Ssh username for cloud image.')
flags.DEFINE_integer('ibmcloud_polling_delay', 2,
'Delay between polling attempts in seconds.')
flags.DEFINE_integer('ibmcloud_timeout', 600,
'timeout in secs.')
flags.DEFINE_integer('ibmcloud_boot_disk_size', 10,
'boot volume disk size.')
flags.DEFINE_boolean('ibmcloud_debug', False,
'debug flag.')
flags.DEFINE_boolean('ibmcloud_resources_keep', False,
'keep resources.')
flags.DEFINE_string('ibmcloud_volume_profile', 'custom',
'volume profile')
flags.DEFINE_string('ibmcloud_bootvol_encryption_key', None,
'boot volume encryption key crn')
flags.DEFINE_string('ibmcloud_datavol_encryption_key', None,
'data volume encryption key crn')
flags.DEFINE_string('ibmcloud_vpcid', None,
'IBM Cloud vpc id')
flags.DEFINE_string('ibmcloud_subnet', None,
'primary subnet id')
flags.DEFINE_string('ibmcloud_networks', None,
'additional network ids, comma separated')
flags.DEFINE_string('ibmcloud_prefix', 'perfkit',
'resource name prefix')
flags.DEFINE_string('ibmcloud_rgid', None,
'Resource Group id for the account.')
flags.DEFINE_integer('ibmcloud_boot_volume_iops', 3000,
                     'boot volume iops')
flags.DEFINE_integer('ibmcloud_boot_volume_size', 0,
                     'boot volume size in GB')
flags.DEFINE_string('ibmcloud_pub_keyid', None,
'rias public sshkey id')
flags.DEFINE_integer('ibmcloud_network_mtu', 9000,
'MTU size on network interfaces.')
flags.DEFINE_integer('ibmcloud_subnets_extra', 0,
'extra subnets to lookup')
flags.DEFINE_integer('ibmcloud_vdisks_extra', 0,
'extra disks to create')
flags.DEFINE_string('ibmcloud_image_info', None,
'image info in json formatted file')
flags.DEFINE_boolean('ibmcloud_encrypted_image', False,
'encrypted image.')

# ---- app.py (elben10/corona-dashboard, MIT) ----
import dash
from flask_caching import Cache
EXTERNAL_SCRIPTS = [
"https://code.jquery.com/jquery-3.4.1.slim.min.js",
"https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js",
"https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js",
]
EXTERNAL_STYLESHEETS = [
"https://fonts.googleapis.com/css?family=Nunito:200,200i,300,300i,400,400i,600,600i,700,700i,800,800i,900,900i",
"https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css",
"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.12.1/css/all.min.css",
]
app = dash.Dash(
__name__,
external_scripts=EXTERNAL_SCRIPTS,
external_stylesheets=EXTERNAL_STYLESHEETS,
)
server = app.server
app.config.suppress_callback_exceptions = True
cache = Cache(server, config={
'CACHE_TYPE': 'filesystem',
'CACHE_DIR': 'cache-directory'
})
TIMEOUT = 60 * 60 * 6

# ---- tests/modules/idn/test_idn_update.py (bladeroot/heppy, BSD-3-Clause) ----
#!/usr/bin/env python
import unittest
from ..TestCase import TestCase
class TestIdnUpdate(TestCase):
def test_render_idn_update_request(self):
self.assertRequest('''<?xml version="1.0" ?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
<command>
<update>
<domain:update xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
<domain:name>example.com</domain:name>
<domain:chg/>
</domain:update>
</update>
<extension>
<idn:update xmlns:idn="urn:afilias:params:xml:ns:idn-1.0">
<idn:chg>
<idn:script>fr</idn:script>
</idn:chg>
</idn:update>
</extension>
<clTRID>XXXX-11</clTRID>
</command>
</epp>
''', {
'command': 'domain:update',
'name': 'example.com',
'chg': {},
'extensions': [
{
'command': 'idn:update',
'script': 'fr'
}
],
'clTRID': 'XXXX-11',
})
if __name__ == '__main__':
unittest.main(verbosity=2)

# ---- tests/test_player.py (ssichynskyi/lotti-karotti-calc, Apache-2.0) ----
# -*- coding: utf-8 -*-
import unittest
from logic.player import Player
class TestPlayer(unittest.TestCase):
"""
Collection of unittests for Player class
"""
def setUp(self):
pass
def test_player_init(self):
player = Player(player_id=1, rabbits=2, active_rabbits=1, lost_rabbits=1)
self.assertEqual(len(player.lost_rabbits), 1)
self.assertEqual(player.lost_rabbits[0].player_id, 1)
self.assertEqual(player.lost_rabbits[0].number, 1)
print('lost rabbits are ok')
self.assertEqual(len(player.active_rabbits), 1)
self.assertEqual(player.active_rabbits[0].player_id, 1)
self.assertEqual(player.active_rabbits[0].number, 2)
self.assertEqual(len(player.ready_rabbits), 0)
def test_player_reset(self):
player = Player(player_id=1, rabbits=2, active_rabbits=1, lost_rabbits=1)
player.reset_condition()
self.assertEqual(len(player.lost_rabbits), 0, 'number of lost rabbits expected 0')
self.assertLessEqual(len(player.active_rabbits), 1, 'number of act. rabbits expected <= 1')
self.assertLessEqual(len(player.ready_rabbits), 2, 'number of ready rabbits expected <= 2')
self.assertEqual(len(player.ready_rabbits) + len(player.active_rabbits), 2)
def test_player_drop_rabbit(self):
player = Player(player_id=1, rabbits=2, active_rabbits=1, lost_rabbits=1)
player.drop_active_rabbit()
self.assertEqual(len(player.lost_rabbits), 2)
self.assertEqual(len(player.active_rabbits), 0)
self.assertEqual(len(player.ready_rabbits), 0)
def test_player_out_of_the_game(self):
player = Player(player_id=1, rabbits=0)
self.assertFalse(player.is_active)
self.assertEqual(player.get_active_rabbit(), None)
| 42.093023 | 99 | 0.69337 | 244 | 1,810 | 4.959016 | 0.20082 | 0.161157 | 0.119008 | 0.158678 | 0.604132 | 0.573554 | 0.42562 | 0.301653 | 0.238843 | 0.238843 | 0 | 0.02454 | 0.189503 | 1,810 | 42 | 100 | 43.095238 | 0.800273 | 0.034807 | 0 | 0.15625 | 0 | 0 | 0.072213 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.15625 | false | 0.03125 | 0.0625 | 0 | 0.25 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8063dfe36de56cbc689a8b310df3ce5003370b6d | 432 | py | Python | str10.py | ABHISHEKSUBHASHSWAMI/String-Manipulation | e22efdbe76069e0280cc1acdeeabc4b663ac4f36 | [
"MIT"
] | null | null | null | str10.py | ABHISHEKSUBHASHSWAMI/String-Manipulation | e22efdbe76069e0280cc1acdeeabc4b663ac4f36 | [
"MIT"
] | null | null | null | str10.py | ABHISHEKSUBHASHSWAMI/String-Manipulation | e22efdbe76069e0280cc1acdeeabc4b663ac4f36 | [
"MIT"
] | null | null | null | #Program to change a given string to a new string where the first and last chars have been exchanged.
string=str(input("Enter a string :"))
first=string[0] #store first index element of string in variable
last=string[-1] #store last index element of string in variable
new=last+string[1:-1]+first #concatenate the last char, the middle part and the first char
print(new)
| 43.2 | 101 | 0.641204 | 63 | 432 | 4.396825 | 0.492063 | 0.086643 | 0.101083 | 0.144404 | 0.216607 | 0.216607 | 0 | 0 | 0 | 0 | 0 | 0.013072 | 0.291667 | 432 | 9 | 102 | 48 | 0.892157 | 0.555556 | 0 | 0 | 0 | 0 | 0.085562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
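The swap in `str10.py` above breaks on edge cases: an empty string raises `IndexError` at `string[0]`, and a one-character input comes back doubled. A minimal slice-only sketch that handles both — the helper name `swap_ends` is my own, not from the file:

```python
def swap_ends(s: str) -> str:
    """Return s with its first and last characters exchanged."""
    if len(s) < 2:
        return s  # nothing to swap for "" or a single character
    return s[-1] + s[1:-1] + s[0]
```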
8063fd86f68ebf320baf9273c420614c93e98bab | 990 | py | Python | setup.py | MailboxValidator/mailboxvalidator-python | 740d64a4cd6a32bf7d65903c0d30164a8cfafcde | [
"MIT"
] | 9 | 2018-07-09T06:49:05.000Z | 2022-03-15T07:40:41.000Z | setup.py | MailboxValidator/mailboxvalidator-python | 740d64a4cd6a32bf7d65903c0d30164a8cfafcde | [
"MIT"
] | null | null | null | setup.py | MailboxValidator/mailboxvalidator-python | 740d64a4cd6a32bf7d65903c0d30164a8cfafcde | [
"MIT"
] | 1 | 2021-05-26T12:43:16.000Z | 2021-05-26T12:43:16.000Z | import setuptools
with open("README.md", "r") as fh:
long_description = fh.read()
setuptools.setup(
name="MailboxValidator",
version="1.2.0",
author="MailboxValidator.com",
author_email="support@mailboxvalidator.com",
description="Email verification module for Python using MailboxValidator API. It validates if the email is valid, from a free provider, contains high-risk keywords, whether it\'s a catch-all address and so much more.",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/MailboxValidator/mailboxvalidator-python",
packages=setuptools.find_packages(),
tests_require=['pytest>=3.0.6'],
classifiers=(
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
),
) | 39.6 | 220 | 0.729293 | 119 | 990 | 5.991597 | 0.747899 | 0.084151 | 0.053296 | 0.084151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009423 | 0.142424 | 990 | 25 | 221 | 39.6 | 0.830389 | 0 | 0 | 0 | 0 | 0.043478 | 0.628749 | 0.028956 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
806fcfc95b660907356bd62342b9bc9e14255a07 | 237 | py | Python | pySpectralFPK/__init__.py | alanmatzumiya/Paper | d65ff68475eb72324594701d06754d0d005f6a86 | [
"MIT"
] | 2 | 2019-03-19T23:55:45.000Z | 2020-06-03T19:10:51.000Z | pySpectralFPK/__init__.py | alanmatzumiya/Paper | d65ff68475eb72324594701d06754d0d005f6a86 | [
"MIT"
] | null | null | null | pySpectralFPK/__init__.py | alanmatzumiya/Paper | d65ff68475eb72324594701d06754d0d005f6a86 | [
"MIT"
] | null | null | null | """
Solvers define how a pde is solved, i.e., advanced in time.
.. autosummary::
.. codeauthor:: Alan Matzumiya <alan.matzumiya@gmail.com>
"""
from typing import List
from .setup_solver import FPK_solver
__all__ = [
"FPK_solver"
] | 18.230769 | 59 | 0.708861 | 33 | 237 | 4.878788 | 0.787879 | 0.161491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164557 | 237 | 13 | 60 | 18.230769 | 0.813131 | 0.565401 | 0 | 0 | 0 | 0 | 0.104167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
80723d94b3de6b981e622070fc675bae6d441510 | 4,108 | py | Python | bot/Bot.py | Facco98/TwitchBotPy | 27b4fb0f76542db68621b52ada316bf00fbf68d2 | [
"MIT"
] | 1 | 2020-05-14T02:14:10.000Z | 2020-05-14T02:14:10.000Z | bot/Bot.py | Facco98/TwitchBotPy | 27b4fb0f76542db68621b52ada316bf00fbf68d2 | [
"MIT"
] | null | null | null | bot/Bot.py | Facco98/TwitchBotPy | 27b4fb0f76542db68621b52ada316bf00fbf68d2 | [
"MIT"
] | null | null | null | import websocket
from threading import Thread
from bot.Command import Command
import time
class Bot:
def __init__(self, username, password, host):
self.__commands = dict()
self._username = username
self._password = password
self.__host = host
self.__threadStarted = False
self.__thread = None
def connect(self):
self.__websocket = websocket.create_connection(self.__host)
def join(self, channel):
self.__websocket.send("JOIN #" + channel)
def send(self, msg):
self.__websocket.send(msg)
def send_message_to(self, channel, message=""):
self.__websocket.send("PRIVMSG #" + channel + " :" + message)
def start_listening(self, callback):
try:
if not self.__threadStarted:
self.__thread = Thread(target=self.__listen_function__, args=(callback,))
                self.__thread.daemon = True
self.__threadStarted = True
self.__thread.start()
except Exception:
pass
def __listen_function__(self, callback):
try:
while self.__threadStarted:
received = self.__websocket.recv()
callback(received)
except Exception:
pass
def stop_listening(self):
self.__threadStarted = False
self.__thread = None
def disconnect(self):
if self.__threadStarted:
self.stop_listening()
self.__websocket.close()
def add_command(self, command):
if isinstance(command, Command):
self.__commands[command.name()] = command
else:
raise ValueError("\"command\" must be an instance of class Command")
def responds_to(self, cmd):
return cmd in self.__commands
def execute_command(self, cmd, params):
if self.responds_to(cmd):
self.__commands[cmd].execute(params)
class TwitchBot(Bot):
def __init__(self, username, password):
super().__init__(username, password, "ws://irc-ws.chat.twitch.tv:80")
self.on_message = self.__default_on_message
self.on_command = self.__default_on_command
self.unknown_command = self.__defualt_unknown_command
def connect(self, channels=[]):
super().connect()
super().send("PASS " + self._password)
super().send("NICK " + self._username)
for channel in channels:
self.join(channel)
def start_listening(self, callback=None ):
if callback is None:
super().start_listening(self.dispatch)
else:
super().start_listening(callback)
def dispatch(self, msg):
if msg == "PING :tmi.twitch.tv":
super().send("PONG :tmi.twitch.tv")
else:
try:
finenome = msg.index("!")
who = msg[1:finenome]
inizioCanale = msg.index("#")
fineCanale = msg.index(" :")
canale = msg[inizioCanale+1:fineCanale]
content = msg[fineCanale+2: ]
if content.startswith("!"):
cmd, other = self.__parse_command(content+" ")
self.on_command(cmd.strip(), other.strip(), who, canale)
else:
self.on_message(content.strip(), who, canale)
except Exception:
pass
    def __parse_command(self, text):
        try:
            finecomando = text.find(" ")
            cmd = text[1:finecomando]
            content = text[finecomando+1:]
            return cmd, content
        except Exception:
            pass
def __default_on_message(self, msg, who, channel):
pass
def __default_on_command(self, cmd, other, who, channel):
if super().responds_to(cmd):
super().execute_command(cmd, [other, who, channel])
else:
self.unknown_command(cmd, who, channel)
def __defualt_unknown_command(self, cmd, who, channel):
super().send_message_to("#"+channel, "@" + who + ", unknown command")
| 29.342857 | 89 | 0.579357 | 432 | 4,108 | 5.219907 | 0.24537 | 0.043902 | 0.022616 | 0.029268 | 0.086918 | 0.061197 | 0.03459 | 0 | 0 | 0 | 0 | 0.002481 | 0.313291 | 4,108 | 139 | 90 | 29.553957 | 0.796881 | 0 | 0 | 0.2 | 0 | 0 | 0.03849 | 0.007065 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0.095238 | 0.038095 | 0.009524 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
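`TwitchBot.dispatch` in the file above recovers the sender, channel, and message text from a raw IRC line by `index()` arithmetic. The same extraction can be sketched with `str.partition`, assuming the usual `:nick!user@host PRIVMSG #channel :text` wire format — the helper name `parse_privmsg` is mine, not part of the bot:

```python
def parse_privmsg(raw: str):
    """Split a ':nick!user@host PRIVMSG #channel :text' line into (nick, channel, text)."""
    prefix, _, rest = raw.lstrip(":").partition("!")   # nick is everything before '!'
    _, _, after_hash = rest.partition("#")             # channel starts after the first '#'
    channel, _, text = after_hash.partition(" :")      # message body follows ' :'
    return prefix, channel, text
```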
807c60f448466d06587af2768eab735c6b00629e | 831 | py | Python | config.example.py | isaacnoboa/balaguer_bot | 1b7d61db7ebfc1b9067e6ac1762b077ed259ecb8 | [
"MIT"
] | null | null | null | config.example.py | isaacnoboa/balaguer_bot | 1b7d61db7ebfc1b9067e6ac1762b077ed259ecb8 | [
"MIT"
] | null | null | null | config.example.py | isaacnoboa/balaguer_bot | 1b7d61db7ebfc1b9067e6ac1762b077ed259ecb8 | [
"MIT"
] | null | null | null | # Make sure to rename this file as "config.py" before running the bot.
verbose=True
api_token='0000000000:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
# Enter the user ID and a readable name for each user in your group.
# TODO make balaguer automatically collect user IDs.
# But that's only useful if the bot actually gathers widespread usage.
all_users={
000000000: 'Readable Name',
000000000: 'Readable Name',
000000000: 'Readable Name',
000000000: 'Readable Name',
}
# A list containing the user IDs of who can access the admin features of the bot.
admins = [000000000,
000000000,
000000000,
000000000]
# A list of the groups where the bot is allowed to operate (usually the main group and an admin test group)
approved_groups = [-000000000,
-000000000] | 31.961538 | 108 | 0.701564 | 112 | 831 | 5.178571 | 0.607143 | 0.103448 | 0.144828 | 0.155172 | 0.144828 | 0.144828 | 0.144828 | 0.144828 | 0.144828 | 0 | 0 | 0.157233 | 0.234657 | 831 | 26 | 109 | 31.961538 | 0.754717 | 0.533093 | 0 | 0.428571 | 0 | 0 | 0.27451 | 0.128852 | 0 | 0 | 0 | 0.038462 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
807f9c9ab79882e8cda374e3c06329f34a95d56f | 1,299 | py | Python | src/favorites_crawler/itemloaders.py | RyouMon/FavoritesCrawler | c11750ea4094cd7e4d91a0dd79c9ee21a066c0ee | [
"MIT"
] | 2 | 2022-02-05T04:24:55.000Z | 2022-02-22T23:50:23.000Z | src/favorites_crawler/itemloaders.py | RyouMon/FavoritesCrawler | c11750ea4094cd7e4d91a0dd79c9ee21a066c0ee | [
"MIT"
] | 3 | 2022-02-22T13:35:29.000Z | 2022-02-28T13:29:56.000Z | src/favorites_crawler/itemloaders.py | RyouMon/FavoritesCrawler | c11750ea4094cd7e4d91a0dd79c9ee21a066c0ee | [
"MIT"
] | null | null | null | from itemloaders import ItemLoader
from itemloaders.processors import Join, Compose, MapCompose
from favorites_crawler import items
from favorites_crawler.processors import take_first, identity, get_nhentai_id, original_url_from_nhentai_thumb_url
from favorites_crawler.processors import replace_space_with_under_scope
class PixivIllustItemLoader(ItemLoader):
"""Pixiv Illust Loader"""
default_item_class = items.PixivIllustItem
default_output_processor = take_first
image_urls_out = identity
class YanderePostItemLoader(ItemLoader):
"""Yandere Post Loader"""
default_item_class = items.YanderePostItem
default_output_processor = take_first
image_urls_out = identity
class NHentaiGalleryItemLoader(ItemLoader):
default_item_class = items.NHentaiGalleryItem
default_output_processor = take_first
id_out = Compose(take_first, get_nhentai_id)
title_out = Join('')
image_urls_out = MapCompose(original_url_from_nhentai_thumb_url)
tags_out = MapCompose(replace_space_with_under_scope)
characters_out = MapCompose(replace_space_with_under_scope)
class LemonPicPostItemLoader(ItemLoader):
default_item_class = items.LemonPicPostItem
default_output_processor = take_first
image_urls_out = identity
tags_out = identity
| 30.928571 | 114 | 0.810624 | 153 | 1,299 | 6.464052 | 0.320261 | 0.054601 | 0.064712 | 0.084934 | 0.562184 | 0.340748 | 0.24368 | 0.164813 | 0.164813 | 0.113246 | 0 | 0 | 0.138568 | 1,299 | 41 | 115 | 31.682927 | 0.883825 | 0.030023 | 0 | 0.269231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.192308 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
8094ccd0dc06024e1e4323bc6b4fa0cfcef4fd31 | 346 | py | Python | setup.py | smok-serwis/longshot-python | 9671d60d77e12d2cb6bc2530d05f55d4bafa8e66 | [
"MIT"
] | null | null | null | setup.py | smok-serwis/longshot-python | 9671d60d77e12d2cb6bc2530d05f55d4bafa8e66 | [
"MIT"
] | null | null | null | setup.py | smok-serwis/longshot-python | 9671d60d77e12d2cb6bc2530d05f55d4bafa8e66 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from distutils.core import setup
setup(name='longshot',
version='0.1alpha',
description='SMOK client connectivity library',
author='smok-serwis.pl',
author_email='admin@smok.co',
url='https://github.com/smok-serwis/longshot-python',
packages=['longshot', 'longshot.persistence'],
) | 28.833333 | 59 | 0.66763 | 41 | 346 | 5.609756 | 0.756098 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006993 | 0.17341 | 346 | 12 | 60 | 28.833333 | 0.797203 | 0.057803 | 0 | 0 | 0 | 0 | 0.457055 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
80959fb4e1855167032241c2b29b45d563eacafd | 1,799 | py | Python | users/tests/test_delete_user.py | tgamauf/spritstat | 849526ec8dec46c57194d50ff3b32c16d0cb684a | [
"MIT"
] | 1 | 2022-01-30T10:50:14.000Z | 2022-01-30T10:50:14.000Z | users/tests/test_delete_user.py | tgamauf/spritstat | 849526ec8dec46c57194d50ff3b32c16d0cb684a | [
"MIT"
] | 47 | 2022-02-02T22:07:28.000Z | 2022-03-30T13:53:37.000Z | users/tests/test_delete_user.py | tgamauf/spritstat | 849526ec8dec46c57194d50ff3b32c16d0cb684a | [
"MIT"
] | null | null | null | from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase
from users.models import CustomUser
class TestDeleteUser(APITestCase):
fixtures = ["user.json"]
url: str
user: CustomUser
@classmethod
def setUpTestData(cls):
cls.url = reverse("account_delete")
cls.user = CustomUser.objects.get(email="test@test.at")
def setUp(self):
if not self.id().endswith("_not_logged_in"):
self.client.login(username=self.user.email, password="test")
def test_ok(self):
response = self.client.delete(self.url)
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
with self.assertRaisesMessage(
CustomUser.DoesNotExist, "CustomUser matching query does not exist."
):
CustomUser.objects.get(id=self.user.id)
def test_not_logged_in(self):
response = self.client.delete(self.url)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
# Let's just try to get the user, which would fail if it is in fact
# deleted.
CustomUser.objects.get(id=self.user.id)
def test_get(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
def test_post(self):
response = self.client.post(self.url)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
def test_put(self):
response = self.client.put(self.url)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
def test_patch(self):
response = self.client.patch(self.url)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
| 33.943396 | 82 | 0.696498 | 236 | 1,799 | 5.135593 | 0.330508 | 0.057756 | 0.079208 | 0.108911 | 0.438119 | 0.438119 | 0.438119 | 0.438119 | 0.438119 | 0.373762 | 0 | 0.012596 | 0.20567 | 1,799 | 52 | 83 | 34.596154 | 0.835549 | 0.04169 | 0 | 0.210526 | 0 | 0 | 0.054619 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 1 | 0.210526 | false | 0.026316 | 0.105263 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8095bb749718d47f183223964eb45c85dd3110bd | 1,323 | py | Python | sphinxcontrib/needs/builder.py | tlovett/sphinxcontrib-needs | 41794403266deb6a4f7ec07bb8297abb0ddc57b1 | [
"MIT"
] | null | null | null | sphinxcontrib/needs/builder.py | tlovett/sphinxcontrib-needs | 41794403266deb6a4f7ec07bb8297abb0ddc57b1 | [
"MIT"
] | null | null | null | sphinxcontrib/needs/builder.py | tlovett/sphinxcontrib-needs | 41794403266deb6a4f7ec07bb8297abb0ddc57b1 | [
"MIT"
] | null | null | null | from sphinx.builders import Builder
from sphinxcontrib.needs.utils import NeedsList
import sphinx
from pkg_resources import parse_version
sphinx_version = sphinx.__version__
if parse_version(sphinx_version) >= parse_version("1.6"):
from sphinx.util import logging
else:
import logging
class NeedsBuilder(Builder):
name = 'needs'
format = 'json'
file_suffix = '.txt'
links_suffix = None
def write_doc(self, docname, doctree):
pass
def finish(self):
log = logging.getLogger(__name__)
needs = self.env.need_all_needs
config = self.env.config
version = config.version
needs_list = NeedsList(config, self.outdir, self.confdir)
needs_list.load_json()
for key, need in needs.items():
needs_list.add_need(version, need)
try:
needs_list.write_json()
except Exception as e:
log.error("Error during writing json file: {0}".format(e))
else:
log.info("Needs successfully exported")
def get_outdated_docs(self):
return ""
def prepare_writing(self, docnames):
pass
def write_doc_serialized(self, docname, doctree):
pass
def cleanup(self):
pass
def get_target_uri(self, docname, typ=None):
return ""
| 24.962264 | 70 | 0.646259 | 162 | 1,323 | 5.074074 | 0.462963 | 0.034063 | 0.072993 | 0.060827 | 0.060827 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003086 | 0.265306 | 1,323 | 52 | 71 | 25.442308 | 0.842593 | 0 | 0 | 0.195122 | 0 | 0 | 0.058957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170732 | false | 0.097561 | 0.146341 | 0.04878 | 0.487805 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
809ae1f39dc9d7fac5d194516d317497caa9edc2 | 1,881 | py | Python | tests/model_test.py | jicewarwick/AShareData | 13c78602fe00a5326f421c8a8003f3889492e6dd | [
"MIT"
] | 30 | 2019-09-18T07:26:05.000Z | 2022-03-17T11:15:47.000Z | tests/model_test.py | jicewarwick/Tushare2MySQL | 13c78602fe00a5326f421c8a8003f3889492e6dd | [
"MIT"
] | 2 | 2019-12-11T02:45:58.000Z | 2020-12-21T10:41:43.000Z | tests/model_test.py | jicewarwick/Tushare2MySQL | 13c78602fe00a5326f421c8a8003f3889492e6dd | [
"MIT"
] | 9 | 2019-10-22T09:00:14.000Z | 2022-02-02T02:21:31.000Z | import datetime as dt
import unittest
from AShareData import set_global_config
from AShareData.model import *
class MyTestCase(unittest.TestCase):
def setUp(self) -> None:
set_global_config('config.json')
def test_something(self):
self.assertEqual(True, False)
@staticmethod
def test_FF3factor_return():
model = FamaFrench3FactorModel()
smb = SMBandHMLCompositor(model)
date = dt.datetime(2021, 3, 9)
pre_date = dt.datetime(2021, 3, 8)
pre_month_date = dt.datetime(2021, 2, 26)
smb.compute_factor_return(balance_date=pre_date, pre_date=pre_date, date=date,
rebalance_marker='D', period_marker='D')
smb.compute_factor_return(balance_date=pre_month_date, pre_date=pre_date, date=date,
rebalance_marker='M', period_marker='D')
smb.compute_factor_return(balance_date=pre_month_date, pre_date=pre_month_date, date=date,
rebalance_marker='M', period_marker='M')
@staticmethod
def test_FFC4_factor_return():
model = FamaFrenchCarhart4FactorModel()
umd = UMDCompositor(model)
date = dt.datetime(2021, 3, 9)
pre_date = dt.datetime(2021, 3, 8)
pre_month_date = dt.datetime(2021, 2, 26)
umd.compute_factor_return(balance_date=pre_date, pre_date=pre_date, date=date,
rebalance_marker='D', period_marker='D')
umd.compute_factor_return(balance_date=pre_month_date, pre_date=pre_date, date=date,
rebalance_marker='M', period_marker='D')
umd.compute_factor_return(balance_date=pre_month_date, pre_date=pre_month_date, date=date,
rebalance_marker='M', period_marker='M')
if __name__ == '__main__':
unittest.main()
| 40.891304 | 98 | 0.641148 | 231 | 1,881 | 4.883117 | 0.233766 | 0.111702 | 0.117021 | 0.099291 | 0.647163 | 0.647163 | 0.647163 | 0.641844 | 0.641844 | 0.641844 | 0 | 0.030238 | 0.261563 | 1,881 | 45 | 99 | 41.8 | 0.781857 | 0 | 0 | 0.378378 | 0 | 0 | 0.016481 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 1 | 0.108108 | false | 0 | 0.108108 | 0 | 0.243243 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
809b18b1d50940a472d99838108c97ff7972ad17 | 1,079 | py | Python | HorribleConky.py | WolfgangAxel/ConkyConfigs | a437fd91761872202f90c0dab36e1c050a017054 | [
"MIT"
] | 1 | 2016-02-12T11:52:10.000Z | 2016-02-12T11:52:10.000Z | HorribleConky.py | WolfgangAxel/ConkyConfigs | a437fd91761872202f90c0dab36e1c050a017054 | [
"MIT"
] | null | null | null | HorribleConky.py | WolfgangAxel/ConkyConfigs | a437fd91761872202f90c0dab36e1c050a017054 | [
"MIT"
] | null | null | null | #!/usr/bin/python
from lxml import html
import requests
"""
Enter HorribleSubs's title for the shows you
watch in quotes followed by a comma, then
hit enter to add another show. When
all of your watched shows are entered, put
the ending bracket.
EX:
MYSHOWS = ["Bananya",
"New Game",
"Kono Bijutsubu ni wa Mondai ga Aru!",
"Re Zero kara Hajimeru Isekai Seikatsu"]
"""
MYSHOWS = [
]
def makeLine(string,time,size=50):
if string in MYSHOWS:
string = "**" + string
if len(string) <= 42:
out = string
else:
out = string[:42]
for i in range(45-len(out)):
out = out + "."
tzadj = int(time[0:2])+2
if tzadj >= 24:
tzadj = tzadj-24
tzadj = str(tzadj)
if len(tzadj) == 2:
time = tzadj + time[2:]
else:
time = "0" + tzadj + time[2:]
out = out + time
return out
page = requests.get("http://horriblesubs.info/")
tree = html.fromstring(page.content)
show = tree.xpath('//a[@title="See all releases for this show"]/text()')
sched = tree.xpath('//td[@class="schedule-time"]/text()')
for i in range(len(show)):
line = makeLine(show[i],sched[i])
    print(line)
| 21.156863 | 72 | 0.658943 | 172 | 1,079 | 4.133721 | 0.55814 | 0.025316 | 0.016878 | 0.030942 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021566 | 0.183503 | 1,079 | 50 | 73 | 21.58 | 0.785471 | 0.014829 | 0 | 0.066667 | 0 | 0 | 0.153743 | 0.046791 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
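The hour adjustment inside `makeLine` above (add 2, manually wrap past 24, then re-pad to two digits) is the classic case for modular arithmetic plus zero-fill formatting. A sketch of the same step in isolation — the helper name `shift_hour` is my own:

```python
def shift_hour(hhmm: str, offset: int = 2) -> str:
    """Shift the HH part of an 'HH:MM' string by offset hours, wrapping at 24."""
    hour = (int(hhmm[:2]) + offset) % 24  # modulo handles the midnight wrap
    return f"{hour:02d}" + hhmm[2:]       # :02d restores the zero padding
```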
80a43ce3d179c95872aebbda4deee5d9f217f5de | 8,035 | py | Python | CAAPR/CAAPR_AstroMagic/PTS/pts/evolve/FunctionSlot.py | wdobbels/CAAPR | 50d0b32642a61af614c22f1c6dc3c4a00a1e71a3 | [
"MIT"
] | 7 | 2016-05-20T21:56:39.000Z | 2022-02-07T21:09:48.000Z | CAAPR/CAAPR_AstroMagic/PTS/pts/evolve/FunctionSlot.py | wdobbels/CAAPR | 50d0b32642a61af614c22f1c6dc3c4a00a1e71a3 | [
"MIT"
] | 1 | 2019-03-21T16:10:04.000Z | 2019-03-22T17:21:56.000Z | CAAPR/CAAPR_AstroMagic/PTS/pts/evolve/FunctionSlot.py | wdobbels/CAAPR | 50d0b32642a61af614c22f1c6dc3c4a00a1e71a3 | [
"MIT"
] | 1 | 2020-05-19T16:17:17.000Z | 2020-05-19T16:17:17.000Z | #!/usr/bin/env python
# -*- coding: utf8 -*-
# *****************************************************************
# ** PTS -- Python Toolkit for working with SKIRT **
# ** © Astronomical Observatory, Ghent University **
# *****************************************************************
## \package pts.evolve.functionslot The *function slot* concept is large used by Pyevolve, the idea
# is simple, each genetic operator or any operator, can be assigned
# to a slot, by this way, we can add more than simple one operator,
# we can have for example, two or more mutator operators at same time,
# two or more evaluation functions, etc. In this :mod:`FunctionSlot` module,
# you'll find the class :class:`FunctionSlot.FunctionSlot`, which is the slot class.
# -----------------------------------------------------------------
# Import standard modules
from types import BooleanType
# Import other evolve modules
import utils
# Import the relevant PTS classes and modules
from ..core.tools.random import prng
# -----------------------------------------------------------------
class FunctionSlot(object):
"""
FunctionSlot Class - The function slot
Example:
>>> genome.evaluator.set(eval_func)
>>> genome.evaluator[0]
<function eval_func at 0x018C8930>
>>> genome.evaluator
Slot [Evaluation Function] (Count: 1)
Name: eval_func
>>> genome.evaluator.clear()
>>> genome.evaluator
Slot [Evaluation Function] (Count: 0)
No function
    You can add weight to functions when using the `rand_apply` parameter:
>>> genome.evaluator.set(eval_main, 0.9)
>>> genome.evaluator.add(eval_sec, 0.3)
>>> genome.evaluator.setRandomApply()
In the above example, the function *eval_main* will be called with 90% of
probability and the *eval_sec* will be called with 30% of probability.
    There is another way to add functions too:
>>> genome.evaluator += eval_func
:param name: the slot name
:param rand_apply: if True, just one of the functions in the slot
will be applied, this function is randomly picked based
on the weight of the function added.
"""
def __init__(self, name="Anonymous Function", rand_apply=False):
""" The creator of the FunctionSlot Class """
self.funcList = []
self.funcWeights = []
self.slotName = name
self.rand_apply = rand_apply
# -----------------------------------------------------------------
def __typeCheck(self, func):
"""
Used internally to check if a function passed to the
function slot is callable. Otherwise raises a TypeError exception.
:param func: the function object
"""
if not callable(func):
utils.raiseException("The function must be a method or function", TypeError)
# -----------------------------------------------------------------
def __iadd__(self, func):
""" To add more functions using the += operator
.. versionadded:: 0.6
The __iadd__ method.
"""
self.__typeCheck(func)
self.funcList.append(func)
return self
# -----------------------------------------------------------------
def __getitem__(self, index):
""" Used to retrieve some slot function index """
return self.funcList[index]
# -----------------------------------------------------------------
def __setitem__(self, index, value):
""" Used to set the index slot function """
self.__typeCheck(value)
self.funcList[index] = value
# -----------------------------------------------------------------
def __iter__(self):
""" Return the function list iterator """
return iter(self.funcList)
# -----------------------------------------------------------------
def __len__(self):
""" Return the number of functions on the slot
.. versionadded:: 0.6
The *__len__* method
"""
return len(self.funcList)
# -----------------------------------------------------------------
def setRandomApply(self, flag=True):
"""
Sets the random function application, in this mode, the
function will randomly choose one slot to apply
:param flag: True or False
"""
if type(flag) != BooleanType:
utils.raiseException("Random option must be True or False", TypeError)
self.rand_apply = flag
# -----------------------------------------------------------------
def clear(self):
""" Used to clear the functions in the slot """
if len(self.funcList) > 0:
del self.funcList[:]
del self.funcWeights[:]
# -----------------------------------------------------------------
def add(self, func, weight=0.5):
""" Used to add a function to the slot
:param func: the function to be added in the slot
:param weight: used when you enable the *random apply*, it's the weight
of the function for the random selection
.. versionadded:: 0.6
The `weight` parameter.
"""
self.__typeCheck(func)
self.funcList.append(func)
self.funcWeights.append(weight)
# -----------------------------------------------------------------
def isEmpty(self):
""" Return true if the function slot is empy """
return (len(self.funcList) == 0)
# -----------------------------------------------------------------
def set(self, func, weight=0.5):
""" Used to clear all functions in the slot and add one
:param func: the function to be added in the slot
:param weight: used when you enable the *random apply*, it's the weight
of the function for the random selection
.. versionadded:: 0.6
The `weight` parameter.
.. note:: the method *set* of the function slot remove all previous
functions added to the slot.
"""
self.clear()
self.__typeCheck(func)
self.add(func, weight)
# -----------------------------------------------------------------
def apply(self, index, obj, **args):
""" Apply the index function
:param index: the index of the function
:param obj: this object is passes as parameter to the function
:param args: this args dictionary is passed to the function
"""
if len(self.funcList) <= 0:
raise Exception("No function defined: " + self.slotName)
return self.funcList[index](obj, **args)
# -----------------------------------------------------------------
def applyFunctions(self, obj=None, **args):
""" Generator to apply all function slots in obj
:param obj: this object is passes as parameter to the function
:param args: this args dictionary is passed to the function
"""
if len(self.funcList) <= 0:
utils.raiseException("No function defined: " + self.slotName)
if not self.rand_apply:
for f in self.funcList:
yield f(obj, **args)
else:
v = prng.uniform(0, 1)
fobj = None
for func, weight in zip(self.funcList, self.funcWeights):
fobj = func
if v < weight:
break
v = v - weight
yield fobj(obj, **args)
# -----------------------------------------------------------------
def __repr__(self):
""" String representation of FunctionSlot """
strRet = "Slot [%s] (Count: %d)\n" % (self.slotName, len(self.funcList))
if len(self.funcList) <= 0:
strRet += "\t\tNo function\n"
return strRet
for f, w in zip(self.funcList, self.funcWeights):
strRet += "\t\tName: %s - Weight: %.2f\n" % (f.func_name, w)
if f.func_doc:
strRet += "\t\tDoc: " + f.func_doc + "\n"
return strRet
# -----------------------------------------------------------------
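The weighted branch of `applyFunctions` above is a roulette-wheel pick over `funcWeights`. A minimal standalone sketch of that selection logic (a hypothetical helper, not part of the slot's API):

```python
import random

def weighted_choice(funcs, weights, rng=random):
    # Roulette selection, mirroring applyFunctions: draw v in [0, 1),
    # then walk the list subtracting each weight until the draw falls
    # inside a function's slice; the last function is the fallback.
    v = rng.uniform(0, 1)
    chosen = None
    for func, weight in zip(funcs, weights):
        chosen = func
        if v < weight:
            break
        v = v - weight
    return chosen
```

With weights like `[0.7, 0.3]` the first function is picked roughly 70% of the time; if the weights sum to less than the draw, the final function is returned.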
# --- file: example.py (repo: Tobi-De/qosic-sdk, license: MIT) ---
import phonenumbers
from dotenv import dotenv_values
from qosic import Client, MtnConfig, MTN, MOOV, OPERATION_CONFIRMED
from qosic.exceptions import (
InvalidCredentialsError,
InvalidClientIdError,
ServerError,
RequestError,
)
config = dotenv_values(".env")
moov_client_id = config.get("MOOV_CLIENT_ID")
mtn_client_id = config.get("MTN_CLIENT_ID")
server_login = config.get("SERVER_LOGIN")
server_pass = config.get("SERVER_PASS")
# This is just for test purposes; in production you would pass the phone number directly
raw_phone = config.get("PHONE_NUMBER")
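The values above are read from a `.env` file via `dotenv_values`; a plausible layout (every value here is a placeholder, not a real credential) might look like:

```
MOOV_CLIENT_ID=your-moov-client-id
MTN_CLIENT_ID=your-mtn-client-id
SERVER_LOGIN=your-server-login
SERVER_PASS=your-server-password
PHONE_NUMBER=+22900000000
```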
def main():
providers = [
MTN(client_id=mtn_client_id, config=MtnConfig(step=30, timeout=60 * 2)),
MOOV(client_id=moov_client_id),
]
try:
client = Client(
providers=providers,
login=server_login,
password=server_pass,
active_logging=True,
)
phone = phonenumbers.parse(raw_phone)
result = client.request_payment(
phone=phone, amount=1000, first_name="User", last_name="TEST"
)
except (
InvalidCredentialsError,
InvalidClientIdError,
ServerError,
RequestError,
) as e:
print(e)
else:
if result.state == OPERATION_CONFIRMED:
print(
f"TransRef: {result.trans_ref} -> Your requested payment to {result.phone} for an amount "
f"of {result.amount} has been successfully validated "
)
else:
print(f"Payment rejected: {result}")
# If you need to make a refund : (remember that refund are only available for MTN phone number right now)
# result = client.request_refund(trans_ref=result.trans_ref, phone=phone)
if __name__ == "__main__":
main()
# --- file: rlcard/games/karma/card.py (repo: pettaa123/rlcard, license: MIT) ---
class KarmaCard(object):
info = {'type': ['number', 'wild'],
'trait': ['4', '5', '6', '7', '8', '9', 'J', 'Q', 'K', 'A', '2', '3', '10', 'draw'],
'order': ['4:4', '4:3', '4:2', '4:1', '5:4', '5:3', '5:2', '5:1', '6:4', '6:3', '6:2', '6:1',
'7:4', '7:3', '7:2', '7:1', '8:4', '8:3', '8:2', '8:1', '9:4', '9:3', '9:2', '9:1',
'J:4', 'J:3', 'J:2', 'J:1', 'Q:4', 'Q:3', 'Q:2', 'Q:1', 'K:1', 'K:2', 'K:3', 'K:4',
'A:1', 'A:2', 'A:3', 'A:4', '2:1', '2:2', '2:3', '2:4', '3:1', '3:2', '3:3', '3:4',
'10:1', '10:2', '10:3', '10:4', 'draw:1'],
'order_start': ['4:4', '4:3', '4:2', '4:1', '5:4', '5:3', '5:2', '5:1', '6:4', '6:3', '6:2', '6:1',
'7:4', '7:3', '7:2', '7:1', '8:4', '8:3', '8:2', '8:1', '9:4', '9:3', '9:2', '9:1',
'J:1', 'J:2', 'J:3', 'J:4', 'Q:1', 'Q:2', 'Q:3', 'Q:4', 'K:1', 'K:2', 'K:3', 'K:4',
'A:1', 'A:2', 'A:3', 'A:4', '2:1', '2:2', '2:3', '2:4', '3:1', '3:2', '3:3', '3:4',
'10:1', '10:2', '10:3', '10:4', 'draw:1']
}
def __init__(self, card_type, trait):
        ''' Initialize a KarmaCard
Args:
card_type (str): The type of card
trait (str): The trait of card
'''
self.type = card_type
self.trait = trait
self.str = self.get_str()
def get_str(self):
''' Get the string representation of card
Return:
(str): The string of card's trait
'''
return self.trait
def get_index(self):
''' Get the index of trait
Return:
(int): The index of card's trait (value)
'''
return self.info['trait'].index(self.trait)
@staticmethod
def print_cards(cards):
''' Print out card in a nice form
Args:
card (str or list): The string form or a list of a Karma card
'''
if isinstance(cards, str):
cards = [cards]
for i, card in enumerate(cards):
print(card, end='')
if i < len(cards) - 1:
print(', ', end='')
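`get_index` above is just a positional lookup into `info['trait']`, so a card's index doubles as its relative value. A trimmed-down sketch of that mapping (a standalone copy of the trait list, not the class itself):

```python
# Same trait ordering as KarmaCard.info['trait']
TRAITS = ['4', '5', '6', '7', '8', '9', 'J', 'Q', 'K', 'A', '2', '3', '10', 'draw']

def trait_index(trait):
    # Position in the list is the card's value ranking.
    return TRAITS.index(trait)
```

For example `trait_index('4')` is 0 (the lowest card) and `trait_index('draw')` is 13 (the highest index).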
# --- file: Preprocessing_CategoricalData.py (repo: kevinLCG/machinelearning-az, license: MIT) ---
#!/home/kevinml/anaconda3/bin/python3.7
# -*- coding: utf-8 -*-
"""
Created on Tue Dec 10 11:47:39 2019
@author: kevinml
Version Python: 3.7
"""
# Preprocessing - Categorical Data
###########################################################
# Input Dataset #
###########################################################
# Import the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
## Import the dataset
dataset = pd.read_csv('./Data.csv')

# Build one sub-dataset with the independent variables and another with the dependent variable
# INDEPENDENT variables (matrix)
X = dataset.iloc[:, :-1].values
# DEPENDENT variable (vector)
y = dataset.iloc[:, 3].values
###########################################################
#              Handling Missing Values (NAs)              #
###########################################################
# Import the libraries
# https://scikit-learn.org/stable/modules/impute.html
from sklearn.impute import SimpleImputer

# Create an imputer that replaces missing values (NaN/np.nan) with the MEAN (or median/most_frequent/etc.) of each COLUMN's values.
imputer = SimpleImputer(missing_values = np.nan, strategy = "mean")
# Fit the imputer only to the columns that contain missing data.
imputer = imputer.fit(X[:, 1:3])
# Overwrite our matrix, applying the corresponding substitutions.
X[:, 1:3] = imputer.transform(X[:,1:3])
###########################################################
#               Encoding Categorical Data                 #
###########################################################
# Import the libraries
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
# https://scikit-learn.org/stable/modules/generated/sklearn.compose.make_column_transformer.html
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.compose import make_column_transformer
# Pass (transformer, columns) tuples specifying the transformer objects
# to apply to subsets of the data; with remainder="passthrough" the
# non-selected columns are kept unchanged.
# Encode each of the country names (one-hot)
onehotencoder = make_column_transformer((OneHotEncoder(), [0]), remainder = "passthrough")
X = onehotencoder.fit_transform(X).toarray()

# Encode the Purchased values "Yes"/"No" as 1/0
labelencoder_y = LabelEncoder()
y = labelencoder_y.fit_transform(y)
###########################################################
# Training & Testing Splitting #
###########################################################
# Split the dataset into a training set and a testing set

# Import the libraries
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
from sklearn.model_selection import train_test_split
# We get 4 variables: training and testing features and labels, respectively. Random seed set to 42.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
###########################################################
#                    Feature Scaling                      #
###########################################################
# This is done because each variable has a different dynamic range, and
# when operating on them (e.g. computing Euclidean distances) the values
# of the wider-range variables can drown out those with a smaller range.
# We will obtain variables roughly between -1 and 1.
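As a quick numeric check of what standardization does, here is the z-score formula applied by hand to three made-up values (illustrative numbers only, not from this dataset):

```python
# Standardization: z = (x - mean) / std, giving zero mean and unit variance.
data = [10.0, 20.0, 30.0]
mean = sum(data) / len(data)
# Population standard deviation (ddof=0), matching StandardScaler's default.
std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
scaled = [(x - mean) / std for x in data]
# scaled is approximately [-1.2247, 0.0, 1.2247]
```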
# Import the libraries
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
from sklearn.preprocessing import StandardScaler
# Create the scaler object
sc_X = StandardScaler()
# Fit a scaler to our training data.
X_train = sc_X.fit_transform(X_train)
# Use the scaler obtained in the previous step to scale our testing data.
X_test = sc_X.transform(X_test)

# --- file: docker_registry_client_async/specs.py (repo: GitHK/docker-registry-client-async, license: Apache-2.0) ---
#!/usr/bin/env python
# pylint: disable=too-few-public-methods
"""Reusable string literals."""
class DockerAuthentication:
"""
https://github.com/docker/distribution/blob/master/docs/spec/auth/token.md
https://github.com/docker/distribution/blob/master/docs/spec/auth/scope.md
"""
DOCKERHUB_URL_PATTERN = (
"{0}?service={1}&scope={2}&client_id=docker-registry-client-async"
)
SCOPE_REGISTRY_CATALOG = "registry:catalog:*"
SCOPE_REPOSITORY_PULL_PATTERN = "repository:{0}:pull"
SCOPE_REPOSITORY_PUSH_PATTERN = "repository:{0}:push"
SCOPE_REPOSITORY_ALL_PATTERN = "repository:{0}:pull,push"
class DockerMediaTypes:
"""https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-2.md#manifest-list"""
CONTAINER_IMAGE_V1 = "application/vnd.docker.container.image.v1+json"
DISTRIBUTION_MANIFEST_LIST_V2 = (
"application/vnd.docker.distribution.manifest.list.v2+json"
)
DISTRIBUTION_MANIFEST_V1 = "application/vnd.docker.distribution.manifest.v1+json"
DISTRIBUTION_MANIFEST_V1_SIGNED = (
"application/vnd.docker.distribution.manifest.v1+prettyjws"
)
DISTRIBUTION_MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"
IMAGE_ROOTFS_DIFF = "application/vnd.docker.image.rootfs.diff.tar.gzip"
IMAGE_ROOTFS_FOREIGN_DIFF = (
"application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
)
PLUGIN_V1 = "application/vnd.docker.plugin.v1+json"
class Indices:
"""Common registry indices."""
DOCKERHUB = "index.docker.io"
QUAY = "quay.io"
class QuayAuthentication:
"""
https://docs.quay.io/api/
"""
QUAY_URL_PATTERN = (
"{0}?service={1}&scope={2}&client_id=docker-registry-client-async"
)
SCOPE_REPOSITORY_PULL_PATTERN = "repo:{0}:read"
SCOPE_REPOSITORY_PUSH_PATTERN = "repo:{0}:write"
SCOPE_REPOSITORY_ALL_PATTERN = "repo:{0}:read,write"
class MediaTypes:
"""Generic mime types."""
APPLICATION_JSON = "application/json"
APPLICATION_OCTET_STREAM = "application/octet-stream"
APPLICATION_YAML = "application/yaml"
class OCIMediaTypes:
"""https://github.com/opencontainers/image-spec/blob/master/media-types.md"""
DESCRIPTOR_V1 = "application/vnd.oci.descriptor.v1+json"
IMAGE_CONFIG_V1 = "application/vnd.oci.image.config.v1+json"
IMAGE_INDEX_V1 = "application/vnd.oci.image.index.v1+json"
IMAGE_LAYER_V1 = "application/vnd.oci.image.layer.v1.tar"
IMAGE_LAYER_GZIP_V1 = "application/vnd.oci.image.layer.v1.tar+gzip"
IMAGE_LAYER_ZSTD_V1 = "application/vnd.oci.image.layer.v1.tar+zstd"
IMAGE_LAYER_NONDISTRIBUTABLE_V1 = (
"application/vnd.oci.image.layer.nondistributable.v1.tar"
)
IMAGE_LAYER_NONDISTRIBUTABLE_GZIP_V1 = (
"application/vnd.oci.image.layer.nondistributable.v1.tar+gzip"
)
IMAGE_LAYER_NONDISTRIBUTABLE_ZSTD_V1 = (
"application/vnd.oci.image.layer.nondistributable.v1.tar+zstd"
)
IMAGE_MANIFEST_V1 = "application/vnd.oci.image.manifest.v1+json"
LAYOUT_HEADER_V1 = "application/vnd.oci.layout.header.v1+json"
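The `_PATTERN` constants above are plain `str.format` templates. For example, building a pull scope for a repository (the repository name here is a hypothetical example):

```python
# Template copied from DockerAuthentication above
SCOPE_REPOSITORY_PULL_PATTERN = "repository:{0}:pull"

scope = SCOPE_REPOSITORY_PULL_PATTERN.format("library/ubuntu")
# scope == "repository:library/ubuntu:pull"
```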
# --- file: integritybackend/geocoder.py (repo: starlinglab/integrity-backend, license: MIT) ---
import geocoder
from .log_helper import LogHelper
_logger = LogHelper.getLogger()
class Geocoder:
def reverse_geocode(self, lat, lon):
        """Retrieves reverse geocoding information for the given latitude and longitude.
Args:
            lat, lon: latitude and longitude to reverse geocode, as floats
Returns:
geolocation JSON
"""
# TODO: Add some kind of throttling and/or caching to prevent us from sending more than 1 req/sec.
response = geocoder.osm([lat, lon], method="reverse")
if response.status_code != 200 or response.status != "OK":
_logger.error(
"Reverse geocode lookup for (%s, %s) failed with: %s",
lat,
lon,
response.status,
)
return None
return self._json_to_address(response.json)
def _json_to_address(self, geo_json):
"""Convert geocoding JSON to a uniform format for our own use."""
if (osm_address := geo_json.get("raw", {}).get("address")) is None:
_logger.warning("Reverse geocoding result did not include raw.address")
return None
address = {}
address["country_code"] = osm_address.get("country_code")
address["city"] = self._get_preferred_key(
osm_address, ["city", "town", "municipality", "village"]
)
address["country"] = osm_address.get("country")
address["state"] = self._get_preferred_key(
osm_address, ["state", "region", "state_district"]
)
return address
def _get_preferred_key(self, some_dict, keys):
for key in keys:
if key in some_dict:
return some_dict.get(key)
return None
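`_get_preferred_key` returns the value for the first listed key that is present in the dict. A standalone sketch with made-up OSM address data:

```python
def get_preferred_key(some_dict, keys):
    # First key found wins; None if none of them are present.
    for key in keys:
        if key in some_dict:
            return some_dict[key]
    return None

osm_address = {"town": "Springfield", "country": "US"}
city = get_preferred_key(osm_address, ["city", "town", "municipality", "village"])
# "city" is absent, so "town" is next in line and wins
```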
# --- file: foodapp/urls.py (repo: gauravmahale47/Foodstore, license: BSD-3-Clause) ---
from django.urls import path
from foodapp.views import *
from django.views.generic.base import TemplateView
from .views import FoodCreateView,FoodListView
from django.conf import settings
from django.conf.urls.static import static
'''
TemplateView is a built-in Django class-based view
that renders a request directly to a template.
'''
urlpatterns = [
# FBV
# path('',index),
# path('addfood',addfood),
path('',TemplateView.as_view(template_name='foodapp/index.html'),name="Home"),
path('addfood',FoodCreateView.as_view(),name='addfood'),
path('foodlist',FoodListView.as_view(),name='foodmenu'),
path('foodupdate/<pk>',FoodUpdateView.as_view(),name='foodupdate'),
path('fooddelete/<pk>',FoodDeleteView.as_view(),name='fooddelete'),
path('fooddetail/<pk>',FoodDetailView.as_view(),name='fooddetail')
]  # + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

# --- file: prog_python/strings/sustring_2.py (repo: TCGamer123/python, license: MIT) ---
s = "Olá, mundo!"
print(s[::2])   # Prints the characters at even indices.
print(s[1::2])  # Prints the characters at odd indices.

frase = "Mundo mundo vasto mundo"
print(frase[::-1])  # Reverses the string.

# More advanced form of string formatting
frase_2 = "Um triângulo de base igual a {0} e altura igual a {1} possui área igual {2}.".format(3, 4, 12)
print(frase_2)

# Formatting strings with f-strings
linguagem = "Python"
frase_3 = f"Progamando em {linguagem}"
print(frase_3)
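The same slice patterns applied to an English string, for reference (a made-up example):

```python
s = "Hello, world!"
evens = s[::2]   # every second character, starting at index 0
rev = s[::-1]    # the whole string walked backwards
# evens == "Hlo ol!" and rev == "!dlrow ,olleH"
```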
# --- file: ryu/gui/models/topology.py (repo: isams1/Thesis, license: Apache-2.0) ---
import logging
import json
from socket import error as SocketError
from httplib import HTTPException
import gevent
import gevent.monkey
gevent.monkey.patch_all()
from ryu.lib.dpid import str_to_dpid
from ryu.lib.port_no import str_to_port_no
from ryu.app.client import TopologyClient
LOG = logging.getLogger('ryu.gui')
class Port(object):
def __init__(self, dpid, port_no, hw_addr, name):
assert type(dpid) == int
assert type(port_no) == int
assert type(hw_addr) == str or type(hw_addr) == unicode
assert type(name) == str or type(name) == unicode
self.dpid = dpid
self.port_no = port_no
self.hw_addr = hw_addr
self.name = name
def to_dict(self):
return {'dpid': self.dpid,
'port_no': self.port_no,
'hw_addr': self.hw_addr,
'name': self.name}
@classmethod
def from_rest_dict(cls, p):
return cls(str_to_dpid(p['dpid']),
str_to_port_no(p['port_no']),
p['hw_addr'],
p['name'])
def __eq__(self, other):
return self.dpid == other.dpid and self.port_no == other.port_no
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash((self.dpid, self.port_no))
def __str__(self):
return 'Port<dpid=%s, port_no=%s, hw_addr=%s, name=%s>' % \
(self.dpid, self.port_no, self.hw_addr, self.name)
class Switch(object):
def __init__(self, dpid, ports):
assert type(dpid) == int
assert type(ports) == list
self.dpid = dpid
self.ports = ports
def to_dict(self):
return {'dpid': self.dpid,
'ports': [port.to_dict() for port in self.ports]}
def __eq__(self, other):
return self.dpid == other.dpid
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.dpid)
def __str__(self):
return 'Switch<dpid=%s>' % (self.dpid)
class Link(object):
def __init__(self, src, dst):
assert type(src) == Port
assert type(dst) == Port
self.src = src
self.dst = dst
def to_dict(self):
return {'src': self.src.to_dict(),
'dst': self.dst.to_dict()}
def __eq__(self, other):
return self.src == other.src and self.dst == other.dst
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash((self.src, self.dst))
def __str__(self):
return 'Link<%s to %s>' % (self.src, self.dst)
class Topology(dict):
def __init__(self, switches_json=None, links_json=None):
super(Topology, self).__init__()
self['switches'] = []
if switches_json:
for s in json.loads(switches_json):
ports = []
for p in s['ports']:
ports.append(Port.from_rest_dict(p))
switch = Switch(str_to_dpid(s['dpid']), ports)
self['switches'].append(switch)
self['links'] = []
if links_json:
for l in json.loads(links_json):
link = Link(Port.from_rest_dict(l['src']),
Port.from_rest_dict(l['dst']))
self['links'].append(link)
self['ports'] = []
for switch in self['switches']:
self['ports'].extend(switch.ports)
def peer(self, port):
for link in self['links']:
if link.src == port:
return link.dst
elif link.dst == port:
return link.src
return None
def attached(self, port):
for switch in self['switches']:
            if port in switch.ports:
return switch
return None
def neighbors(self, switch):
ns = []
        for port in switch.ports:
ns.append(self.attached(self.peer(port)))
return ns
# TopologyDelta = new_Topology - old_Topology
def __sub__(self, old):
assert type(old) == Topology
added = Topology()
deleted = Topology()
for k in self.iterkeys():
new_set = set(self[k])
old_set = set(old[k])
added[k] = list(new_set - old_set)
deleted[k] = list(old_set - new_set)
return TopologyDelta(added, deleted)
def __str__(self):
return 'Topology<switches=%d, ports=%d, links=%d>' % (
len(self['switches']),
len(self['ports']),
len(self['links']))
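The `__sub__` delta above boils down to two set differences per key. A sketch of the same computation with hypothetical link names standing in for `Link` objects:

```python
new_links = {"a-b", "b-c", "c-d"}
old_links = {"a-b", "x-y"}
added = new_links - old_links    # present now, absent before
deleted = old_links - new_links  # present before, absent now
# added == {"b-c", "c-d"} and deleted == {"x-y"}
```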
class TopologyDelta(object):
def __init__(self, added, deleted):
self.added = added
self.deleted = deleted
def __str__(self):
return 'TopologyDelta<added=%s, deleted=%s>' % \
(self.added, self.deleted)
class TopologyWatcher(object):
_LOOP_WAIT = 3
_REST_RETRY_WAIT = 10
def __init__(self, update_handler=None, rest_error_handler=None):
self.update_handler = update_handler
self.rest_error_handler = rest_error_handler
self.address = None
self.tc = None
self.is_active = None
self.threads = []
self.topo = Topology()
self.prev_switches_json = ''
self.prev_links_json = ''
def start(self, address):
LOG.debug('TopologyWatcher: start')
self.address = address
self.tc = TopologyClient(address)
self.is_active = True
self.threads.append(gevent.spawn(self._polling_loop))
def stop(self):
LOG.debug('TopologyWatcher: stop')
self.is_active = False
def _polling_loop(self):
LOG.debug('TopologyWatcher: Enter polling loop')
while self.is_active:
try:
switches_json = self.tc.list_switches().read()
links_json = self.tc.list_links().read()
except (SocketError, HTTPException) as e:
LOG.debug('TopologyWatcher: REST API(%s) is not available.' %
self.address)
LOG.debug(' wait %d secs...' %
self._REST_RETRY_WAIT)
self._call_rest_error_handler(e)
#gevent.sleep(self._REST_RETRY_WAIT)
                self.is_active = False
continue
if self._is_updated(switches_json, links_json):
LOG.debug('TopologyWatcher: topology updated')
new_topo = Topology(switches_json, links_json)
delta = new_topo - self.topo
self.topo = new_topo
self._call_update_handler(delta)
gevent.sleep(self._LOOP_WAIT)
def _is_updated(self, switches_json, links_json):
updated = (
self.prev_switches_json != switches_json or
self.prev_links_json != links_json)
self.prev_switches_json = switches_json
self.prev_links_json = links_json
return updated
def _call_rest_error_handler(self, e):
if self.rest_error_handler:
self.rest_error_handler(self.address, e)
def _call_update_handler(self, delta):
if self.update_handler:
self.update_handler(self.address, delta)
def handler(address, delta):
print delta
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
watcher = TopologyWatcher(handler)
watcher.start('127.0.0.1:8080')
gevent.joinall(watcher.threads)
# --- file: octostore/mongo_helper.py (repo: luzhang06/octostore, license: MIT) ---
from pymongo import MongoClient
# Override theSystemPath so it throws KeyError on gi.pygtkcompat:
from twisted.python import modules
modules.theSystemPath = modules.PythonPath([], moduleDict={})
# Now, when we import gireactor it shouldn't use pygtkcompat, and should
# instead prevent gobject from being importable:
from twisted.internet import gireactor
for name in gireactor._PYGTK_MODULES:
if sys.modules[name] is not None:
sys.stdout.write("failure, sys.modules[%r] is %r, instead of None" %
(name, sys.modules["gobject"]))
sys.exit(0)
try:
import gobject
except ImportError:
sys.stdout.write("success")
else:
sys.stdout.write("failure: %s was imported" % (gobject.__path__,))
| 32.347826 | 77 | 0.686828 | 95 | 744 | 5.315789 | 0.589474 | 0.059406 | 0.083168 | 0.083168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001706 | 0.212366 | 744 | 22 | 78 | 33.818182 | 0.860068 | 0.24328 | 0 | 0 | 0 | 0 | 0.158582 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
80bffaf5ce6de8b8154d194bc0ff65bdab497cc8 | 4,140 | py | Python | octostore/mongo_helper.py | luzhang06/octostore | c3a6ac42a86ab6943eaa7e11dfbcae50c0a68bfa | [
"MIT"
] | 1 | 2020-08-17T20:54:39.000Z | 2020-08-17T20:54:39.000Z | octostore/mongo_helper.py | luzhang06/octostore | c3a6ac42a86ab6943eaa7e11dfbcae50c0a68bfa | [
"MIT"
] | null | null | null | octostore/mongo_helper.py | luzhang06/octostore | c3a6ac42a86ab6943eaa7e11dfbcae50c0a68bfa | [
"MIT"
] | null | null | null | from pymongo import MongoClient
import os
import sys
from pathlib import Path
from environs import Env
sys.path.append("..")
sys.path.append(str(Path(__file__).parent.resolve()))
class MongoHelpers:
_client = None
_db = None
_collection = None
def __init__(self, connection_uri=None, db_name=None):
env = Env()
env.read_env()
if db_name is None:
db_name = os.getenv("MONGO_DB")
if connection_uri is None:
host = os.getenv("MONGO_HOST")
port = os.getenv("MONGO_PORT")
username = os.getenv("MONGO_USERNAME")
password = os.getenv("MONGO_PASSWORD")
args = "ssl=true&retrywrites=false&ssl_cert_reqs=CERT_NONE"
connection_uri = (
f"mongodb://{username}:{password}@{host}:{port}/{db_name}?{args}"
)
        # Store on _client (not client) so the lookup below uses this
        # instance instead of the class-level default of None.
        self._client = MongoClient(connection_uri)
self.db = self._client[db_name]
# def create_experiment(self, name, artifact_location=None, tags=[]):
# # all_experiments = self.get_all_experiments()
# # Get all existing experiments and find the one with largest numerical ID.
# # len(list_all(..)) would not work when experiments are deleted.
# # experiments_ids = [
# # int(e.experiment_id)
# # for e in self.list_experiments(ViewType.ALL)
# # if e.experiment_id.isdigit()
# # ]
# experiment_id = self._get_highest_experiment_id() + 1
# return self._create_experiment_with_id(
# name, str(experiment_id), artifact_location, tags
# )
# def _create_experiment_with_id(
# self,
# experiment_name,
# experiment_id,
# artifact_location,
# lifecycle_stage: LifecycleStage = LifecycleStage.ACTIVE,
# tags=[],
# ) -> int:
# e = Experiment(
# experiment_id,
# experiment_name,
# experiment_id,
# artifact_location,
# lifecycle_stage,
# tags,
# )
# def _get_highest_experiment_id(self):
#         if len(list(self._client.experiments.find())) != 0:
# last_experiment = list(
# self.db.experiments.find({}).sort("experiment_id", -1).limit(1)
# )
# return last_experiment[0]["experiment_id"]
# else:
# return 0
# def list_experiments(self, view_type=ViewType.ACTIVE_ONLY):
# rsl = []
# if view_type == ViewType.ACTIVE_ONLY or view_type == ViewType.ALL:
# rsl += self._get_active_experiments(full_path=False)
# if view_type == ViewType.DELETED_ONLY or view_type == ViewType.ALL:
# # rsl += self._get_deleted_experiments(full_path=False)
# pass
# experiments = []
# for exp_id in rsl:
# try:
# # trap and warn known issues, will raise unexpected exceptions to caller
# experiment = self._get_experiment(exp_id, view_type)
# if experiment:
# experiments.append(experiment)
# except MissingConfigException as rnfe:
# # Trap malformed experiments and log warnings.
# logging.warning(
# "Malformed experiment '%s'. Detailed error %s",
# str(exp_id),
# str(rnfe),
# exc_info=True,
# )
# return experiments
# def _get_active_experiments(self, full_path=False):
# active_experiments_query = {
# "type": "experiment",
# "experiment_state": LifecycleStage.ACTIVE,
# }
# all_experiments = self.db.experiments.find(active_experiments_query)
# # exp_list = list_subdirs(self.root_directory, full_path)
# # return [exp for exp in exp_list if not exp.endswith(FileStore.TRASH_FOLDER_NAME)]
# def _get_deleted_experiments(self, full_path=False):
# # return list_subdirs(self.trash_folder, full_path)
# raise NotImplementedError("get_deleted_experiments")
| 36.315789 | 93 | 0.58285 | 443 | 4,140 | 5.1693 | 0.297968 | 0.057642 | 0.028384 | 0.036681 | 0.124891 | 0.079476 | 0.079476 | 0.079476 | 0.030568 | 0 | 0 | 0.00211 | 0.313043 | 4,140 | 113 | 94 | 36.637168 | 0.803094 | 0.662319 | 0 | 0 | 0 | 0 | 0.129573 | 0.085366 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0.074074 | 0.185185 | 0 | 0.37037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
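The connection-URI assembly in `MongoHelpers.__init__` above can be sketched as a standalone helper. The function name `build_mongo_uri` and its defaults are illustrative, not part of octostore:

```python
def build_mongo_uri(host, port, username, password, db_name,
                    args="ssl=true&retrywrites=false&ssl_cert_reqs=CERT_NONE"):
    """Assemble a MongoDB connection URI the same way MongoHelpers does."""
    return f"mongodb://{username}:{password}@{host}:{port}/{db_name}?{args}"

# Example URI for a local instance (credentials are placeholders)
uri = build_mongo_uri("localhost", 27017, "app", "secret", "octostore")
```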
80c5febb11f85056db71fbcf343fcfa6d57b6f52 | 4,716 | py | Python | pypower/runpf_fast.py | felixkoeth/PYPOWER | 51476da14dead2ca23417bfa1210748800212ffe | [
"BSD-3-Clause"
] | null | null | null | pypower/runpf_fast.py | felixkoeth/PYPOWER | 51476da14dead2ca23417bfa1210748800212ffe | [
"BSD-3-Clause"
] | null | null | null | pypower/runpf_fast.py | felixkoeth/PYPOWER | 51476da14dead2ca23417bfa1210748800212ffe | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 1996-2015 PSERC. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
"""Runs a power flow.
"""
from sys import stdout, stderr
from os.path import dirname, join
from time import time
from numpy import r_, c_, ix_, zeros, pi, ones, exp, argmax, angle
from numpy import flatnonzero as find
#from pypower.bustypes import bustypes
#from pypower.ext2int import ext2int
#from pypower.loadcase import loadcase
#from pypower.ppoption import ppoption
#from pypower.ppver import ppver
#from pypower.makeBdc import makeBdc
from pypower.makeSbus import makeSbus
#from pypower.dcpf import dcpf
#from pypower.makeYbus import makeYbus
from pypower.newtonpf_fast import newtonpf_fast
#from pypower.fdpf import fdpf
#from pypower.gausspf import gausspf
#from pypower.makeB import makeB
#from pypower.pfsoln import pfsoln
#from pypower.printpf import printpf
#from pypower.savecase import savecase
#from pypower.int2ext import int2ext
from pypower.idx_bus import PD, QD, VM, VA, GS, BUS_TYPE, PQ, REF
from pypower.idx_brch import PF, PT, QF, QT
from pypower.idx_gen import PG, QG, VG, QMAX, QMIN, GEN_BUS, GEN_STATUS
def runpf_fast(Ybus, Yf, Yt, ref, pv, pq, on, ppc, ppopt=None, fname='', solvedcase=''):
"""Runs a power flow.
Runs a power flow [full AC Newton's method by default] and optionally
returns the solved values in the data matrices, a flag which is C{True} if
the algorithm was successful in finding a solution, and the elapsed
time in seconds. All input arguments are optional. If C{casename} is
provided it specifies the name of the input data file or dict
containing the power flow data. The default value is 'case9'.
If the ppopt is provided it overrides the default PYPOWER options
vector and can be used to specify the solution algorithm and output
options among other things. If the 3rd argument is given the pretty
printed output will be appended to the file whose name is given in
C{fname}. If C{solvedcase} is specified the solved case will be written
to a case file in PYPOWER format with the specified name. If C{solvedcase}
ends with '.mat' it saves the case as a MAT-file otherwise it saves it
as a Python-file.
If the C{ENFORCE_Q_LIMS} options is set to C{True} [default is false] then
if any generator reactive power limit is violated after running the AC
power flow, the corresponding bus is converted to a PQ bus, with Qg at
the limit, and the case is re-run. The voltage magnitude at the bus
will deviate from the specified value in order to satisfy the reactive
power limit. If the reference bus is converted to PQ, the first
remaining PV bus will be used as the slack bus for the next iteration.
This may result in the real power output at this generator being
slightly off from the specified values.
Enforcing of generator Q limits inspired by contributions from Mu Lin,
Lincoln University, New Zealand (1/14/05).
@author: Ray Zimmerman (PSERC Cornell)
"""
## default arguments
## options
## read data
#ppc = loadcase(casedata)
## convert to internal indexing
    ppc["branch"][:, [0, 1]] -= 1
    ppc["bus"][:, 0] -= 1
    ppc["gen"][:, 0] -= 1
baseMVA, bus, gen, branch = \
ppc["baseMVA"], ppc["bus"], ppc["gen"], ppc["branch"]
## get bus index lists of each type of bus
#ref, pv, pq = bustypes(bus, gen)
#
# generator info
#print(gen[:, GEN_STATUS])
#on = find(gen[:, GEN_STATUS] > 0) ## which generators are on?
gbus = gen[on, GEN_BUS].astype(int) ## what buses are they at?
##----- run the power flow -----
t0 = time()
    V0 = bus[:, VM] * exp(1j * (pi / 180) * bus[:, VA])  ## degrees to radians
V0[gbus] = gen[on, VG] / abs(V0[gbus]) * V0[gbus]
## build admittance matrices
#Ybus, Yf, Yt = makeYbus(baseMVA, bus, branch)
## compute complex bus power injections [generation - load]
Sbus = makeSbus(baseMVA, bus, gen)
## run the power flow
V, success, i = newtonpf_fast(Ybus, Sbus, V0, ref, pv, pq, ppopt)
## update data matrices with solution
#bus, gen, branch = pfsoln(baseMVA, bus, gen, branch, Ybus, Yf, Yt, V, ref, pv, pq)
bus[:, VM] = abs(V)
bus[:, VA] = angle(V) * 180 / pi
#UNTIL HERE
ppc["et"] = time() - t0
ppc["success"] = success
##----- output results -----
## convert back to original bus numbering & print results
ppc["bus"], ppc["gen"], ppc["branch"] = bus, gen, branch
    ppc["branch"][:, [0, 1]] += 1
    ppc["bus"][:, 0] += 1
    ppc["gen"][:, 0] += 1
    return ppc, success, i
if __name__ == '__main__':
    ## runpf_fast needs prebuilt Ybus/Yf/Yt matrices and bus index lists,
    ## so it cannot be run standalone without a prepared case.
    raise SystemExit("runpf_fast requires prepared inputs; see the docstring")
| 33.211268 | 87 | 0.683842 | 732 | 4,716 | 4.370219 | 0.371585 | 0.068771 | 0.008753 | 0.013129 | 0.030635 | 0.030635 | 0.017505 | 0.017505 | 0.017505 | 0.017505 | 0 | 0.017321 | 0.216497 | 4,716 | 141 | 88 | 33.446809 | 0.848444 | 0.645462 | 0 | 0 | 0 | 0 | 0.048096 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.30303 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
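The initial voltage vector in `runpf_fast` is built from per-unit magnitude and an angle given in degrees. A minimal stdlib sketch of that polar-to-rectangular conversion (no NumPy; the function name is illustrative):

```python
import cmath
import math

def bus_voltage(vm, va_deg):
    """Complex bus voltage from magnitude (p.u.) and angle (degrees),
    mirroring V0 = VM * exp(1j * pi/180 * VA) in runpf_fast."""
    return vm * cmath.exp(1j * math.radians(va_deg))

# A 1.05 p.u. bus at 30 degrees
v = bus_voltage(1.05, 30.0)
```

The solver later recovers the same quantities with `abs(V)` and `angle(V) * 180 / pi`, as the update step at the end of `runpf_fast` shows.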
80c7f4d876bcca6792829d4bd5fbc77ce4c7d34b | 3,195 | py | Python | encryptor/encryptor.py | crafter-hub/Kreusada-Cogs | 9b7bf873484c7bfeb9707b50f386de82c355b571 | [
"MIT"
] | 21 | 2021-03-11T06:52:41.000Z | 2022-02-04T16:27:47.000Z | encryptor/encryptor.py | crafter-hub/Kreusada-Cogs | 9b7bf873484c7bfeb9707b50f386de82c355b571 | [
"MIT"
] | 77 | 2021-03-06T13:31:50.000Z | 2022-03-25T10:37:15.000Z | encryptor/encryptor.py | crafter-hub/Kreusada-Cogs | 9b7bf873484c7bfeb9707b50f386de82c355b571 | [
"MIT"
] | 33 | 2021-03-05T20:59:07.000Z | 2022-03-06T03:55:47.000Z | import contextlib
import random
import string
from password_strength import PasswordStats
from redbot.core import commands
from redbot.core.utils import chat_formatting as cf
from .word_list import *
GREEN_CIRCLE = "\N{LARGE GREEN CIRCLE}"
YELLOW_CIRCLE = "\N{LARGE YELLOW CIRCLE}"
ORANGE_CIRCLE = "\N{LARGE ORANGE CIRCLE}"
RED_CIRCLE = "\N{LARGE RED CIRCLE}"
class Encryptor(commands.Cog):
"""
    Create passwords, and validate their strength.
"""
__author__ = ["Kreusada"]
__version__ = "1.1.0"
def __init__(self, bot):
self.bot = bot
def format_help_for_context(self, ctx: commands.Context) -> str:
context = super().format_help_for_context(ctx)
authors = ", ".join(self.__author__)
return f"{context}\n\nAuthor: {authors}\nVersion: {self.__version__}"
async def red_delete_data_for_user(self, **kwargs):
"""Nothing to delete"""
return
def cog_unload(self):
with contextlib.suppress(Exception):
self.bot.remove_dev_env_value("encryptor")
async def initialize(self) -> None:
if 719988449867989142 in self.bot.owner_ids:
with contextlib.suppress(Exception):
self.bot.add_dev_env_value("encryptor", lambda x: self)
@commands.group()
async def password(self, ctx):
"""
        Create passwords, and validate their strength.
"""
pass
@password.group(name="generate")
async def password_generate(self, ctx):
"""Generate passwords."""
pass
@password_generate.command(name="complex")
async def password_generate_complex(self, ctx):
"""Generate a complex password."""
await ctx.send(
"".join(
                random.choice(string.printable[:94]) for i in range(random.randint(20, 35))  # first 94 printable chars: digits, letters, punctuation
)
)
@password_generate.command(name="strong")
    async def password_generate_strong(self, ctx, delimeter: str = "-"):
        """
        Generate a strong password.
        **Arguments**
        * ``<delimeter>``: The character used to separate each random word. Defaults to "-".
"""
d = delimeter
rc = random.choice
rr = random.randint
await ctx.send(
d.join(rc(RANDOM_WORDS).capitalize() for i in range(3)) + f"{d}{rr(1,1000)}"
)
@password.command(name="strength")
async def password_strength(self, ctx, password: str):
"""Validate a passwords strength."""
conv = PasswordStats(password)
converter = conv.strength()
        if converter < 0.250:
            emoji = RED_CIRCLE
            text = "This is a **weak** password."
        elif converter < 0.500:
            emoji = ORANGE_CIRCLE
            text = "This is an **okay** password."
        elif converter < 0.750:
            emoji = YELLOW_CIRCLE
            text = "This is a **good** password!"
        else:
            emoji = GREEN_CIRCLE
            text = "This is an **excellent** password!"
await ctx.maybe_send_embed(
f"**Strength rating: {round(converter * 100)}%** {emoji}\n{cf.quote(text)}"
)
| 31.019417 | 95 | 0.607199 | 375 | 3,195 | 5.010667 | 0.365333 | 0.029803 | 0.042576 | 0.034061 | 0.119212 | 0.081958 | 0.041511 | 0 | 0 | 0 | 0 | 0.024148 | 0.274178 | 3,195 | 102 | 96 | 31.323529 | 0.786115 | 0.01471 | 0 | 0.086957 | 0 | 0.014493 | 0.149496 | 0.009006 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0.246377 | 0.101449 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
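The threshold chain in `password_strength` buckets a 0–1 score from `PasswordStats.strength()` into a label. Isolated as a pure function (name hypothetical):

```python
def strength_label(score):
    """Bucket a password-strength score in [0, 1], as the cog does."""
    if score < 0.250:
        return "weak"
    elif score < 0.500:
        return "okay"
    elif score < 0.750:
        return "good"
    return "excellent"
```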
80cb754fd4097ddc5ceb99da12f2ad2947dbe655 | 345 | py | Python | project_3/code/characters.py | Psemp/oc_project_11 | 26ee2e607b2ccc768e19d264b5e1da010820fbc5 | [
"MIT"
] | null | null | null | project_3/code/characters.py | Psemp/oc_project_11 | 26ee2e607b2ccc768e19d264b5e1da010820fbc5 | [
"MIT"
] | null | null | null | project_3/code/characters.py | Psemp/oc_project_11 | 26ee2e607b2ccc768e19d264b5e1da010820fbc5 | [
"MIT"
] | null | null | null | from get_char_pos import get_char_position
class Character:
def __init__(self):
self.x = 0
self.y = 0
self.vel = 32
self.alive = True
self.tag = "str"
mac = Character()
mac.tag = "mac"
guard = Character()
guard.tag = "guard"
macpos = get_char_position(mac)
guardpos = get_char_position(guard)
| 15 | 42 | 0.626087 | 48 | 345 | 4.25 | 0.5 | 0.137255 | 0.220588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015873 | 0.269565 | 345 | 22 | 43 | 15.681818 | 0.793651 | 0 | 0 | 0 | 0 | 0 | 0.031884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
80d9b6be298e2345e53f3894aadc55c0241856e5 | 667 | py | Python | 2019/try/simple.py | rishidevc/stkovrflw | c33dffbce887f32f609a10dd717d594390ceac8b | [
"MIT"
] | null | null | null | 2019/try/simple.py | rishidevc/stkovrflw | c33dffbce887f32f609a10dd717d594390ceac8b | [
"MIT"
] | 5 | 2020-05-04T03:11:14.000Z | 2021-06-10T20:20:38.000Z | 2019/try/simple.py | rishidevc/stkovrflw | c33dffbce887f32f609a10dd717d594390ceac8b | [
"MIT"
] | 1 | 2019-07-31T18:28:34.000Z | 2019-07-31T18:28:34.000Z | def get_assign(user_input):
key, value = user_input.split("gets")
key = key.strip()
value = int(value.strip())
my_dict[key] = value
print(my_dict)
def add_values(num1, num2):
return num1 + num2
print("Welcome to the Adder REPL.")
my_dict = dict()
while True:
user_input = input("???")
if 'gets' in user_input:
get_assign(user_input)
    if 'input' in user_input:
        # parse "key input": prompt for a value and store it under key
        key = user_input.split("input")[0].strip()
        print("Enter a value for " + key + ":")
        my_dict[key] = int(input())
    if 'adds' in user_input:
        a, b = user_input.split("adds")
        print(add_values(int(a.strip()), int(b.strip())))
    if 'print' in user_input:
        print(my_dict)
if 'quit' in user_input:
print("GoodBye")
exit() | 17.552632 | 41 | 0.590705 | 94 | 667 | 4 | 0.382979 | 0.239362 | 0.146277 | 0.12766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008333 | 0.28036 | 667 | 38 | 42 | 17.552632 | 0.775 | 0 | 0 | 0 | 0 | 0 | 0.127246 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0.041667 | 0.125 | 0.25 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
80e0fed34003c8412ae4f44e18a85afe86d3f7f7 | 375 | py | Python | example/011_matching_zero_or_more_repetitions.py | mafda/regex_101 | 085a9ee48829243d87e4bd74bb1baf07abc6481e | [
"MIT"
] | null | null | null | example/011_matching_zero_or_more_repetitions.py | mafda/regex_101 | 085a9ee48829243d87e4bd74bb1baf07abc6481e | [
"MIT"
] | null | null | null | example/011_matching_zero_or_more_repetitions.py | mafda/regex_101 | 085a9ee48829243d87e4bd74bb1baf07abc6481e | [
"MIT"
] | null | null | null | """
Task
You have a test string S.
Your task is to write a regex that will match S using the following conditions:
S should begin with 2 or more digits.
After that, S should have 0 or more lowercase letters.
S should end with 0 or more uppercase letters
"""
import re
Regex_Pattern = r'^[\d]{2,}[a-z]*[A-Z]*$'
print(str(bool(re.search(Regex_Pattern, input()))).lower())
| 22.058824 | 79 | 0.712 | 69 | 375 | 3.84058 | 0.637681 | 0.079245 | 0.05283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012821 | 0.168 | 375 | 16 | 80 | 23.4375 | 0.836538 | 0.669333 | 0 | 0 | 0 | 0 | 0.189655 | 0.189655 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
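The pattern `^[\d]{2,}[a-z]*[A-Z]*$` can be checked against a few strings to confirm the three conditions (two or more leading digits, then optional lowercase, then optional uppercase). The test strings are illustrative:

```python
import re

pattern = r'^[\d]{2,}[a-z]*[A-Z]*$'

def matches(s):
    # Anchored pattern, so search() and fullmatch() agree here
    return bool(re.search(pattern, s))

results = [matches("12abcDEF"), matches("1abc"), matches("123"), matches("12ABc")]
```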