# ==== File: FunTOTP/interface.py (repo: Z33DD/FunTOTP, license: Unlicense) ====
from getpass import getpass
from colorama import init, Fore, Back, Style

init()  # initialize colorama so the ANSI color codes below also work on Windows
yes = ['Y', 'y', 'YES', 'yes', 'Yes']
class interface(object):
"""
Terminal CLI
"""
def log(self, arg, get=False):
if not get:
print("[*]: {} ".format(arg))
else:
return "[*]: {} ".format(arg)
    def error(self, arg, get=False):
        """Print an error message in red and exit the program.

        Parameters
        ----------
        arg : str
            String to print
        get : bool
            If True, return the formatted string instead of printing

        Returns
        -------
        str
            If get is True, returns the formatted string
        """
if not get:
print(Fore.RED + "[ERROR]: {}".format(arg))
print(Style.RESET_ALL)
exit(-1)
else:
return "[ERROR]: {}".format(arg)
def warning(self, arg, get=False):
if not get:
print(Fore.YELLOW + "[!]: {}".format(arg), end='')
print(Style.RESET_ALL)
else:
return "[!]: {}".format(arg)
def sure(self):
user = input(self.log("Are you sure? (y/N) ", get=True))
if user in yes:
return 0
else:
exit(0)
def newpasswd(self):
condition = True
while condition is True:
user_psswd = getpass("[*]: Password:")
user_psswd_repeat = getpass("[*]: Repeat password:")
if user_psswd == user_psswd_repeat:
condition = False
else:
self.warning("Passwords don't match! Try again")
return user_psswd_repeat
def passwd(self):
return getpass()
def info(self, arg):
print(Fore.BLACK + Back.WHITE + "[i]: {}".format(arg) + ' ', end='')
print(Style.RESET_ALL)
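A quick sketch of the message prefixes the interface class above produces, reimplemented standalone (the colorama color codes are dropped here, since they are only an output detail) so it runs anywhere:

```python
def fmt(level, msg):
    # Mirrors the prefixes used by interface.log / interface.error / interface.warning
    prefixes = {"log": "[*]", "error": "[ERROR]", "warning": "[!]"}
    return "{}: {}".format(prefixes[level], msg)

print(fmt("log", "starting up"))    # [*]: starting up
print(fmt("error", "bad input"))    # [ERROR]: bad input
```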
# ==== File: app/request/migrations/0001_initial.py (repo: contestcrew/2019SeoulContest-Backend, license: MIT) ====
# Generated by Django 2.2.5 on 2019-09-24 09:11
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Category',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=30, verbose_name='이름')),
('score', models.PositiveIntegerField(default=0, verbose_name='점수')),
('image', models.ImageField(blank=True, null=True, upload_to='category')),
],
),
migrations.CreateModel(
name='Request',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=100, verbose_name='제목')),
('content', models.TextField(verbose_name='내용')),
('status', models.CharField(blank=True, choices=[('start', '도움요청중'), ('progress', '진행중'), ('complete', '완료')], default='start', max_length=20, verbose_name='상태')),
('score', models.PositiveIntegerField(default=0, verbose_name='점수')),
('main_address', models.CharField(blank=True, max_length=30, null=True, verbose_name='메인 주소')),
('detail_address', models.CharField(blank=True, max_length=50, null=True, verbose_name='상세 주소')),
('latitude', models.FloatField(blank=True, null=True, verbose_name='위도')),
('longitude', models.FloatField(blank=True, null=True, verbose_name='경도')),
('occurred_at', models.DateField(blank=True, null=True, verbose_name='발생 시각')),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='업로드 시각')),
('updated_at', models.DateTimeField(auto_now=True, verbose_name='수정 시각')),
],
),
migrations.CreateModel(
name='RequestImage',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('image', models.ImageField(upload_to='request/%Y/%m/%d', verbose_name='이미지')),
('request', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='images', to='request.Request', verbose_name='의뢰')),
],
),
]
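The `status` column above stores its workflow state as Django `(value, label)` choices pairs. As a minimal standalone sketch (no Django needed), such pairs behave like a value-to-label mapping:

```python
# (value, label) pairs copied from the 'status' field in the migration above
STATUS_CHOICES = [('start', '도움요청중'), ('progress', '진행중'), ('complete', '완료')]

labels = dict(STATUS_CHOICES)          # value -> human-readable label
print(labels['progress'])              # 진행중
print(labels.get('unknown', 'n/a'))    # n/a
```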
# ==== File: setup.py (repo: eppeters/xontrib-dotenv, license: BSD-2-Clause) ====
#!/usr/bin/env python
"""
xontrib-dotenv
-----
Automatically reads .env file from current working directory
or parentdirectories and push variables to environment.
"""
from setuptools import setup
setup(
name='xontrib-dotenv',
version='0.1',
description='Reads .env files into environment',
long_description=__doc__,
license='BSD',
url='https://github.com/urbaniak/xontrib-dotenv',
author='Krzysztof Urbaniak',
packages=['xontrib'],
package_dir={'xontrib': 'xontrib'},
package_data={'xontrib': ['*.xsh']},
zip_safe=True,
include_package_data=False,
platforms='any',
install_requires=[
'xonsh>=0.4.6',
],
classifiers=[
'Environment :: Console',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: System :: Shells',
'Topic :: System :: System Shells',
]
)
# ==== File: pytest/np.py (repo: i0Ek3/disintegration, license: MIT) ====
import numpy as np
# Renamed from 'list' to avoid shadowing the builtin of the same name
arrays = [np.linspace([1, 2, 3], 3),   # array-like start requires numpy >= 1.16
          np.array([1, 2, 3]),
          np.arange(3),
          np.arange(8).reshape(2, 4),
          np.zeros((2, 3)),
          np.zeros((2, 3)).T,
          np.ones((3, 1)),
          np.eye(3),
          np.full((3, 3), 1),
          np.random.rand(3),
          np.random.rand(3, 3),
          np.random.uniform(5, 15, 3),
          np.random.randn(3),
          np.random.normal(3, 2.5, 3)]
print(arrays)
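A few quick checks of the constructors used in the list above (shapes and values are deterministic for the non-random entries):

```python
import numpy as np

# Deterministic constructors from the list above
assert np.arange(8).reshape(2, 4).shape == (2, 4)
assert np.zeros((2, 3)).T.shape == (3, 2)     # transpose swaps the axes
assert np.eye(3).trace() == 3.0               # identity has ones on the diagonal
assert np.full((3, 3), 1).sum() == 9
print("shape checks passed")
```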
# ==== File: OpenPNM/Network/models/pore_topology.py (repo: Eng-RSMY/OpenPNM, license: MIT) ====
r"""
===============================================================================
pore_topology -- functions for monitoring and adjusting topology
===============================================================================
"""
import scipy as _sp
def get_subscripts(network, shape, **kwargs):
r"""
Return the 3D subscripts (i,j,k) into the cubic network
Parameters
----------
shape : list
The (i,j,k) shape of the network in number of pores in each direction
"""
if network.num_pores('internal') != _sp.prod(shape):
print('Supplied shape does not match Network size, cannot proceed')
else:
template = _sp.atleast_3d(_sp.empty(shape))
a = _sp.indices(_sp.shape(template))
i = a[0].flatten()
j = a[1].flatten()
k = a[2].flatten()
ind = _sp.vstack((i, j, k)).T
vals = _sp.ones((network.Np, 3))*_sp.nan
vals[network.pores('internal')] = ind
return vals
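The indexing trick in get_subscripts can be seen in isolation: np.indices yields one coordinate array per axis, and flattening then stacking them gives one (i, j, k) row per pore. A standalone sketch on a tiny 2x2x1 shape:

```python
import numpy as np

shape = (2, 2, 1)
a = np.indices(shape)                  # one coordinate array per axis
i, j, k = (ax.flatten() for ax in a)
ind = np.vstack((i, j, k)).T           # one (i, j, k) row per cell
print(ind.tolist())  # [[0, 0, 0], [0, 1, 0], [1, 0, 0], [1, 1, 0]]
```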
def adjust_spacing(network, new_spacing, **kwargs):
r"""
Adjust the the pore-to-pore lattice spacing on a cubic network
Parameters
----------
new_spacing : float
The new lattice spacing to apply
Notes
-----
At present this method only applies a uniform spacing in all directions.
    This is a limitation of OpenPNM Cubic Networks in general, and not of the
method.
"""
coords = network['pore.coords']
try:
spacing = network._spacing
coords = coords/spacing*new_spacing
network._spacing = new_spacing
    except AttributeError:
        # network has no _spacing attribute; return coords unchanged
        pass
return coords
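The rescaling in adjust_spacing is a simple ratio: dividing by the old spacing normalizes to lattice units, and multiplying by the new spacing rescales. A standalone sketch with hypothetical coordinates:

```python
import numpy as np

coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 2.0, 3.0]])   # hypothetical pore coordinates
old_spacing, new_spacing = 1.0, 2.5
rescaled = coords / old_spacing * new_spacing
print(rescaled.tolist())  # [[0.0, 0.0, 0.0], [2.5, 5.0, 7.5]]
```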
def reduce_coordination(network, z, mode='random', **kwargs):
r"""
Reduce the coordination number to the specified z value
Parameters
----------
z : int
        The coordination number, i.e. the number of throats connected to a pore
mode : string, optional
Controls the logic used to trim connections. Options are:
- 'random': (default) Throats will be randomly removed to achieve a
coordination of z
- 'max': All pores will be adjusted to have a maximum coordination of z
(not implemented yet)
Returns
-------
A label array indicating which throats should be trimmed to achieve desired
coordination.
Notes
-----
Pores with only 1 throat will be ignored in all calculations since these
are generally boundary pores.
"""
T_trim = ~network['throat.all']
T_nums = network.num_neighbors(network.pores())
# Find protected throats
T_keep = network.find_neighbor_throats(pores=(T_nums == 1))
if mode == 'random':
z_ave = _sp.average(T_nums[T_nums > 1])
f_trim = (z_ave - z)/z_ave
T_trim = _sp.rand(network.Nt) < f_trim
T_trim = T_trim*(~network.tomask(throats=T_keep))
if mode == 'max':
pass
return T_trim
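The 'random' mode above trims a fraction of throats chosen so that the average coordination drops from z_ave to the target z. That fraction can be checked standalone with hypothetical coordination numbers (pores with a single throat are excluded, as in the function):

```python
import numpy as np

T_nums = np.array([1, 4, 6, 5, 3, 6])    # hypothetical per-pore coordination numbers
z = 3                                    # target coordination
z_ave = np.average(T_nums[T_nums > 1])   # ignore single-throat (boundary) pores
f_trim = (z_ave - z) / z_ave             # fraction of throats to remove
print(round(float(z_ave), 3), round(float(f_trim), 3))  # 4.8 0.375
```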
# ==== File: setup.py (repo: themis-project/themis-finals-checker-app-py, license: MIT) ====
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages
import io
import os
about = {}
about_filename = os.path.join(
os.path.dirname(os.path.realpath(__file__)),
'themis', 'finals', 'checker', 'app', '__about__.py')
with io.open(about_filename, 'rb') as fp:
exec(fp.read(), about)
setup(
name='themis.finals.checker.app',
version=about['__version__'],
description='Themis Finals checker application',
author='Alexander Pyatkin',
author_email='aspyatkin@gmail.com',
url='https://github.com/themis-project/themis-finals-checker-app-py',
license='MIT',
packages=find_packages('.'),
install_requires=[
'setuptools>=0.8',
'Flask>=0.11.1,<0.12',
'redis>=2.10.5,<2.11',
'hiredis>=0.2.0,<0.3',
'rq>=0.7.1,<0.8.0',
'requests>=2.11.0',
'python-dateutil>=2.5.3,<2.6',
'themis.finals.checker.result==1.1.0',
'raven>=5.26.0,<5.27.0',
'PyJWT>=1.5.0,<1.6.0',
'cryptography>=1.8.1,<1.9.0',
'PyYAML>=3.11'
],
namespace_packages=[
'themis',
'themis.finals',
'themis.finals.checker'
],
entry_points=dict(
console_scripts=[
'themis-finals-checker-app-worker = themis.finals.checker.app:start_worker'
]
)
)
# ==== File: guts/api/contrib/type_actions.py (repo: smallwormer/stable-liberty-guts, license: Apache-2.0) ====
# Copyright (c) 2015 Aptira Pty Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log as logging
from guts.api import extensions
from guts.api.openstack import wsgi
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
authorize = extensions.extension_authorizer('types', '')
class TypeActionsController(wsgi.Controller):
def __init__(self):
super(TypeActionsController, self).__init__()
class Type_actions(extensions.ExtensionDescriptor):
"""Enables source hypervisor type actions."""
name = "TypeActions"
alias = "os-type-actions"
namespace = ""
updated = ""
def get_controller_extensions(self):
controller = TypeActionsController()
extension = extensions.ControllerExtension(self, 'types', controller)
return [extension]
# ==== File: posthog/migrations/0087_fix_annotation_created_at.py (repo: avoajaugochukwu/posthog, license: MIT) ====
# Generated by Django 3.0.7 on 2020-10-14 07:46
import django.utils.timezone
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("posthog", "0086_team_session_recording_opt_in"),
]
operations = [
migrations.AlterField(
model_name="annotation",
name="created_at",
field=models.DateTimeField(default=django.utils.timezone.now, null=True),
),
]
# ==== File: vdisk.py (repo: cookpan001/vdisk, license: BSD-3-Clause) ====
#!/usr/bin/env python
# encoding: utf-8
# author: cookpan001
import sys
import logging
import time
import mimetypes
import urllib
import urllib2
"""
oauth2 client
"""
class OAuth2(object):
ACCESS_TOKEN_URL = "https://auth.sina.com.cn/oauth2/access_token"
AUTHORIZE_URL = "https://auth.sina.com.cn/oauth2/authorize"
def __init__(self,app_key, app_secret, call_back_url):
self.version = 1.0
self.app_key = app_key
self.app_secret = app_secret
self.call_back_url = call_back_url
#display = default|mobile|popup
def authorize(self,response_type = "code",display = "default",state = ""):
data = {"client_id":self.app_key,
"redirect_uri":self.call_back_url,
"response_type":response_type,
"display":display}
if len(state) > 0:
data["state"] = state
return OAuth2.AUTHORIZE_URL + "?" + urllib.urlencode(data)
#grant_type = authorization_code|refresh_token
def access_token(self,grant_type = "authorization_code",code = "",refresh_token = ""):
data = {"client_id":self.app_key,
"client_secret":self.app_secret,
"grant_type":grant_type}
if grant_type == "authorization_code":
data["code"] = code
data["redirect_uri"] = self.call_back_url
elif grant_type == "refresh_token":
data["refresh_token"] = refresh_token
try:
request = urllib2.Request(OAuth2.ACCESS_TOKEN_URL)
response = urllib2.urlopen(request,urllib.urlencode(data))
return response.read()
except urllib2.HTTPError,e:
return e.read()
        except urllib2.URLError,e:
            # URLError need not have read(); report the failure reason instead
            return str(e.reason)
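The authorize() method above only appends url-encoded query parameters to a fixed endpoint. A standalone sketch (credentials here are hypothetical; the urlencode import handles both Python 2 and 3, since this module targets Python 2):

```python
try:
    from urllib import urlencode           # Python 2
except ImportError:
    from urllib.parse import urlencode     # Python 3

params = {"client_id": "MY_APP_KEY",       # hypothetical app key
          "redirect_uri": "https://example.com/callback",
          "response_type": "code",
          "display": "default"}
url = "https://auth.sina.com.cn/oauth2/authorize" + "?" + urlencode(params)
print(url.startswith("https://auth.sina.com.cn/oauth2/authorize?"))  # True
print("response_type=code" in url)                                   # True
```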
"""
All the responses will be a Response object
"""
class Response(object):
BLOCK_SIZE = 8192
def __init__(self,response):
self.response = response
"""
    Return an HTTPMessage object holding the response headers
"""
    def headers(self):
        if hasattr(self.response, "info"):
            return self.response.info()
        elif hasattr(self.response, "msg"):
            # msg is an attribute, not a method, on httplib responses
            return self.response.msg
"""
Get the content of response ,optimized for big size resopnse.
data() return a generator. Developer can use this method like this:
for content in Response.data():
print content
"""
def data(self):
while True:
block = self.response.read(Response.BLOCK_SIZE)
if block:
yield block
else:
return
def read(self):
return self.response.read()
def __str__(self):
return self.response.read()
"""
The vdisk(weipan) client.
"""
class Client(object):
log = logging.getLogger('api_client')
API_URL = 'https://api.weipan.cn/2/'
WEIBO_URL = 'https://api.weipan.cn/weibo/'
UPLOAD_HOST = 'upload-vdisk.sina.com.cn'
CONTENT_SAFE_URL = 'https://'+UPLOAD_HOST+'/2/'
version = 1.0
def __init__(self,root = "basic"):
self.timeout = 10
self.python_version_is_bigger_than_2_4 = float(sys.version[:3]) > 2.4
self.root = root
def setRoot(self,root):
self.root = root
def get(self, host, api, queries={}):
try:
if isinstance(api, unicode):
api = api.encode('utf-8')
else:
api = str(api)
url = host.strip('/') + '/' + urllib.quote(api.strip('/'))
queries = self.encode_queries(queries)
request = urllib2.Request('%s?%s' % (url, queries))
# set timeout.
if self.python_version_is_bigger_than_2_4:
response = urllib2.urlopen(request, timeout=self.timeout)
else:
# http://stackoverflow.com/questions/2084782/timeout-for-urllib2-urlopen-in-pre-python-2-6-versions
import socket
socket.setdefaulttimeout(self.timeout)
response = urllib2.urlopen(request)
return Response(response)
except urllib2.HTTPError,e:
return e.read()
        except urllib2.URLError,e:
            # URLError need not have read(); report the failure reason instead
            return str(e.reason)
def post(self, host, api, data=[], files=[]):
try:
if isinstance(api, unicode):
api = api.encode('utf-8')
else:
api = str(api)
url = host.strip('/') + '/' + api.strip('/')
if isinstance(data, dict):
data = data.items()
content_type, body = self.encode_multipart_formdata(data, files)
request = urllib2.Request(url, data=body)
request.add_header('Content-Type', content_type)
request.add_header('Content-Length', str(len(body)))
if self.python_version_is_bigger_than_2_4:
response = urllib2.urlopen(request, timeout=self.timeout)
else:
import socket
socket.setdefaulttimeout(self.timeout)
response = urllib2.urlopen(request)
return Response(response)
except urllib2.HTTPError,e:
return e.read()
        except urllib2.URLError,e:
            # URLError need not have read(); report the failure reason instead
            return str(e.reason)
    # Used by methods other than GET or POST, such as PUT.
def request(self, method,host, api, data, headers = {}, use_safe = True):
import httplib
if isinstance(api, unicode):
api = api.encode('utf-8')
else:
api = str(api)
if isinstance(data, dict):
data = self.encode_queries(data)
try:
if use_safe:
conn = httplib.HTTPSConnection(host)
else:
conn = httplib.HTTPConnection(host)
conn.request(method,api,data,headers)
return Response(conn.getresponse())
        except httplib.HTTPException,e:
            print e
            # HTTPException has no read(); return its message instead
            return str(e)
def get_content_type(self, filename):
return mimetypes.guess_type(filename)[0] or 'application/octet-stream'
def encode_multipart_formdata(self, fields, files):
"""
fields is a sequence of (name, value) elements for regular form fields.
files is a sequence of (name, filename, value) elements for data to be uploaded as files
Return (content_type, body) ready for httplib.HTTP instance
"""
BOUNDARY = '----------%s' % hex(int(time.time() * 1000))
CRLF = '\r\n'
L = []
for (key, value) in fields:
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="%s"' % str(key))
L.append('')
if isinstance(value, unicode):
L.append(value.encode('utf-8'))
else:
L.append(value)
for (key, filename, value) in files:
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (str(key), str(filename)))
L.append('Content-Type: %s' % str(self.get_content_type(filename)))
L.append('Content-Length: %d' % len(value))
L.append('')
L.append(value)
L.append('--' + BOUNDARY + '--')
L.append('')
body = CRLF.join(L)
content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
return content_type, body
def encode_queries(self, queries={}, **kwargs):
queries.update(kwargs)
args = []
for k, v in queries.iteritems():
if isinstance(v, unicode):
qv = v.encode('utf-8')
else:
qv = str(v)
args.append('%s=%s' % (k, urllib.quote(qv)))
return '&'.join(args)
def account_info(self,access_token):
data = self.get(Client.API_URL,
'account/info',
{"access_token":access_token})
return data
def metadata(self,access_token,path):
data = self.get(Client.API_URL,
'metadata/' + self.root + '/' + path,
{"access_token":access_token})
return data
def delta(self,access_token,cursor = ''):
param = {"access_token":access_token}
if len(cursor) > 0:
param['cursor'] = cursor
data = self.get(Client.API_URL,
'delta/' + self.root,
param)
return data
def files(self,access_token,path,rev = ''):
param = {"access_token":access_token}
if len(rev) > 0:
param['rev'] = rev
data = self.get(Client.API_URL,
'files/' + self.root + "/" + path,
param)
return data
def revisions(self,access_token,path):
data = self.get(Client.API_URL,
'revisions/' + self.root + "/" + path,
{"access_token":access_token})
return data
#files = {"filename":filename,"content":"file content"}
def files_post(self,access_token,path,files,overwrite = "true",sha1 = "",size = "", parent_rev = ""):
param = {
"access_token":access_token,
"overwrite":overwrite
}
if len(sha1) > 0:
param["sha1"] = sha1
if len(size) > 0:
param["size"] = size
if len(parent_rev) > 0:
param["parent_rev"] = parent_rev
queries = self.encode_queries(param)
data = self.post(Client.CONTENT_SAFE_URL,
'files/'+self.root+"/"+path+"?"+queries,
[],
[("file",files["filename"],files["content"])])
return data
"""
content should be a file object or file raw content, such as: open("./filename","rb"), "rb" is prefered.
"""
def files_put(self,access_token,path,content,overwrite = "true",sha1 = "",size = "", parent_rev = ""):
param = {
"access_token":access_token,
"overwrite":overwrite
}
if len(sha1) > 0:
param["sha1"] = sha1
if len(size) > 0:
param["size"] = size
if len(parent_rev) > 0:
param["parent_rev"] = parent_rev
data = self.request(
method="PUT",
host=Client.UPLOAD_HOST,
api='/2/files_put/'+self.root+"/"+path+"?"+self.encode_queries(param),
data=content)
return data
    # public share
def shares(self,access_token,path,cancel = "false"):
data = self.post(Client.API_URL,
'shares/'+self.root+"/"+path,
{"access_token":access_token,
"cancel":cancel
})
return data
def restore(self,access_token,path,rev = ""):
param = {"access_token":access_token,
"path":path
}
if len(rev) > 0:
param['rev'] = rev
data = self.post(Client.API_URL,
'restore/'+self.root+"/"+path,
{"access_token":access_token})
return data
def search(self,access_token,path,query,file_limit = 1000,include_deleted = "false"):
data = self.get(Client.API_URL,
'search/'+self.root+"/"+path,
{"access_token":access_token,
"path":path,
"query":query,
"file_limit":file_limit,
"include_deleted":include_deleted
})
return data
def copy_ref(self,access_token,path):
data = self.post(Client.API_URL,
'copy_ref/'+self.root+"/"+path,
{"access_token":access_token,
"path":path})
return data
def media(self,access_token,path):
data = self.get(Client.API_URL,
'media/'+self.root+"/"+path,
{"access_token":access_token,
"path":path})
return data
    # Thumbnail sizes: s=60x60, m=100x100, l=640x480, xl=1024x768
def thumbnails(self,access_token,path,size):
data = self.get(Client.API_URL,
'thumbnails/'+self.root+"/"+path,
{"access_token":access_token,
"path":path,
"size":size})
return data
def fileops_copy(self,access_token,to_path,from_path = "",from_copy_ref = ""):
param = {"access_token":access_token,
"root":self.root,
"to_path":to_path
}
if len(from_path) > 0:
param['from_path'] = from_path
if len(from_copy_ref) > 0:
param['from_copy_ref'] = from_copy_ref
data = self.post(Client.API_URL,
'fileops/copy',
param)
return data
def fileops_delete(self,access_token,path):
data = self.post(Client.API_URL,
'fileops/delete',
{"access_token":access_token,
"root":self.root,
"path":path
})
return data
def fileops_move(self,access_token,from_path = "",to_path = ""):
param = {"access_token":access_token,
"root":self.root
}
if len(from_path) > 0:
param['from_path'] = from_path
if len(to_path) > 0:
param['to_path'] = to_path
data = self.post(Client.API_URL,
'fileops/move',
param)
return data
def fileops_create_folder(self,access_token,path):
data = self.post(Client.API_URL,
'fileops/create_folder',
{"access_token":access_token,
"root":self.root,
"path":path
})
return data
def shareops_media(self,access_token,from_copy_ref):
data = self.get(Client.API_URL,
'shareops/media',
{"access_token":access_token,
"from_copy_ref":from_copy_ref})
return data
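Every method in this client builds its request the same way: an operation segment, the storage root, the target path, and an encoded query string. A minimal stand-alone sketch of that pattern (the name `build_api_path` and the parameter sorting are illustrative assumptions, not part of the client):

```python
from urllib.parse import urlencode, quote

def build_api_path(op, root, path, params):
    # Mirrors the pattern used above: '<op>/<root>/<path>?<query-string>'.
    # Parameters are sorted only to make the output deterministic.
    query = urlencode(sorted(params.items()))
    return "%s/%s/%s?%s" % (op, root, quote(path), query)

print(build_api_path("files", "app_folder", "docs/a.txt",
                     {"overwrite": "true", "access_token": "TOKEN"}))
# files/app_folder/docs/a.txt?access_token=TOKEN&overwrite=true
```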

# --- hpedockerplugin/request_context.py (python-hpedockerplugin, Apache-2.0) ---
import abc
import json
import re
from collections import OrderedDict
from oslo_log import log as logging
import hpedockerplugin.exception as exception
from hpedockerplugin.hpe import share
LOG = logging.getLogger(__name__)
class RequestContextBuilderFactory(object):
def __init__(self, all_configs):
self._all_configs = all_configs
# if 'block' in all_configs:
# block_configs = all_configs['block']
# backend_configs = block_configs[1]
# self._vol_req_ctxt_creator = VolumeRequestContextBuilder(
# backend_configs)
# else:
# self._vol_req_ctxt_creator = NullRequestContextBuilder(
# "ERROR: Volume driver not enabled. Please provide hpe.conf "
# "file to enable it")
if 'file' in all_configs:
file_configs = all_configs['file']
f_backend_configs = file_configs[1]
self._file_req_ctxt_builder = FileRequestContextBuilder(
f_backend_configs)
else:
self._file_req_ctxt_builder = NullRequestContextBuilder(
"ERROR: File driver not enabled. Please provide hpe_file.conf "
"file to enable it")
def get_request_context_builder(self):
return self._file_req_ctxt_builder
class NullRequestContextBuilder(object):
def __init__(self, msg):
self._msg = msg
def build_request_context(self, contents, def_backend_name):
raise exception.InvalidInput(self._msg)
class RequestContextBuilder(object):
def __init__(self, backend_configs):
self._backend_configs = backend_configs
def build_request_context(self, contents, def_backend_name):
LOG.info("build_request_context: Entering...")
self._validate_name(contents['Name'])
req_ctxt_map = self._get_build_req_ctxt_map()
if 'Opts' in contents and contents['Opts']:
# self._validate_mutually_exclusive_ops(contents)
self._validate_dependent_opts(contents)
for op_name, req_ctxt_creator in req_ctxt_map.items():
op_name = op_name.split(',')
found = not (set(op_name) - set(contents['Opts'].keys()))
if found:
return req_ctxt_creator(contents, def_backend_name)
return self._default_req_ctxt_creator(contents)
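The comma-joined keys in the context map (e.g. `'filePersona,help'`) match only when every option named in the key appears among the received request options. A stand-alone sketch of that check (the function name is an illustrative assumption):

```python
def matches_ctxt_key(key, received_opts):
    # A key such as "filePersona,help" is split on commas; the key matches
    # when no required option is missing from the request options.
    required = set(key.split(','))
    return not (required - set(received_opts))

print(matches_ctxt_key("filePersona,help", ["filePersona", "help", "size"]))  # True
print(matches_ctxt_key("filePersona,help", ["filePersona"]))                  # False
```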
@staticmethod
def _validate_name(vol_name):
is_valid_name = re.match("^[A-Za-z0-9]+[A-Za-z0-9_-]+$", vol_name)
if not is_valid_name:
msg = 'Invalid volume name: %s is passed.' % vol_name
raise exception.InvalidInput(reason=msg)
@staticmethod
def _get_int_option(options, option_name, default_val):
opt = options.get(option_name)
if opt and opt != '':
try:
opt = int(opt)
except ValueError as ex:
msg = "ERROR: Invalid value '%s' specified for '%s' option. " \
"Please specify an integer value." % (opt, option_name)
LOG.error(msg)
raise exception.InvalidInput(msg)
else:
opt = default_val
return opt
# This method does the following:
# 1. Option specified
# - Some value:
# -- return if valid value else exception
# - Blank value:
# -- Return default if provided
# ELSE
# -- Throw exception if value_unset_exception is set
# 2. Option NOT specified
# - Return default value
@staticmethod
def _get_str_option(options, option_name, default_val, valid_values=None,
value_unset_exception=False):
opt = options.get(option_name)
if opt:
if opt != '':
opt = str(opt)
if valid_values and opt.lower() not in valid_values:
msg = "ERROR: Invalid value '%s' specified for '%s'" \
"option. Valid values are: %s" %\
(opt, option_name, valid_values)
LOG.error(msg)
raise exception.InvalidInput(msg)
return opt
if default_val:
return default_val
if value_unset_exception:
return json.dumps({
                'Err': "Value not set for option: %s" % option_name
})
return default_val
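The resolution rules spelled out in the comment above can be sketched as a simplified stand-alone function (this mirrors only the main path and is not the plugin's actual helper):

```python
def get_str_option(options, name, default=None, valid_values=None):
    # Simplified stand-alone version of the resolution rules: a non-empty
    # value wins (validated when a valid_values list is given); otherwise
    # the default is returned.
    opt = options.get(name)
    if opt:
        opt = str(opt)
        if valid_values and opt.lower() not in valid_values:
            raise ValueError("invalid value %r for option %r" % (opt, name))
        return opt
    return default

print(get_str_option({"backend": "FILE1"}, "backend", "DEFAULT"))  # FILE1
print(get_str_option({}, "backend", "DEFAULT"))                    # DEFAULT
```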
def _validate_dependent_opts(self, contents):
pass
# To be implemented by derived class
@abc.abstractmethod
def _get_build_req_ctxt_map(self):
pass
def _default_req_ctxt_creator(self, contents):
pass
@staticmethod
def _validate_mutually_exclusive_ops(contents):
mutually_exclusive_ops = ['virtualCopyOf', 'cloneOf', 'importVol',
'replicationGroup']
if 'Opts' in contents and contents['Opts']:
received_opts = contents.get('Opts').keys()
diff = set(mutually_exclusive_ops) - set(received_opts)
if len(diff) < len(mutually_exclusive_ops) - 1:
mutually_exclusive_ops.sort()
msg = "Operations %s are mutually exclusive and cannot be " \
"specified together. Please check help for usage." % \
mutually_exclusive_ops
raise exception.InvalidInput(reason=msg)
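The set arithmetic in this check is compact: if fewer than `len(ops) - 1` of the mutually exclusive options are absent from the request, at least two of them must have been supplied together. A runnable sketch (the function name is an illustrative assumption):

```python
def more_than_one_present(exclusive_ops, received_opts):
    # If fewer than len(exclusive_ops) - 1 of the exclusive options are
    # absent, at least two of them were supplied together.
    missing = set(exclusive_ops) - set(received_opts)
    return len(missing) < len(exclusive_ops) - 1

ops = ['virtualCopyOf', 'cloneOf', 'importVol', 'replicationGroup']
print(more_than_one_present(ops, ['cloneOf', 'size']))       # False
print(more_than_one_present(ops, ['cloneOf', 'importVol']))  # True
```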
@staticmethod
def _check_valid_fsMode_string(value):
valid_type = ['A', 'D', 'U', 'L']
valid_flag = ['f', 'd', 'p', 'i', 'S', 'F', 'g']
valid_perm1 = ['r', 'w', 'a', 'x', 'd', 'D', 't', 'T']
valid_perm2 = ['n', 'N', 'c', 'C', 'o', 'y']
valid_perm = valid_perm1 + valid_perm2
type_flag_perm = value.split(':')
if len(type_flag_perm) != 3:
msg = "Incorrect value passed , please check correct "\
"format and values to be passed in help"
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
vtype = type_flag_perm[0]
if vtype not in valid_type:
msg = "Incorrect value passed for type of a mode, please check "\
"correct format and values to be passed."
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
passed_vflag_len = len(list(type_flag_perm[1]))
vflag = list(set(list(type_flag_perm[1])))
if len(vflag) < passed_vflag_len:
msg = "Duplicate characters for given flag are passed. "\
"Please correct the passed flag characters for fsMode."
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
if set(vflag) - set(valid_flag):
msg = "Invalid flag passed for the fsMode. Please "\
"pass the correct flag characters"
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
passed_vperm_len = len(list(type_flag_perm[2]))
vperm = list(set(list(type_flag_perm[2])))
if len(vperm) < passed_vperm_len:
msg = "Duplicate characters for given permission are passed. "\
"Please correct the passed permissions for fsMode."
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
if set(vperm) - set(valid_perm):
msg = "Invalid characters for the permissions of fsMode are "\
"passed. Please remove the invalid characters."
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
return True
def _check_is_valid_acl_string(self, fsMode):
fsMode_list = fsMode.split(',')
if len(fsMode_list) != 3:
msg = "Passed acl string is not valid. "\
"Pass correct acl string."
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
for value in fsMode_list:
self._check_valid_fsMode_string(value)
return True
@staticmethod
def _is_valid_octal_num(fsMode):
return re.match('^0[0-7]{3}$', fsMode)
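The octal check above accepts exactly a leading zero followed by three octal digits. A stand-alone sketch of the same pattern:

```python
import re

def is_valid_octal_mode(fs_mode):
    # Same pattern as _is_valid_octal_num: a leading zero followed by
    # exactly three octal digits, e.g. "0755".
    return bool(re.match(r'^0[0-7]{3}$', fs_mode))

print(is_valid_octal_mode("0755"))  # True
print(is_valid_octal_mode("0897"))  # False (8 and 9 are not octal digits)
print(is_valid_octal_mode("755"))   # False (leading zero required)
```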
def _validate_fsMode(self, fsMode):
is_valid_fs_mode = True
if ':' in fsMode:
is_valid_fs_mode = self._check_is_valid_acl_string(fsMode)
else:
is_valid_fs_mode = self._is_valid_octal_num(fsMode)
if not is_valid_fs_mode:
msg = "Invalid value passed for the fsMode."
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
@staticmethod
def _validate_fsOwner(fsOwner):
fsOwner_list = fsOwner.split(':')
if len(fsOwner_list) != 2:
msg = "Invalid value specified for fsOwner Option."
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
try:
for val in fsOwner_list:
int(val)
except ValueError as ex:
msg = "Please provide correct fsowner inforamtion. You have "\
"passed non integer values."
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
@staticmethod
def _validate_opts(operation, contents, valid_opts, mandatory_opts=None):
LOG.info("Validating options for operation '%s'" % operation)
if 'Opts' in contents and contents['Opts']:
received_opts = contents.get('Opts').keys()
if mandatory_opts:
diff = set(mandatory_opts) - set(received_opts)
if diff:
# Print options in sorted manner
mandatory_opts.sort()
msg = "One or more mandatory options %s are missing " \
"for operation %s" % (mandatory_opts, operation)
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
diff = set(received_opts) - set(valid_opts)
if diff:
diff = list(diff)
diff.sort()
msg = "Invalid option(s) %s specified for operation %s. " \
"Please check help for usage." % \
(diff, operation)
LOG.error(msg)
raise exception.InvalidInput(reason=msg)
class FileRequestContextBuilder(RequestContextBuilder):
def __init__(self, backend_configs):
super(FileRequestContextBuilder, self).__init__(backend_configs)
def _get_build_req_ctxt_map(self):
build_req_ctxt_map = OrderedDict()
# If share-dir is specified, file-store MUST be specified
build_req_ctxt_map['filePersona,help'] = self._create_help_req_ctxt
build_req_ctxt_map['filePersona'] = \
self._create_share_req_ctxt
# build_req_ctxt_map['persona,cpg'] = \
# self._create_share_req_ctxt
# build_req_ctxt_map['persona,cpg,size'] = \
# self._create_share_req_ctxt
# build_req_ctxt_map['persona,cpg,size,fpg_name'] = \
# self._create_share_req_ctxt
# build_req_ctxt_map['virtualCopyOf,shareName'] = \
# self._create_snap_req_ctxt
# build_req_ctxt_map['updateShare'] = \
# self._create_update_req_ctxt
return build_req_ctxt_map
def _create_share_req_params(self, name, options, def_backend_name):
LOG.info("_create_share_req_params: Entering...")
# import pdb
# pdb.set_trace()
backend = self._get_str_option(options, 'backend', def_backend_name)
if backend == 'DEFAULT_BLOCK':
msg = 'Backend DEFAULT_BLOCK is reserved for Block ' \
'operations. Cannot specify it for File operations'
LOG.error(msg)
raise exception.InvalidInput(msg)
config = self._backend_configs.get(backend)
if not config:
raise exception.InvalidInput(
'ERROR: Backend %s is not configured for File Persona'
% backend
)
cpg = self._get_str_option(
options, 'cpg',
config.hpe3par_cpg[0] if config.hpe3par_cpg else None)
if not cpg:
raise exception.InvalidInput(
"ERROR: CPG is not configured in hpe.conf. Please specify"
"name of an existing CPG in hpe.conf and restart plugin")
fpg = self._get_str_option(options, 'fpg', None)
fsMode = self._get_str_option(options, 'fsMode', None)
fsOwner = self._get_str_option(options, 'fsOwner', None)
if fsMode:
self._validate_fsMode(fsMode)
if fsOwner:
self._validate_fsOwner(fsOwner)
if fsMode:
if fsOwner is None:
raise exception.InvalidInput(
" ERROR: If mode bits or directory permissions"
" needs to be changed then, providing fsOwner"
" is mandetory")
size_gib = self._get_int_option(options, 'size', 1024)
        # Share size (quota) is passed on in MiB; the 1024 GiB default equals 1 TiB
size = size_gib * 1024
fpg_size_gib = int(config.hpe3par_default_fpg_size) * 1024
if size_gib > fpg_size_gib:
raise exception.InvalidInput(
"ERROR: Share size cannot be greater than the FPG size. "
"Either specify hpe3par_default_fpg_size >= %s GiB or "
"specify option '-o size' < %s GiB"
% (size_gib, fpg_size_gib))
# TODO: This check would be required when VFS needs to be created.
# NOT HERE
# if not ip_subnet and not config.hpe3par_ip_pool:
# raise exception.InvalidInput(
# "ERROR: Unable to create share as neither 'ipSubnet' "
# "option specified not IP address pool hpe3par_ip_pool "
# "configured in configuration file specified")
readonly_str = self._get_str_option(options, 'readonly', 'false')
readonly = str.lower(readonly_str)
if readonly == 'true':
readonly = True
elif readonly == 'false':
readonly = False
else:
raise exception.InvalidInput(
'ERROR: Invalid value "%s" supplied for "readonly" option. '
'Valid values are case insensitive ["true", "false"]'
% readonly_str)
nfs_options = self._get_str_option(options, 'nfsOptions', None)
comment = self._get_str_option(options, 'comment', None)
share_details = share.create_metadata(backend, cpg, fpg, name, size,
readonly=readonly,
nfs_options=nfs_options,
comment=comment, fsMode=fsMode,
fsOwner=fsOwner)
LOG.info("_create_share_req_params: %s" % share_details)
return share_details
def _create_share_req_ctxt(self, contents, def_backend_name):
LOG.info("_create_share_req_ctxt: Entering...")
valid_opts = ('backend', 'filePersona', 'cpg', 'fpg',
'size', 'mountConflictDelay', 'fsMode', 'fsOwner')
mandatory_opts = ('filePersona',)
self._validate_opts("create share", contents, valid_opts,
mandatory_opts)
share_args = self._create_share_req_params(contents['Name'],
contents['Opts'],
def_backend_name)
ctxt = {'orchestrator': 'file',
'operation': 'create_share',
'kwargs': share_args}
LOG.info("_create_share_req_ctxt: Exiting: %s" % ctxt)
return ctxt
def _create_help_req_ctxt(self, contents, def_backend_name):
LOG.info("_create_help_req_ctxt: Entering...")
valid_opts = ('filePersona', 'help', 'mountConflictDelay')
self._validate_opts("create help content for share", contents,
valid_opts, mandatory_opts=None)
options = contents['Opts']
if options:
value = self._get_str_option(options, 'help', None)
if not value:
return {
'orchestrator': 'file',
'operation': 'create_share_help',
'kwargs': {}
}
if value == 'backends':
return {
'orchestrator': 'file',
'operation': 'get_backends_status',
'kwargs': {}
}
else:
raise exception.InvalidInput(
"ERROR: Invalid value %s for option 'help' specified."
% value)
LOG.info("_create_help_req_ctxt: Exiting...")
def _create_snap_req_ctxt(self, contents):
pass
def _create_update_req_ctxt(self, contents):
pass
# TODO: This is work in progress - can be taken up later if agreed upon
# class VolumeRequestContextBuilder(RequestContextBuilder):
# def __init__(self, backend_configs):
# super(VolumeRequestContextBuilder, self).__init__(backend_configs)
#
# def _get_build_req_ctxt_map(self):
# build_req_ctxt_map = OrderedDict()
# build_req_ctxt_map['virtualCopyOf,scheduleName'] = \
# self._create_snap_schedule_req_ctxt,
# build_req_ctxt_map['virtualCopyOf,scheduleFrequency'] = \
# self._create_snap_schedule_req_ctxt
# build_req_ctxt_map['virtualCopyOf,snaphotPrefix'] = \
# self._create_snap_schedule_req_ctxt
# build_req_ctxt_map['virtualCopyOf'] = \
# self._create_snap_req_ctxt
# build_req_ctxt_map['cloneOf'] = \
# self._create_clone_req_ctxt
# build_req_ctxt_map['importVol'] = \
# self._create_import_vol_req_ctxt
# build_req_ctxt_map['replicationGroup'] = \
# self._create_rcg_req_ctxt
# build_req_ctxt_map['help'] = self._create_help_req_ctxt
# return build_req_ctxt_map
#
# def _default_req_ctxt_creator(self, contents):
# return self._create_vol_create_req_ctxt(contents)
#
# @staticmethod
# def _validate_mutually_exclusive_ops(contents):
# mutually_exclusive_ops = ['virtualCopyOf', 'cloneOf', 'importVol',
# 'replicationGroup']
# if 'Opts' in contents and contents['Opts']:
# received_opts = contents.get('Opts').keys()
# diff = set(mutually_exclusive_ops) - set(received_opts)
# if len(diff) < len(mutually_exclusive_ops) - 1:
# mutually_exclusive_ops.sort()
# msg = "Operations %s are mutually exclusive and cannot be " \
# "specified together. Please check help for usage." % \
# mutually_exclusive_ops
# raise exception.InvalidInput(reason=msg)
#
# @staticmethod
# def _validate_opts(operation, contents, valid_opts, mandatory_opts=None):
# if 'Opts' in contents and contents['Opts']:
# received_opts = contents.get('Opts').keys()
#
# if mandatory_opts:
# diff = set(mandatory_opts) - set(received_opts)
# if diff:
# # Print options in sorted manner
# mandatory_opts.sort()
# msg = "One or more mandatory options %s are missing " \
# "for operation %s" % (mandatory_opts, operation)
# raise exception.InvalidInput(reason=msg)
#
# diff = set(received_opts) - set(valid_opts)
# if diff:
# diff = list(diff)
# diff.sort()
# msg = "Invalid option(s) %s specified for operation %s. " \
# "Please check help for usage." % \
# (diff, operation)
# raise exception.InvalidInput(reason=msg)
#
# def _create_vol_create_req_ctxt(self, contents):
# valid_opts = ['compression', 'size', 'provisioning',
# 'flash-cache', 'qos-name', 'fsOwner',
# 'fsMode', 'mountConflictDelay', 'cpg',
# 'snapcpg', 'backend']
# self._validate_opts("create volume", contents, valid_opts)
# return {'operation': 'create_volume',
# '_vol_orchestrator': 'volume'}
#
# def _create_clone_req_ctxt(self, contents):
# valid_opts = ['cloneOf', 'size', 'cpg', 'snapcpg',
# 'mountConflictDelay']
# self._validate_opts("clone volume", contents, valid_opts)
# return {'operation': 'clone_volume',
# 'orchestrator': 'volume'}
#
# def _create_snap_req_ctxt(self, contents):
# valid_opts = ['virtualCopyOf', 'retentionHours', 'expirationHours',
# 'mountConflictDelay', 'size']
# self._validate_opts("create snapshot", contents, valid_opts)
# return {'operation': 'create_snapshot',
# '_vol_orchestrator': 'volume'}
#
# def _create_snap_schedule_req_ctxt(self, contents):
# valid_opts = ['virtualCopyOf', 'scheduleFrequency', 'scheduleName',
# 'snapshotPrefix', 'expHrs', 'retHrs',
# 'mountConflictDelay', 'size']
# mandatory_opts = ['scheduleName', 'snapshotPrefix',
# 'scheduleFrequency']
# self._validate_opts("create snapshot schedule", contents,
# valid_opts, mandatory_opts)
# return {'operation': 'create_snapshot_schedule',
# 'orchestrator': 'volume'}
#
# def _create_import_vol_req_ctxt(self, contents):
# valid_opts = ['importVol', 'backend', 'mountConflictDelay']
# self._validate_opts("import volume", contents, valid_opts)
#
# # Replication enabled backend cannot be used for volume import
# backend = contents['Opts'].get('backend', 'DEFAULT')
# if backend == '':
# backend = 'DEFAULT'
#
# try:
# config = self._backend_configs[backend]
# except KeyError:
# backend_names = list(self._backend_configs.keys())
# backend_names.sort()
# msg = "ERROR: Backend '%s' doesn't exist. Available " \
# "backends are %s. Please use " \
# "a valid backend name and retry." % \
# (backend, backend_names)
# raise exception.InvalidInput(reason=msg)
#
# if config.replication_device:
# msg = "ERROR: Import volume not allowed with replication " \
# "enabled backend '%s'" % backend
# raise exception.InvalidInput(reason=msg)
#
# volname = contents['Name']
# existing_ref = str(contents['Opts']['importVol'])
# manage_opts = contents['Opts']
# return {'orchestrator': 'volume',
# 'operation': 'import_volume',
# 'args': (volname,
# existing_ref,
# backend,
# manage_opts)}
#
# def _create_rcg_req_ctxt(self, contents):
# valid_opts = ['replicationGroup', 'size', 'provisioning',
# 'backend', 'mountConflictDelay', 'compression']
# self._validate_opts('create replicated volume', contents, valid_opts)
#
# # It is possible that the user configured replication in hpe.conf
# # but didn't specify any options. In that case too, this operation
# # must fail asking for "replicationGroup" parameter
# # Hence this validation must be done whether "Opts" is there or not
# options = contents['Opts']
# backend = self._get_str_option(options, 'backend', 'DEFAULT')
# create_vol_args = self._get_create_volume_args(options)
# rcg_name = create_vol_args['replicationGroup']
# try:
# self._validate_rcg_params(rcg_name, backend)
# except exception.InvalidInput as ex:
# return json.dumps({u"Err": ex.msg})
#
# return {'operation': 'create_volume',
# 'orchestrator': 'volume',
# 'args': create_vol_args}
#
# def _get_fs_owner(self, options):
# val = self._get_str_option(options, 'fsOwner', None)
# if val:
# fs_owner = val.split(':')
# if len(fs_owner) != 2:
# msg = "Invalid value '%s' specified for fsOwner. Please " \
# "specify a correct value." % val
# raise exception.InvalidInput(msg)
# return fs_owner
# return None
#
# def _get_fs_mode(self, options):
# fs_mode_str = self._get_str_option(options, 'fsMode', None)
# if fs_mode_str:
# try:
# int(fs_mode_str)
# except ValueError as ex:
# msg = "Invalid value '%s' specified for fsMode. Please " \
# "specify an integer value." % fs_mode_str
# raise exception.InvalidInput(msg)
#
# if fs_mode_str[0] != '0':
# msg = "Invalid value '%s' specified for fsMode. Please " \
# "specify an octal value." % fs_mode_str
# raise exception.InvalidInput(msg)
#
# for mode in fs_mode_str:
# if int(mode) > 7:
# msg = "Invalid value '%s' specified for fsMode. Please"\
# " specify an octal value." % fs_mode_str
# raise exception.InvalidInput(msg)
# return fs_mode_str
#
# def _get_create_volume_args(self, options):
# ret_args = dict()
# ret_args['size'] = self._get_int_option(
# options, 'size', volume.DEFAULT_SIZE)
# ret_args['provisioning'] = self._get_str_option(
# options, 'provisioning', volume.DEFAULT_PROV,
# ['full', 'thin', 'dedup'])
# ret_args['flash-cache'] = self._get_str_option(
# options, 'flash-cache', volume.DEFAULT_FLASH_CACHE,
# ['true', 'false'])
# ret_args['qos-name'] = self._get_str_option(
# options, 'qos-name', volume.DEFAULT_QOS)
# ret_args['compression'] = self._get_str_option(
# options, 'compression', volume.DEFAULT_COMPRESSION_VAL,
# ['true', 'false'])
# ret_args['fsOwner'] = self._get_fs_owner(options)
# ret_args['fsMode'] = self._get_fs_mode(options)
# ret_args['mountConflictDelay'] = self._get_int_option(
# options, 'mountConflictDelay',
# volume.DEFAULT_MOUNT_CONFLICT_DELAY)
# ret_args['cpg'] = self._get_str_option(options, 'cpg', None)
# ret_args['snapcpg'] = self._get_str_option(options, 'snapcpg', None)
# ret_args['replicationGroup'] = self._get_str_option(
# options, 'replicationGroup', None)
#
# return ret_args
#
# def _validate_rcg_params(self, rcg_name, backend_name):
# LOG.info("Validating RCG: %s, backend name: %s..." % (rcg_name,
# backend_name))
# hpepluginconfig = self._backend_configs[backend_name]
# replication_device = hpepluginconfig.replication_device
#
# LOG.info("Replication device: %s" % six.text_type(
# replication_device))
#
# if rcg_name and not replication_device:
# msg = "Request to create replicated volume cannot be fulfilled"\
# "without defining 'replication_device' entry defined in"\
# "hpe.conf for the backend '%s'. Please add it and execute"\
# "the request again." % backend_name
# raise exception.InvalidInput(reason=msg)
#
# if replication_device and not rcg_name:
# backend_names = list(self._backend_configs.keys())
# backend_names.sort()
#
# msg = "'%s' is a replication enabled backend. " \
# "Request to create replicated volume cannot be fulfilled "\
# "without specifying 'replicationGroup' option in the "\
# "request. Please either specify 'replicationGroup' or use"\
# "a normal backend and execute the request again. List of"\
# "backends defined in hpe.conf: %s" % (backend_name,
# backend_names)
# raise exception.InvalidInput(reason=msg)
#
# if rcg_name and replication_device:
#
# def _check_valid_replication_mode(mode):
# valid_modes = ['synchronous', 'asynchronous', 'streaming']
# if mode.lower() not in valid_modes:
# msg = "Unknown replication mode '%s' specified. Valid "\
# "values are 'synchronous | asynchronous | " \
# "streaming'" % mode
# raise exception.InvalidInput(reason=msg)
#
# rep_mode = replication_device['replication_mode'].lower()
# _check_valid_replication_mode(rep_mode)
# if replication_device.get('quorum_witness_ip'):
# if rep_mode.lower() != 'synchronous':
# msg = "For Peer Persistence, replication mode must be "\
# "synchronous"
# raise exception.InvalidInput(reason=msg)
#
# sync_period = replication_device.get('sync_period')
# if sync_period and rep_mode == 'synchronous':
# msg = "'sync_period' can be defined only for 'asynchronous'"\
# " and 'streaming' replicate modes"
# raise exception.InvalidInput(reason=msg)
#
# if (rep_mode == 'asynchronous' or rep_mode == 'streaming')\
# and sync_period:
# try:
# sync_period = int(sync_period)
# except ValueError as ex:
# msg = "Non-integer value '%s' not allowed for " \
# "'sync_period'. %s" % (
# replication_device.sync_period, ex)
# raise exception.InvalidInput(reason=msg)
# else:
# SYNC_PERIOD_LOW = 300
# SYNC_PERIOD_HIGH = 31622400
# if sync_period < SYNC_PERIOD_LOW or \
# sync_period > SYNC_PERIOD_HIGH:
# msg = "'sync_period' must be between 300 and " \
# "31622400 seconds."
# raise exception.InvalidInput(reason=msg)
#
# @staticmethod
# def _validate_name(vol_name):
# is_valid_name = re.match("^[A-Za-z0-9]+[A-Za-z0-9_-]+$", vol_name)
# if not is_valid_name:
# msg = 'Invalid volume name: %s is passed.' % vol_name
# raise exception.InvalidInput(reason=msg)

# --- src/freshchat/client/configuration.py (python-freshchat, MIT) ---
import os
from dataclasses import dataclass, field
from typing import AnyStr, Dict, Optional
from urllib.parse import urljoin
@dataclass
class FreshChatConfiguration:
"""
Class represents the base configuration for Freshchat
"""
app_id: str
token: str = field(repr=False)
default_channel_id: Optional[str] = field(default=None)
default_initial_message: Optional[str] = field(default=None)
url: Optional[str] = field(
default_factory=lambda: os.environ.get(
"FRESHCHAT_API_URL", "https://api.freshchat.com/v2/"
)
)
@property
def authorization_header(self) -> Dict[AnyStr, AnyStr]:
"""
Property which returns the proper format of the authorization header
"""
return {
"Authorization": f"Bearer {self.token}"
if "Bearer" not in self.token
else self.token
}
def get_url(self, endpoint: str) -> str:
"""
        Builds the full URL for the given endpoint

        :param endpoint: string with the endpoint which needs to be attached to the URL
        :return: a string which represents the full URL
"""
return urljoin(self.url, endpoint.lstrip("/"))

# --- aiida_phonopy/parsers/phonopy.py (aiida-phonopy, MIT) ---
# -*- coding: utf-8 -*-
from aiida.orm.data.folder import FolderData
from aiida.parsers.parser import Parser
from aiida.common.datastructures import calc_states
from aiida.parsers.exceptions import OutputParsingError
from aiida.common.exceptions import UniquenessError
import numpy
from aiida.orm.data.array import ArrayData
from aiida.orm.data.array.bands import BandsData
from aiida.orm.data.array.kpoints import KpointsData
from aiida.orm.data.parameter import ParameterData
from aiida.orm.data.structure import StructureData
import json
from aiida_phonopy.calculations.phonopy import PhonopyCalculation
__copyright__ = u"Copyright (c), 2014-2015, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, Laboratory of Theory and Simulation of Materials (THEOS). All rights reserved."
__license__ = "Non-Commercial, End-User Software License Agreement, see LICENSE.txt file"
__version__ = "0.4.1"
class PhonopyParser(Parser):
"""
This class is the implementation of the Parser class for a Phonopy calculator.
"""
_out_band_name = 'phonon_frequencies'
_out_dos_name = 'phonon_dos'
_out_thermal_name = 'thermal_properties'
def __init__(self,calc):
"""
Initialize the instance of PhonopyParser
"""
# check for valid input
if not isinstance(calc,PhonopyCalculation):
raise OutputParsingError("Input calculation must be a PhonopyCalculation")
self._calc = calc
def parse_with_retrieved(self, retrieved):
"""
Parses the datafolder, stores results.
This parser for this simple code does simply store in the DB a node
representing the file of forces in real space
"""
from aiida.common.exceptions import InvalidOperation
# suppose at the start that the job is successful
successful = True
# check that calculation is in the right state
# state = self._calc.get_state()
# if state != calc_states.PARSING:
# raise InvalidOperation("Calculation not in {} state"
# .format(calc_states.PARSING) )
# select the folder object
# Check that the retrieved folder is there
try:
out_folder = retrieved[self._calc._get_linkname_retrieved()]
except KeyError:
self.logger.error("No retrieved folder found")
return False, ()
# check what is inside the folder
list_of_files = out_folder.get_folder_list()
# at least the stdout should exist
        if self._calc._OUTPUT_FILE_NAME not in list_of_files \
                or self._calc._RESULT_FILE_NAME not in list_of_files:
            successful = False
            self.logger.error("Output/results not found")
return successful,()
# load the results dictionary
        json_outfile = out_folder.get_abs_path(self._calc._RESULT_FILE_NAME)
        with open(json_outfile, 'r') as f:
            json_params = json.load(f)
# look at warnings
warnings = []
with open(out_folder.get_abs_path( self._calc._SCHED_ERROR_FILE )) as f:
errors = f.read()
if errors:
warnings = [errors]
# I implicitly assume that all data inside the json are arrays
# it should be very often the case for the phonon properties
# ====================== prepare the output nodes ======================
# save the outputs
new_nodes_list= []
# save dos
try:
frequencies_dos = json_params['frequencies_dos']
total_dos = json_params['total_dos']
array_dos = ArrayData()
array_dos.set_array('frequency', frequencies_dos)
array_dos.set_array('phonon_dos', total_dos)
new_nodes_list.append( (self._out_dos_name, array_dos) )
except KeyError: # keys not found in json
pass
# save thermodynamic quantities
try:
temperature = json_params['temperature']
free_energy = json_params['free_energy']
entropy = json_params['entropy']
cv = json_params['cv']
array_thermal = ArrayData()
array_thermal.set_array('temperature', temperature)
array_thermal.set_array('free_energy', free_energy)
array_thermal.set_array('entropy', entropy)
array_thermal.set_array('specific_heat', cv)
# TODO: in which units am I storing stuff???
new_nodes_list.append( (self._out_thermal_name, array_thermal) )
except KeyError: # keys not found in json
pass
# save frequencies
array_freq = BandsData()
try:
structure = self._calc.inp.structure
except AttributeError:
structure = self._calc.inp.force_constants.structure
inp_kpoints = self._calc.inp.qpoints
try:
inp_kpoints.get_kpoints()
array_freq.set_kpointsdata(inp_kpoints)
except AttributeError: # it had a mesh of kpoints in input
try:
cell = inp_kpoints.cell
except AttributeError:
cell = structure.cell
try:
pbc = inp_kpoints.pbc
except AttributeError:
pbc = structure.pbc
try:
the_kpoints = json_params['q_points']
except KeyError:
the_kpoints = inp_kpoints.get_kpoints()
try:
the_weights = json_params['weights']
except KeyError:
the_weights = None
array_freq.cell = cell
array_freq.pbc = pbc
array_freq.set_kpoints(the_kpoints, weights=the_weights)
array_freq.labels = inp_kpoints.labels
try:
frequencies = json_params['frequencies']
except KeyError:
warnings.append('Unable to read phonon frequencies')
new_nodes_list.append((self.get_linkname_outparams(), ParameterData(dict={'warnings': warnings})))
return False, new_nodes_list
labels = 'frequencies'
bands = frequencies
try:
group_velocities = json_params['group_velocities']
vx = [ _[0] for _ in group_velocities ]
vy = [ _[1] for _ in group_velocities ]
vz = [ _[2] for _ in group_velocities ]
bands = [frequencies]
labels = ['frequencies']
bands.append(vx)
bands.append(vy)
bands.append(vz)
labels += ['vx','vy','vz']
except KeyError:
pass
        array_freq.set_bands(bands, units='THz', occupations=None, labels=labels)
#TODO: verify the units
try:
eigenvectors = json_params['eigenvectors']
array_freq.set_array('eigenvectors', eigenvectors)
except KeyError:
pass
new_nodes_list.append( (self._out_band_name, array_freq) )
#except KeyError as e: # keys not found in json
# raise e
# add the dictionary with warnings
new_nodes_list.append( (self.get_linkname_outparams(), ParameterData(dict={'warnings': warnings})))
return successful, new_nodes_list
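For illustration only (no AiiDA needed), the repeated `try`/`except KeyError` lookups above amount to treating the json results as optional fields; a standalone sketch with made-up data:

```python
import json

# Hypothetical results file in the same shape the parser reads
raw = '{"temperature": [0, 100], "free_energy": [0.5, 0.4], "entropy": [0.0, 1.2]}'
json_params = json.loads(raw)

thermal = {}
for key in ("temperature", "free_energy", "entropy", "cv"):
    try:
        thermal[key] = json_params[key]
    except KeyError:  # quantity missing from the json: skip it silently
        pass

print(sorted(thermal))  # ['entropy', 'free_energy', 'temperature']
```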
| 36.865385 | 185 | 0.594027 | 828 | 7,668 | 5.275362 | 0.292271 | 0.024725 | 0.021978 | 0.021978 | 0.125916 | 0.084936 | 0.067766 | 0.055403 | 0.055403 | 0.037088 | 0 | 0.002913 | 0.328378 | 7,668 | 207 | 186 | 37.043478 | 0.845243 | 0.13276 | 0 | 0.232558 | 0 | 0.007752 | 0.105877 | 0 | 0 | 0 | 0 | 0.004831 | 0 | 0 | null | null | 0.031008 | 0.108527 | null | null | 0.007752 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0c52883ec5869dd4ebaf9438c8845a04d78492ff | 1,128 | py | Python | bagua/bagua_define.py | jphgxq/bagua | 3444f79b8fe9c9d2975a8994a1a613ebd14c3d33 | [
"MIT"
] | 1 | 2021-07-12T03:33:38.000Z | 2021-07-12T03:33:38.000Z | bagua/bagua_define.py | jphgxq/bagua | 3444f79b8fe9c9d2975a8994a1a613ebd14c3d33 | [
"MIT"
] | null | null | null | bagua/bagua_define.py | jphgxq/bagua | 3444f79b8fe9c9d2975a8994a1a613ebd14c3d33 | [
"MIT"
] | null | null | null | import enum
from typing import List
import sys
if sys.version_info >= (3, 9):
from typing import TypedDict # pytype: disable=not-supported-yet
else:
from typing_extensions import TypedDict # pytype: disable=not-supported-yet
from pydantic import BaseModel
class TensorDtype(str, enum.Enum):
F32 = "f32"
F16 = "f16"
U8 = "u8"
class TensorDeclaration(TypedDict):
name: str
num_elements: int
dtype: TensorDtype
def get_tensor_declaration_bytes(td: TensorDeclaration) -> int:
dtype_unit_size = {
TensorDtype.F32.value: 4,
TensorDtype.F16.value: 2,
TensorDtype.U8.value: 1,
}
return td["num_elements"] * dtype_unit_size[td["dtype"]]
class BaguaHyperparameter(BaseModel):
"""
    Structured collection of all Bagua hyperparameters.
"""
buckets: List[List[TensorDeclaration]] = []
is_hierarchical_reduce: bool = False
def update(self, param_dict: dict):
tmp = self.dict()
tmp.update(param_dict)
for key, value in param_dict.items():
if key in tmp:
self.__dict__[key] = value
return self
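For illustration, the byte-size computation above can be exercised without pydantic; a minimal sketch using plain dicts in place of `TensorDeclaration` (the tensor names below are made up):

```python
# Byte width per dtype, mirroring get_tensor_declaration_bytes above
DTYPE_BYTES = {"f32": 4, "f16": 2, "u8": 1}

def bucket_bytes(bucket):
    """Total payload, in bytes, for one bucket of tensor declarations."""
    return sum(td["num_elements"] * DTYPE_BYTES[td["dtype"]] for td in bucket)

bucket = [
    {"name": "layer1.weight", "num_elements": 1024, "dtype": "f32"},
    {"name": "layer1.bias", "num_elements": 256, "dtype": "f16"},
]
print(bucket_bytes(bucket))  # 1024*4 + 256*2 = 4608
```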
| 22.56 | 80 | 0.656915 | 137 | 1,128 | 5.262774 | 0.481752 | 0.041609 | 0.044383 | 0.07767 | 0.119279 | 0.119279 | 0.119279 | 0 | 0 | 0 | 0 | 0.023474 | 0.244681 | 1,128 | 49 | 81 | 23.020408 | 0.82277 | 0.093085 | 0 | 0 | 0 | 0 | 0.024851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.181818 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0c6419af7c4ea362b8097a85b3a1cb0ca9746ce0 | 9,196 | py | Python | tests/test_wvlns.py | seignovert/pyvims | a70b5b9b8bc5c37fa43b7db4d15407f312a31849 | [
"BSD-3-Clause"
] | 4 | 2019-09-16T15:50:22.000Z | 2021-04-08T15:32:48.000Z | tests/test_wvlns.py | seignovert/pyvims | a70b5b9b8bc5c37fa43b7db4d15407f312a31849 | [
"BSD-3-Clause"
] | 3 | 2018-05-04T09:28:24.000Z | 2018-12-03T09:00:31.000Z | tests/test_wvlns.py | seignovert/pyvims | a70b5b9b8bc5c37fa43b7db4d15407f312a31849 | [
"BSD-3-Clause"
] | 1 | 2020-10-12T15:14:17.000Z | 2020-10-12T15:14:17.000Z | """Test VIMS wavelength module."""
from pathlib import Path
import numpy as np
from numpy.testing import assert_array_almost_equal as assert_array
from pyvims import QUB
from pyvims.vars import ROOT_DATA
from pyvims.wvlns import (BAD_IR_PIXELS, CHANNELS, FWHM, SHIFT,
VIMS_IR, VIMS_VIS, WLNS, YEARS,
bad_ir_pixels, ir_multiplexer, ir_hot_pixels,
is_hot_pixel, median_spectrum, moving_median,
sample_line_axes)
from pytest import approx, raises
DATA = Path(__file__).parent / 'data'
def test_vims_csv():
"""Test CSV global variables."""
assert len(CHANNELS) == len(WLNS) == len(FWHM) == 352
assert CHANNELS[0] == 1
assert CHANNELS[-1] == 352
assert WLNS[0] == .350540
assert WLNS[-1] == 5.1225
assert FWHM[0] == .007368
assert FWHM[-1] == .016
assert len(YEARS) == len(SHIFT) == 58
assert YEARS[0] == 1999.6
assert YEARS[-1] == 2017.8
assert SHIFT[0] == -25.8
assert SHIFT[-1] == 9.8
def test_vims_ir():
"""Test VIMS IR wavelengths."""
# Standard wavelengths
wvlns = VIMS_IR()
assert len(wvlns) == 256
assert wvlns[0] == .884210
assert wvlns[-1] == 5.122500
# Full-width at half maximum value
fwhms = VIMS_IR(fwhm=True)
assert len(fwhms) == 256
assert fwhms[0] == .012878
assert fwhms[-1] == .016
# Wavenumber (cm-1)
wvnb = VIMS_IR(sigma=True)
assert len(wvnb) == 256
assert wvnb[0] == approx(11309.53, abs=1e-2)
assert wvnb[-1] == approx(1952.17, abs=1e-2)
# Single band
assert VIMS_IR(band=97) == .884210
assert VIMS_IR(band=97, fwhm=True) == .012878
assert VIMS_IR(band=97, sigma=True) == approx(11309.53, abs=1e-2)
assert VIMS_IR(band=97, fwhm=True, sigma=True) == approx(164.72, abs=1e-2)
# Selected bands array
assert_array(VIMS_IR(band=[97, 352]), [.884210, 5.122500])
assert_array(VIMS_IR(band=[97, 352], fwhm=True), [.012878, .016])
# Time offset
assert VIMS_IR(band=97, year=2002) == approx(.884210, abs=1e-6)
assert VIMS_IR(band=97, year=2005) == approx(.884210, abs=1e-6)
assert VIMS_IR(band=97, year=2001.5) == approx(.885410, abs=1e-6) # +.0012
assert VIMS_IR(band=97, year=2011) == approx(.890210, abs=1e-6) # +.006
# Time offset on all IR bands
wvlns_2011 = VIMS_IR(year=2011)
assert len(wvlns_2011) == 256
assert wvlns_2011[0] == approx(.890210, abs=1e-6)
assert wvlns_2011[-1] == approx(5.128500, abs=1e-6)
# No change in FWHM with time
assert VIMS_IR(band=97, year=2001.5, fwhm=True) == .012878
# Outside IR band range
assert np.isnan(VIMS_IR(band=0))
assert np.isnan(VIMS_IR(band=96, fwhm=True))
assert np.isnan(VIMS_IR(band=353, sigma=True))
def test_vims_vis():
"""Test VIMS VIS wavelengths."""
# Standard wavelengths
wvlns = VIMS_VIS()
assert len(wvlns) == 96
assert wvlns[0] == .350540
assert wvlns[-1] == 1.045980
# Full-width at half maximum value
fwhms = VIMS_VIS(fwhm=True)
assert len(fwhms) == 96
assert fwhms[0] == .007368
assert fwhms[-1] == .012480
# Wavenumber (cm-1)
wvnb = VIMS_VIS(sigma=True)
assert len(wvnb) == 96
assert wvnb[0] == approx(28527.41, abs=1e-2)
assert wvnb[-1] == approx(9560.41, abs=1e-2)
# Single band
assert VIMS_VIS(band=96) == 1.045980
assert VIMS_VIS(band=96, fwhm=True) == .012480
assert VIMS_VIS(band=96, sigma=True) == approx(9560.41, abs=1e-2)
assert VIMS_VIS(band=96, fwhm=True, sigma=True) == approx(114.07, abs=1e-2)
# Selected bands array
assert_array(VIMS_VIS(band=[1, 96]), [.350540, 1.045980])
assert_array(VIMS_VIS(band=[1, 96], fwhm=True), [.007368, .012480])
# Time offset
with raises(ValueError):
_ = VIMS_VIS(band=97, year=2002)
with raises(ValueError):
_ = VIMS_VIS(year=2011)
# Outside IR band range
assert np.isnan(VIMS_VIS(band=0))
assert np.isnan(VIMS_VIS(band=97, fwhm=True))
assert np.isnan(VIMS_VIS(band=353, sigma=True))
def test_bad_ir_pixels():
"""Test bad IR pixels list."""
csv = np.loadtxt(ROOT_DATA / 'wvlns_std.csv',
delimiter=',', usecols=(0, 1, 2, 3),
dtype=str, skiprows=98)
# Extract bad pixels
wvlns = np.transpose([
(int(channel), float(wvln) - .5 * float(fwhm), float(fwhm))
for channel, wvln, fwhm, comment in csv
if comment
])
# Group bad pixels
news = [True] + list((wvlns[0, 1:] - wvlns[0, :-1]) > 1.5)
bads = []
for i, new in enumerate(news):
if new:
bads.append(list(wvlns[1:, i]))
else:
bads[-1][1] += wvlns[2, i]
assert_array(BAD_IR_PIXELS, bads)
coll = bad_ir_pixels()
assert len(coll.get_paths()) == len(bads)
def test_moving_median():
"""Test moving median filter."""
a = [1, 2, 3, 4, 5]
assert_array(moving_median(a, width=1), a)
assert_array(moving_median(a, width=3),
[1.5, 2, 3, 4, 4.5])
assert_array(moving_median(a, width=5),
[2, 2.5, 3, 3.5, 4])
assert_array(moving_median(a, width=2),
[1.5, 2.5, 3.5, 4.5, 5])
assert_array(moving_median(a, width=4),
[2, 2.5, 3.5, 4, 4.5])
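For reference, the expected values above follow from a clipped, centered-window median; a stdlib re-implementation sketch (a hypothetical stand-in, not pyvims' actual code):

```python
import statistics

def moving_median_sketch(a, width=3):
    # Median over a window of `width` values around index i; for even
    # widths the window starts at i (shifted right), clipped at the edges.
    n = len(a)
    lo_off = (width - 1) // 2
    hi_off = width // 2 + 1
    return [
        statistics.median(a[max(0, i - lo_off):min(n, i + hi_off)])
        for i in range(n)
    ]

print(moving_median_sketch([1, 2, 3, 4, 5], width=3))  # [1.5, 2, 3, 4, 4.5]
```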
def test_is_hot_pixel():
"""Test hot pixel detector."""
# Create random signal
signal = np.random.default_rng().integers(20, size=100)
# Add hot pixels
signal[10::20] = 50
signal[10::30] = 150
hot_pix = is_hot_pixel(signal)
assert len(hot_pix) == 100
assert 3 <= sum(hot_pix) < 6
assert all(hot_pix[10::30])
hot_pix = is_hot_pixel(signal, tol=1.5, frac=90)
assert len(hot_pix) == 100
assert 6 <= sum(hot_pix) < 12
assert all(hot_pix[10::20])
def test_sample_line_axes():
    """Test location of the sample and line axes."""
# 2D case
assert sample_line_axes((64, 352)) == (0, )
assert sample_line_axes((256, 32)) == (1, )
# 3D case
assert sample_line_axes((32, 64, 352)) == (0, 1)
assert sample_line_axes((32, 352, 64)) == (0, 2)
assert sample_line_axes((352, 32, 64)) == (1, 2)
# 1D case
with raises(TypeError):
_ = sample_line_axes((352))
# No band axis
with raises(ValueError):
_ = sample_line_axes((64, 64))
def test_median_spectrum():
"""Test the median spectrum extraction."""
# 2D cases
spectra = [CHANNELS, CHANNELS]
spectrum = median_spectrum(spectra) # (2, 352)
assert spectrum.shape == (352,)
assert spectrum[0] == 1
assert spectrum[-1] == 352
spectrum = median_spectrum(np.transpose(spectra)) # (352, 2)
assert spectrum.shape == (352,)
assert spectrum[0] == 1
assert spectrum[-1] == 352
# 3D cases
spectra = [[CHANNELS, CHANNELS]]
spectrum = median_spectrum(spectra) # (1, 2, 352)
assert spectrum.shape == (352,)
assert spectrum[0] == 1
assert spectrum[-1] == 352
spectrum = median_spectrum(np.moveaxis(spectra, 1, 2)) # (1, 352, 2)
assert spectrum.shape == (352,)
assert spectrum[0] == 1
assert spectrum[-1] == 352
spectrum = median_spectrum(np.moveaxis(spectra, 2, 0)) # (352, 1, 2)
assert spectrum.shape == (352,)
assert spectrum[0] == 1
assert spectrum[-1] == 352
def test_ir_multiplexer():
"""Test spectrum split in each IR multiplexer."""
# Full spectrum
spec_1, spec_2 = ir_multiplexer(CHANNELS)
assert len(spec_1) == 128
assert len(spec_2) == 128
assert spec_1[0] == 97
assert spec_1[-1] == 351
assert spec_2[0] == 98
assert spec_2[-1] == 352
# IR spectrum only
spec_1, spec_2 = ir_multiplexer(CHANNELS[96:])
assert len(spec_1) == 128
assert len(spec_2) == 128
assert spec_1[0] == 97
assert spec_1[-1] == 351
assert spec_2[0] == 98
assert spec_2[-1] == 352
# 2D spectra
spectra = [CHANNELS, CHANNELS]
spec_1, spec_2 = ir_multiplexer(spectra)
assert len(spec_1) == 128
assert len(spec_2) == 128
assert spec_1[0] == 97
assert spec_1[-1] == 351
assert spec_2[0] == 98
assert spec_2[-1] == 352
# 3D spectra
spectra = [[CHANNELS, CHANNELS]]
spec_1, spec_2 = ir_multiplexer(spectra)
assert len(spec_1) == 128
assert len(spec_2) == 128
assert spec_1[0] == 97
assert spec_1[-1] == 351
assert spec_2[0] == 98
assert spec_2[-1] == 352
# VIS spectrum only
with raises(ValueError):
_ = ir_multiplexer(CHANNELS[:96])
# Dimension too high
with raises(ValueError):
_ = ir_multiplexer([[[CHANNELS]]])
def test_ir_hot_pixels():
"""Test IR hot pixel detector from spectra."""
qub = QUB('1787314297_1', root=DATA)
# 1D spectrum
hot_pixels = ir_hot_pixels(qub['BACKGROUND'][0])
assert len(hot_pixels) == 10
assert_array(hot_pixels,
[105, 119, 124, 168, 239, 240, 275, 306, 317, 331])
# 2D spectra
hot_pixels = ir_hot_pixels(qub['BACKGROUND'])
assert len(hot_pixels) == 10
assert_array(hot_pixels,
[105, 119, 124, 168, 239, 240, 275, 306, 317, 331])
| 27.450746 | 79 | 0.605154 | 1,357 | 9,196 | 3.957259 | 0.162122 | 0.036872 | 0.026071 | 0.024581 | 0.551955 | 0.458845 | 0.372626 | 0.285847 | 0.22067 | 0.206145 | 0 | 0.124024 | 0.247716 | 9,196 | 334 | 80 | 27.532934 | 0.652212 | 0.107329 | 0 | 0.29703 | 0 | 0 | 0.006163 | 0 | 0 | 0 | 0 | 0 | 0.584158 | 1 | 0.049505 | false | 0 | 0.034653 | 0 | 0.084158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a7433e8c895ee751d0a668a187a9eb4c45927efe | 6,223 | py | Python | mooc_access_number.py | mengshouer/mooc_access_number | 8de596ce34006f1f8c5d0404f5e40546fb438b2a | [
"MIT"
] | 6 | 2020-05-12T14:36:17.000Z | 2021-12-03T01:56:58.000Z | mooc_access_number.py | mengshouer/mooc_tools | 8de596ce34006f1f8c5d0404f5e40546fb438b2a | [
"MIT"
] | 2 | 2020-05-11T06:21:13.000Z | 2020-05-23T12:34:18.000Z | mooc_access_number.py | mengshouer/mooc_tools | 8de596ce34006f1f8c5d0404f5e40546fb438b2a | [
"MIT"
] | 1 | 2020-05-11T04:19:15.000Z | 2020-05-11T04:19:15.000Z | import requests,time,json,re,base64
requests.packages.urllib3.disable_warnings()
from io import BytesIO
from PIL import Image,ImageDraw,ImageChops
from lxml import etree
from urllib.parse import urlparse, parse_qs
username = "" # login username
password = "" # login password
s = requests.Session()
s.headers.update({'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36'})
def login():
global uid,username,password
if(username == "" or password == ""):
        username = input("Username: ")
        password = input("Password: ")
    # Old endpoint, no longer working:
    # url = "http://i.chaoxing.com/vlogin?passWord=" + str(password) + "&userName=" + str(username)
    url = f'https://passport2-api.chaoxing.com/v11/loginregister?uname={username}&code={password}'
res= s.get(url)
    if("验证通过" in str(res.text)):  # response contains "verification passed"
        print('Login success!')
for key, value in res.cookies.items():
if key=="_uid":
uid=value
return s
else:
print(username,password)
        print('Incorrect username or password, please try again.')
username = ""
password = ""
login()
'''
def captchalogin(username,password):
if(username == "" or password == ""):
        username = input("Username: ")
        password = input("Password: ")
    # The two keys below enable automatic captcha recognition;
    # ignore them if entering the captcha manually.
    # Create an app in Baidu Cloud's AI text-recognition (OCR) service to
    # obtain them; recognition accuracy is not guaranteed, but worth a try.
    APIKey = ""
    SecretKey = ""
    # The free tier has a daily API limit
if(APIKey != "" or SecretKey != ""):
getkeyurl = f'https://aip.baidubce.com/oauth/2.0/token'
data = {
"grant_type" : "client_credentials",
"client_id" : APIKey,
"client_secret" : SecretKey
}
getkey = requests.post(getkeyurl,data).text
access_token = json.loads(getkey)["access_token"]
numcode = ""
while 1:
t = int(round(time.time()*1000))
codeurl = f'http://passport2.chaoxing.com/num/code?'+ str(t)
img_numcode = s.get(codeurl).content
img = base64.b64encode(img_numcode)
orcurl = f'https://aip.baidubce.com/rest/2.0/ocr/v1/accurate_basic?access_token='+access_token
data = {"image":img}
headers = {'content-type': 'application/x-www-form-urlencoded'}
captcha = requests.post(orcurl,data=data,headers=headers).text
numcodelen = json.loads(captcha)["words_result_num"]
if numcodelen == 0:
                print("Captcha recognition failed, fetching a new captcha")
time.sleep(1)
else:
numcode = json.loads(captcha)["words_result"][0]["words"]
numcode = re.sub("\D","",numcode)
if len(numcode) < 4:
                    print("Captcha recognition failed, fetching a new captcha")
time.sleep(1)
else:
                    print("Recognition succeeded")
break
else:
t = int(round(time.time()*1000))
url = f'http://passport2.chaoxing.com/num/code?'+ str(t)
web = s.get(url,verify=False)
img = Image.open(BytesIO(web.content))
img.show()
        numcode = input('Captcha: ')
url = 'http://passport2.chaoxing.com/login?refer=http://i.mooc.chaoxing.com'
data = {'refer_0x001': 'http%3A%2F%2Fi.mooc.chaoxing.com',
'pid':'-1',
'pidName':'',
            'fid':'1467', # institution id, 1467: system A
'fidName':'',
'allowJoin':'0',
'isCheckNumCode':'1',
'f':'0',
'productid':'',
'uname':username,
'password':password,
'numcode':numcode,
'verCode':''
}
web = s.post(url,data=data,verify=False)
time.sleep(2)
    if('账号管理' in str(web.text)):  # response contains "account management"
print('Login success!')
return s
else:
        print('Incorrect username/password or captcha, please try again.')
username = ""
password = ""
captchalogin(username,password)
'''
def getuserdata():
web = s.get('http://mooc1-1.chaoxing.com/visit/courses')
h1 = etree.HTML(web.text)
name = h1.xpath('//h3[@class = "clearfix"]/a/text()')
    print("----------- Course names -----------")
print(name)
print("------------------------------")
global count
try:
count
except NameError:
count_exist = False
else:
count_exist = True
if(count_exist):
pass
else:
if(len(name) == 1):
count = 0
else:
#count = 0
            count = int(input("Select the course to visit by number (starting from 0): "))
geturl = h1.xpath('//div[@class = "Mcon1img httpsClass"]/a/@href')
i = 0
courseurl = []
for temp in range(0,len(geturl)):
if("course" in geturl[i]):
courseurl.append(geturl[i])
i += 1
url = 'https://mooc1-1.chaoxing.com' + courseurl[count]
url_query = urlparse(url).query
userdata = dict([(k, v[0]) for k, v in parse_qs(url_query).items()])
global cpi, enc, courseId, classId, encode
cpi = userdata["cpi"]
#enc = userdata["enc"]
courseId = userdata["courseid"]
classId = userdata["clazzid"]
web = s.get(url)
h2 = etree.HTML(web.text)
encodeurl = h2.xpath('//script[@type = "text/javascript"]/@src')
i=0
for temp in range(0,len(encodeurl)):
if("encode" in encodeurl[i]):
break
i += 1
url_query = urlparse(encodeurl[i]).query
userdata = dict([(k, v[0]) for k, v in parse_qs(url_query).items()])
encode = userdata["encode"]
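The query parsing in `getuserdata` boils down to `urlparse` plus `parse_qs`; a standalone sketch with a made-up course URL of the same shape:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URL in the same shape as the course links being parsed
url = ('https://mooc1-1.chaoxing.com/visit/stucoursemiddle'
       '?courseid=123&clazzid=456&cpi=789')
userdata = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
print(userdata)  # {'courseid': '123', 'clazzid': '456', 'cpi': '789'}
```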
def main():
getuserdata()
url = 'https://fystat-ans.chaoxing.com/log/setlog?personid='+cpi+'&courseId='+courseId+'&classId='+classId+'&encode=' +encode
i = 0
while 1:
web = s.get(url,verify=False)
time.sleep(5)
i+=1
print(i)
if(i == 500):
break
main()
if __name__ == "__main__":
    print("After logging in, just wait and the visit count grows slowly; the number shown is only a loop counter, not the actual visit count")
try:
#captchalogin(username,password)
login()
main()
except:
        print("Login error, trying to log in again")
#captchalogin(username,password)
login()
main()
| 32.752632 | 151 | 0.527398 | 686 | 6,223 | 4.733236 | 0.346939 | 0.033877 | 0.008623 | 0.022174 | 0.204496 | 0.152756 | 0.115799 | 0.094241 | 0.094241 | 0.072067 | 0 | 0.025941 | 0.31239 | 6,223 | 189 | 152 | 32.925926 | 0.732882 | 0.030853 | 0 | 0.270833 | 0 | 0.010417 | 0.21519 | 0.036676 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0.083333 | 0.052083 | 0 | 0.09375 | 0.09375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a747752e784483f13e0672fa7ef44261d743dd9f | 403 | py | Python | babybuddy/migrations/0017_promocode_max_usage_per_account.py | amcquistan/babyasst | 310a7948f06b71ae0d62593a3b5932abfd4eb444 | [
"BSD-2-Clause"
] | null | null | null | babybuddy/migrations/0017_promocode_max_usage_per_account.py | amcquistan/babyasst | 310a7948f06b71ae0d62593a3b5932abfd4eb444 | [
"BSD-2-Clause"
] | null | null | null | babybuddy/migrations/0017_promocode_max_usage_per_account.py | amcquistan/babyasst | 310a7948f06b71ae0d62593a3b5932abfd4eb444 | [
"BSD-2-Clause"
] | null | null | null | # Generated by Django 2.2.6 on 2019-11-27 20:28
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('babybuddy', '0016_auto_20191127_1424'),
]
operations = [
migrations.AddField(
model_name='promocode',
name='max_usage_per_account',
field=models.IntegerField(default=1),
),
]
| 21.210526 | 49 | 0.615385 | 44 | 403 | 5.477273 | 0.840909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109589 | 0.275434 | 403 | 18 | 50 | 22.388889 | 0.715753 | 0.111663 | 0 | 0 | 1 | 0 | 0.174157 | 0.123596 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a74cb2eb35421327d8faf002d2a0cd393a5579ab | 1,151 | py | Python | splitListToParts.py | pflun/learningAlgorithms | 3101e989488dfc8a56f1bf256a1c03a837fe7d97 | [
"MIT"
] | null | null | null | splitListToParts.py | pflun/learningAlgorithms | 3101e989488dfc8a56f1bf256a1c03a837fe7d97 | [
"MIT"
] | null | null | null | splitListToParts.py | pflun/learningAlgorithms | 3101e989488dfc8a56f1bf256a1c03a837fe7d97 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Definition for singly-linked list.
class ListNode(object):
def __init__(self, x):
self.val = x
self.next = None
class Solution(object):
def splitListToParts(self, root, k):
res = []
size = 0
traverse = root
while traverse:
size += 1
traverse = traverse.next
        # Build a queue, e.g. [4, 3, 3], giving how many steps to take each round
d, r = divmod(size, k)
queue = []
for _ in range(d):
queue.append(k)
for i in range(r):
queue[i] += 1
        # Pop this round's step count from the queue, collecting nodes into tmp
        # while walking, then append tmp to res
for q in queue:
tmp = []
for _ in range(q):
tmp.append(root)
root = root.next
res.append(tmp)
return res
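The `divmod` bookkeeping above fixes the part sizes before any nodes are walked; a list-only sketch of that step:

```python
def part_sizes(n, k):
    # First r parts get d+1 nodes, the rest get d, where d, r = divmod(n, k)
    d, r = divmod(n, k)
    return [d + 1 if i < r else d for i in range(k)]

print(part_sizes(10, 3))  # [4, 3, 3]
```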
head = ListNode(1)
p1 = ListNode(2)
p2 = ListNode(3)
p3 = ListNode(4)
p4 = ListNode(5)
p5 = ListNode(6)
p6 = ListNode(7)
p7 = ListNode(8)
p8 = ListNode(9)
p9 = ListNode(10)
head.next = p1
p1.next = p2
p2.next = p3
p3.next = p4
p4.next = p5
p5.next = p6
p6.next = p7
p7.next = p8
p8.next = p9
test = Solution()
print test.splitListToParts(head, 3) | 20.553571 | 47 | 0.536056 | 154 | 1,151 | 3.967532 | 0.415584 | 0.03437 | 0.032733 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059289 | 0.340573 | 1,151 | 56 | 48 | 20.553571 | 0.745718 | 0.107732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a74d8736deea9179712853219ede84e9608d42dd | 1,276 | py | Python | utils/utils.py | cheng052/H3DNet | 872dabb37d8c2ca3581cf4e242014e6464debe57 | [
"MIT"
] | 212 | 2020-06-11T01:03:36.000Z | 2022-03-17T17:29:21.000Z | utils/utils.py | cheng052/H3DNet | 872dabb37d8c2ca3581cf4e242014e6464debe57 | [
"MIT"
] | 25 | 2020-06-15T13:35:13.000Z | 2022-03-10T05:44:05.000Z | utils/utils.py | cheng052/H3DNet | 872dabb37d8c2ca3581cf4e242014e6464debe57 | [
"MIT"
] | 24 | 2020-06-11T01:17:24.000Z | 2022-03-30T13:34:45.000Z | import torch
import torch.nn as nn
import torch.nn.functional as F
def conv3x3x3(in_planes, out_planes, stride):
# 3x3x3 convolution with padding
return nn.Conv3d(
in_planes,
out_planes,
kernel_size=3,
stride=stride,
padding=1)
def upconv3x3x3(in_planes, out_planes, stride):
    # 3x3x3 transposed convolution; output_padding=1 requires stride > 1
    return nn.ConvTranspose3d(
        in_planes,
        out_planes,
        kernel_size=3,
        stride=stride,
        padding=1,
        output_padding=1)
def conv_block_3d(in_dim, out_dim, activation):
return nn.Sequential(
nn.Conv3d(in_dim, out_dim, kernel_size=3, stride=1, padding=1),
nn.BatchNorm3d(out_dim),
activation,)
def conv_trans_block_3d(in_dim, out_dim, activation, stride=2):
return nn.Sequential(
nn.ConvTranspose3d(in_dim, out_dim, kernel_size=3, stride=stride, padding=1, output_padding=1),
nn.BatchNorm3d(out_dim),
activation,)
def max_pooling_3d():
return nn.MaxPool3d(kernel_size=2, stride=2, padding=0)
def conv_block_2_3d(in_dim, out_dim, activation, stride=1):
return nn.Sequential(
conv_block_3d(in_dim, out_dim, activation),
nn.Conv3d(out_dim, out_dim, kernel_size=3, stride=stride, padding=1),
nn.BatchNorm3d(out_dim),)
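A quick stdlib sanity check on these blocks is the standard convolution output-size formula, `floor((size + 2p - k) / s) + 1`, which shows why `conv3x3x3` with stride 1 preserves spatial size while `max_pooling_3d` halves it:

```python
def out_size(size, kernel, stride, padding):
    # Output size per spatial dim for Conv3d / MaxPool3d
    return (size + 2 * padding - kernel) // stride + 1

print(out_size(32, kernel=3, stride=1, padding=1))  # 32 (conv3x3x3, stride 1)
print(out_size(32, kernel=2, stride=2, padding=0))  # 16 (max_pooling_3d)
```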
| 27.73913 | 103 | 0.670063 | 182 | 1,276 | 4.450549 | 0.208791 | 0.081481 | 0.077778 | 0.081481 | 0.609877 | 0.528395 | 0.504938 | 0.397531 | 0.098765 | 0.098765 | 0 | 0.043699 | 0.22884 | 1,276 | 45 | 104 | 28.355556 | 0.779472 | 0.023511 | 0 | 0.371429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.171429 | false | 0 | 0.085714 | 0.171429 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
a74fd79fe36c35a1329c69bf98a54c22cc8f9a55 | 12,349 | py | Python | ftc/lib/net/network.py | efulet/ann_text_classification | fba05a1789a19aa6d607ee36069dda419bb98e28 | [
"MIT"
] | null | null | null | ftc/lib/net/network.py | efulet/ann_text_classification | fba05a1789a19aa6d607ee36069dda419bb98e28 | [
"MIT"
] | null | null | null | ftc/lib/net/network.py | efulet/ann_text_classification | fba05a1789a19aa6d607ee36069dda419bb98e28 | [
"MIT"
] | null | null | null | """
@created_at 2015-01-18
@author Exequiel Fuentes Lettura <efulet@gmail.com>
"""
from pybrain.datasets import ClassificationDataSet
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import SoftmaxLayer
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.utilities import percentError
from pybrain.tools.validation import Validator
# Only needed for data generation and graphical output
import pylab as pl
import numpy as np
# Only needed for saving and loading trained network
import pickle
import os
from lib.util import SystemUtils
from network_exception import NetworkException
class Network:
    """Feed-forward classification network built with PyBrain."""
# Define the split proportion into 75% training and 25% test data sets
SPLIT_PROPORTION = 0.25
# Define 5 hidden units
HIDDEN_NEURONS = 5
# Define the momentum, which is the ratio by which the gradient of the last
# timestep is used
MOMENTUM = 0.1
# Weightdecay corresponds to the weightdecay rate, where 0 is no weight decay at all.
WEIGHTDECAY = 0.01
# Define epochs
EPOCHS = 100
def __init__(self, input, classes, options, logger=None):
"""
:param input: Dataset
:param classes: Class values
:param options: Optional vales
:param logger: logger object [opcional]
"""
if input == None or len(input) == 0:
raise NetworkException("Empty dataset")
self._input = input
if classes == None or len(classes) == 0:
raise NetworkException("Empty class vector")
self._classes = classes
self._options = options
if self._options.hidden_neurons:
self._hidden_neurons = self._options.hidden_neurons
else:
self._hidden_neurons = self.HIDDEN_NEURONS
if self._options.momentum:
self._momentum = self._options.momentum
else:
self._momentum = self.MOMENTUM
if self._options.weightdecay:
self._weightdecay = self._options.weightdecay
else:
self._weightdecay = self.WEIGHTDECAY
if self._options.epochs:
self._epochs = self._options.epochs
else:
self._epochs = self.EPOCHS
if self._options.verbose:
self._verbose = True
else:
self._verbose = False
self._logger = logger or SystemUtils().configure_log()
self._dataset = None
self._X_train = None
self._X_test = None
self._feed_forward_network = None
self._X_train_results = []
self._X_test_results = []
def fit(self):
"""
Fit network using PyBrain library
"""
# Create the dataset
# http://pybrain.org/docs/api/datasets/classificationdataset.html
self._dataset = ClassificationDataSet(len(self._input[0][0]), 1, \
nb_classes=len(self._classes), \
class_labels=self._classes)
# Add samples
# http://pybrain.org/docs/tutorial/fnn.html
for sample in self._input:
self._dataset.addSample(sample[0], [sample[1]])
# Print statistics
#print self._dataset.calculateStatistics()
# Randomly split the dataset into 75% training and 25% test data sets.
# Of course, we could also have created two different datasets to begin with.
self._X_test, self._X_train = self._dataset.splitWithProportion(self.SPLIT_PROPORTION)
# For neural network classification, it is highly advisable to encode
# classes with one output neuron per class. Note that this operation
# duplicates the original targets and stores them in an (integer) field
# named 'class'.
self._X_train._convertToOneOfMany()
self._X_test._convertToOneOfMany()
if self._verbose:
# Test our dataset by printing a little information about it.
self._logger.info("Number of training patterns: %4d" % len(self._X_train))
self._logger.info("Input dimensions: %4d" % self._X_train.indim)
self._logger.info("Output dimensions: %4d" % self._X_train.outdim)
#print "First sample (input, target, class):"
#print self._X_train['input'][0], self._X_train['target'][0], self._X_train['class'][0]
# Now build a feed-forward network with 5 hidden units. We use the shortcut
# buildNetwork() for this. The input and output layer size must match
# the dataset's input and target dimension. You could add additional
# hidden layers by inserting more numbers giving the desired layer sizes.
# The output layer uses a softmax function because we are doing classification.
# There are more options to explore here, e.g. try changing the hidden
# layer transfer function to linear instead of (the default) sigmoid.
self._feed_forward_network = buildNetwork(self._X_train.indim, \
self._hidden_neurons, \
self._X_train.outdim, \
outclass=SoftmaxLayer)
# Set up a trainer that basically takes the network and training dataset
# as input. We are using a BackpropTrainer for this.
trainer = BackpropTrainer(self._feed_forward_network, dataset=self._X_train, \
momentum=self._momentum, verbose=self._verbose, \
weightdecay=self._weightdecay)
# Start the training iterations
epoch_results = []
train_error_results = []
test_error_results = []
for i in xrange(self._epochs):
# Train the network for some epochs. Usually you would set something
# like 5 here, but for visualization purposes we do this one epoch
# at a time.
trainer.trainEpochs(1)
# http://pybrain.org/docs/api/supervised/trainers.html
X_train_result = percentError(trainer.testOnClassData(), self._X_train['class'])
X_test_result = percentError(trainer.testOnClassData(dataset=self._X_test), self._X_test['class'])
# Store the results
epoch_results.append(trainer.totalepochs)
train_error_results.append(X_train_result)
test_error_results.append(X_test_result)
if (trainer.totalepochs == 1 or trainer.totalepochs % 10 == 0 or \
trainer.totalepochs == self._epochs) and self._verbose:
self._logger.info("Epoch: %4d" % trainer.totalepochs +
" Train error: %5.2f%%" % X_train_result +
" Test error: %5.2f%%" % X_test_result)
# Now, plot the train and test data
pl.figure(1)
pl.ioff() # interactive graphics off
pl.clf() # clear the plot
pl.hold(True) # overplot on
pl.plot(epoch_results, train_error_results, 'b',
epoch_results, test_error_results, 'r')
pl.xlabel('Epoch number')
pl.ylabel('Error')
pl.legend(['Training result', 'Test result'])
pl.title('Training/Test results')
pl.ion() # interactive graphics on
pl.draw() # update the plot
if self._verbose:
# Print network coefficients (disabled; uncomment to inspect the layer output buffers)
#self._logger.info(self._feed_forward_network['in'].outputbuffer[self._feed_forward_network['in'].offset])
#self._logger.info(self._feed_forward_network['hidden0'].outputbuffer[self._feed_forward_network['hidden0'].offset])
#self._logger.info(self._feed_forward_network['out'].outputbuffer[self._feed_forward_network['out'].offset])
pass # placeholder so the if-block is not empty while the logging lines stay commented out
# Finally, keep showing the plot.
pl.ioff()
# Store the results
self._X_train_results = (epoch_results, train_error_results)
self._X_test_results = (epoch_results, test_error_results)
def predict(self, validation_dataset):
"""
Generate predictions
:param validation_dataset: Validation dataset
"""
y_pred = []
for i in xrange(len(validation_dataset)):
output = self._feed_forward_network.activate(validation_dataset[i][0])
class_index = max(xrange(len(output)), key=output.__getitem__)
y_pred.append(class_index)
return y_pred
def classification_performance(self, output, target):
"""
Returns the hit rate of the outputs compared to the targets.
http://pybrain.org/docs/api/tools.html#pybrain.tools.validation.Validator.classificationPerformance
"""
return Validator.classificationPerformance(np.array(output), np.array(target))
def explained_sum_squares(self, output, target):
"""
Returns the explained sum of squares (ESS).
http://pybrain.org/docs/api/tools.html#pybrain.tools.validation.Validator.ESS
"""
return Validator.ESS(np.array(output), np.array(target))
def mean_squared_error(self, output, target):
"""
Returns the mean squared error. The multidimensional arrays will get
flattened in order to compare them.
http://pybrain.org/docs/api/tools.html#pybrain.tools.validation.Validator.MSE
"""
return Validator.MSE(np.array(output), np.array(target))
def show_plot(self):
pl.show()
def show_error(self):
"""
Show training and test process versus epochs
"""
pl.figure(1)
pl.plot(self._X_train_results[0], self._X_train_results[1], 'b',
self._X_test_results[0], self._X_test_results[1], 'r')
pl.xlabel('Epoch number')
pl.ylabel('Error')
pl.legend(['Training result', 'Test result'])
pl.title('Training/Test results')
pl.draw()
def show_layer(self):
"""
Show network layers in text format
"""
for mod in self._feed_forward_network.modules:
print "Module:", mod.name
if mod.paramdim > 0:
print "--parameters:", mod.params
for conn in self._feed_forward_network.connections[mod]:
print "-connection to", conn.outmod.name
if conn.paramdim > 0:
print "- parameters", conn.params
if hasattr(self._feed_forward_network, "recurrentConns"):
print "Recurrent connections"
for conn in self._feed_forward_network.recurrentConns:
print "-", conn.inmod.name, " to", conn.outmod.name
if conn.paramdim > 0:
print "- parameters", conn.params
def save(self, file_path):
"""
Save network
"""
try:
file_net = None
file_net = open(file_path, 'wb') # binary mode: pickle data is not text
pickle.dump(self._feed_forward_network, file_net)
except Exception, err:
raise NetworkException(str(err))
finally:
if file_net is not None:
file_net.close()
def load(self, file_path):
"""
Load network from file
"""
try:
file_net = None
if not os.path.isfile(file_path):
raise NetworkException("No such file: " + file_path)
file_net = open(file_path, 'rb') # binary mode: pickle data is not text
self._feed_forward_network = pickle.load(file_net)
except Exception, err:
raise NetworkException(str(err))
finally:
if file_net is not None:
file_net.close()
# --- file: python2/probe_yd.py (repo: Nzen/run_ydl, license: WTFPL) ---
from sys import argv
from subprocess import call
try:
link = argv[ 1 ]
except IndexError:
link = raw_input( " - which url interests you? " )
try:
ydl_answ = call( "youtube-dl -F "+ link, shell = True )
if ydl_answ != 0: # compare by value; 'is not 0' only works by accident of small-int caching
print "-- failed "+ link + " code "+ str(ydl_answ)
except OSError as ose :
print "Execution failed:", ose
# --- file: src/Segmentation/segmentation.py (repo: odigous-labs/video-summarization, license: MIT) ---
import os
import cv2
from Segmentation import CombinedHist, get_histograms, HistQueue
import matplotlib.pyplot as plt
import numpy as np
listofFiles = os.listdir('generated_frames')
# change the size of queue accordingly
queue_of_hists = HistQueue.HistQueue(25)
x = []
y_r = []
y_g = []
y_b = []
def compare(current_hist, frame_no):
avg_histr = queue_of_hists.getAverageHist()
red_result = cv2.compareHist(current_hist.getRedHistr(), avg_histr.getRedHistr(), 0)
green_result = cv2.compareHist(current_hist.getGreenHistr(), avg_histr.getGreenHistr(), 0)
blue_result = cv2.compareHist(current_hist.getBlueHistr(), avg_histr.getBlueHistr(), 0)
x.append(frame_no) # use the parameter, not the global loop variable i
y_r.append(red_result)
y_g.append(green_result)
y_b.append(blue_result)
# print(red_result)
for i in range(0, 4000):
blue_histr, green_histr, red_histr = get_histograms.get_histograms('generated_frames/frame' + str(i) + ".jpg")
hist_of_image = CombinedHist.CombinedHist(blue_histr, green_histr, red_histr)
compare(hist_of_image, i)
queue_of_hists.insert_histr(hist_of_image)
print("frame" + str(i) + ".jpg")
fig = plt.figure(figsize=(18, 5))
y = np.add(np.add(y_r, y_g), y_b) / 3
value = np.percentile(y, 5)
median = np.median(y)
minimum = np.amin(y)
y_sorted = np.sort(y)
getting_index = y_sorted[8]
print("5th percentile " + str(value)) # value is np.percentile(y, 5), not a quartile
print("median" + str(median))
plt.plot(x, y, color='k')
plt.axhline(y=value, color='r', linestyle='-')
plt.xticks(np.arange(min(x), max(x) + 1, 100.0))
plt.show()
# --- file: dataflow/core/visualization.py (repo: alphamatic/amp, license: BSD-3-Clause) ---
"""
Helper functions to visualize a graph in a notebook or save the plot to file.
Import as:
import dataflow.core.visualization as dtfcorvisu
"""
import IPython
import networkx as networ
import pygraphviz
import dataflow.core.dag as dtfcordag
import helpers.hdbg as hdbg
import helpers.hio as hio
def draw(dag: dtfcordag.DAG) -> IPython.core.display.Image:
"""
Render DAG in a notebook.
"""
agraph = _extract_agraph_from_dag(dag)
image = IPython.display.Image(agraph.draw(format="png", prog="dot"))
return image
def draw_to_file(dag: dtfcordag.DAG, file_name: str = "graph.png") -> str:
"""
Save DAG rendering to a file.
"""
agraph = _extract_agraph_from_dag(dag)
# Save to file.
hio.create_enclosing_dir(file_name)
agraph.draw(file_name, prog="dot")
return file_name
def _extract_agraph_from_dag(dag: dtfcordag.DAG) -> pygraphviz.agraph.AGraph:
"""
Extract a pygraphviz `agraph` from a DAG.
"""
# Extract networkx DAG.
hdbg.dassert_isinstance(dag, dtfcordag.DAG)
graph = dag.dag
hdbg.dassert_isinstance(graph, networ.Graph)
# Convert the DAG into a pygraphviz graph.
agraph = networ.nx_agraph.to_agraph(graph)
return agraph
# --- file: user/migrations/0002_user_photo.py (repo: martinlehoux/erp-reloaded, license: MIT) ---
# Generated by Django 3.0.3 on 2020-03-01 00:58
from django.db import migrations, models
import user.models
class Migration(migrations.Migration):
dependencies = [
('user', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='user',
name='photo',
field=models.ImageField(null=True, upload_to=user.models.UploadTo('photo')),
),
]
# --- file: minesweeper.py (repo: MrAttoAttoAtto/Cool-Programming-Project, license: MIT) ---
#Minesweeper!
from tkinter import *
import random, time, math, threading, os.path, os
#Tkinter Class
class MinesweeperMain: #Initialising class
def __init__(self, xLength, yLength, percentOfBombs, caller=None, winChoice=True):
try: #kills the 'play again' host (if it exists)
caller.root.destroy()
except TclError:
pass
except AttributeError:
pass
self.gameStarted = False #makes the necessary variables
self.gameOver = False
self.vlc64bitInstalled = True
self.squaresRevealed = 0
try: #checks if the user has vlc (the module and its native library)
import vlc
except (ImportError, OSError): #ImportError if python-vlc is missing, OSError if the 64-bit VLC DLL is missing
self.vlc64bitInstalled = False
self.xLength = xLength #sets these variables to the object
self.yLength = yLength
self.percentOfBombs = percentOfBombs #sets the variable
self.numOfBombs = math.floor(self.percentOfBombs/100*self.xLength*self.yLength) #setting the number of bombs
self.bombsLeftToReveal = self.numOfBombs #sets a variable that will allow for enough labels to be created
if self.vlc64bitInstalled:
self.explosionSound = vlc.MediaPlayer(os.path.join('sounds', 'explosion-sound.mp3')) #loads the sounds
self.alertSound = vlc.MediaPlayer(os.path.join('sounds', 'alert-sound.mp3')) #alert sound
self.winChoice = winChoice
if self.winChoice: #chooses the sound to load
self.winSound = vlc.MediaPlayer(os.path.join('sounds', 'win-sound.mp3'))
else:
self.winSound = vlc.MediaPlayer(os.path.join('sounds', 'win-sound.wav'))
self.mapData = [] #creating the variable which holds the map data
self.revealedSquareIds = [] #list so that, when the loss occurs and all tiles are revealed, already revealed squares are not affected
self.bombLocationsReserved = [] #creates a list that will hold the locations where no more bombs can be placed
self.root = Tk()
self.root.title('Minesweeper') #sets up the tkinter window
self.listOfNumberImages = [] #sets up this list for holding the images of the numbers
for x in range(9):
self.listOfNumberImages.append(PhotoImage(file='numbers'+os.sep+str(x)+'.PNG')) #fills said list
self.transImage = PhotoImage(file=os.path.join('pictures', 'transparent.png'))
self.flagImage = PhotoImage(file=os.path.join('pictures', 'flag.png'))
self.bombImage = PhotoImage(file=os.path.join('pictures', 'mine2-11.png'))
self.explosionImage = PhotoImage(file=os.path.join('pictures', 'explosion.png')) #sets up the rest of the images
self.frame = Frame(self.root) #makes the frame widget
self.frame.pack()
self.bombLabelList = [] #list for storing the bomb pictures
for i in range(self.numOfBombs): #adds all the necessary bomb picture labels
self.bombLabelList.append(Label(self.frame, image=self.bombImage, width=62, height=51)) #adds the right amount of bomb pictures to the list
if self.xLength % 2 == 0:
timeXPos = int(self.xLength/2-1) #sets the positions so they are in the middle
bombCountXPos = timeXPos + 1
else:
timeXPos = int(self.xLength/2-1.5)
bombCountXPos = timeXPos + 2
self.timeSecs = 0 #sets these time variables
self.timeMins = 0
self.timeLabel = Label(self.frame, text='Time') #puts the time and bomb count onto the tkinter window
self.timeLabel.grid(row=0, column=timeXPos)
self.bombLabel = Label(self.frame, text='Bombs')
self.bombLabel.grid(row=0, column=bombCountXPos)
self.timeStrVar = StringVar()
self.timeStrVar.set('00:00')
self.timeClock = Label(self.frame, textvariable=self.timeStrVar)
self.timeClock.grid(row=1, column=timeXPos)
self.bombStrVar = StringVar()
self.bombStrVar.set(str(self.numOfBombs))
self.bombsLeftLabel = Label(self.frame, textvariable=self.bombStrVar)
self.bombsLeftLabel.grid(row=1, column=bombCountXPos)
self.buttonList = [] #lists to hold data for buttons/labels
self.buttonStringVarList = []
self.labelList = []
self.isFlaggedList = []
self.mapData = [] #creating the variable which holds the map data
for l in range(self.yLength): #fills the lists with their required starting data
self.buttonStringVarList.append([])
self.buttonList.append([])
self.labelList.append([])
self.isFlaggedList.append([])
self.mapData.append([])
for p in range(self.xLength):
self.buttonStringVarList[l].append(StringVar())
self.buttonList[l].append('')
self.labelList[l].append('')
self.isFlaggedList[l].append(False)
self.mapData[l].append('')
xPos = 0 #sets the working positions of the button creation
yPos = 0
for pos in range(0, self.xLength*self.yLength): #creates all of the buttons required
if xPos == self.xLength:
yPos += 1
xPos = 0
self.buttonList[yPos][xPos] = Button(self.frame, height=49, width=60, textvariable=self.buttonStringVarList[yPos][xPos], image=self.transImage)
self.buttonList[yPos][xPos].grid(row=yPos+2, column=xPos)
self.buttonList[yPos][xPos].bind('<Button-1>', lambda e, xPosLoc=xPos, yPosLoc=yPos: self.revealSquare(xPosLoc, yPosLoc)) #reveals the square if left-clicked
self.buttonList[yPos][xPos].bind('<Button-3>', lambda e, xPosLoc=xPos, yPosLoc=yPos: self.markSquare(xPosLoc, yPosLoc)) #marks the square if right-clicked
xPos += 1
self.timerThread = threading.Thread(target=self.timerCode, name="timer", daemon=True) #starts the timer as a daemon so it cannot outlive the window
self.timerThread.start()
self.root.mainloop() #mainloop!
def timerCode(self):
while True:
if self.gameOver: #if the game is over, exit this loop of the timer
return
self.timeSecs = int(self.timeSecs) #turns them back into ints (just in case they were converted into strings to add 0s to the front of them)
self.timeMins = int(self.timeMins)
start = time.time()
self.timeSecs += 1 #increments the seconds
if self.timeSecs == 60: #if it is a minute...
self.timeSecs = 0 #change the seconds to 0 and add 1 to the mins
self.timeMins += 1
if self.timeSecs < 10: #if either is lower than 10, make sure it has a 0 in front of the number
self.timeSecs = '0'+str(self.timeSecs)
if self.timeMins < 10:
self.timeMins = '0'+str(self.timeMins)
try:
self.timeStrVar.set(str(self.timeMins)+':'+str(self.timeSecs)) #sets the visual time
except RuntimeError: #if the window has been forcefully ended
return
time.sleep(max(0.0, start + 1 - time.time())) #waits out the rest of the second without busy-waiting
def generateBoard(self, xPos, yPos): #generating the board
self.bombLocationsReserved.append(xPos+yPos*self.xLength) #reserving the 3x3 area around the button placed
self.bombLocationsReserved.append(xPos+yPos*self.xLength-1)
self.bombLocationsReserved.append(xPos+yPos*self.xLength+1)
self.bombLocationsReserved.append(xPos+yPos*self.xLength-self.xLength)
self.bombLocationsReserved.append(xPos+yPos*self.xLength+self.xLength)
self.bombLocationsReserved.append(xPos+yPos*self.xLength-self.xLength-1)
self.bombLocationsReserved.append(xPos+yPos*self.xLength-self.xLength+1)
self.bombLocationsReserved.append(xPos+yPos*self.xLength+self.xLength-1)
self.bombLocationsReserved.append(xPos+yPos*self.xLength+self.xLength+1)
bombsLeftToPlace = self.numOfBombs #sets a helpful temporary variable
while bombsLeftToPlace > 0:
yPlace = 0
bombPlacement = random.randint(0, self.xLength*self.yLength-1) #random square id
placementValue = bombPlacement #another helpful variable
while bombPlacement >= self.xLength: #figures out the x and y from that
bombPlacement = bombPlacement - self.xLength
yPlace += 1
xPlace = bombPlacement
if placementValue not in self.bombLocationsReserved: #checks the place isn't reserved
self.mapData[yPlace][xPlace] = 'B' #updates the map
bombsLeftToPlace = bombsLeftToPlace - 1 #self-explanatory
self.bombLocationsReserved.append(placementValue) #reserves the place just taken
for squareXPos in range(0, self.xLength): #for EVERY square...
for squareYPos in range(0, self.yLength):
bombsSurrounding = 0 #sets this to 0
if self.mapData[squareYPos][squareXPos] == 'B': #if a bomb...
self.buttonStringVarList[squareYPos][squareXPos].set('B') #sets the strVar to B (debugging)
continue #goes back to the loop
if squareXPos > 0: #all of this next part finds how many bombs surround a square (and makes sure that it does not wrap around or throw an error)
if squareYPos > 0:
if self.mapData[squareYPos-1][squareXPos-1] == 'B':
bombsSurrounding += 1
if self.mapData[squareYPos][squareXPos-1] == 'B':
bombsSurrounding += 1
try:
if self.mapData[squareYPos+1][squareXPos-1] == 'B':
bombsSurrounding += 1
except IndexError:
pass
if squareYPos > 0:
if self.mapData[squareYPos-1][squareXPos] == 'B':
bombsSurrounding += 1
try:
if self.mapData[squareYPos+1][squareXPos] == 'B':
bombsSurrounding += 1
except IndexError:
pass
if squareYPos > 0:
try:
if self.mapData[squareYPos-1][squareXPos+1] == 'B':
bombsSurrounding += 1
except IndexError:
pass
try:
if self.mapData[squareYPos][squareXPos+1] == 'B':
bombsSurrounding += 1
except IndexError:
pass
try:
if self.mapData[squareYPos+1][squareXPos+1] == 'B':
bombsSurrounding += 1
except IndexError:
pass
self.mapData[squareYPos][squareXPos] = bombsSurrounding #updates the mapData with the value of the square
def revealSquare(self, xPos, yPos): #if a square is left-clicked...
if not self.gameStarted: #if the board hasn't been generated yet...
self.generateBoard(xPos, yPos) #generate it having been clicked at xPos, yPos
self.gameStarted = True #the board has been generated
if xPos+yPos*self.xLength in self.revealedSquareIds or (self.isFlaggedList[yPos][xPos] and not self.gameOver): #if the id has already been revealed or the square if flagged...
return #exit the function
self.squaresRevealed += 1 #increments the squares revealed
self.revealedSquareIds.append(xPos+yPos*self.xLength) #append the id to the revealed ids
self.buttonList[yPos][xPos].destroy() #destroy the button
if self.mapData[yPos][xPos] != 'B': #if it is NOT a bomb...
self.labelList[yPos][xPos] = Label(self.frame, width=62, height=51, image=self.listOfNumberImages[self.mapData[yPos][xPos]]) #create a label for it,
self.labelList[yPos][xPos].grid(row=yPos+2, column=xPos)
self.labelList[yPos][xPos].bind('<Button-2>', lambda e, xPos=xPos, yPos=yPos: self.chordSquare(xPos, yPos)) # and if middle-clicked, it will call chordSquare
if not self.gameOver: #if the game hasn't been failed...
self.root.update() #update the window (for nice looking 0 chain reactions)
time.sleep(0.02) #sleep a bit
if self.mapData[yPos][xPos] == 0 and not self.gameOver: #if it is a 0 and the game has not been lost...
if xPos > 0: #reveal all round it (nice recursiveness)
if yPos > 0:
try:
self.revealSquare(xPos-1, yPos-1)
except Exception:
pass
try:
self.revealSquare(xPos-1, yPos)
except Exception:
pass
try:
self.revealSquare(xPos-1, yPos+1)
except Exception:
pass
if yPos > 0:
try:
self.revealSquare(xPos, yPos-1)
except Exception:
pass
try:
self.revealSquare(xPos, yPos+1)
except Exception:
pass
if yPos > 0:
try:
self.revealSquare(xPos+1, yPos-1)
except Exception:
pass
try:
self.revealSquare(xPos+1, yPos)
except Exception:
pass
try:
self.revealSquare(xPos+1, yPos+1)
except Exception:
pass
if self.mapData[yPos][xPos] == 'B': #if it's a bomb...
self.bombLabelList[self.bombsLeftToReveal-1].grid(row=yPos+2, column=xPos) #put the pic in its place
self.bombsLeftToReveal = self.bombsLeftToReveal-1 #self-explanatory
if self.mapData[yPos][xPos] == 'B' and not self.gameOver: #if it is the bomb which made you lose...
self.gameOver = True #you failed
print('Working...')
self.explosionLabel = Label(self.frame, width=62, height=51, image=self.explosionImage) #it becomes an explosion image
self.explosionLabel.grid(row=yPos+2, column=xPos)# and is placed where it was
if self.vlc64bitInstalled: #if vlc is installed...
self.alertSound.play() #play alert
time.sleep(0.3)
self.explosionSound.play() #play the sound
self.root.update() #update to show the explosion
for xFail in range(self.xLength*self.yLength): #open all squares
yFail = 0
while xFail >= self.xLength:
xFail = xFail - self.xLength
yFail += 1
self.revealSquare(xFail, yFail)
self.root.update() #update after all this is done
print('Done!')
gameOver = GameOverBox(self, 'loss') #activate the game over dialog
if self.squaresRevealed == self.xLength*self.yLength-self.numOfBombs and not self.gameOver: #if you have revealed all of the non-bomb squares and not failed...
self.gameOver = True
print('Working...')
if self.vlc64bitInstalled: #if vlc is installed...
self.winSound.play() #play the win sound
bombLocIds = self.bombLocationsReserved[8:] #give the bomb ids
for bombId in bombLocIds: #iterate through them
yLocBomb = 0
while bombId >= self.xLength: #turn the ids into coordinates
bombId = bombId - self.xLength
yLocBomb += 1
xLocBomb = bombId
self.revealSquare(xLocBomb, yLocBomb) #reveal those coords
print('Done!')
gameOver = GameOverBox(self, 'win') #open the win dialog box
def markSquare(self, xPos, yPos): #flagging
if not self.isFlaggedList[yPos][xPos]: #if the square is NOT flagged...
self.buttonList[yPos][xPos].configure(image=self.flagImage, height=49, width=60) #flag it
self.bombStrVar.set(int(self.bombStrVar.get())-1) #increment the bombs left
self.isFlaggedList[yPos][xPos] = True
else:
self.buttonList[yPos][xPos].configure(image=self.transImage, height=49, width=60) #get rid of the flag
self.bombStrVar.set(int(self.bombStrVar.get())+1) #increment the bombs left
self.isFlaggedList[yPos][xPos] = False
def chordSquare(self, xPos, yPos): #chording
flagsSurrounding = 0
flagsNeeded = self.mapData[yPos][xPos]
if xPos > 0: #all of this next part finds how many flags surround a square (and makes sure that it does not wrap around or throw an error)
if yPos > 0:
try:
if self.isFlaggedList[yPos-1][xPos-1]:
flagsSurrounding += 1
except Exception:
pass
try:
if self.isFlaggedList[yPos][xPos-1]:
flagsSurrounding += 1
except Exception:
pass
try:
if self.isFlaggedList[yPos+1][xPos-1]:
flagsSurrounding += 1
except IndexError:
pass
except Exception:
pass
if yPos > 0:
try:
if self.isFlaggedList[yPos-1][xPos]:
flagsSurrounding += 1
except Exception:
pass
try:
if self.isFlaggedList[yPos+1][xPos]:
flagsSurrounding += 1
except IndexError:
pass
except Exception:
pass
if yPos > 0:
try:
if self.isFlaggedList[yPos-1][xPos+1]:
flagsSurrounding += 1
except IndexError:
pass
except Exception:
pass
try:
if self.isFlaggedList[yPos][xPos+1]:
flagsSurrounding += 1
except IndexError:
pass
except Exception:
pass
try:
if self.isFlaggedList[yPos+1][xPos+1]:
flagsSurrounding += 1
except IndexError:
pass
except Exception:
pass
if flagsSurrounding == flagsNeeded: #if there are enough, but not too many flags...
if xPos > 0: #reveal all around it
if yPos > 0:
try:
self.revealSquare(xPos-1, yPos-1)
except Exception:
pass
try:
self.revealSquare(xPos-1, yPos)
except Exception:
pass
try:
self.revealSquare(xPos-1, yPos+1)
except Exception:
pass
if yPos > 0:
try:
self.revealSquare(xPos, yPos-1)
except Exception:
pass
try:
self.revealSquare(xPos, yPos+1)
except Exception:
pass
if yPos > 0:
try:
self.revealSquare(xPos+1, yPos-1)
except Exception:
pass
try:
self.revealSquare(xPos+1, yPos)
except Exception:
pass
try:
self.revealSquare(xPos+1, yPos+1)
except Exception:
pass
class GameOverBox: #end of game dialog
def __init__(self, master, state):
if state == 'loss': #if you lost
self.title = 'Game Over' #set these variables
self.message = 'You Lost!'
self.color = 'red'
else: #if you won
self.title = 'Congratulations' #set these variables
self.message = 'You Won, Well Done! It took you '+master.timeStrVar.get()+'!'
self.color = 'green'
self.root = Tk()
self.root.title(self.title) #create the window
self.frame = Frame(self.root) #create the frame
self.frame.pack()
self.label = Label(self.frame, text=self.message, fg=self.color) #create the label
self.label.grid(row=0, column=1)
self.playAgainButton = Button(self.frame, text='Play Again', fg='green', command=lambda: self.restart(master)) #create the play again button
self.playAgainButton.grid(row=0, column=0)
self.exitButton = Button(self.frame, text='Exit and Close', fg='red', command=lambda: self.exit(master)) #create the exit button
self.exitButton.grid(row=0, column=2)
self.playOtherButton = Button(self.frame, text='Play another configuration', command=lambda: self.playOther(master)) #create the 'play another config' button
self.playOtherButton.grid(row=1, column=1)
self.root.mainloop() #Mainloop!
def restart(self, master): #the restart function
try:
master.root.destroy() #kill the MinesweeperMain window
except Exception:
pass
openMain(self, master=master) #re-call it
def exit(self, master): #exit func
try:
master.root.destroy() #kill the MinesweeperMain window
except Exception:
pass
self.root.destroy() #kill the end of game dialog
def playOther(self, master):
global start
try:
master.root.destroy() #kill the MinesweeperMain window
except Exception:
pass
start = StartBox(self) #start the Start Box
class StartBox:
def __init__(self, caller=None):
try:
caller.root.destroy() #try killing the play again box (if it exists)
except Exception:
pass
self.choice = True #choice defaults to true
self.root = Tk() #creates the window
self.root.title('Start Minesweeper')
self.frame = Frame(self.root) #creates the frame
self.frame.pack()
self.xLabel = Label(self.frame, text='Enter the width of the minesweeper board')
self.xLabel.grid(row=0, column=0) #creates the xLabel
self.xLengthStrVar = StringVar()
self.xInput = Entry(self.frame, width=5, textvariable=self.xLengthStrVar)
self.xInput.grid(row=1, column=0) #creates the x entry box
self.yLabel = Label(self.frame, text='Enter the height of the minesweeper board')
self.yLabel.grid(row=3, column=0) #etc
self.yLengthStrVar = StringVar()
self.yInput = Entry(self.frame, width=5, textvariable=self.yLengthStrVar)
self.yInput.grid(row=4, column=0) #etc
self.bombPercentLabel = Label(self.frame, text='Enter the percentage of the squares you would like to be bombs')
self.bombPercentLabel.grid(row=6, column=0) #etc
self.bombPercentStrVar = StringVar()
self.bombPercentInput = Entry(self.frame, width=5, textvariable=self.bombPercentStrVar)
self.bombPercentInput.grid(row=7, column=0) #etc
self.winChoiceLabel = Label(self.frame, text='Select either the orchestral or vocal win event')
self.winChoiceLabel.grid(row=9, column=0) #creates the win choice label
self.vocalWinButton = Button(self.frame, text='Change to vocal', command=lambda: self.setWin(True))
self.orchestralWinButton = Button(self.frame, text='Change to orchestral', command=lambda: self.setWin(False))
self.orchestralWinButton.grid(row=10, column=0) #creates both win choice buttons and activates the orchestral one
self.winChoiceChoiceStrVar = StringVar()
self.winChoiceChoiceStrVar.set('The vocal win event is selected')
        self.winChoiceChoiceLabel = Label(self.frame, textvariable=self.winChoiceChoiceStrVar)
        self.winChoiceChoiceLabel.grid(row=10, column=1) # label bound to the StringVar that tells you which choice is selected
        self.submitButton = Button(self.frame, text='Submit', fg='green', command=self.completeRequest)
        self.submitButton.grid(row=12, column=0) # submit button
        self.cancelButton = Button(self.frame, text='Cancel and Exit', fg='red', command=self.root.destroy)
        self.cancelButton.grid(row=12, column=1) # exit button
        self.root.mainloop() # Mainloop!
    def setWin(self, choice):
        self.choice = choice # sets the variable
        if self.choice:
            self.vocalWinButton.grid_forget() # updates which buttons you can press and the StringVar
            self.orchestralWinButton.grid(row=10, column=0)
            self.winChoiceChoiceStrVar.set('The vocal win event is selected')
        else:
            self.orchestralWinButton.grid_forget() # see above
            self.vocalWinButton.grid(row=10, column=0)
            self.winChoiceChoiceStrVar.set('The orchestral win event is selected')
    def completeRequest(self): # completes the request
        try:
            self.xLen = int(self.xLengthStrVar.get()) # tries to make them ints/floats
            self.yLen = int(self.yLengthStrVar.get())
            self.bombPercent = float(self.bombPercentStrVar.get())
            if not (self.xLen*self.yLen)-(self.xLen*self.yLen*self.bombPercent/100) >= 9: # if 9 squares cannot be reserved for the first click, don't allow them to play
                error = ErrorBox('The percentage of bombs is too high, the game will not generate')
                return
            openMain(self, self.xLen, self.yLen, self.bombPercent, self.choice) # opens the opener
        except ValueError:
            error = ErrorBox('One or more values you have entered is invalid (all have to be numbers but the percentage does not have to be an integer)') # these have to be numbers!
class ErrorBox:
    def __init__(self, error):
        self.error = error # sets the error
        self.root = Tk() # creates the window
        self.root.title('Error')
        self.frame = Frame(self.root) # creates the frame
        self.frame.pack()
        self.label = Label(self.frame, text=error, fg='red') # shows the error
        self.label.grid(row=0, column=0)
        self.button = Button(self.frame, text='Ok', command=self.root.destroy) # button to kill the error box
        self.button.grid(row=1, column=0)
        self.root.mainloop() # Mainloop!
def openMain(caller, xLength=None, yLength=None, percentOfBombs=None, winChoice=None, master=None): # restarts it outside of the class
    global minesweeper
    if master is not None: # if it has been called from the play-again box, use the old configs
        minesweeper = MinesweeperMain(master.xLength, master.yLength, master.percentOfBombs, caller, master.winChoice)
    else: # otherwise use the new configs
        minesweeper = MinesweeperMain(xLength, yLength, percentOfBombs, caller, winChoice)
if __name__ == '__main__':
    start = StartBox()
    minesweeper = None
| 40.429421 | 183 | 0.592745 | 3,143 | 27,209 | 5.123131 | 0.161311 | 0.024593 | 0.033039 | 0.022854 | 0.366476 | 0.315861 | 0.2686 | 0.242889 | 0.234443 | 0.197429 | 0 | 0.014127 | 0.313168 | 27,209 | 672 | 184 | 40.489583 | 0.847496 | 0.177184 | 0 | 0.4334 | 0 | 0.001988 | 0.04482 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029821 | false | 0.081511 | 0.005964 | 0 | 0.05169 | 0.007952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a7625f42a7dd6cbf1419217f4da8ae9f6f00c5f6 | 5,431 | py | Python | cannlytics/utils/scraper.py | mindthegrow/cannlytics | c266bc1169bef75214985901cd3165f415ad9ba7 | [
"MIT"
] | 7 | 2021-05-31T15:30:22.000Z | 2022-02-05T14:12:31.000Z | cannlytics/utils/scraper.py | mindthegrow/cannlytics | c266bc1169bef75214985901cd3165f415ad9ba7 | [
"MIT"
] | 17 | 2021-06-09T01:04:27.000Z | 2022-03-18T14:48:12.000Z | cannlytics/utils/scraper.py | mindthegrow/cannlytics | c266bc1169bef75214985901cd3165f415ad9ba7 | [
"MIT"
] | 5 | 2021-06-07T13:52:33.000Z | 2021-08-04T00:09:39.000Z | # -*- coding: utf-8 -*-
"""
Scrape Website Data | Cannlytics
Copyright © 2021 Cannlytics
Author: Keegan Skeate <keegan@cannlytics.com>
Created: 1/10/2021
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Resources:
https://stackoverflow.com/questions/54416896/how-to-scrape-email-and-phone-numbers-from-a-list-of-websites
https://hackersandslackers.com/scraping-urls-with-beautifulsoup/
TODO:
Improve with requests-html - https://github.com/psf/requests-html
- Get #about
- Get absolute URLs
- Search for text (prices/analyses)
r.html.search('Python is a {} language')[0]
"""
import re
import requests
from bs4 import BeautifulSoup
def get_page_metadata(url):
"""Scrape target URL for metadata."""
headers = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET",
"Access-Control-Allow-Headers": "Content-Type",
"Access-Control-Max-Age": "3600",
"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0",
}
# Handle URLs without http beginning
if not url.startswith("http"):
url = "http://" + url
response = requests.get(url, headers=headers)
html = BeautifulSoup(response.content, "html.parser")
metadata = {
"description": get_description(html),
"image_url": get_image(html), # FIXME: Append URL if relative path.
"favicon": get_favicon(html, url),
"brand_color": get_theme_color(html),
}
return response, html, metadata
def get_description(html):
"""Scrape page description."""
description = None
if html.find("meta", property="description"):
description = html.find("meta", property="description").get("content")
elif html.find("meta", property="og:description"):
description = html.find("meta", property="og:description").get("content")
elif html.find("meta", property="twitter:description"):
description = html.find("meta", property="twitter:description").get("content")
elif html.find("p"):
description = html.find("p").contents
if isinstance(description, list):
try:
description = description[0]
except IndexError:
pass
return description
def get_image(html):
"""Scrape share image."""
image = None
if html.find("meta", property="image"):
image = html.find("meta", property="image").get("content")
elif html.find("meta", property="og:image"):
image = html.find("meta", property="og:image").get("content")
elif html.find("meta", property="twitter:image"):
image = html.find("meta", property="twitter:image").get("content")
elif html.find("img", src=True):
image = html.find_all("img")[0].get("src")
return image
def get_favicon(html, url):
"""Scrape favicon."""
if html.find("link", attrs={"rel": "icon"}):
favicon = html.find("link", attrs={"rel": "icon"}).get("href")
elif html.find("link", attrs={"rel": "shortcut icon"}):
favicon = html.find("link", attrs={"rel": "shortcut icon"}).get("href")
else:
favicon = f'{url.rstrip("/")}/favicon.ico'
return favicon
def get_theme_color(html):
"""Scrape brand color."""
if html.find("meta", property="theme-color"):
color = html.find("meta", property="theme-color").get("content")
return color
return None
def get_phone(html, response):
"""Scrape phone number."""
try:
phone = html.select("a[href*=callto]")[0].text
return phone
    except IndexError:
pass
try:
phone = re.findall(
r"\(?\b[2-9][0-9]{2}\)?[-][2-9][0-9]{2}[-][0-9]{4}\b", response.text
)[0]
return phone
    except IndexError:
pass
try:
phone = re.findall(
r"\(?\b[2-9][0-9]{2}\)?[-. ]?[2-9][0-9]{2}[-. ]?[0-9]{4}\b", response.text
)[-1]
return phone
    except IndexError:
print("Phone number not found")
phone = ""
return phone
def get_email(html, response):
"""Get email."""
try:
email = re.findall(
r"([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)", response.text
)[-1]
return email
    except IndexError:
pass
try:
email = html.select("a[href*=mailto]")[-1].text
    except IndexError:
print("Email not found")
email = ""
return email
def find_lab_address():
"""
TODO: Tries to find a lab's address from their website, then Google Maps.
"""
street, city, state, zipcode = None, None, None, None
return street, city, state, zipcode
def find_lab_linkedin():
"""
TODO: Tries to find a lab's LinkedIn URL. (Try to find LinkedIn on homepage?)
"""
return ""
def find_lab_url():
"""
TODO: Find a lab's website URL. (Google search for name?)
"""
return ""
def clean_string_columns(df):
"""Clean string columns in a dataframe."""
for column in df.columns:
try:
df[column] = df[column].str.title()
df[column] = df[column].str.replace("Llc", "LLC")
df[column] = df[column].str.replace("L.L.C.", "LLC")
df[column] = df[column].str.strip()
except AttributeError:
pass
return df
| 30.511236 | 110 | 0.598048 | 699 | 5,431 | 4.606581 | 0.298999 | 0.054658 | 0.052174 | 0.086957 | 0.312422 | 0.305901 | 0.148447 | 0.104348 | 0.040994 | 0.040994 | 0 | 0.021415 | 0.234763 | 5,431 | 177 | 111 | 30.683616 | 0.753128 | 0.224268 | 0 | 0.252174 | 0 | 0.034783 | 0.21491 | 0.062409 | 0 | 0 | 0 | 0.028249 | 0 | 1 | 0.095652 | false | 0.043478 | 0.026087 | 0 | 0.26087 | 0.017391 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a7646b2e354d22868d6a6f4cc986b8c2069e186b | 709 | py | Python | src/ch5-viewmodels/web/services/AccountPageService.py | saryeHaddadi/Python.Course.WebAppFastAPI | ddc1f1850473c227e715c8ecd2afd741e53c4680 | [
"MIT"
] | null | null | null | src/ch5-viewmodels/web/services/AccountPageService.py | saryeHaddadi/Python.Course.WebAppFastAPI | ddc1f1850473c227e715c8ecd2afd741e53c4680 | [
"MIT"
] | null | null | null | src/ch5-viewmodels/web/services/AccountPageService.py | saryeHaddadi/Python.Course.WebAppFastAPI | ddc1f1850473c227e715c8ecd2afd741e53c4680 | [
"MIT"
] | null | null | null | import fastapi
from starlette.requests import Request
from web.viewmodels.account.AccountViewModel import AccountViewModel
from web.viewmodels.account.LoginViewModel import LoginViewModel
from web.viewmodels.account.RegisterViewModel import RegisterViewModel
router = fastapi.APIRouter()
@router.get('/account')
def index(request: Request):
vm = AccountViewModel(request)
return vm.to_dict()
@router.get('/account/register')
def register(request: Request):
vm = RegisterViewModel(request)
return vm.to_dict()
@router.get('/account/login')
def login(request: Request):
vm = LoginViewModel(request)
return vm.to_dict()
@router.get('/account/logout')
def logout():
return {}
| 22.870968 | 70 | 0.760226 | 82 | 709 | 6.536585 | 0.292683 | 0.067164 | 0.119403 | 0.134328 | 0.20709 | 0.20709 | 0.20709 | 0.20709 | 0 | 0 | 0 | 0 | 0.126939 | 709 | 30 | 71 | 23.633333 | 0.865913 | 0 | 0 | 0.142857 | 0 | 0 | 0.076164 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0 | 0.238095 | 0.047619 | 0.619048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a7659e9cd38acecd1d387852d0d503d7207e98a9 | 29,031 | py | Python | src/opserver/uveserver.py | madkiss/contrail-controller | 17f622dfe99f8ab4163436399e80f95dd564814c | [
"Apache-2.0"
] | null | null | null | src/opserver/uveserver.py | madkiss/contrail-controller | 17f622dfe99f8ab4163436399e80f95dd564814c | [
"Apache-2.0"
] | null | null | null | src/opserver/uveserver.py | madkiss/contrail-controller | 17f622dfe99f8ab4163436399e80f95dd564814c | [
"Apache-2.0"
] | null | null | null | #
# Copyright (c) 2013 Juniper Networks, Inc. All rights reserved.
#
#
# UVEServer
#
# Operational State Server for UVEs
#
import gevent
import json
import copy
import xmltodict
import redis
import datetime
import sys
from opserver_util import OpServerUtils
import re
from gevent.coros import BoundedSemaphore
from pysandesh.util import UTCTimestampUsec
from pysandesh.connection_info import ConnectionState
# ConnectionType and ConnectionStatus are referenced in run() below
from pysandesh.gen_py.process_info.ttypes import ConnectionType, ConnectionStatus
from sandesh.viz.constants import UVE_MAP
class UVEServer(object):
def __init__(self, redis_uve_server, logger, redis_password=None):
self._local_redis_uve = redis_uve_server
self._redis_uve_list = []
self._logger = logger
self._sem = BoundedSemaphore(1)
self._redis = None
self._redis_password = redis_password
if self._local_redis_uve:
self._redis = redis.StrictRedis(self._local_redis_uve[0],
self._local_redis_uve[1],
password=self._redis_password,
db=1)
self._uve_reverse_map = {}
for h,m in UVE_MAP.iteritems():
self._uve_reverse_map[m] = h
#end __init__
def update_redis_uve_list(self, redis_uve_list):
self._redis_uve_list = redis_uve_list
# end update_redis_uve_list
def fill_redis_uve_info(self, redis_uve_info):
redis_uve_info.ip = self._local_redis_uve[0]
redis_uve_info.port = self._local_redis_uve[1]
try:
self._redis.ping()
except redis.exceptions.ConnectionError:
redis_uve_info.status = 'DisConnected'
else:
redis_uve_info.status = 'Connected'
#end fill_redis_uve_info
@staticmethod
def merge_previous(state, key, typ, attr, prevdict):
print "%s New val is %s" % (attr, prevdict)
nstate = copy.deepcopy(state)
if UVEServer._is_agg_item(prevdict):
count = int(state[key][typ][attr]['previous']['#text'])
count += int(prevdict['#text'])
nstate[key][typ][attr]['previous']['#text'] = str(count)
if UVEServer._is_agg_list(prevdict):
sname = ParallelAggregator.get_list_name(
state[key][typ][attr]['previous'])
count = len(prevdict['list'][sname]) + \
len(state[key][typ][attr]['previous']['list'][sname])
nstate[key][typ][attr]['previous']['list'][sname].extend(
prevdict['list'][sname])
nstate[key][typ][attr]['previous']['list']['@size'] = \
str(count)
tstate = {}
tstate[typ] = {}
tstate[typ][attr] = copy.deepcopy(
nstate[key][typ][attr]['previous'])
nstate[key][typ][attr]['previous'] =\
ParallelAggregator.consolidate_list(tstate, typ, attr)
print "%s Merged val is %s"\
% (attr, nstate[key][typ][attr]['previous'])
return nstate
def run(self):
lck = False
while True:
try:
k, value = self._redis.brpop("DELETED")
self._sem.acquire()
lck = True
self._logger.debug("%s del received for " % value)
# value is of the format:
# DEL:<key>:<src>:<node-type>:<module>:<instance-id>:<message-type>:<seqno>
info = value.rsplit(":", 6)
key = info[0].split(":", 1)[1]
typ = info[5]
existing = self._redis.hgetall("PREVIOUS:" + key + ":" + typ)
tstate = {}
tstate[key] = {}
tstate[key][typ] = {}
state = UVEServer.convert_previous(existing, tstate, key, typ)
for attr, hval in self._redis.hgetall(value).iteritems():
snhdict = xmltodict.parse(hval)
if UVEServer._is_agg_list(snhdict[attr]):
if snhdict[attr]['list']['@size'] == "0":
continue
if snhdict[attr]['list']['@size'] == "1":
sname = ParallelAggregator.get_list_name(
snhdict[attr])
if not isinstance(
snhdict[attr]['list'][sname], list):
snhdict[attr]['list'][sname] = \
[snhdict[attr]['list'][sname]]
if (attr not in state[key][typ]):
# There is no existing entry for the UVE
vstr = json.dumps(snhdict[attr])
else:
# There is an existing entry
# Merge the new entry with the existing one
state = UVEServer.merge_previous(
state, key, typ, attr, snhdict[attr])
vstr = json.dumps(state[key][typ][attr]['previous'])
# Store the merged result back in the database
self._redis.sadd("PUVES:" + typ, key)
self._redis.sadd("PTYPES:" + key, typ)
self._redis.hset("PREVIOUS:" + key + ":" + typ, attr, vstr)
self._redis.delete(value)
except redis.exceptions.ResponseError:
                # send redis connection down msg; could be because of authentication
ConnectionState.update(conn_type = ConnectionType.REDIS,
name = 'UVE', status = ConnectionStatus.DOWN,
message = 'UVE result : Connection Error',
server_addrs = ['%s:%d' % (self._local_redis_uve[0],
self._local_redis_uve[1])])
sys.exit()
except redis.exceptions.ConnectionError:
if lck:
self._sem.release()
lck = False
gevent.sleep(5)
else:
if lck:
self._sem.release()
lck = False
self._logger.debug("Deleted %s" % value)
self._logger.debug("UVE %s Type %s" % (key, typ))
@staticmethod
def _is_agg_item(attr):
if attr['@type'] in ['i8', 'i16', 'i32', 'i64', 'byte',
'u8', 'u16', 'u32', 'u64']:
if '@aggtype' in attr:
if attr['@aggtype'] == "counter":
return True
return False
@staticmethod
def _is_agg_list(attr):
if attr['@type'] in ['list']:
if '@aggtype' in attr:
if attr['@aggtype'] == "append":
return True
return False
@staticmethod
def convert_previous(existing, state, key, typ, afilter=None):
# Take the existing delete record, and load it into the state dict
for attr, hval in existing.iteritems():
hdict = json.loads(hval)
if afilter is not None and len(afilter):
if attr not in afilter:
continue
# When recording deleted attributes, only record those
# for which delete-time aggregation is needed
if UVEServer._is_agg_item(hdict):
if (typ not in state[key]):
state[key][typ] = {}
if (attr not in state[key][typ]):
state[key][typ][attr] = {}
state[key][typ][attr]["previous"] = hdict
# For lists that require delete-time aggregation, we need
# to normailize lists of size 1, and ignore those of size 0
if UVEServer._is_agg_list(hdict):
if hdict['list']['@size'] != "0":
if (typ not in state[key]):
state[key][typ] = {}
if (attr not in state[key][typ]):
state[key][typ][attr] = {}
state[key][typ][attr]["previous"] = hdict
if hdict['list']['@size'] == "1":
sname = ParallelAggregator.get_list_name(hdict)
if not isinstance(hdict['list'][sname], list):
hdict['list'][sname] = [hdict['list'][sname]]
return state
def get_part(self, part):
uves = {}
for redis_uve in self._redis_uve_list:
gen_uves = {}
            redish = redis.StrictRedis(host=redis_uve[0],
                                       port=redis_uve[1],
                                       password=self._redis_password, db=1)
for elems in redish.smembers("PART2KEY:" + str(part)):
info = elems.split(":", 5)
gen = info[0] + ":" + info[1] + ":" + info[2] + ":" + info[3]
key = info[5]
if not gen_uves.has_key(gen):
gen_uves[gen] = {}
gen_uves[gen][key] = 0
uves[redis_uve[0] + ":" + str(redis_uve[1])] = gen_uves
return uves
def get_uve(self, key, flat, filters=None, multi=False, is_alarm=False, base_url=None):
filters = filters or {}
sfilter = filters.get('sfilt')
mfilter = filters.get('mfilt')
tfilter = filters.get('cfilt')
ackfilter = filters.get('ackfilt')
state = {}
state[key] = {}
statdict = {}
for redis_uve in self._redis_uve_list:
redish = redis.StrictRedis(host=redis_uve[0],
port=redis_uve[1],
password=self._redis_password, db=1)
try:
qmap = {}
origins = redish.smembers("ALARM_ORIGINS:" + key)
if not is_alarm:
origins = origins.union(redish.smembers("ORIGINS:" + key))
for origs in origins:
info = origs.rsplit(":", 1)
sm = info[0].split(":", 1)
source = sm[0]
if sfilter is not None:
if sfilter != source:
continue
mdule = sm[1]
if mfilter is not None:
if mfilter != mdule:
continue
dsource = source + ":" + mdule
typ = info[1]
if tfilter is not None:
if typ not in tfilter:
continue
odict = redish.hgetall("VALUES:" + key + ":" + origs)
afilter_list = set()
if tfilter is not None:
afilter_list = tfilter[typ]
for attr, value in odict.iteritems():
if len(afilter_list):
if attr not in afilter_list:
continue
if typ not in state[key]:
state[key][typ] = {}
if value[0] == '<':
snhdict = xmltodict.parse(value)
if snhdict[attr]['@type'] == 'list':
sname = ParallelAggregator.get_list_name(
snhdict[attr])
if snhdict[attr]['list']['@size'] == '0':
continue
elif snhdict[attr]['list']['@size'] == '1':
if not isinstance(
snhdict[attr]['list'][sname], list):
snhdict[attr]['list'][sname] = [
snhdict[attr]['list'][sname]]
if typ == 'UVEAlarms' and attr == 'alarms' and \
ackfilter is not None:
alarms = []
for alarm in snhdict[attr]['list'][sname]:
ack_attr = alarm.get('ack')
if ack_attr:
ack = ack_attr['#text']
else:
ack = 'false'
if ack == ackfilter:
alarms.append(alarm)
if not len(alarms):
continue
snhdict[attr]['list'][sname] = alarms
snhdict[attr]['list']['@size'] = \
str(len(alarms))
else:
continue
# print "Attr %s Value %s" % (attr, snhdict)
if attr not in state[key][typ]:
state[key][typ][attr] = {}
if dsource in state[key][typ][attr]:
print "Found Dup %s:%s:%s:%s:%s = %s" % \
(key, typ, attr, source, mdule, state[
key][typ][attr][dsource])
state[key][typ][attr][dsource] = snhdict[attr]
if sfilter is None and mfilter is None:
for ptyp in redish.smembers("PTYPES:" + key):
afilter = None
if tfilter is not None:
if ptyp not in tfilter:
continue
afilter = tfilter[ptyp]
existing = redish.hgetall("PREVIOUS:" + key + ":" + ptyp)
nstate = UVEServer.convert_previous(
existing, state, key, ptyp, afilter)
state = copy.deepcopy(nstate)
pa = ParallelAggregator(state, self._uve_reverse_map)
rsp = pa.aggregate(key, flat, base_url)
except redis.exceptions.ConnectionError:
self._logger.error("Failed to connect to redis-uve: %s:%d" \
% (redis_uve[0], redis_uve[1]))
except Exception as e:
self._logger.error("Exception: %s" % e)
return {}
else:
self._logger.debug("Computed %s" % key)
for k, v in statdict.iteritems():
if k in rsp:
mp = dict(v.items() + rsp[k].items())
statdict[k] = mp
return dict(rsp.items() + statdict.items())
# end get_uve
def get_uve_regex(self, key):
regex = ''
if key[0] != '*':
regex += '^'
regex += key.replace('*', '.*?')
if key[-1] != '*':
regex += '$'
return re.compile(regex)
# end get_uve_regex
def multi_uve_get(self, table, flat, filters=None, is_alarm=False, base_url=None):
# get_uve_list cannot handle attribute names very efficiently,
# so we don't pass them here
uve_list = self.get_uve_list(table, filters, False, is_alarm)
for uve_name in uve_list:
uve_val = self.get_uve(
table + ':' + uve_name, flat, filters, True, is_alarm, base_url)
if uve_val == {}:
continue
else:
uve = {'name': uve_name, 'value': uve_val}
yield uve
# end multi_uve_get
def get_uve_list(self, table, filters=None, parse_afilter=False,
is_alarm=False):
filters = filters or {}
uve_list = set()
kfilter = filters.get('kfilt')
if kfilter is not None:
patterns = set()
for filt in kfilter:
patterns.add(self.get_uve_regex(filt))
for redis_uve in self._redis_uve_list:
redish = redis.StrictRedis(host=redis_uve[0],
port=redis_uve[1],
password=self._redis_password, db=1)
try:
# For UVE queries, we wanna read both UVE and Alarm table
entries = redish.smembers('ALARM_TABLE:' + table)
if not is_alarm:
entries = entries.union(redish.smembers('TABLE:' + table))
for entry in entries:
info = (entry.split(':', 1)[1]).rsplit(':', 5)
uve_key = info[0]
if kfilter is not None:
kfilter_match = False
for pattern in patterns:
if pattern.match(uve_key):
kfilter_match = True
break
if not kfilter_match:
continue
src = info[1]
sfilter = filters.get('sfilt')
if sfilter is not None:
if sfilter != src:
continue
module = info[2]+':'+info[3]+':'+info[4]
mfilter = filters.get('mfilt')
if mfilter is not None:
if mfilter != module:
continue
typ = info[5]
tfilter = filters.get('cfilt')
if tfilter is not None:
if typ not in tfilter:
continue
if parse_afilter:
if tfilter is not None and len(tfilter[typ]):
valkey = "VALUES:" + table + ":" + uve_key + \
":" + src + ":" + module + ":" + typ
for afilter in tfilter[typ]:
attrval = redish.hget(valkey, afilter)
if attrval is not None:
break
if attrval is None:
continue
uve_list.add(uve_key)
except redis.exceptions.ConnectionError:
self._logger.error('Failed to connect to redis-uve: %s:%d' \
% (redis_uve[0], redis_uve[1]))
except Exception as e:
self._logger.error('Exception: %s' % e)
return set()
return uve_list
# end get_uve_list
# end UVEServer
class ParallelAggregator:
def __init__(self, state, rev_map = {}):
self._state = state
self._rev_map = rev_map
def _default_agg(self, oattr):
itemset = set()
result = []
for source in oattr.keys():
elem = oattr[source]
hdelem = json.dumps(elem)
if hdelem not in itemset:
itemset.add(hdelem)
result.append([elem, source])
else:
for items in result:
if elem in items:
items.append(source)
return result
def _is_sum(self, oattr):
akey = oattr.keys()[0]
if '@aggtype' not in oattr[akey]:
return False
if oattr[akey]['@aggtype'] in ["sum"]:
return True
if oattr[akey]['@type'] in ['i8', 'i16', 'i32', 'i64',
'byte', 'u8', 'u16', 'u32', 'u64']:
if oattr[akey]['@aggtype'] in ["counter"]:
return True
return False
def _is_union(self, oattr):
akey = oattr.keys()[0]
if not oattr[akey]['@type'] in ["list"]:
return False
if '@aggtype' not in oattr[akey]:
return False
if oattr[akey]['@aggtype'] in ["union"]:
return True
else:
return False
def _is_append(self, oattr):
akey = oattr.keys()[0]
if not oattr[akey]['@type'] in ["list"]:
return False
if '@aggtype' not in oattr[akey]:
return False
if oattr[akey]['@aggtype'] in ["append"]:
return True
else:
return False
@staticmethod
def get_list_name(attr):
sname = ""
for sattr in attr['list'].keys():
if sattr[0] not in ['@']:
sname = sattr
return sname
@staticmethod
def _get_list_key(elem):
skey = ""
for sattr in elem.keys():
if '@aggtype' in elem[sattr]:
if elem[sattr]['@aggtype'] in ["listkey"]:
skey = sattr
return skey
def _sum_agg(self, oattr):
akey = oattr.keys()[0]
result = copy.deepcopy(oattr[akey])
count = 0
for source in oattr.keys():
count += int(oattr[source]['#text'])
result['#text'] = str(count)
return result
def _union_agg(self, oattr):
akey = oattr.keys()[0]
result = copy.deepcopy(oattr[akey])
itemset = set()
sname = ParallelAggregator.get_list_name(oattr[akey])
result['list'][sname] = []
siz = 0
for source in oattr.keys():
if isinstance(oattr[source]['list'][sname], basestring):
oattr[source]['list'][sname] = [oattr[source]['list'][sname]]
for elem in oattr[source]['list'][sname]:
hdelem = json.dumps(elem)
if hdelem not in itemset:
itemset.add(hdelem)
result['list'][sname].append(elem)
siz += 1
result['list']['@size'] = str(siz)
return result
def _append_agg(self, oattr):
akey = oattr.keys()[0]
result = copy.deepcopy(oattr[akey])
sname = ParallelAggregator.get_list_name(oattr[akey])
result['list'][sname] = []
siz = 0
for source in oattr.keys():
if not isinstance(oattr[source]['list'][sname], list):
oattr[source]['list'][sname] = [oattr[source]['list'][sname]]
for elem in oattr[source]['list'][sname]:
result['list'][sname].append(elem)
siz += 1
result['list']['@size'] = str(siz)
return result
@staticmethod
def _list_agg_attrs(item):
for ctrs in item.keys():
if '@aggtype'in item[ctrs]:
if item[ctrs]['@aggtype'] in ["listkey"]:
continue
if item[ctrs]['@type'] in ['i8', 'i16', 'i32', 'i64',
'byte', 'u8', 'u16', 'u32', 'u64']:
yield ctrs
@staticmethod
def consolidate_list(result, typ, objattr):
applist = ParallelAggregator.get_list_name(
result[typ][objattr])
appkey = ParallelAggregator._get_list_key(
result[typ][objattr]['list'][applist][0])
# There is no listkey ; no consolidation is possible
if len(appkey) == 0:
return result
# If the list's underlying struct has a listkey present,
# we need to further aggregate entries that have the
# same listkey
mod_result = copy.deepcopy(result[typ][objattr])
mod_result['list'][applist] = []
res_size = 0
mod_result['list']['@size'] = int(res_size)
# Add up stats
for items in result[typ][objattr]['list'][applist]:
matched = False
for res_items in mod_result['list'][applist]:
if items[appkey]['#text'] in [res_items[appkey]['#text']]:
for ctrs in ParallelAggregator._list_agg_attrs(items):
res_items[ctrs]['#text'] += int(items[ctrs]['#text'])
matched = True
if not matched:
newitem = copy.deepcopy(items)
for ctrs in ParallelAggregator._list_agg_attrs(items):
newitem[ctrs]['#text'] = int(items[ctrs]['#text'])
mod_result['list'][applist].append(newitem)
res_size += 1
# Convert results back into strings
for res_items in mod_result['list'][applist]:
for ctrs in ParallelAggregator._list_agg_attrs(res_items):
res_items[ctrs]['#text'] = str(res_items[ctrs]['#text'])
mod_result['list']['@size'] = str(res_size)
return mod_result
def aggregate(self, key, flat, base_url = None):
'''
This function does parallel aggregation of this UVE's state.
It aggregates across all sources and return the global state of the UVE
'''
result = {}
try:
for typ in self._state[key].keys():
result[typ] = {}
for objattr in self._state[key][typ].keys():
if self._is_sum(self._state[key][typ][objattr]):
sum_res = self._sum_agg(self._state[key][typ][objattr])
if flat:
result[typ][objattr] = \
OpServerUtils.uve_attr_flatten(sum_res)
else:
result[typ][objattr] = sum_res
elif self._is_union(self._state[key][typ][objattr]):
union_res = self._union_agg(
self._state[key][typ][objattr])
conv_res = None
if union_res.has_key('@ulink') and base_url and \
union_res['list']['@type'] == 'string':
uterms = union_res['@ulink'].split(":",1)
# This is the linked UVE's table name
m_table = uterms[0]
if self._rev_map.has_key(m_table):
h_table = self._rev_map[m_table]
conv_res = []
sname = ParallelAggregator.get_list_name(union_res)
for el in union_res['list'][sname]:
lobj = {}
lobj['name'] = el
lobj['href'] = base_url + '/analytics/uves/' + \
h_table + '/' + el
if len(uterms) == 2:
lobj['href'] = lobj['href'] + '?cfilt=' + uterms[1]
else:
lobj['href'] = lobj['href'] + '?flat'
conv_res.append(lobj)
if flat:
if not conv_res:
result[typ][objattr] = \
OpServerUtils.uve_attr_flatten(union_res)
else:
result[typ][objattr] = conv_res
else:
result[typ][objattr] = union_res
elif self._is_append(self._state[key][typ][objattr]):
result[typ][objattr] = self._append_agg(
self._state[key][typ][objattr])
append_res = ParallelAggregator.consolidate_list(
result, typ, objattr)
if flat:
result[typ][objattr] =\
OpServerUtils.uve_attr_flatten(append_res)
else:
result[typ][objattr] = append_res
else:
default_res = self._default_agg(
self._state[key][typ][objattr])
if flat:
if (len(default_res) == 1):
result[typ][objattr] =\
OpServerUtils.uve_attr_flatten(
default_res[0][0])
else:
nres = []
for idx in range(len(default_res)):
nres.append(default_res[idx])
nres[idx][0] =\
OpServerUtils.uve_attr_flatten(
default_res[idx][0])
result[typ][objattr] = nres
else:
result[typ][objattr] = default_res
except KeyError:
pass
return result
if __name__ == '__main__':
    uveserver = UVEServer(None, None)
    gevent.spawn(uveserver.run)
    uve_state = uveserver.get_uve("abc-corp:vn02", False)
    print json.dumps(uve_state, indent=4, sort_keys=True)
| 41.711207 | 91 | 0.450484 | 2,852 | 29,031 | 4.445302 | 0.130084 | 0.027134 | 0.026029 | 0.016564 | 0.389494 | 0.293816 | 0.26053 | 0.226771 | 0.192538 | 0.183862 | 0 | 0.008511 | 0.441494 | 29,031 | 695 | 92 | 41.771223 | 0.773406 | 0.043987 | 0 | 0.355556 | 0 | 0 | 0.048022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.010256 | 0.022222 | null | null | 0.006838 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a765ee4d5ce159cb94158867be1e207d0bdc988c | 1,064 | py | Python | pycreds.py | Ennovar/aws-creds-test | fcc5c10c8cfb79bb0ea0fd52f2e2f137efd8a9ce | [
"Apache-2.0"
] | 7 | 2017-06-13T15:55:23.000Z | 2019-05-23T18:52:00.000Z | pycreds.py | Ennovar/aws-creds-test | fcc5c10c8cfb79bb0ea0fd52f2e2f137efd8a9ce | [
"Apache-2.0"
] | 2 | 2019-02-16T12:56:33.000Z | 2020-07-02T19:32:58.000Z | pycreds.py | Ennovar/aws-creds-test | fcc5c10c8cfb79bb0ea0fd52f2e2f137efd8a9ce | [
"Apache-2.0"
] | 8 | 2017-05-17T22:46:07.000Z | 2022-03-11T14:27:56.000Z | import os
import hashlib
import getpass
import hmac
import botocore.session
import botocore.exceptions
def _hash(value):
return hmac.new(os.environ['TEST_KEY'], value,
digestmod=hashlib.sha256).hexdigest()
def main():
access_key = getpass.getpass("Access Key: ").strip()
secret_access_key = getpass.getpass("Secret Access Key: ").strip()
print("AKID hash: %s" % _hash(access_key))
print("AKID length: %s" % len(access_key))
print("\nSAK hash: %s" % _hash(secret_access_key))
print("SAK length: %s" % len(secret_access_key))
session = botocore.session.get_session()
sts = session.create_client('sts', aws_access_key_id=access_key,
aws_secret_access_key=secret_access_key)
try:
response = sts.get_caller_identity()
        print("Successfully made an AWS request with the "
              "provided credentials.\n")
except botocore.exceptions.ClientError as e:
print("Error making AWS request: %s\n" % e)
if __name__ == '__main__':
main()
| 30.4 | 72 | 0.656015 | 135 | 1,064 | 4.918519 | 0.422222 | 0.162651 | 0.135542 | 0.069277 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003641 | 0.225564 | 1,064 | 34 | 73 | 31.294118 | 0.802184 | 0 | 0 | 0 | 0 | 0 | 0.193609 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0.111111 | 0.222222 | 0.037037 | 0.333333 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a7687184494cf93d9f5d684cfc40811e7667b3e4 | 772 | py | Python | multranslate.py | anoidgit/NMTServer | f608695c4c1f5319fb3c56f218b1d78056861c62 | [
"Apache-2.0"
] | 3 | 2017-08-29T22:56:38.000Z | 2017-12-12T06:20:35.000Z | multranslate.py | anoidgit/NMTServer | f608695c4c1f5319fb3c56f218b1d78056861c62 | [
"Apache-2.0"
] | 1 | 2017-09-10T08:02:24.000Z | 2017-09-12T01:03:25.000Z | multranslate.py | anoidgit/NMTServer | f608695c4c1f5319fb3c56f218b1d78056861c62 | [
"Apache-2.0"
] | null | null | null | #encoding: utf-8
import sys
reload(sys)
sys.setdefaultencoding( "utf-8" )
import zmq, sys, json
import seg
import detoken
import datautils
from random import sample
serverl=["tcp://127.0.0.1:"+str(port) for port in xrange(5556,5556+4)]
def _translate_core(jsond):
global serverl
sock = zmq.Context().socket(zmq.REQ)
sock.connect(sample(serverl, 1)[0])
sock.send(jsond)
return sock.recv()
def _translate(srctext):
return detoken.detoken(datautils.char2pinyin(datautils.restoreFromBatch(json.loads(_translate_core(json.dumps(datautils.makeBatch(datautils.cutParagraph(seg.segline(srctext)))))))))
def translate(srctext):
tmp=srctext.strip()
if tmp:
return _translate(tmp)
else:
return tmp
def poweron():
seg.poweron()
def poweroff():
seg.poweroff()
# tests/conftest.py (sdrobert/pydrobert-param, Apache-2.0)
from shutil import rmtree
from tempfile import mkdtemp
import pytest
import param
import pydrobert.param.serialization as serial
param.parameterized.warnings_as_exceptions = True
@pytest.fixture(params=["ruamel_yaml", "pyyaml"])
def yaml_loader(request):
if request.param == "ruamel_yaml":
try:
from ruamel_yaml import YAML # type: ignore
yaml_loader = YAML().load
except ImportError:
from ruamel.yaml import YAML # type: ignore
yaml_loader = YAML().load
module_names = ("ruamel_yaml", "ruamel.yaml")
else:
import yaml # type: ignore
def yaml_loader(x):
return yaml.load(x, Loader=yaml.FullLoader)
module_names = ("pyyaml",)
old_props = serial.YAML_MODULE_PRIORITIES
serial.YAML_MODULE_PRIORITIES = module_names
yield yaml_loader
serial.YAML_MODULE_PRIORITIES = old_props
@pytest.fixture(params=[True, False])
def with_yaml(request):
if request.param:
yield True
else:
old_props = serial.YAML_MODULE_PRIORITIES
serial.YAML_MODULE_PRIORITIES = tuple()
yield False
serial.YAML_MODULE_PRIORITIES = old_props
@pytest.fixture
def temp_dir():
dir_name = mkdtemp()
yield dir_name
rmtree(dir_name, ignore_errors=True)
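The `yaml_loader` and `with_yaml` fixtures both follow the same save/patch/restore shape: stash a module-level setting, yield, then restore it on teardown. A standalone sketch of that pattern with plain generators (the names `PRIORITIES` and `patched` are illustrative, not part of pydrobert-param):

```python
# A module-level setting, standing in for serial.YAML_MODULE_PRIORITIES.
PRIORITIES = ("ruamel_yaml", "ruamel.yaml", "pyyaml")

def patched(new_value):
    global PRIORITIES
    old = PRIORITIES
    PRIORITIES = new_value
    yield PRIORITIES          # the test body runs while the patch is active
    PRIORITIES = old          # teardown: restore the original tuple

gen = patched(("pyyaml",))
assert next(gen) == ("pyyaml",)   # patch in effect
try:
    next(gen)                     # exhausting the generator runs the teardown
except StopIteration:
    pass
assert PRIORITIES == ("ruamel_yaml", "ruamel.yaml", "pyyaml")
```

pytest drives fixture generators exactly this way: it advances to the `yield` for setup and exhausts the generator after the test for teardown.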
# DictionaryOfNewZealandEnglish/headword/citation/views.py
# (eResearchSandpit/DictionaryOfNewZealandEnglish, BSD-3-Clause)
# -*- coding: utf-8 -*-
# Citations
from flask import (Blueprint, request, render_template, flash, url_for,
redirect, session)
from flask.ext.login import login_required, current_user
import logging, sys, re
from sqlalchemy.exc import IntegrityError, InvalidRequestError
from DictionaryOfNewZealandEnglish.database import db
from DictionaryOfNewZealandEnglish.headword.citation.forms import *
from DictionaryOfNewZealandEnglish.headword.citation.models import *
import datetime as dt
blueprint = Blueprint("citations", __name__, url_prefix='/headwords/citations',
static_folder="../static")
@blueprint.route("/edit", methods=["GET", "POST"])
@login_required
def edit():
if not current_user.is_admin:
return redirect(url_for('public.home'))
headword = Headword.query.get( request.args.get('headword_id') )
citation_id = request.args.get('citation_id')
citation = Citation.query.get( citation_id )
form = CitationForm(request.form, obj=citation)
if request.method == "GET":
date = __pretty_print_date(citation)
return render_template("headwords/citations/edit.html", form=form,
citation_id=citation_id,
date=date,
headword=headword)
if request.method == "POST":
data = __set_data_for_citation(citation, form)
citation = Citation.query.get( citation_id )
date = __pretty_print_date(citation)
return render_template("headwords/citations/edit.html", form=form,
citation_id=citation_id,
date=date,
headword=headword)
@blueprint.route("/new", methods=["GET"])
@login_required
def new():
if not current_user.is_admin:
return redirect(url_for('public.home'))
headword = Headword.query.get( request.args.get('headword_id') )
form = CitationForm(request.form)
return render_template("headwords/citations/new.html", form=form,
headword=headword)
@blueprint.route("/create", methods=["POST"])
@login_required
def create():
if not current_user.is_admin:
return redirect(url_for('public.home'))
form = CitationForm(request.form)
headword = Headword.query.get( request.args.get('headword_id') )
try:
citation_id = __create_citation(form, headword)
circa = ""
if form.circa.data:
circa = "circa "
date_obj = __form_date(form)
date = __pretty_print_date(date_obj, form.circa.data)
        flash("New citation created: {0} ({1}{2})".format(form.author.data,
                                                          circa, date),
              'success')
return render_template("headwords/citations/edit.html",
form=form,
citation_id=citation_id,
date=date,
headword=headword)
except (IntegrityError) as e:
db.session.rollback()
flash("Input error %s" % e)
return render_template("headwords/citations/new.html",
form=form,
headword=headword)
except (InvalidRequestError):
return render_template("headwords/citations/new.html",
form=form,
headword=headword)
@blueprint.route("/delete", methods=["GET"])
@login_required
def delete():
if not current_user.is_admin:
return redirect(url_for('public.home'))
citation = Citation.query.get( request.args.get('citation_id') )
headword = Headword.query.get( request.args.get('headword_id') )
if citation in headword.citations:
headword.citations.remove(citation)
db.session.add(headword)
db.session.commit()
citations = headword.citations
return render_template("headwords/show.html", headword=headword,
citations=citations)
#############################################################################
### Private
def __create_citation(form, headword):
date = __form_date(form)
citation = Citation.create(
date = date,
circa = form.circa.data,
author = form.author.data,
source_id = form.source.data.id,
vol_page = form.vol_page.data,
edition = form.edition.data,
quote = form.quote.data,
notes = form.notes.data,
archived = False,
updated_at = dt.datetime.utcnow(),
updated_by = current_user.username
)
h = Headword.query.get(headword.id)
h.citations.append(citation)
db.session.add(h)
db.session.commit()
return citation.id
def __form_date(form):
if form.date.data == "":
flash("No date entered.", 'warning')
raise InvalidRequestError
form_date = re.split(r'/\s*', form.date.data)
if len(form_date) < 3:
if form.circa.data:
# pad out data to fit into datetime type
if len(form_date) == 2:
y = form_date[1].strip()
m = form_date[0].strip()
d = "1"
if len(form_date) == 1:
y = form_date[0].strip()
m = "1"
d = "1"
else:
flash("Partial date entered, perhaps 'Circa' should be checked.", 'warning')
raise InvalidRequestError
else:
y = form_date[2].strip()
m = form_date[1].strip()
d = form_date[0].strip()
# dt.datetime(y, m, d)
print "### form_date {0} / {1} / {2}".format(y,m,d)
date = dt.datetime(int(y), int(m), int(d))
return date
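The padding rules in `__form_date` can be checked in isolation. This is a Python 3 stand-in, not the view code itself: it raises `ValueError` instead of calling Flask's `flash`, and the function name `parse_circa_date` is an assumption introduced for the sketch:

```python
import datetime as dt
import re

def parse_circa_date(text, circa=False):
    """Parse "d / m / y"; with circa=True, pad "m / y" and "y" with 1s."""
    parts = re.split(r'/\s*', text.strip())
    if len(parts) < 3:
        if not circa:
            raise ValueError("partial date requires circa")
        if len(parts) == 2:      # "m / y"
            d, m, y = "1", parts[0], parts[1]
        else:                    # "y" only
            d, m, y = "1", "1", parts[0]
    else:                        # full "d / m / y"
        d, m, y = parts[0], parts[1], parts[2]
    return dt.datetime(int(y), int(m), int(d))

print(parse_circa_date("14 / 2 / 1987"))     # full date
print(parse_circa_date("1912", circa=True))  # year only, padded to 1 Jan
```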
def __pretty_print_date(obj, circa=False):
print "### citation {0} {1}".format(obj, circa)
if isinstance(obj, Citation):
d = obj.date.day
m = obj.date.month
y = obj.date.year
circa = obj.circa
if isinstance(obj, dt.datetime):
d = obj.day
m = obj.month
y = obj.year
if circa:
if d == 1:
if m == 1:
m = ""
else:
m = "{0} / ".format(m)
d = ""
else:
d = "{0} / ".format(d)
m = "{0} / ".format(m)
print "test 1 {0}{1}{2}".format(d, m, y)
return "{0}{1}{2}".format(d, m, y)
else:
print "test 2 {0} / {1} / {2}".format(d, m, y)
return "{0} / {1} / {2}".format(d, m, y)
def __set_data_for_citation(citation, form):
try:
date = __form_date(form)
Citation.update(citation,
date = date,
circa = form.circa.data,
author = form.author.data,
source_id = form.source.data.id,
vol_page = form.vol_page.data,
edition = form.edition.data,
quote = form.quote.data,
notes = form.notes.data,
archived = form.archived.data,
updated_at = dt.datetime.utcnow(),
updated_by = current_user.username
)
flash("Edit of citation is saved.", 'success')
return True
except (IntegrityError, InvalidRequestError):
db.session.rollback()
flash("Edit of citation failed.", 'warning')
return False
# questions/q197_choose_and_swap/code.py (aadhityasw/Competitive-Programs, MIT)
class Solution:
def chooseandswap (self, A):
opt = 'a'
fir = A[0]
arr = [0]*26
for s in A :
arr[ord(s)-97] += 1
i = 0
while i < len(A) :
if opt > 'z' :
break
while opt < fir :
if opt in A :
ans = ""
for s in A :
if s == opt :
ans += fir
elif s == fir :
ans += opt
else :
ans += s
return ans
opt = chr(ord(opt) + 1)
opt = chr(ord(opt) + 1)
while i < len(A) and A[i] <= fir :
i += 1
if i < len(A) :
fir = A[i]
return A
if __name__ == '__main__':
ob = Solution()
t = int (input ())
for _ in range (t):
A = input()
ans = ob.chooseandswap(A)
print(ans)
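The greedy above can be spot-checked against a brute force that tries every pair of distinct letters, swaps all their occurrences, and keeps the lexicographically smallest result. This is a cross-checking sketch, not the intended O(n) solution:

```python
def choose_and_swap_bruteforce(s):
    # Keep s itself as a candidate, then try swapping each letter pair.
    best = s
    letters = sorted(set(s))
    for i, a in enumerate(letters):
        for b in letters[i + 1:]:
            table = str.maketrans(a + b, b + a)
            best = min(best, s.translate(table))
    return best

print(choose_and_swap_bruteforce("ccad"))  # aacd (swap 'a' and 'c')
```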
# src/python/pants/scm/subsystems/changed.py (lahosken/pants, Apache-2.0)
# coding=utf-8
# Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
from pants.base.build_environment import get_scm
from pants.base.exceptions import TaskError
from pants.goal.workspace import ScmWorkspace
from pants.scm.change_calculator import BuildGraphChangeCalculator
from pants.subsystem.subsystem import Subsystem
from pants.util.objects import datatype
# TODO: Remove this in 1.5.0dev0.
class _ChainedOptions(object):
def __init__(self, options_seq):
self._options_seq = options_seq
def __getattr__(self, attr):
for options in self._options_seq:
option_value = getattr(options, attr, None)
if option_value is not None:
return option_value
return None
class ChangedRequest(datatype('ChangedRequest',
['changes_since', 'diffspec', 'include_dependees', 'fast'])):
"""Parameters required to compute a changed file/target set."""
@classmethod
def from_options(cls, options):
"""Given an `Options` object, produce a `ChangedRequest`."""
return cls(options.changes_since,
options.diffspec,
options.include_dependees,
options.fast)
def is_actionable(self):
return bool(self.changes_since or self.diffspec)
class Changed(object):
"""A subsystem for global `changed` functionality.
This supports the "legacy" `changed`, `test-changed` and `compile-changed` goals as well as the
v2 engine style `--changed-*` argument target root replacements which can apply to any goal (e.g.
`./pants --changed-parent=HEAD~3 list` replaces `./pants --changed-parent=HEAD~3 changed`).
"""
class Factory(Subsystem):
options_scope = 'changed'
@classmethod
def register_options(cls, register):
register('--changes-since', '--parent', '--since',
help='Calculate changes since this tree-ish/scm ref (defaults to current HEAD/tip).')
register('--diffspec',
help='Calculate changes contained within given scm spec (commit range/sha/ref/etc).')
register('--include-dependees', choices=['none', 'direct', 'transitive'], default='none',
help='Include direct or transitive dependees of changed targets.')
register('--fast', type=bool,
help='Stop searching for owners once a source is mapped to at least one owning target.')
# TODO: Remove or reduce this in 1.5.0dev0 - we only need the subsystem's options scope going fwd.
@classmethod
def create(cls, alternate_options=None):
"""
:param Options alternate_options: An alternate `Options` object for overrides.
"""
options = cls.global_instance().get_options()
# N.B. This chaining is purely to support the `changed` tests until deprecation.
ordered_options = [option for option in (alternate_options, options) if option is not None]
# TODO: Kill this chaining (in favor of outright options replacement) as part of the `changed`
# task removal (post-deprecation cycle). See https://github.com/pantsbuild/pants/issues/3893
chained_options = _ChainedOptions(ordered_options)
changed_request = ChangedRequest.from_options(chained_options)
return Changed(changed_request)
def __init__(self, changed_request):
self._changed_request = changed_request
# TODO: Remove this in 1.5.0dev0 in favor of `TargetRoots` use of `EngineChangeCalculator`.
def change_calculator(self, build_graph, address_mapper, scm=None, workspace=None,
exclude_target_regexp=None):
"""Constructs and returns a BuildGraphChangeCalculator.
:param BuildGraph build_graph: A BuildGraph instance.
:param AddressMapper address_mapper: A AddressMapper instance.
:param Scm scm: The SCM instance. Defaults to discovery.
:param ScmWorkspace: The SCM workspace instance.
:param string exclude_target_regexp: The exclude target regexp.
"""
scm = scm or get_scm()
if scm is None:
raise TaskError('A `changed` goal or `--changed` option was specified, '
'but no SCM is available to satisfy the request.')
workspace = workspace or ScmWorkspace(scm)
return BuildGraphChangeCalculator(
scm,
workspace,
address_mapper,
build_graph,
self._changed_request.include_dependees,
fast=self._changed_request.fast,
changes_since=self._changed_request.changes_since,
diffspec=self._changed_request.diffspec,
exclude_target_regexp=exclude_target_regexp
)
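The `datatype('ChangedRequest', [...])` plus `from_options` pattern above is essentially an immutable record with a constructor that reads from an options object. A minimal stand-in using `collections.namedtuple` (`DummyOptions` is illustrative, not the pants `Options` API):

```python
from collections import namedtuple

ChangedRequest = namedtuple(
    "ChangedRequest", ["changes_since", "diffspec", "include_dependees", "fast"])

def from_options(options):
    # Mirror of ChangedRequest.from_options: pull the four fields off options.
    return ChangedRequest(options.changes_since, options.diffspec,
                          options.include_dependees, options.fast)

def is_actionable(req):
    # A request is actionable when it names a ref range or a diffspec.
    return bool(req.changes_since or req.diffspec)

DummyOptions = namedtuple(
    "DummyOptions", ["changes_since", "diffspec", "include_dependees", "fast"])
req = from_options(DummyOptions("HEAD~3", None, "transitive", False))
print(is_actionable(req))  # True
print(is_actionable(ChangedRequest(None, None, "none", False)))  # False
```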
# SOLID/LSP/GoodLSPCode.py (maumneto/DesignPatternCourse, MIT)
class AccountManager(object):
def __init__(self, balance = 0):
self.balance = balance
def getBalance(self):
return self.balance
def withdraw(self, value):
if self.balance >= value:
self.balance = self.balance - value
print('Successful Withdrawal.')
else:
print('Insufficient Funds')
def deposit(self, value):
self.balance = self.balance + value
print('Successful Deposit')
def income(self, rate):
self.balance = self.balance + self.balance*rate
class AccountCommon(AccountManager):
def __init__(self, balance = 0):
super(AccountCommon, self).__init__(balance=balance)
def getBalance(self):
return super().getBalance()
def deposit(self, value):
super().deposit(value)
def withdraw(self, value):
super().withdraw(value)
def income(self, rate):
super().income(rate)
def message(self):
print('Common account balance: %.2f' % self.getBalance())
class AccountSpecial(AccountManager):
    def __init__(self, balance=0):
        super(AccountSpecial, self).__init__(balance=balance)
    def getBalance(self):
        return super().getBalance()
    def deposit(self, value):
        super().deposit(value)
    def withdraw(self, value):
        super().withdraw(value)
    def message(self):
        print('Special account balance: %.2f' % self.getBalance())
if __name__ == '__main__':
    commonAccount = AccountCommon(500)
    commonAccount.deposit(500)
    commonAccount.withdraw(100)
    commonAccount.income(0.005)
    commonAccount.message()
    print(' ------- ')
    specialAccount = AccountSpecial(1000)
    specialAccount.deposit(500)
    specialAccount.withdraw(200)
    specialAccount.message()
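What makes this a good LSP example is that code written against the base class works unchanged with either subclass. A self-contained sketch with simplified stand-in classes (`month_end` is an illustrative name, not part of the original file):

```python
class AccountManager:
    def __init__(self, balance=0):
        self.balance = balance
    def deposit(self, value):
        self.balance += value
    def withdraw(self, value):
        if self.balance >= value:
            self.balance -= value

class AccountCommon(AccountManager):
    pass

class AccountSpecial(AccountManager):
    pass

def month_end(account):
    # Written against the base type only; any substitutable subclass is fine.
    account.deposit(100)
    account.withdraw(30)
    return account.balance

print([month_end(a) for a in (AccountCommon(0), AccountSpecial(500))])  # [70, 570]
```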
# tests/test_symgroup.py (efrembernuz/symeess, MIT)
import unittest
from cosymlib import file_io
from numpy import testing
from cosymlib.molecule.geometry import Geometry
import os
data_dir = os.path.join(os.path.dirname(__file__), 'data')
class TestSymgroupFchk(unittest.TestCase):
def setUp(self):
self._structure = file_io.read_generic_structure_file(data_dir + '/wfnsym/tih4_5d.fchk')
self._geometry = self._structure.geometry
def test_symmetry_measure(self):
# print(self._structure.geometry)
measure = self._geometry.get_symmetry_measure('C3', central_atom=1)
self.assertAlmostEqual(measure, 0)
class TestSymgroupCycles(unittest.TestCase):
def setUp(self):
self._geometry = Geometry(positions=[[ 0.506643354, -1.227657970, 0.000000000],
[ 1.303068499, 0.000000000, 0.000000000],
[ 0.506643354, 1.227657970, 0.000000000],
[-0.926250976, 0.939345948, 0.000000000],
[-0.926250976, -0.939345948, 0.000000000]],
# name='test',
symbols=['C', 'C', 'C', 'C', 'C'],
connectivity_thresh=1.5,
)
def test_symmetry_measure(self):
measure = self._geometry.get_symmetry_measure('C5')
self.assertAlmostEqual(measure, 0.8247502, places=6)
measure = self._geometry.get_symmetry_measure('C2')
self.assertAlmostEqual(measure, 0.0, places=6)
measure = self._geometry.get_symmetry_measure('C3')
self.assertAlmostEqual(measure, 33.482451, places=6)
#def test_symmetry_measure_permutation(self):
# measure = self._geometry.get_symmetry_measure('C5', fix_permutation=True)
# self.assertAlmostEqual(measure, 0.8247502, places=6)
def test_symmetry_nearest(self):
nearest = self._geometry.get_symmetry_nearest_structure('C5').get_positions()
# print(nearest)
reference = [[ 4.05078542e-01, -1.24670356e+00, 0.00000000e+00],
[ 1.31086170e+00, -1.33226763e-16, 0.00000000e+00],
[ 4.05078542e-01, 1.24670356e+00, 0.00000000e+00],
[-1.06050939e+00, 7.70505174e-01, 0.00000000e+00],
[-1.06050939e+00, -7.70505174e-01, 0.00000000e+00]]
testing.assert_array_almost_equal(nearest, reference, decimal=6)
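The `assertAlmostEqual(..., places=6)` calls above compare floats by rounding the difference, not by exact equality. A tiny sketch of that semantics (mirroring `unittest`'s rule: round the absolute difference to `places` decimals and require zero):

```python
def almost_equal(a, b, places=6):
    # Same rule unittest.assertAlmostEqual applies by default (places=7).
    return round(abs(a - b), places) == 0

print(almost_equal(0.8247502, 0.82475019, places=6))  # True: diff ~1e-8
print(almost_equal(0.8247502, 0.8247512, places=6))   # False: diff ~1e-6
```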
# Scripts/Ros/Identifica_cor.py (pcliquet/robotic_resumo, MIT)
#! /usr/bin/env python3
# -*- coding:utf-8 -*-
import rospy
import numpy as np
import tf
import math
import cv2
import time
from geometry_msgs.msg import Twist, Vector3, Pose
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Image, CompressedImage
from cv_bridge import CvBridge, CvBridgeError
import smach
import smach_ros
def identifica_cor(frame):
    '''
    Segments the largest object whose color falls in the red hue ranges
    below (HSV space); returns its center, the frame center and its area.
    '''
frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
cor_menor = np.array([0, 50, 100])
cor_maior = np.array([6, 255, 255])
segmentado_cor = cv2.inRange(frame_hsv, cor_menor, cor_maior)
cor_menor = np.array([174, 50, 100])
cor_maior = np.array([180, 255, 255])
mask = cv2.inRange(frame_hsv, cor_menor, cor_maior)
kernel = np.ones((5, 5), np.uint8)
morpho = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
#segmentado_cor += cv2.inRange(frame_hsv, cor_menor, cor_maior)
    # Note that numpy indexes images as matrices, i.e. (row, column) = (y, x),
    # so we invert the order when building the center tuple below.
centro = (frame.shape[1]//2, frame.shape[0]//2)
    def cross(img_rgb, point, color, width, length):
        # cv2.line takes (img, pt1, pt2, color, thickness); the stray trailing
        # `length` argument would be misread as an invalid lineType.
        cv2.line(img_rgb, (int(point[0] - length / 2), point[1]), (int(point[0] + length / 2), point[1]), color, width)
        cv2.line(img_rgb, (point[0], int(point[1] - length / 2)), (point[0], int(point[1] + length / 2)), color, width)
segmentado_cor = cv2.morphologyEx(morpho,cv2.MORPH_CLOSE,np.ones((7, 7)))
contornos, arvore = cv2.findContours(morpho.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
maior_contorno = None
maior_contorno_area = 0
for cnt in contornos:
area = cv2.contourArea(cnt)
if area > maior_contorno_area:
maior_contorno = cnt
maior_contorno_area = area
    # The contour's center is the mean of all its points.
if not maior_contorno is None :
cv2.drawContours(frame, [maior_contorno], -1, [0, 0, 255], 5)
maior_contorno = np.reshape(maior_contorno, (maior_contorno.shape[0], 2))
media = maior_contorno.mean(axis=0)
media = media.astype(np.int32)
cv2.circle(frame, (media[0], media[1]), 5, [0, 255, 0])
cross(frame, centro, [255,0,0], 1, 17)
else:
media = (0, 0)
    # Draw the area and the center of the largest contour on the frame
font = cv2.FONT_HERSHEY_COMPLEX_SMALL
cv2.putText(frame,"{:d} {:d}".format(*media),(20,100), 1, 4,(255,255,255),2,cv2.LINE_AA)
cv2.putText(frame,"{:0.1f}".format(maior_contorno_area),(20,50), 1, 4,(255,255,255),2,cv2.LINE_AA)
# cv2.imshow('video', frame)
cv2.imshow('seg', segmentado_cor)
cv2.waitKey(1)
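The centroid step above (reshape the OpenCV contour to an (N, 2) point array, then take the integer mean) can be checked without a camera or cv2, using only numpy:

```python
import numpy as np

# cv2 contours have shape (N, 1, 2); a square from (10,20) to (30,40).
contour = np.array([[[10, 20]], [[30, 20]], [[30, 40]], [[10, 40]]])
pts = np.reshape(contour, (contour.shape[0], 2))
media = pts.mean(axis=0).astype(np.int32)
print(media)  # centroid of the square: [20 30]
```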
return media, centro, maior_contorno_area | 33.651163 | 129 | 0.664824 | 445 | 2,894 | 4.195506 | 0.377528 | 0.097483 | 0.045528 | 0.028923 | 0.193894 | 0.193894 | 0.172469 | 0.094269 | 0.076058 | 0.076058 | 0 | 0.06418 | 0.203179 | 2,894 | 86 | 130 | 33.651163 | 0.745447 | 0.182446 | 0 | 0 | 0 | 0 | 0.008109 | 0 | 0 | 0 | 0 | 0.011628 | 0 | 1 | 0.04 | false | 0 | 0.24 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a78199b06e3d85a0cae2dc6b22fe2403a2e45cd5 | 405 | py | Python | Python/luhnchecksum.py | JaredLGillespie/OpenKattis | 71d26883cb5b8a4a1d63a072587de5575d7c29af | [
"MIT"
] | null | null | null | Python/luhnchecksum.py | JaredLGillespie/OpenKattis | 71d26883cb5b8a4a1d63a072587de5575d7c29af | [
"MIT"
] | null | null | null | Python/luhnchecksum.py | JaredLGillespie/OpenKattis | 71d26883cb5b8a4a1d63a072587de5575d7c29af | [
"MIT"
] | null | null | null | # https://open.kattis.com/problems/luhnchecksum
for _ in range(int(input())):
count = 0
for i, d in enumerate(reversed(input())):
if i % 2 == 0:
count += int(d)
continue
x = 2 * int(d)
if x < 10:
count += x
else:
x = str(x)
count += int(x[0]) + int(x[1])
print('PASS' if count % 10 == 0 else 'FAIL')
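The same checksum packaged as a reusable predicate; subtracting 9 from a doubled digit over 9 is equivalent to the digit-sum step in the loop above:

```python
def luhn_ok(number):
    # From the right, double every second digit; sum the digits of products.
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9  # same as int(str(d)[0]) + int(str(d)[1])
        total += d
    return total % 10 == 0

print(luhn_ok("79927398713"))  # True: the classic valid Luhn test number
print(luhn_ok("79927398710"))  # False: wrong check digit
```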
| 25.3125 | 48 | 0.449383 | 56 | 405 | 3.232143 | 0.5 | 0.088398 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044715 | 0.392593 | 405 | 15 | 49 | 27 | 0.691057 | 0.111111 | 0 | 0 | 0 | 0 | 0.022346 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a7871a31d1f892b28ff5af9f08dffdc9caf09213 | 262 | py | Python | main/urls.py | homata/snow_removing | c02585b8ceab3da107b932d6066c8b8344af1ff7 | [
"Apache-2.0"
] | 2 | 2018-12-05T01:03:10.000Z | 2019-03-16T04:27:03.000Z | main/urls.py | homata/snow_removing | c02585b8ceab3da107b932d6066c8b8344af1ff7 | [
"Apache-2.0"
] | null | null | null | main/urls.py | homata/snow_removing | c02585b8ceab3da107b932d6066c8b8344af1ff7 | [
"Apache-2.0"
] | 1 | 2018-12-04T14:18:08.000Z | 2018-12-04T14:18:08.000Z | from django.urls import include, path
from . import views
from django.views.generic.base import RedirectView
# アプリケーションの名前空間
# https://docs.djangoproject.com/ja/2.0/intro/tutorial03/
app_name = 'main'
urlpatterns = [
path('', views.index, name='index'),
]
| 21.833333 | 57 | 0.736641 | 35 | 262 | 5.485714 | 0.714286 | 0.104167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017467 | 0.125954 | 262 | 11 | 58 | 23.818182 | 0.820961 | 0.263359 | 0 | 0 | 0 | 0 | 0.047368 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
a7882585c7ab1245006e29c8a68efd228a0cc9dc | 1,114 | py | Python | server/server/urls.py | oSoc17/lopeningent_backend | 3e1c149038c3773f66dfbbc2f15ebd0692ecb4cd | [
"MIT"
] | 4 | 2017-07-04T15:18:59.000Z | 2017-07-08T10:48:37.000Z | server/server/urls.py | oSoc17/lopeningent_backend | 3e1c149038c3773f66dfbbc2f15ebd0692ecb4cd | [
"MIT"
] | 16 | 2017-07-04T15:36:41.000Z | 2017-10-18T07:47:45.000Z | server/server/urls.py | oSoc17/lopeningent_backend | 3e1c149038c3773f66dfbbc2f15ebd0692ecb4cd | [
"MIT"
] | null | null | null | """server URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url
import interface.stats as stats
import interface.routes as route
import interface.pois as pois
urlpatterns = [
url(r'^stats/check/', stats.get_stats_from_id ),
url(r'^stats/update/', stats.post_stats_from_id),
url(r'^route/generate/', route.generate),
url(r'^route/return/', route.return_home),
url(r'^route/rate/', route.rate_route),
url(r'^poi/coords/', pois.get_coords),
url(r'^poi/types/', pois.get_types)
]
| 37.133333 | 79 | 0.701975 | 176 | 1,114 | 4.369318 | 0.363636 | 0.052016 | 0.078023 | 0.031209 | 0.261378 | 0.222367 | 0.097529 | 0.097529 | 0 | 0 | 0 | 0.009585 | 0.157092 | 1,114 | 29 | 80 | 38.413793 | 0.809372 | 0.567325 | 0 | 0 | 0 | 0 | 0.193684 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.307692 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
a78b53e7326a1d9b30856a88ddc123ec056f3a2a | 18,573 | py | Python | resources/lib/database_tv.py | bradyemerson/plugin.video.showtimeanytime | 65e7f130c14c8ef963cb3669638b8cf14860ec82 | [
"Apache-2.0"
] | null | null | null | resources/lib/database_tv.py | bradyemerson/plugin.video.showtimeanytime | 65e7f130c14c8ef963cb3669638b8cf14860ec82 | [
"Apache-2.0"
] | null | null | null | resources/lib/database_tv.py | bradyemerson/plugin.video.showtimeanytime | 65e7f130c14c8ef963cb3669638b8cf14860ec82 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os.path
from datetime import date, datetime
from sqlite3 import dbapi2 as sqlite
from bs4 import BeautifulSoup
import simplejson as json
import xbmcvfs
import xbmcgui
import common
import connection
import database_common as db_common
def create():
c = _database.cursor()
c.execute('''CREATE TABLE series
(series_id INTEGER PRIMARY KEY,
title TEXT,
title_sort TEXT,
plot TEXT,
directors TEXT,
actors TEXT,
thumb TEXT,
total_seasons INTEGER,
total_episodes INTEGER,
favor BOOLEAN DEFAULT 0,
in_last_update BOOLEAN DEFAULT 1,
last_updated timestamp);''')
c.execute('''CREATE TABLE season
(season_id INTEGER PRIMARY KEY,
season_no INTEGER,
series_id INTEGER,
plot TEXT,
FOREIGN KEY(series_id) REFERENCES series(series_id) ON DELETE CASCADE);''')
c.execute('''CREATE TABLE episode
(episode_id INTEGER PRIMARY KEY,
season_id INTEGER,
episode_no INTEGER,
title TEXT,
title_sort TEXT,
plot TEXT,
duration INTEGER,
year INTEGER,
studio TEXT,
mpaa TEXT,
advisories TEXT,
aired_date timestamp,
thumb TEXT,
play_count INTEGER DEFAULT 0,
FOREIGN KEY(season_id) REFERENCES season(season_id) ON DELETE CASCADE);''')
_database.commit()
c.close()
def insert_series(series_id, title=None, title_sort=None, plot=None, directors=None, actors=None, thumb=None,
total_seasons=None, total_episodes=None):
c = _database.cursor()
c.execute('''INSERT OR REPLACE INTO series (
series_id,
title,
title_sort,
plot,
directors,
actors,
thumb,
total_seasons,
total_episodes,
favor,
in_last_update,
last_updated) VALUES (
:series_id,
:title,
:title_sort,
:plot,
:directors,
:actors,
:thumb,
:total_seasons,
:total_episodes,
(SELECT favor FROM series WHERE series_id = :series_id),
:in_last_update,
(SELECT last_updated FROM series WHERE series_id = :series_id))''', {
'series_id': series_id,
'title': title,
'title_sort': title_sort,
'plot': plot,
'directors': directors,
'actors': actors,
'thumb': thumb,
'total_seasons': total_seasons,
'total_episodes': total_episodes,
'in_last_update': True
})
_database.commit()
c.close()
def insert_season(series_id, season_no, plot=None):
c = _database.cursor()
row = lookup_season(series_id=series_id, season_no=season_no, fields='season_id').fetchone()
if row:
c.execute('''UPDATE season SET plot = :plot WHERE season_id = :season_id''', {
'season_id': row['season_id'],
'plot': plot
})
else:
c.execute('''INSERT INTO season (series_id, season_no, plot) VALUES (
:series_id,
:season_no,
:plot
)''', {
'series_id': series_id,
'season_no': season_no,
'plot': plot
})
_database.commit()
c.close()
def insert_episode(episode_id, season_id, episode_no=None, title=None, title_sort=None, plot=None,
duration=None, year=None, studio=None, mpaa=None, advisories=None, aired_date=None, thumb=None):
c = _database.cursor()
c.execute('''INSERT OR REPLACE INTO episode (
episode_id,
season_id,
episode_no,
title,
title_sort,
plot,
duration,
year,
studio,
mpaa,
advisories,
aired_date,
thumb,
play_count) VALUES (
:episode_id,
:season_id,
:episode_no,
:title,
:title_sort,
:plot,
:duration,
:year,
:studio,
:mpaa,
:advisories,
:aired_date,
:thumb,
COALESCE((SELECT play_count FROM episode WHERE episode_id = :episode_id), 0))''', {
'episode_id': episode_id,
'season_id': season_id,
'episode_no': episode_no,
'title': title,
'title_sort': title_sort,
'plot': plot,
'duration': duration,
'year': year,
'studio': studio,
'mpaa': mpaa,
'advisories': advisories,
'aired_date': aired_date,
'thumb': thumb
})
_database.commit()
c.close()
def lookup_series(content_id, fields='*'):
c = _database.cursor()
return c.execute('SELECT DISTINCT {0} FROM series WHERE series_id = (?)'.format(fields), (content_id,))
def lookup_season(season_id=None, series_id=None, season_no=None, fields='*'):
c = _database.cursor()
if season_id:
return c.execute('SELECT {0} FROM season WHERE season_id = (?)'.format(fields), (season_id,))
elif series_id and season_no:
return c.execute('SELECT {0} FROM season WHERE series_id = (?) AND season_no = (?)'.format(fields),
(series_id, season_no))
def lookup_episode(content_id):
c = _database.cursor()
return c.execute('SELECT DISTINCT * FROM episode WHERE episode_id = (?)', (content_id,))
def delete_series(content_id):
c = _database.cursor()
c.execute('DELETE FROM series WHERE series_id = (?)', (content_id,))
c.close()
def watch_episode(content_id):
    c = _database.cursor()
    c.execute("UPDATE episode SET play_count = play_count + 1 WHERE episode_id = (?)", (content_id,))
    _database.commit()
    count = c.rowcount
    c.close()
    return count
def unwatch_episode(content_id):
    c = _database.cursor()
    c.execute("UPDATE episode SET play_count = ? WHERE episode_id = (?)", (0, content_id))
    _database.commit()
    count = c.rowcount
    c.close()
    return count
def favor_series(content_id):
    c = _database.cursor()
    c.execute("UPDATE series SET favor = ? WHERE series_id = ?", (True, content_id))
    _database.commit()
    count = c.rowcount
    c.close()
    return count
def unfavor_series(content_id):
    c = _database.cursor()
    c.execute("UPDATE series SET favor = ? WHERE series_id = ?", (False, content_id))
    _database.commit()
    count = c.rowcount
    c.close()
    return count
def get_series(directorfilter=False, watchedfilter=False, favorfilter=False, actorfilter=False,
alphafilter=False, studiofilter=False):
c = _database.cursor()
if actorfilter:
actorfilter = '%' + actorfilter + '%'
return c.execute('SELECT DISTINCT * FROM series WHERE actors LIKE (?)',
(actorfilter,))
elif directorfilter:
return c.execute('SELECT DISTINCT * FROM series WHERE directors LIKE (?)',
(directorfilter,))
    elif studiofilter:
        # The series table has no studio column; match against episode studios
        return c.execute('''SELECT DISTINCT ser.* FROM series AS ser
                            JOIN season AS sea ON sea.series_id = ser.series_id
                            JOIN episode AS e ON e.season_id = sea.season_id
                            WHERE e.studio = (?)''', (studiofilter,))
    elif watchedfilter:
        # A series counts as watched when any of its episodes has been played
        return c.execute('''SELECT DISTINCT ser.* FROM series AS ser
                            JOIN season AS sea ON sea.series_id = ser.series_id
                            JOIN episode AS e ON e.season_id = sea.season_id
                            WHERE e.play_count > 0''')
elif favorfilter:
return c.execute('SELECT DISTINCT * FROM series WHERE favor = 1')
elif alphafilter:
return c.execute('SELECT DISTINCT * FROM series WHERE title REGEXP (?)',
(alphafilter + '*',))
else:
return c.execute('SELECT DISTINCT * FROM series')
def get_series_season_count(series_id):
c = _database.cursor()
    row = c.execute('''SELECT MAX(sea.season_no) AS total_seasons
                       FROM season AS sea
                       JOIN series AS ser ON ser.series_id = sea.series_id
                       WHERE ser.series_id = (?)
                       GROUP BY ser.series_id''', (series_id,)).fetchone()
c.close()
if row:
return row['total_seasons']
else:
return 0
def get_series_episode_count(series_id, filter=None):
c = _database.cursor()
if filter == 'watched':
row = c.execute('''SELECT COUNT(e.episode_id) AS total_episodes
FROM episode AS e
JOIN season AS sea ON sea.season_id = e.season_id
JOIN series AS ser ON ser.series_id = sea.series_id
WHERE ser.series_id = (?) AND e.play_count > 0
GROUP BY ser.series_id''', (series_id,)).fetchone()
else:
row = c.execute('''SELECT COUNT(e.episode_id) AS total_episodes
FROM episode AS e
JOIN season AS sea ON sea.season_id = e.season_id
JOIN series AS ser ON ser.series_id = sea.series_id
WHERE ser.series_id = (?)
GROUP BY ser.series_id''', (series_id,)).fetchone()
c.close()
if row:
return row['total_episodes']
else:
return 0
def get_series_year(series_id):
c = _database.cursor()
row = c.execute('''SELECT e.year FROM episode AS e
JOIN season AS sea ON sea.season_id = e.season_id
JOIN series AS ser ON ser.series_id = sea.series_id
WHERE ser.series_id = (?)
ORDER BY e.year ASC LIMIT 1''', (series_id,)).fetchone()
c.close()
if row:
return row['year']
else:
return None
def _update_series_last_update(series_id, time=None):
    # A default of datetime.now() would be evaluated once at import time,
    # so resolve it at call time instead.
    if time is None:
        time = datetime.now()
    c = _database.cursor()
    c.execute('UPDATE series SET last_updated = :last_update WHERE series_id = :series_id', {
        'last_update': time,
        'series_id': series_id
    })
    _database.commit()
    c.close()
def get_seasons(series_id):
c = _database.cursor()
return c.execute('''SELECT DISTINCT sea.*,ser.title AS series_title
FROM season AS sea
JOIN series AS ser ON ser.series_id = sea.series_id
WHERE ser.series_id = (?)''', (series_id,))
def get_season_episode_count(season_id, filter=None):
c = _database.cursor()
if filter == 'watched':
row = c.execute('''SELECT COUNT(e.episode_id) AS total_episodes
FROM episode AS e
JOIN season AS sea ON sea.season_id = e.season_id
WHERE sea.season_id = (?) AND e.play_count > 0
GROUP BY sea.season_id''', (season_id,)).fetchone()
else:
row = c.execute('''SELECT COUNT(e.episode_id) AS total_episodes
FROM episode AS e
JOIN season AS sea ON sea.season_id = e.season_id
WHERE sea.season_id = (?)
GROUP BY sea.season_id''', (season_id,)).fetchone()
c.close()
if row:
return row['total_episodes']
else:
return 0
def get_season_year(season_id):
c = _database.cursor()
row = c.execute('''SELECT e.year FROM episode AS e
JOIN season AS sea ON sea.season_id = e.season_id
WHERE sea.season_id = (?)
ORDER BY e.year ASC LIMIT 1''', (season_id,)).fetchone()
c.close()
if row:
return row['year']
else:
return None
def get_episodes(season_id):
c = _database.cursor()
return c.execute('''SELECT DISTINCT e.*, sea.season_no AS season_no, ser.title AS series_title, ser.series_id AS series_id
FROM episode AS e
JOIN season AS sea ON sea.season_id = e.season_id
JOIN series AS ser ON ser.series_id = sea.series_id
WHERE e.season_id = (?)''', (season_id,))
def get_types(col):
    c = _database.cursor()
    items = c.execute('SELECT DISTINCT %s FROM series' % col)
    excluded = ('', 0, 'Inc.', 'LLC.')
    types = []
    for data in items:
        data = data[0]
        if isinstance(data, str):
            if 'Rated' in data:
                item = data.split('for')[0]
                if item not in types and item not in excluded:
                    types.append(item)
            else:
                for item in data.split(','):
                    item = item.replace('& ', '').strip()
                    if item not in types and item not in excluded:
                        types.append(item)
        elif data != 0 and data is not None:
            types.append(str(data))
    c.close()
    return types
def update_tv(force=False):
# Check if we've recently updated and skip
if not force and not _needs_update():
return
dialog = xbmcgui.DialogProgress()
dialog.create('Refreshing TV Database')
dialog.update(0, 'Initializing TV Scan')
xml_series_url = '{0}/tve/xml/category?categoryid=101'.format(db_common.API_DOMAIN)
data = connection.get_url(xml_series_url)
soup = BeautifulSoup(data)
series_list = soup.find('subcategory', recursive=False).find('series', recursive=False).find_all('series', recursive=False)
# Mark all series as unfound. This will be updated as we go through
c = _database.cursor()
c.execute("UPDATE series SET in_last_update = 0")
_database.commit()
c.close()
total = len(series_list)
count = 0
for series in series_list:
count += 1
        dialog.update(count * 100 / total, 'Scanned {0} of {1} TV series'.format(count, total))
        print('series:')
        print(series)
series_json_url = '{0}/api/series/{1}'.format(db_common.API_DOMAIN, series['seriesid'])
json_data = json.loads(connection.get_url(series_json_url))
series_id = series['seriesid']
title = common.string_unicode(json_data['name'])
title_sort = common.string_unicode(json_data['sortName'])
plot = common.string_unicode(json_data['description']['long'])
total_seasons = json_data['totalSeasons']
total_episodes = json_data['totalEpisodes']
thumb = None
for image in series.find_all('Image'):
if image['width'] == '1920' and image['height'] == '1080':
thumb = image.find('url').string
break
insert_series(series_id, title, title_sort, plot, None, None, thumb, total_seasons, total_episodes)
# Season Children
if 'seasons' in json_data:
_json_process_seasons(json_data['seasons'], series_id)
_set_last_update()
    # Remove series that were not seen in the last update
    c = _database.cursor()
    c.execute("DELETE FROM series WHERE in_last_update = 0")
    _database.commit()
    c.close()
def _json_process_seasons(season_data, series_id):
for season in season_data:
insert_season(series_id, season['seasonNum'], season['description']['long'])
def update_series(series_id, force=False):
# Check for new episodes every 12 hours
row = lookup_series(series_id, 'last_updated').fetchone()
    if force is False and row and row['last_updated']:
        last_update = common.parse_date(row['last_updated'], '%Y-%m-%d %H:%M:%S.%f')
        if (datetime.now() - last_update).total_seconds() < 43200:
# No update needed
return
xml_series_url = '{0}/tve/xml/series?seriesid={1}'.format(db_common.API_DOMAIN, series_id)
data = connection.get_url(xml_series_url)
series = BeautifulSoup(data).find('series', recursive=False)
for episode in series.find_all('title', attrs={'type': 'Episode'}):
episode_id = episode['titleid']
title = common.string_unicode(episode.find('title', recursive=False).string)
title_sort = common.string_unicode(episode.find('sorttitle', recursive=False).string)
plot = common.string_unicode(episode.find('description', recursive=False).string)
year = episode.find('releaseyear', recursive=False).string
duration = episode.find('duration', recursive=False).string
mpaa = episode.find('rating', recursive=False).string
advisories = episode.find('advisories', recursive=False).string
air_date = None
        try:
            air_date = common.parse_date(episode.find('originalairdate', recursive=False).string, '%m/%d/%Y %I:%M%p')
        except (AttributeError, TypeError, ValueError):
            # Missing tag or unparseable date: leave air_date as None
            pass
thumb = None
for image in episode.find_all('image'):
if image['width'] == '866' and image['height'] == '487':
thumb = image.find('url').string
break
series_tag = episode.find('series', recursive=False)
episode_no = series_tag['episode']
season_no = series_tag['season']
season = lookup_season(series_id=series_id, season_no=season_no, fields='season_id').fetchone()
if not season:
insert_season(series_tag['seriesid'], season_no)
season = lookup_season(series_id=series_id, season_no=season_no, fields='season_id').fetchone()
season_id = season['season_id']
insert_episode(episode_id, season_id, episode_no, title, title_sort, plot, duration, year, None,
mpaa, advisories, air_date, thumb)
_update_series_last_update(series_id)
def _needs_update():
# Update every 15 days
if 'last_update' in _database_meta:
last_update = common.parse_date(_database_meta['last_update'], '%Y-%m-%d')
return (date.today() - last_update.date()).days > 15
return True
def _set_last_update():
_database_meta['last_update'] = date.today().strftime('%Y-%m-%d')
_write_meta_file()
def _write_meta_file():
f = open(DB_META_FILE, 'w')
json.dump(_database_meta, f)
f.close()
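The two helpers above persist a date stamp in a JSON sidecar and compare it against today. The freshness check can be exercised in isolation (a Python 3 sketch; `needs_update` is a stand-in for `_needs_update`, not this module's API):

```python
from datetime import date, timedelta

def needs_update(meta, max_age_days=15):
    # Stale when no stamp exists or the stamp is older than max_age_days
    if 'last_update' in meta:
        last = date.fromisoformat(meta['last_update'])
        return (date.today() - last).days > max_age_days
    return True

fresh = {'last_update': date.today().strftime('%Y-%m-%d')}
stale = {'last_update': (date.today() - timedelta(days=20)).strftime('%Y-%m-%d')}
```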
DB_META_FILE = os.path.join(common.__addonprofile__, 'tv.meta')
_database_meta = False
if xbmcvfs.exists(DB_META_FILE):
f = open(DB_META_FILE, 'r')
_database_meta = json.load(f)
f.close()
else:
_database_meta = {}
DB_FILE = os.path.join(common.__addonprofile__, 'tv.db')
if not xbmcvfs.exists(DB_FILE):
_database = sqlite.connect(DB_FILE)
_database.text_factory = str
_database.row_factory = sqlite.Row
create()
else:
_database = sqlite.connect(DB_FILE)
_database.text_factory = str
_database.row_factory = sqlite.Row
| 33.769091 | 127 | 0.586658 | 2,252 | 18,573 | 4.62833 | 0.112789 | 0.05603 | 0.034539 | 0.019956 | 0.527679 | 0.45438 | 0.415523 | 0.367936 | 0.293677 | 0.266142 | 0 | 0.005395 | 0.301405 | 18,573 | 549 | 128 | 33.830601 | 0.797919 | 0.015991 | 0 | 0.406667 | 0 | 0.002222 | 0.400416 | 0.004763 | 0 | 0 | 0 | 0.001821 | 0 | 0 | null | null | 0.002222 | 0.022222 | null | null | 0.004444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a78ce58146e32ab5bc583a0b5ea144d7df99f985 | 10,152 | py | Python | EyePatterns/main_test_all_clusters.py | Sale1996/Pattern-detection-of-eye-tracking-scanpaths | 15c832f26dce98bb95445f9f39f454f99bbb6029 | [
"MIT"
] | 1 | 2021-12-07T08:02:30.000Z | 2021-12-07T08:02:30.000Z | EyePatterns/main_test_all_clusters.py | Sale1996/Pattern-detection-of-eye-tracking-scanpaths | 15c832f26dce98bb95445f9f39f454f99bbb6029 | [
"MIT"
] | null | null | null | EyePatterns/main_test_all_clusters.py | Sale1996/Pattern-detection-of-eye-tracking-scanpaths | 15c832f26dce98bb95445f9f39f454f99bbb6029 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import distance
from matplotlib import style
from clustering_algorithms.affinity_propagation import AffinityPropagation
from clustering_algorithms.custom_k_means import KMeans
from clustering_algorithms.custom_mean_shift import MeanShift
from clustering_algorithms.custom_mean_shift_string_edition import MeanShiftStringEdition
from clustering_algorithms.dbscan import DbScan
from prepare_data.format_sequences import format_sequences_from_student
from utils.e_mine import e_mine_find_common_scanpath
from utils.string_compare_algorithm import levenstein_sequence_similarity, is_string_similar, needleman_wunsch, \
needleman_wunsch_with_penalty
import numpy as np
# def initialize_2D_number_data_and_plot_them():
# number_data = np.array([[1, 2], [1.5, 1.8], [5, 8], [8, 8], [1, 0.6], [9, 11], [8, 2], [10, 2], [9, 3]])
# # plot data
# plt.scatter(number_data[:, 0], number_data[:, 1])
# plt.show()
# return number_data
#
#
# def test_k_means_with_numbers_then_plot_results():
# clf = KMeans(k=3)
# clf.fit(number_data)
#
# for centroid in clf.centroids:
# plt.scatter(clf.centroids[centroid][0], clf.centroids[centroid][1],
# marker="o", color="k", s=150, linewidths=5)
#
# for classification in clf.classifications:
# color = colors[classification]
# for featureset in clf.classifications[classification]:
# plt.scatter(featureset[0], featureset[1], marker="x", color=color,
# s=150, linewidths=5)
# plt.show()
#
#
# def test_mean_shift_with_numbers_then_plot_results():
# clf_ms = MeanShift()
# clf_ms.fit(number_data)
# plt.scatter(number_data[:, 0], number_data[:, 1], s=150)
# centroids = clf_ms.centroids
# for c in centroids:
# plt.scatter(centroids[c][0], centroids[c][1], color='k', marker="*", s=150)
# plt.show()
def initialize_string_sequences(student_name):
# print(format_sequences_from_student(student_name))
return format_sequences_from_student(student_name)
# return ["ACCAEF", "ACCEF", "AACF", "CCCEF", "CCAACCF", "CCACF"]
def print_description():
print("***************************************************")
print("NAME OF ALGORITHM")
print("- *CLUSTER REPRESENTER* [CLUSTER MEMBER, CLUSTER MEMBER, CLUSTER MEMBER]")
print("***************************************************")
def test_and_print_results_string_k_means_with_levenshtein_distance():
kmeans_alg = KMeans(k=3, distance_function=distance.levenshtein, find_average_function=e_mine_find_common_scanpath,
check_is_optimized_function=is_string_similar)
kmeans_alg.fit(string_data)
print_k_means_results(kmeans_alg, "Levenshtein")
def test_and_print_results_string_k_means_with_needleman_wunsch_distance():
kmeans_alg = KMeans(k=3, distance_function=needleman_wunsch, find_average_function=e_mine_find_common_scanpath,
check_is_optimized_function=is_string_similar)
kmeans_alg.fit(string_data)
print_k_means_results(kmeans_alg, "Needleman-Wunsch")
def test_and_print_results_string_k_means_with_needleman_wunsch_distance_with_extra_penalty_points():
kmeans_alg = KMeans(k=3, distance_function=needleman_wunsch_with_penalty,
find_average_function=e_mine_find_common_scanpath,
check_is_optimized_function=is_string_similar)
kmeans_alg.fit(string_data)
print_k_means_results(kmeans_alg, "Needleman-Wunsch with additional penalty")
def print_k_means_results(kmeans_alg, distance_algorithm):
centroid_cluster_map_kmeans = {}
for i in range(0, len(kmeans_alg.centroids)):
centroid_cluster_map_kmeans[kmeans_alg.centroids[i]] = kmeans_alg.classifications[i]
print()
print("K Means string edition with %s distance algorithm" % distance_algorithm)
for centroid in centroid_cluster_map_kmeans:
print(" - *%s* %s" % (centroid, centroid_cluster_map_kmeans[centroid]))
def test_and_print_results_string_mean_shift_with_levenshtein_distance():
mean_shift_string_edition = MeanShiftStringEdition()
mean_shift_string_edition.fit(string_data)
print_mean_shift_results(mean_shift_string_edition, "Levenshtein")
def test_and_print_results_string_mean_shift_with_needleman_wunsch_distance():
mean_shift_string_edition = MeanShiftStringEdition(distance_function=needleman_wunsch)
mean_shift_string_edition.fit(string_data)
print_mean_shift_results(mean_shift_string_edition, "Needleman-Wunsch")
def test_and_print_results_string_mean_shift_with_needleman_wunsch_distance_with_extra_penalty_points():
mean_shift_string_edition = MeanShiftStringEdition(distance_function=needleman_wunsch_with_penalty)
mean_shift_string_edition.fit(string_data)
print_mean_shift_results(mean_shift_string_edition, "Needleman-Wunsch with additional penalty")
def print_mean_shift_results(mean_shift_string_edition, distance_algorithm):
print()
print("Mean Shift string edition with %s distance algorithm" % distance_algorithm)
for centroid in mean_shift_string_edition.centroids:
print(" - *%s*" % mean_shift_string_edition.centroids[centroid])
def test_and_print_results_string_affinity_propagation_with_levenstein_distance():
data_as_array = np.asarray(string_data)
lev_similarity_scores = -1 * np.array(
[[distance.levenshtein(w1, w2) for w1 in data_as_array] for w2 in data_as_array])
affinity_propagation_alg = AffinityPropagation()
affinity_propagation_alg.fit(lev_similarity_scores)
print_affinity_propagation_results(affinity_propagation_alg, data_as_array, "Levenshtein")
def test_and_print_results_string_affinity_propagation_with_needleman_wunsch_distance():
data_as_array = np.asarray(string_data)
lev_similarity_scores = -1 * np.array(
[[needleman_wunsch(w1, w2) for w1 in data_as_array] for w2 in data_as_array])
affinity_propagation_alg = AffinityPropagation()
affinity_propagation_alg.fit(lev_similarity_scores)
print_affinity_propagation_results(affinity_propagation_alg, data_as_array, "Needleman-Wunsch")
def test_and_print_results_string_affinity_propagation_with_needleman_wunsch_distance_with_extra_penalty_points():
data_as_array = np.asarray(string_data)
lev_similarity_scores = -1 * np.array(
[[needleman_wunsch_with_penalty(w1, w2) for w1 in data_as_array] for w2 in data_as_array])
affinity_propagation_alg = AffinityPropagation()
affinity_propagation_alg.fit(lev_similarity_scores)
print_affinity_propagation_results(affinity_propagation_alg, data_as_array, "Needleman-Wunsch with additional penalty")
def print_affinity_propagation_results(affinity_propagation_alg, data_as_array, distance_algorithm):
print()
print('Affinity Propagation with %s distance algorithm' % distance_algorithm)
exemplar_features_map = affinity_propagation_alg.get_exemplars_and_their_features(data_as_array)
for exemplar in exemplar_features_map:
print(" - *%s* %s" % (exemplar, exemplar_features_map[exemplar]))
def test_and_print_results_string_db_scan_with_levenstein_distance():
def lev_metric(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return distance.levenshtein(string_data[i], string_data[j])
db_scan = DbScan()
db_scan.fit(lev_metric, string_data)
print_db_scan_results(db_scan, "Levenshtein")
def test_and_print_results_string_db_scan_with_needleman_wunsch_distance():
def lev_metric(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return needleman_wunsch(string_data[i], string_data[j])
db_scan = DbScan()
db_scan.fit(lev_metric, string_data)
print_db_scan_results(db_scan, "Needleman-Wunsch")
def test_and_print_results_string_db_scan_with_needleman_wunsch_distance_with_extra_penalty_points():
def lev_metric(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return needleman_wunsch_with_penalty(string_data[i], string_data[j])
db_scan = DbScan()
db_scan.fit(lev_metric, string_data)
print_db_scan_results(db_scan, "Needleman-Wunsch with additional penalty")
def print_db_scan_results(db_scan, distance_algorithm):
print()
print('DB Scan with %s distance algorithm' % distance_algorithm)
for cluster in db_scan.get_clusters():
cluster_representer = e_mine_find_common_scanpath(db_scan.get_clusters()[cluster])
print(" - *%s* %s" % (cluster_representer, db_scan.get_clusters()[cluster]))
'''
1# Initialize number collection and plot style
'''
# style.use('ggplot')
# number_data = initialize_2D_number_data_and_plot_them()
# colors = 10 * ["g", "r", "c", "b", "k"]
'''
Test classification algorithms with numbers
'''
# test_k_means_with_numbers_then_plot_results()
# test_mean_shift_with_numbers_then_plot_results()
'''
2# Initialize string collection and print description on printed form
'''
student_name = "student_1"
string_data = initialize_string_sequences(student_name)
print_description()
'''
Test classification algorithms with strings
'''
test_and_print_results_string_k_means_with_levenshtein_distance()
test_and_print_results_string_k_means_with_needleman_wunsch_distance()
test_and_print_results_string_k_means_with_needleman_wunsch_distance_with_extra_penalty_points()
test_and_print_results_string_mean_shift_with_levenshtein_distance()
test_and_print_results_string_mean_shift_with_needleman_wunsch_distance()
test_and_print_results_string_mean_shift_with_needleman_wunsch_distance_with_extra_penalty_points()
test_and_print_results_string_affinity_propagation_with_levenstein_distance()
test_and_print_results_string_affinity_propagation_with_needleman_wunsch_distance()
test_and_print_results_string_affinity_propagation_with_needleman_wunsch_distance_with_extra_penalty_points()
test_and_print_results_string_db_scan_with_levenstein_distance()
test_and_print_results_string_db_scan_with_needleman_wunsch_distance()
test_and_print_results_string_db_scan_with_needleman_wunsch_distance_with_extra_penalty_points() | 42.476987 | 123 | 0.775611 | 1,363 | 10,152 | 5.294204 | 0.10785 | 0.070676 | 0.039911 | 0.063193 | 0.719512 | 0.694568 | 0.65133 | 0.601441 | 0.570122 | 0.52439 | 0 | 0.009061 | 0.130319 | 10,152 | 239 | 124 | 42.476987 | 0.808246 | 0.149823 | 0 | 0.3 | 0 | 0 | 0.082384 | 0.012232 | 0 | 0 | 0 | 0 | 0 | 1 | 0.161538 | false | 0 | 0.092308 | 0.007692 | 0.284615 | 0.446154 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
a799e70dc24ceb42c8e876b81ace1c8f5d0f6ceb | 727 | py | Python | demo_odoo_tutorial_wizard/models/models.py | digitalsatori/odoo-demo-addons-tutorial | 8eb56156ac55f317f90bca089886c392556759c2 | [
"MIT"
] | 57 | 2020-06-22T05:28:11.000Z | 2022-03-25T08:15:08.000Z | demo_odoo_tutorial_wizard/models/models.py | digitalsatori/odoo-demo-addons-tutorial | 8eb56156ac55f317f90bca089886c392556759c2 | [
"MIT"
] | 2 | 2020-11-20T07:11:27.000Z | 2022-03-30T00:20:29.000Z | demo_odoo_tutorial_wizard/models/models.py | digitalsatori/odoo-demo-addons-tutorial | 8eb56156ac55f317f90bca089886c392556759c2 | [
"MIT"
] | 29 | 2020-07-04T15:24:01.000Z | 2022-03-28T01:29:03.000Z | from odoo import models, fields, api
from odoo.exceptions import ValidationError
class DemoOdooWizardTutorial(models.Model):
_name = 'demo.odoo.wizard.tutorial'
_description = 'Demo Odoo Wizard Tutorial'
name = fields.Char('Description', required=True)
partner_id = fields.Many2one('res.partner', string='Partner')
@api.multi
def action_context_demo(self):
# if self._context.get('context_data', False):
if self.env.context.get('context_data'):
raise ValidationError('have context data')
raise ValidationError('hello')
@api.multi
def action_button(self):
for record in self:
record.with_context(context_data=True).action_context_demo() | 34.619048 | 72 | 0.696011 | 87 | 727 | 5.666667 | 0.482759 | 0.089249 | 0.056795 | 0.089249 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001712 | 0.196699 | 727 | 21 | 72 | 34.619048 | 0.842466 | 0.060523 | 0 | 0.125 | 0 | 0 | 0.165689 | 0.036657 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a79daf8941b0f06f1e88f279de06585e5430d9d8 | 659 | py | Python | eaa_donations/donations/models/partner_charity.py | andrewbird2/eaa_donations | 40a2cb2431130b330130f101c89bd3f8c503d2e2 | [
"MIT"
] | null | null | null | eaa_donations/donations/models/partner_charity.py | andrewbird2/eaa_donations | 40a2cb2431130b330130f101c89bd3f8c503d2e2 | [
"MIT"
] | 13 | 2020-06-05T19:27:58.000Z | 2022-02-26T13:40:54.000Z | eaa_donations/donations/models/partner_charity.py | andrewbird2/eaa_donations | 40a2cb2431130b330130f101c89bd3f8c503d2e2 | [
"MIT"
] | null | null | null | from django.db import models
class PartnerCharity(models.Model):
slug_id = models.CharField(max_length=30, unique=True)
name = models.TextField(unique=True, verbose_name='Name (human readable)')
email = models.EmailField(help_text='Used to cc the charity on receipts')
xero_account_name = models.TextField(help_text='Exact text of incoming donation account in xero')
active = models.BooleanField(default=True)
thumbnail = models.FileField(blank=True, null=True)
order = models.IntegerField(null=True, blank=True)
def __str__(self):
return self.name
class Meta:
verbose_name_plural = 'Partner charities'
| 36.611111 | 101 | 0.732929 | 87 | 659 | 5.402299 | 0.643678 | 0.042553 | 0.080851 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003656 | 0.169954 | 659 | 17 | 102 | 38.764706 | 0.855576 | 0 | 0 | 0 | 0 | 0 | 0.180577 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0.076923 | 0.923077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a7a3c07297bdc5a9d9dc9e8e2723b1d3e587876e | 915 | py | Python | sghymnal/users/models.py | shortnd/sghymnal | c10d9a7e2fda803dcb5046b9f7bc099f32b6c603 | [
"MIT"
] | null | null | null | sghymnal/users/models.py | shortnd/sghymnal | c10d9a7e2fda803dcb5046b9f7bc099f32b6c603 | [
"MIT"
] | null | null | null | sghymnal/users/models.py | shortnd/sghymnal | c10d9a7e2fda803dcb5046b9f7bc099f32b6c603 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AbstractUser
from django.db.models import BooleanField, CharField
from django.urls import reverse
from django.utils.translation import ugettext_lazy as _
class User(AbstractUser):
# First Name and Last Name do not cover name patterns
# around the globe.
name = CharField(_("Name of User"), blank=True, max_length=255)
foes_allowed = BooleanField("Foes Allowed", default=False)
push_notifications_allowed = BooleanField(
"Push Notifications Allowed", default=False
)
roster_allowed = BooleanField("Rosters Allowed", default=False)
songbook_allowed = BooleanField("Songbook Allowed", default=False)
users_allowed = BooleanField("Users Allowed", default=False)
feed_allowed = BooleanField("Feed Allowed", default=False)
def get_absolute_url(self):
return reverse("users:detail", kwargs={"username": self.username})
| 39.782609 | 74 | 0.748634 | 110 | 915 | 6.109091 | 0.509091 | 0.169643 | 0.169643 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003901 | 0.159563 | 915 | 22 | 75 | 41.590909 | 0.869961 | 0.07541 | 0 | 0 | 0 | 0 | 0.149466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.25 | 0.0625 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
a7a3c324c55d54b727b474911571c79dbd56bbdd | 3,285 | py | Python | GNetLMM/pycore/mtSet/linalg/linalg_matrix.py | PMBio/GNetLMM | 103d6433ff6d4a13b5787c116032fda268dc4302 | [
"Apache-2.0"
] | 4 | 2016-02-25T18:40:36.000Z | 2019-05-06T06:15:47.000Z | GNetLMM/pycore/mtSet/linalg/linalg_matrix.py | PMBio/GNetLMM | 103d6433ff6d4a13b5787c116032fda268dc4302 | [
"Apache-2.0"
] | 6 | 2016-03-29T02:55:17.000Z | 2017-11-27T19:30:04.000Z | GNetLMM/pycore/mtSet/linalg/linalg_matrix.py | PMBio/GNetLMM | 103d6433ff6d4a13b5787c116032fda268dc4302 | [
"Apache-2.0"
] | 2 | 2017-05-09T05:23:50.000Z | 2019-07-27T13:19:22.000Z | """Matrix linear algebra routines needed for GP models"""
import scipy as SP
import scipy.linalg as linalg
import logging
def solve_chol(A,B):
"""
Solve cholesky decomposition::
return A\(A'\B)
"""
# X = linalg.solve(A,linalg.solve(A.transpose(),B))
# much faster version
X = linalg.cho_solve((A, True), B)
return X
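`cho_solve` exploits the identity x = U \ (U' \ b) for A = U'U: one forward and one back substitution against the triangular factor. The same two solves can be spelled out with NumPy alone (assuming `numpy` is available; this mirrors the docstring, it is not part of the module):

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
U = np.linalg.cholesky(A).T              # upper-triangular factor, A = U.T @ U
y = np.linalg.solve(U.T, b)              # forward substitution: U' \ b
x = np.linalg.solve(U, y)                # back substitution:    U \ y
```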
def jitChol(A, maxTries=10, warning=True):
"""Do a Cholesky decomposition with jitter.
Description:
U, jitter = jitChol(A, maxTries, warning) attempts a Cholesky
decomposition on the given matrix, if matrix isn't positive
definite the function adds 'jitter' and tries again. Thereafter
the amount of jitter is multiplied by 10 each time it is added
again. This is continued for a maximum of 10 times. The amount of
jitter added is returned.
Returns:
U - the Cholesky decomposition for the matrix.
jitter - the amount of jitter that was added to the matrix.
Arguments:
A - the matrix for which the Cholesky decomposition is required.
maxTries - the maximum number of times that jitter is added before
giving up (default 10).
warning - whether to give a warning for adding jitter (default is True)
See also
CHOL, PDINV, LOGDET
Copyright (c) 2005, 2006 Neil D. Lawrence
"""
jitter = 0
i = 0
while(True):
try:
# Try --- need to check A is positive definite
if jitter == 0:
jitter = abs(SP.trace(A))/A.shape[0]*1e-6
LC = linalg.cholesky(A, lower=True)
return LC.T, 0.0
else:
if warning:
# pdb.set_trace()
# plt.figure()
# plt.imshow(A, interpolation="nearest")
# plt.colorbar()
# plt.show()
logging.error("Adding jitter of %f in jitChol()." % jitter)
LC = linalg.cholesky(A+jitter*SP.eye(A.shape[0]), lower=True)
return LC.T, jitter
except linalg.LinAlgError:
# Seems to have been non-positive definite.
if i<maxTries:
jitter = jitter*10
else:
                raise linalg.LinAlgError("Matrix non positive definite, jitter of " + str(jitter) + " added but failed after " + str(i) + " trials.")
i += 1
return LC
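The retry logic in `jitChol` can be mirrored compactly with NumPy to see the jitter escalation on a rank-deficient (positive semi-definite) matrix — a self-contained sketch, not this module's API:

```python
import numpy as np

def jit_chol(A, max_tries=10):
    # Upper Cholesky factor with escalating diagonal jitter, as in jitChol above
    jitter = 0.0
    for _ in range(max_tries):
        try:
            U = np.linalg.cholesky(A + jitter * np.eye(A.shape[0])).T
            return U, jitter
        except np.linalg.LinAlgError:
            jitter = abs(np.trace(A)) / A.shape[0] * 1e-6 if jitter == 0 else jitter * 10
    raise np.linalg.LinAlgError("not positive definite after %d tries" % max_tries)

A = np.array([[1.0, 1.0], [1.0, 1.0]])   # singular, so plain Cholesky fails
U, jitter = jit_chol(A)                   # succeeds once jitter is added
```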
def jitEigh(A,maxTries=10,warning=True):
"""
Do a Eigenvalue Decompsition with Jitter,
works as jitChol
"""
jitter = 0
i = 0
while(True):
if jitter == 0:
jitter = abs(SP.trace(A))/A.shape[0]*1e-6
S,U = linalg.eigh(A)
else:
if warning:
# pdb.set_trace()
# plt.figure()
# plt.imshow(A, interpolation="nearest")
# plt.colorbar()
# plt.show()
logging.error("Adding jitter of %f in jitEigh()." % jitter)
S,U = linalg.eigh(A+jitter*SP.eye(A.shape[0]))
if S.min()>1E-10:
return S,U
        if i < maxTries:
            jitter = jitter*10
        else:
            raise linalg.LinAlgError("Matrix non positive definite, jitter of " + str(jitter) + " added but failed after " + str(i) + " trials.")
        i += 1
# --- src/midiutil/midiosc.py (repo: neonkingfr/VizBench, MIT license) ---
"""
This module provides an interface to MIDI things for OSC
"""
import sys
import time
import traceback
import thread
import threading
import copy
import string
import re
from threading import Thread,Lock
from math import sqrt
from ctypes import *
from time import sleep
from traceback import format_exc
from array import array
from nosuch.midiutil import *
from nosuch.oscutil import *
class MidiOscHardware(MidiBaseHardware):
def __init__(self,input_name=None,output_name=None):
if input_name == None:
input_name = "9998@127.0.0.1"
self.input_name = input_name
if output_name == None:
output_name = "9999@127.0.0.1"
self.output_name = output_name
def input_devices(self):
return [self.input_name]
def output_devices(self):
return [self.output_name]
def get_input(self,input_name=None):
if input_name == None:
input_name = self.input_name
port = re.compile(".*@").search(input_name).group()[:-1]
host = re.compile("@.*").search(input_name).group()[1:]
return MidiOscHardwareInput(host,port)
def get_output(self,output_name=None):
if output_name == None:
output_name = self.output_name
port = re.compile(".*@").search(output_name).group()[:-1]
host = re.compile("@.*").search(output_name).group()[1:]
return MidiOscHardwareOutput(host,port)
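The `port@host` convention handled by `get_input`/`get_output` can be illustrated on its own; the regex slicing below mirrors the two lines above (the address is just an example value):

```python
import re

# Split a "port@host" device name the same way get_input()/get_output() do:
# ".*@" greedily grabs everything up to the "@" (then drops the "@" with [:-1]),
# and "@.*" grabs the "@" plus the remainder (then drops the "@" with [1:]).
input_name = "9998@127.0.0.1"
port = re.compile(".*@").search(input_name).group()[:-1]
host = re.compile("@.*").search(input_name).group()[1:]
```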
class MidiOscHardwareInput(MidiBaseHardwareInput):
def __init__(self,inhost,inport):
raise Exception("MidiOscHardwareInput isn't finished")
def open(self):
if Midi.oneThread:
Midi.oneThread._add_midiin(self)
def close(self):
if Midi.oneThread:
Midi.oneThread._remove_midiin(self)
def __str__(self):
return 'MidiInput(name="debug")'
def to_xml(self):
return '<midi_input name="debug"/>'
class MidiOscHardwareOutput(MidiBaseHardwareOutput):
def __init__(self,outhost,outport):
self.recipient = OscRecipient(outhost,outport)
def is_open(self):
return True
def open(self):
pass
def close(self):
pass
def write_msg(self,m):
o = m.to_osc()
b = createBinaryMsg(o[0],o[1])
r = self.recipient
r.osc_socket.sendto(b,(r.osc_addr,r.osc_port))
def schedule(self,msg,time=None):
Midi.schedule(self,msg,time)
def __str__(self):
return 'MidiOutput(name="debug")'
def to_xml(self):
return '<midi_output name="debug"/>'
"""
This is executed when module is loaded
"""
# --- nonebot/adapters/qqguild/message.py (repo: nonebot/adapter-qqguild, MIT license) ---
import re
from typing import Any, Type, Tuple, Union, Iterable
from nonebot.typing import overrides
from nonebot.adapters import Message as BaseMessage
from nonebot.adapters import MessageSegment as BaseMessageSegment
from .utils import escape, unescape
from .api import Message as GuildMessage
from .api import MessageArk, MessageEmbed
class MessageSegment(BaseMessageSegment["Message"]):
@classmethod
@overrides(BaseMessageSegment)
def get_message_class(cls) -> Type["Message"]:
return Message
@staticmethod
def ark(ark: MessageArk) -> "Ark":
return Ark("ark", data={"ark": ark})
@staticmethod
def embed(embed: MessageEmbed) -> "Embed":
return Embed("embed", data={"embed": embed})
@staticmethod
def emoji(id: str) -> "Emoji":
return Emoji("emoji", data={"id": id})
@staticmethod
def image(url: str) -> "Attachment":
return Attachment("attachment", data={"url": url})
@staticmethod
def mention_user(user_id: int) -> "MentionUser":
return MentionUser("mention_user", {"user_id": str(user_id)})
@staticmethod
def mention_channel(channel_id: int) -> "MentionChannel":
return MentionChannel("mention_channel", {"channel_id": str(channel_id)})
@staticmethod
def text(content: str) -> "Text":
return Text("text", {"text": content})
@overrides(BaseMessageSegment)
def is_text(self) -> bool:
return self.type == "text"
class Text(MessageSegment):
@overrides(MessageSegment)
def __str__(self) -> str:
return escape(self.data["text"])
class Emoji(MessageSegment):
@overrides(MessageSegment)
def __str__(self) -> str:
return f"<emoji:{self.data['id']}>"
class MentionUser(MessageSegment):
@overrides(MessageSegment)
def __str__(self) -> str:
return f"<@{self.data['user_id']}>"
class MentionEveryone(MessageSegment):
@overrides(MessageSegment)
def __str__(self) -> str:
return "@everyone"
class MentionChannel(MessageSegment):
@overrides(MessageSegment)
def __str__(self) -> str:
return f"<#{self.data['channel_id']}>"
class Attachment(MessageSegment):
@overrides(MessageSegment)
def __str__(self) -> str:
return f"<attachment:{self.data['url']}>"
class Embed(MessageSegment):
@overrides(MessageSegment)
def __str__(self) -> str:
return f"<embed:{self.data['embed']}>"
class Ark(MessageSegment):
@overrides(MessageSegment)
def __str__(self) -> str:
return f"<ark:{self.data['ark']}>"
class Message(BaseMessage[MessageSegment]):
@classmethod
@overrides(BaseMessage)
def get_segment_class(cls) -> Type[MessageSegment]:
return MessageSegment
@overrides(BaseMessage)
def __add__(
self, other: Union[str, MessageSegment, Iterable[MessageSegment]]
) -> "Message":
return super(Message, self).__add__(
MessageSegment.text(other) if isinstance(other, str) else other
)
@overrides(BaseMessage)
def __radd__(
self, other: Union[str, MessageSegment, Iterable[MessageSegment]]
) -> "Message":
return super(Message, self).__radd__(
MessageSegment.text(other) if isinstance(other, str) else other
)
@staticmethod
@overrides(BaseMessage)
def _construct(msg: str) -> Iterable[MessageSegment]:
text_begin = 0
for embed in re.finditer(
r"\<(?P<type>(?:@|#|emoji:))!?(?P<id>\w+?)\>",
msg,
):
content = msg[text_begin : embed.pos + embed.start()]
if content:
yield Text("text", {"text": unescape(content)})
text_begin = embed.pos + embed.end()
if embed.group("type") == "@":
yield MentionUser("mention_user", {"user_id": embed.group("id")})
elif embed.group("type") == "#":
yield MentionChannel(
"mention_channel", {"channel_id": embed.group("id")}
)
else:
yield Emoji("emoji", {"id": embed.group("id")})
content = msg[text_begin:]
if content:
yield Text("text", {"text": unescape(msg[text_begin:])})
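As a standalone illustration of the loop above: the same embed pattern, run over a hypothetical sample string, yields the `(type, id)` pairs that `_construct` turns into segments.

```python
import re

# The embed pattern used by Message._construct above: it recognizes
# "<@user>", "<#channel>" and "<emoji:id>" markers inside message text.
pattern = re.compile(r"\<(?P<type>(?:@|#|emoji:))!?(?P<id>\w+?)\>")
sample = "hi <@42>, meet me in <#7> <emoji:100>"
found = [(m.group("type"), m.group("id")) for m in pattern.finditer(sample)]
```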
@classmethod
def from_guild_message(cls, message: GuildMessage) -> "Message":
msg = Message()
if message.content:
msg.extend(Message(message.content))
if message.attachments:
msg.extend(
Attachment("attachment", data={"url": seg.url})
for seg in message.attachments
if seg.url
)
if message.embeds:
msg.extend(Embed("embed", data={"embed": seg}) for seg in message.embeds)
if message.ark:
msg.append(Ark("ark", data={"ark": message.ark}))
return msg
def extract_content(self) -> str:
return "".join(
str(seg)
for seg in self
if seg.type
in ("text", "emoji", "mention_user", "mention_everyone", "mention_channel")
)
)
# --- creme/metrics/__init__.py (repo: Raul9595/creme, BSD-3-Clause license) ---
"""
A set of metrics used in machine learning that can be computed in a streaming fashion, without any
loss in precision.
"""
from .accuracy import Accuracy
from .accuracy import RollingAccuracy
from .confusion import ConfusionMatrix
from .confusion import RollingConfusionMatrix
from .cross_entropy import CrossEntropy
from .cross_entropy import RollingCrossEntropy
from .fbeta import F1
from .fbeta import FBeta
from .fbeta import MacroF1
from .fbeta import MacroFBeta
from .fbeta import MicroF1
from .fbeta import MicroFBeta
from .fbeta import MultiFBeta
from .fbeta import RollingF1
from .fbeta import RollingFBeta
from .fbeta import RollingMacroF1
from .fbeta import RollingMacroFBeta
from .fbeta import RollingMicroF1
from .fbeta import RollingMicroFBeta
from .fbeta import RollingMultiFBeta
from .jaccard import Jaccard
from .log_loss import LogLoss
from .log_loss import RollingLogLoss
from .mae import MAE
from .mae import RollingMAE
from .mcc import MCC
from .mcc import RollingMCC
from .mse import MSE
from .mse import RollingMSE
from .multioutput import RegressionMultiOutput
from .precision import MacroPrecision
from .precision import MicroPrecision
from .precision import Precision
from .precision import RollingMacroPrecision
from .precision import RollingMicroPrecision
from .precision import RollingPrecision
from .recall import MacroRecall
from .recall import MicroRecall
from .recall import Recall
from .recall import RollingMacroRecall
from .recall import RollingMicroRecall
from .recall import RollingRecall
from .rmse import RMSE
from .rmse import RollingRMSE
from .rmsle import RMSLE
from .rmsle import RollingRMSLE
from .roc_auc import ROCAUC
from .smape import RollingSMAPE
from .smape import SMAPE
__all__ = [
'Accuracy',
'ConfusionMatrix',
'CrossEntropy',
'F1',
'FBeta',
'Jaccard',
'LogLoss',
'MAE',
'MacroF1',
'MacroFBeta',
'MacroPrecision',
'MacroRecall',
'MCC',
'MicroF1',
'MicroFBeta',
'MicroPrecision',
'MicroRecall',
'MSE',
'MultiFBeta',
'Precision',
'Recall',
'RegressionMultiOutput',
'RMSE',
'RMSLE',
'ROCAUC',
'RollingAccuracy',
'RollingConfusionMatrix',
'RollingCrossEntropy',
'RollingF1',
'RollingFBeta',
'RollingLogLoss',
'RollingMAE',
'RollingMacroF1',
'RollingMacroFBeta',
'RollingMacroPrecision',
'RollingMacroRecall',
'RollingMCC',
'RollingMicroF1',
'RollingMicroFBeta',
'RollingMicroPrecision',
'RollingMicroRecall',
'RollingMSE',
'RollingMultiFBeta',
'RollingPrecision',
'RollingRecall',
'RollingRMSE',
'RollingRMSLE',
'RollingSMAPE',
'SMAPE'
]
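The streaming-update contract shared by these metrics (including the windowed `Rolling*` variants) can be sketched without creme itself. The classes below are hypothetical stand-ins, not the real `Accuracy`/`RollingAccuracy`:

```python
import collections

class StreamingAccuracy:
    # Hypothetical stand-in for a streaming metric: one (y_true, y_pred)
    # pair at a time, constant memory, exact running value.
    def __init__(self):
        self.correct = 0
        self.total = 0
    def update(self, y_true, y_pred):
        self.correct += int(y_true == y_pred)
        self.total += 1
        return self
    def get(self):
        return self.correct / self.total if self.total else 0.0

class RollingAccuracy(StreamingAccuracy):
    # "Rolling" variants score only the most recent window_size observations.
    def __init__(self, window_size=2):
        super().__init__()
        self.window = collections.deque(maxlen=window_size)
    def update(self, y_true, y_pred):
        self.window.append(int(y_true == y_pred))
        return self
    def get(self):
        return sum(self.window) / len(self.window) if self.window else 0.0

acc, rolling = StreamingAccuracy(), RollingAccuracy(window_size=2)
for y_true, y_pred in [(1, 1), (0, 1), (1, 1), (0, 0)]:
    acc.update(y_true, y_pred)
    rolling.update(y_true, y_pred)
```

After the four updates, the global accuracy is 3/4 while the rolling window sees only the last two (both correct).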
# --- cd/checks/is_player_connected.py (repo: Axelware/CD-bot, MIT license) ---
# Future
from __future__ import annotations
# Standard Library
from collections.abc import Callable
from typing import Literal, TypeVar
# Packages
from discord.ext import commands
# Local
from cd import custom, exceptions
__all__ = (
"is_player_connected",
)
T = TypeVar("T")
def is_player_connected() -> Callable[[T], T]:
async def predicate(ctx: custom.Context) -> Literal[True]:
if not ctx.voice_client or not ctx.voice_client.is_connected():
raise exceptions.EmbedError(description="I'm not connected to any voice channels.")
return True
return commands.check(predicate)
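The `commands.check` factory pattern above can be mimicked without discord.py. Everything in this sketch (`require`, the dict-based `ctx`) is illustrative only: the outer function returns a decorator whose predicate runs before the command body.

```python
def require(predicate):
    # Factory: returns a decorator that guards a command with `predicate`.
    def decorator(func):
        def wrapped(ctx, *args, **kwargs):
            if not predicate(ctx):
                raise PermissionError("check failed")
            return func(ctx, *args, **kwargs)
        return wrapped
    return decorator

@require(lambda ctx: ctx.get("voice_connected", False))
def play(ctx, track):
    return "playing " + track

result = play({"voice_connected": True}, "song.mp3")
```

Calling `play({}, "song.mp3")` would raise instead, just as the real check raises `EmbedError` when no voice client is connected.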
# --- setup.py (repo: zkbt/henrietta, MIT license) ---
'''
This setup.py file sets up our package to be installable on any computer,
so that folks can `import henrietta` from within any directory.
Thanks to this file, you can...
...tell python to look for `henrietta` in the current directory (which you
can continue to edit), by typing *one* of the following commands:
`pip install -e .`
or
`python setup.py develop`
...move a copy of this code to your site-packages directory, where python will
be able to find it (but you won't be able to keep editing it), by typing *one*
of the following commands:
`pip install .`
or
`python setup.py install`
...upload the entire package to the Python Package Index, so that other folks
will be able to install your package via the simple `pip install henrietta`, by
running the following command:
`python setup.py release`
The template for this setup.py came was pieced together with help from
barentsen, christinahedges, timothydmorton, and dfm. Check them out on github
for more neat tricks!
[`python-packaging`](https://python-packaging.readthedocs.io/en/latest/index.html)
is a pretty useful resource too!
'''
# import our basic setup ingredients
from setuptools import setup, find_packages
import os,sys
# running `python setup.py release` from the command line will post to PyPI
if "release" in sys.argv[-1]:
os.system("python setup.py sdist")
# uncomment the next line to test out on test.pypi.com/project/tess-zap
#os.system("twine upload --repository-url https://test.pypi.org/legacy/ dist/*")
os.system("twine upload dist/*")
os.system("rm -rf dist/henrietta*")
sys.exit()
# a little kludge to get the version number from __version__
exec(open('henrietta/version.py').read())
# run the setup function
setup(
# people can type `import henrietta` to access this package
name = "henrietta",
# this package will only be installed if the current version doesn't exist
version = __version__,
# what's a short description of the package?
description = "Python toolkit playing with stellar brightness measurements, for ASTR3400 at CU Boulder.",
# what's a more detailed description?
long_description = open('README.md').read(),
# who's the main author?
author = "Zach Berta-Thompson",
# what's the main author's email?
author_email = "zach.bertathompson@colorado.edu",
# what's the URL for the repository?
url = "https://github.com/zkbt/henrietta",
# this figures out what subdirectories to include
packages = find_packages(),
# are the directories of data that should be accessible when installed?
include_package_data=False,
# where are those data directories?
package_data = {'henrietta':[]},
# any scripts will be copied into your $PATH, so that can run from the command line
scripts = [],
# some descriptions about this package (for searchability?)
classifiers=[
'Intended Audience :: Education',
'Intended Audience :: Science/Research',
'Programming Language :: Python',
'Topic :: Scientific/Engineering :: Astronomy'
],
# what other packages are required. these must be pip-installable
install_requires=['numpy',
'astropy',
'scipy',
'ipython',
'matplotlib',
'lightkurve>=1.0b26',
'tqdm',
'thefriendlystars>=0.0.2',
'illumination>=0.0.3',
'ipywidgets',
'jupyter',
'photutils',
'ipympl',
'scikit-image',
'emcee',
'corner'],
# the packages in `key` will be installed if folks run `pip install henrietta[key]`
extras_require={'models':['batman-package', ],
'docs':['sphinx', 'nbsphinx', 'sphinx_rtd_theme', 'numpydoc']},
# (I think just leave this set to False)
zip_safe=False,
# under what license is this code released?
license='MIT')
# --- zstackwoodpecker/zstackwoodpecker/operations/hybrid_operations.py (repo: bgerxx/woodpecker, Apache-2.0 license) ---
'''
All hybrid (Aliyun) operations for test.
@author: quarkonics
'''
from apibinding.api import ApiError
import apibinding.inventory as inventory
import apibinding.api_actions as api_actions
import zstackwoodpecker.test_util as test_util
import account_operations
import config_operations
import os
import inspect
def add_aliyun_key_secret(name, description, key, secret, session_uuid=None):
action = api_actions.AddAliyunKeySecretAction()
action.name = name
action.description = description
action.key = key
action.secret = secret
test_util.action_logger('Add [aliyun key secret:] %s' % key)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[aliyun key secret:] %s is added.' % key)
return evt.inventory
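Every wrapper in this module follows the same action-object shape: build an action, set its fields, hand it to a shared executor, return the resulting event. A dependency-free sketch of that shape (all names below are illustrative mocks, not the real SDK):

```python
class MockAddKeySecretAction:
    # Mock action object: just a bag of fields, like the api_actions classes.
    def __init__(self):
        self.name = None
        self.key = None

def execute_action(action):
    # Fake executor standing in for execute_action_with_session():
    # echoes the action's fields back as an "event".
    return {"action": type(action).__name__, "fields": vars(action)}

action = MockAddKeySecretAction()
action.name = "test-key"
action.key = "AKIA-EXAMPLE"
evt = execute_action(action)
```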
def del_aliyun_key_secret(uuid, session_uuid=None):
action = api_actions.DeleteAliyunKeySecretAction()
action.uuid = uuid
test_util.action_logger('Delete [aliyun key secret:] %s' % uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[aliyun key secret:] %s is deleted.' % uuid)
return evt
def update_aliyun_key_secret(uuid, name=None, description=None, session_uuid=None):
action = api_actions.UpdateAliyunKeySecretAction()
action.uuid = uuid
action.name = name
action.description = description
test_util.action_logger('Update [aliyun key secret:] %s' % uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[aliyun key secret:] %s is updated.' % uuid)
return evt
def attach_aliyun_key(uuid, session_uuid=None):
action = api_actions.AttachAliyunKeyAction()
action.uuid = uuid
test_util.action_logger('Attach [aliyun key:] %s' % uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[aliyun key:] %s is attached.' % uuid)
return evt
def detach_aliyun_key(uuid, session_uuid=None):
action = api_actions.DetachAliyunKeyAction()
action.uuid = uuid
test_util.action_logger('Detach [aliyun key:] %s' % uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[aliyun key:] %s is detached.' % uuid)
return evt
def get_oss_bucket_name_from_remote(data_center_uuid, session_uuid=None):
action = api_actions.GetOssBucketNameFromRemoteAction()
action.dataCenterUuid = data_center_uuid
test_util.action_logger('get Oss Bucket Name from Remote')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def add_oss_bucket_from_remote(data_center_uuid, oss_bucket_name, session_uuid=None):
action = api_actions.AddOssBucketFromRemoteAction()
action.dataCenterUuid = data_center_uuid
action.bucketName = oss_bucket_name
test_util.action_logger('Add [Oss Bucket From Remote:] %s %s' % (data_center_uuid, oss_bucket_name))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Oss Bucket:] %s %s is added.' % (data_center_uuid, oss_bucket_name))
return evt.inventory
def del_oss_bucket_name_in_local(uuid, session_uuid=None):
action = api_actions.DeleteOssBucketNameLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [Oss File Bucket Name in local:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Oss File Bucket Name in local:] %s is deleted.' % (uuid))
return evt
def create_oss_bucket_remote(data_center_uuid, bucket_name, description, session_uuid=None):
action = api_actions.CreateOssBucketRemoteAction()
action.dataCenterUuid = data_center_uuid
action.bucketName = bucket_name
action.description = description
test_util.action_logger('Create [Oss Bucket Name Remote:] %s %s' % (data_center_uuid, bucket_name))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Oss Bucket Name Remote:] %s %s is created.' % (data_center_uuid, bucket_name))
return evt.inventory
def del_oss_bucket_remote(uuid, session_uuid=None):
action = api_actions.DeleteOssBucketRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [Oss Bucket Name Remote:] %s' % uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Oss Bucket Name Remote:] %s is deleted.' % uuid)
return evt
def del_oss_bucket_file_remote(bucket_uuid, file_name, session_uuid=None):
action = api_actions.DeleteOssBucketFileRemoteAction()
action.uuid = bucket_uuid
action.fileName = file_name
test_util.action_logger('Delete [Oss Bucket File Remote:] %s %s' % (bucket_uuid, file_name))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Oss Bucket File Remote:] %s %s is deleted.' % (bucket_uuid, file_name))
return evt
def get_oss_bucket_file_from_remote(bucket_uuid, session_uuid=None):
action = api_actions.GetOssBucketFileFromRemoteAction()
action.uuid = bucket_uuid
test_util.action_logger('Get [Oss Bucket File From Remote:] %s' % bucket_uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def get_datacenter_from_remote(datacenter_type, session_uuid=None):
action = api_actions.GetDataCenterFromRemoteAction()
action.type = datacenter_type
test_util.action_logger('Get [Datacenter From Remote:] %s' % datacenter_type)
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def get_ecs_instance_type_from_remote(iz_uuid, session_uuid=None):
action = api_actions.GetEcsInstanceTypeAction()
action.identityZoneUuid = iz_uuid
test_util.action_logger('Get [Ecs Instance Type From Remote:] %s' % iz_uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.types
def add_datacenter_from_remote(datacenter_type, region_id, description, session_uuid=None):
action = api_actions.AddDataCenterFromRemoteAction()
action.type = datacenter_type
action.regionId = region_id
action.description = description
test_util.action_logger('Add [datacenter from remote:] %s %s' % (datacenter_type, region_id))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[datacenter from remote:] %s %s is added.' % (datacenter_type, region_id))
return evt.inventory
def del_datacenter_in_local(uuid, session_uuid=None):
action = api_actions.DeleteDataCenterInLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [datacenter in local:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[datacenter in local:] %s is deleted.' % uuid)
return evt
def attach_oss_bucket_to_ecs_datacenter(oss_bucket_uuid, session_uuid=None):
action = api_actions.AttachOssBucketToEcsDataCenterAction()
action.ossBucketUuid = oss_bucket_uuid
# action.dataCenterUuid = datacenter_uuid
test_util.action_logger('Attach [Oss bucket:] %s to Datacenter' % oss_bucket_uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Oss bucket:] %s is attached to Datacenter.' % oss_bucket_uuid)
return evt
def detach_oss_bucket_to_ecs_datacenter(oss_bucket_uuid, session_uuid=None):
action = api_actions.DetachOssBucketFromEcsDataCenterAction()
action.ossBucketUuid = oss_bucket_uuid
# action.dataCenterUuid = datacenter_uuid
test_util.action_logger('Detach [Oss bucket:] %s from Datacenter' % oss_bucket_uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Oss bucket:] %s is detached from Datacenter.' % oss_bucket_uuid)
return evt
def get_identity_zone_from_remote(datacenter_type, region_id, session_uuid=None):
action = api_actions.GetIdentityZoneFromRemoteAction()
action.type = datacenter_type
action.regionId = region_id
test_util.action_logger('Get [Identity zone From Remote:] %s %s' % (datacenter_type, region_id))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def add_identity_zone_from_remote(datacenter_type, datacenter_uuid, zone_id, session_uuid=None):
action = api_actions.AddIdentityZoneFromRemoteAction()
action.type = datacenter_type
action.dataCenterUuid = datacenter_uuid
action.zoneId = zone_id
test_util.action_logger('Add [identity zone from remote:] %s %s' % (datacenter_uuid, zone_id))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[identity zone from remote:] %s %s is added.' % (datacenter_uuid, zone_id))
return evt.inventory
def del_identity_zone_in_local(uuid, session_uuid=None):
action = api_actions.DeleteIdentityZoneInLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [identity zone in local:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[identity zone in local:] %s is deleted.' % uuid)
return evt
def create_ecs_vpc_remote(datacenter_uuid, name, vrouter_name, cidr_block, session_uuid=None):
action = api_actions.CreateEcsVpcRemoteAction()
action.dataCenterUuid = datacenter_uuid
action.name = name
action.vRouterName = vrouter_name
action.cidrBlock = cidr_block
test_util.action_logger('Create [Ecs VPC Remote:] %s %s %s %s' % (datacenter_uuid, name, vrouter_name, cidr_block))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs VPC Remote:] %s %s %s %s is created.' % (datacenter_uuid, name, vrouter_name, cidr_block))
return evt.inventory
def sync_ecs_vpc_from_remote(datacenter_uuid, session_uuid=None):
action = api_actions.SyncEcsVpcFromRemoteAction()
action.dataCenterUuid = datacenter_uuid
test_util.action_logger('Sync [Ecs VPC From Remote:] %s' % (datacenter_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def del_ecs_vpc_local(uuid, session_uuid=None):
action = api_actions.DeleteEcsVpcInLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [Ecs VPC Local:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs VPC Local:] %s is deleted.' % (uuid))
return evt
def del_ecs_vpc_remote(uuid, session_uuid=None):
action = api_actions.DeleteEcsVpcRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [Ecs VPC Remote:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs VPC Remote:] %s is deleted.' % (uuid))
return evt
def create_ecs_vswtich_remote(vpc_uuid, identity_zone_uuid, name, cidr_block, session_uuid=None):
action = api_actions.CreateEcsVSwitchRemoteAction()
action.vpcUuid = vpc_uuid
action.identityZoneUuid = identity_zone_uuid
action.name = name
action.cidrBlock = cidr_block
test_util.action_logger('Create [Ecs VSwitch Remote:] %s %s %s %s' % (vpc_uuid, identity_zone_uuid, name, cidr_block))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs VSwitch Remote:] %s %s %s %s is created.' % (vpc_uuid, identity_zone_uuid, name, cidr_block))
return evt.inventory
def create_hybrid_eip(data_center_uuid, name, band_width, charge_type='PayByTraffic', eip_type='aliyun', session_uuid=None):
action = api_actions.CreateHybridEipAction()
action.dataCenterUuid = data_center_uuid
action.name = name
action.bandWidthMb = band_width
action.chargeType = charge_type
action.type = eip_type
test_util.action_logger('Create [Hybrid Eip:] %s %s %s %s' % (data_center_uuid, name, charge_type, eip_type))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Hybrid Eip:] %s %s %s %s is created.' % (data_center_uuid, name, charge_type, eip_type))
return evt.inventory
def del_hybrid_eip_remote(uuid, eip_type='aliyun', session_uuid=None):
action = api_actions.DeleteHybridEipRemoteAction()
action.uuid = uuid
action.type = eip_type
test_util.action_logger('Delete [Hybrid Eip Remote:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Hybrid Eip Remote:] %s is deleted.' % uuid)
return evt
def attach_hybrid_eip_to_ecs(eip_uuid, ecs_uuid, eip_type='aliyun', session_uuid=None):
action = api_actions.AttachHybridEipToEcsAction()
action.eipUuid = eip_uuid
action.ecsUuid = ecs_uuid
action.type = eip_type
test_util.action_logger('Attach [Hybrid Eip :] %s to ECS %s' % (eip_uuid, ecs_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Hybrid Eip :] %s is attached to Ecs %s.' % (eip_uuid, ecs_uuid))
return evt
def detach_hybrid_eip_from_ecs(eip_uuid, eip_type='aliyun', session_uuid=None):
action = api_actions.DetachHybridEipFromEcsAction()
action.eipUuid = eip_uuid
action.type = eip_type
test_util.action_logger('Detach [Hybrid Eip :] %s from ECS' % eip_uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Hybrid Eip :] %s is detached from Ecs.' % eip_uuid)
return evt
def sync_hybrid_eip_from_remote(data_center_uuid, eip_type='aliyun', session_uuid=None):
action = api_actions.SyncHybridEipFromRemoteAction()
action.dataCenterUuid = data_center_uuid
action.type = eip_type
test_util.action_logger('Sync [Hybrid Eip From Remote:] %s' % (data_center_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def sync_ecs_vswitch_from_remote(data_center_uuid, session_uuid=None):
action = api_actions.SyncEcsVSwitchFromRemoteAction()
action.dataCenterUuid = data_center_uuid
test_util.action_logger('Sync [Ecs VSwitch From Remote:] %s' % (data_center_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def del_ecs_vswitch_in_local(uuid, session_uuid=None):
action = api_actions.DeleteEcsVSwitchInLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [Ecs VSwitch: %s] in Local' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs VSwitch: %s] in Local is deleted.' % uuid)
return evt
def del_ecs_vswitch_remote(uuid, session_uuid=None):
action = api_actions.DeleteEcsVSwitchRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [Ecs VSwitch Remote:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs VSwitch Remote:] %s is deleted.' % (uuid))
return evt
def del_ecs_instance_local(uuid, session_uuid=None):
action = api_actions.DeleteEcsInstanceLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [Ecs Instance in Local:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs Instance in Local:] %s is deleted.' % (uuid))
return evt
def sync_aliyun_virtual_router_from_remote(vpc_uuid, session_uuid=None):
action = api_actions.SyncAliyunVirtualRouterFromRemoteAction()
action.vpcUuid = vpc_uuid
test_util.action_logger('Sync [Aliyun VirtualRouter From Remote:] %s' % (vpc_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def sync_router_entry_from_remote(vrouter_uuid, vrouter_type, session_uuid=None):
action = api_actions.SyncAliyunRouteEntryFromRemoteAction()
action.vRouterUuid = vrouter_uuid
action.vRouterType = vrouter_type
test_util.action_logger('Sync [Route Entry From Remote:] %s' % (vrouter_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def create_aliyun_vpc_virtualrouter_entry_remote(dst_cidr_block, vrouter_uuid, vrouter_type, next_hop_type, next_hop_uuid, session_uuid=None):
action = api_actions.CreateAliyunVpcVirtualRouterEntryRemoteAction()
action.dstCidrBlock = dst_cidr_block
action.vRouterUuid = vrouter_uuid
action.vRouterType = vrouter_type
action.nextHopType = next_hop_type
action.nextHopUuid = next_hop_uuid
test_util.action_logger('Create [VPC VirtualRouter Entry Remote:] %s %s %s %s %s' % (dst_cidr_block, vrouter_uuid, vrouter_type, next_hop_type, next_hop_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[VPC VirtualRouter Entry Remote:] %s %s %s %s %s is created.' % (dst_cidr_block, vrouter_uuid, vrouter_type, next_hop_type, next_hop_uuid))
return evt.inventory
def create_vpn_ipsec_config(name, pfs='group2', enc_alg='3des', auth_alg='sha1', session_uuid=None):
action = api_actions.CreateVpnIpsecConfigAction()
action.name = name
action.pfs = pfs
action.encAlg = enc_alg
action.authAlg = auth_alg
test_util.action_logger('Create [VPN IPsec Config:] %s %s %s %s' % (name, pfs, enc_alg, auth_alg))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[VPN IPsec Config:] %s %s %s %s is created.' % (name, pfs, enc_alg, auth_alg))
return evt.inventory
def create_vpn_ike_ipsec_config(name, psk, local_ip, remote_ip, pfs='group2', enc_alg='3des', auth_alg='sha1', version='ikev1', mode='main', session_uuid=None):
action = api_actions.CreateVpnIkeConfigAction()
action.psk = psk
action.pfs = pfs
action.localIp = local_ip
action.remoteIp = remote_ip
action.encAlg = enc_alg
action.authAlg = auth_alg
action.version = version
action.mode = mode
action.name = name
test_util.action_logger('Create [VPN Ike Config:] %s %s %s %s %s %s %s %s %s' % (name, local_ip, remote_ip, psk, pfs, enc_alg, auth_alg, version, mode))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[VPN Ike Config:] %s %s %s %s %s %s %s %s %s is created.' % (name, local_ip, remote_ip, psk, pfs, enc_alg, auth_alg, version, mode))
return evt.inventory
def create_vpc_vpn_connection(user_gateway_uuid, vpn_gateway_uuid, name, local_cidr, remote_cidr, ike_config_uuid, ipsec_config_uuid, active='true', session_uuid=None):
    action = api_actions.CreateVpcVpnConnectionRemoteAction()
    action.userGatewayUuid = user_gateway_uuid
    action.vpnGatewayUuid = vpn_gateway_uuid
    action.name = name
    action.localCidr = local_cidr
    action.remoteCidr = remote_cidr
    action.ikeConfUuid = ike_config_uuid
    action.ipsecConfUuid = ipsec_config_uuid
    action.active = active
    test_util.action_logger('Create [VPC VPN Connection:] %s %s' % (vpn_gateway_uuid, user_gateway_uuid))
    evt = account_operations.execute_action_with_session(action, session_uuid)
    test_util.test_logger('[VPC VPN Connection:] %s %s is created.' % (vpn_gateway_uuid, user_gateway_uuid))
    return evt.inventory
def create_vpc_user_vpn_gateway(data_center_uuid, gw_ip, gw_name, session_uuid=None):
action = api_actions.CreateVpcUserVpnGatewayRemoteAction()
action.dataCenterUuid = data_center_uuid
action.ip = gw_ip
action.name = gw_name
test_util.action_logger('Create [VPC User VPN Gateway:] %s %s' % (data_center_uuid, gw_ip))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[VPC User VPN Gateway:] %s %s is created.' % (data_center_uuid, gw_ip))
return evt.inventory
def del_vpc_user_vpn_gateway_remote(uuid, session_uuid=None):
action = api_actions.DeleteVpcUserVpnGatewayRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [Vpc User Vpn Gateway Remote:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Vpc User Vpn Gateway Remote:] %s is deleted.' % (uuid))
return evt
def del_vpc_vpn_connection_remote(uuid, session_uuid=None):
action = api_actions.DeleteVpcVpnConnectionRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [Vpc Vpn Connection Remote:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Vpc Vpn Connection Remote:] %s is deleted.' % (uuid))
return evt
def del_aliyun_route_entry_remote(uuid, session_uuid=None):
action = api_actions.DeleteAliyunRouteEntryRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [Aliyun Route Entry Remote:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Aliyun Route Entry Remote:] %s is deleted.' % (uuid))
return evt
def del_vpc_vpn_gateway_local(uuid, session_uuid=None):
action = api_actions.DeleteVpcVpnGatewayLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [Vpc Vpn Gateway in local:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Vpc Vpn Gateway in local:] %s is deleted.' % (uuid))
return evt
def del_vpc_vpn_connection_local(uuid, session_uuid=None):
action = api_actions.DeleteVpcVpnConnectionLocalAction()
action.uuid = uuid
    test_util.action_logger('Delete [Vpc Vpn Connection in Local:] %s ' % (uuid))
    evt = account_operations.execute_action_with_session(action, session_uuid)
    test_util.test_logger('[Vpc Vpn Connection in Local:] %s is deleted.' % (uuid))
return evt
def del_vpc_ike_config_local(uuid, session_uuid=None):
action = api_actions.DeleteVpcIkeConfigLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [Vpc Ike Config in Local:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Vpc Ike Config in Local:] %s is deleted.' % (uuid))
return evt
def del_vpc_ipsec_config_local(uuid, session_uuid=None):
action = api_actions.DeleteVpcIpSecConfigLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [Vpc IPsec Config in Local:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Vpc IPsec Config in Local:] %s is deleted.' % (uuid))
return evt
def del_vpc_user_vpn_gateway_local(uuid, session_uuid=None):
action = api_actions.DeleteVpcUserVpnGatewayLocalAction()
action.uuid = uuid
    test_util.action_logger('Delete [Vpc User Vpn Gateway in Local:] %s ' % (uuid))
    evt = account_operations.execute_action_with_session(action, session_uuid)
    test_util.test_logger('[Vpc User Vpn Gateway in Local:] %s is deleted.' % (uuid))
return evt
def destroy_vm_instance(uuid, session_uuid=None):
action = api_actions.DestroyVmInstanceAction()
action.uuid = uuid
test_util.action_logger('Destroy [VM Instance:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[VM Instance:] %s is destroyed.' % (uuid))
return evt
def create_ecs_security_group_remote(name, vpc_uuid, session_uuid=None):
action = api_actions.CreateEcsSecurityGroupRemoteAction()
action.name = name
action.vpcUuid = vpc_uuid
test_util.action_logger('Create [Ecs Security Group Remote:] %s %s' % (name, vpc_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
    test_util.test_logger('[Ecs Security Group Remote:] %s %s is created.' % (name, vpc_uuid))
return evt.inventory
def create_ecs_security_group_rule_remote(group_uuid, direction, protocol, port_range, cidr, policy, nic_type, priority, session_uuid=None):
action = api_actions.CreateEcsSecurityGroupRuleRemoteAction()
action.groupUuid = group_uuid
action.direction = direction
action.protocol = protocol
action.portRange = port_range
action.cidr = cidr
action.policy = policy
action.nictype = nic_type
action.priority = priority
test_util.action_logger('Create [Ecs Security Group Rule Remote:] %s %s %s %s %s %s %s %s' % (group_uuid, direction, protocol, port_range, cidr, policy, nic_type, priority))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs Security Group Rule Remote:] %s %s %s %s %s %s %s %s is created.' % (group_uuid, direction, protocol, port_range, cidr, policy, nic_type, priority))
return evt.inventory
def sync_ecs_security_group_from_remote(ecs_vpc_uuid, session_uuid=None):
action = api_actions.SyncEcsSecurityGroupFromRemoteAction()
action.ecsVpcUuid = ecs_vpc_uuid
test_util.action_logger('Sync [Security Group From Remote:] %s' % (ecs_vpc_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def sync_ecs_security_group_rule_from_remote(sg_uuid, session_uuid=None):
action = api_actions.SyncEcsSecurityGroupRuleFromRemoteAction()
action.uuid = sg_uuid
test_util.action_logger('Sync [Security Group From Remote:] %s' % (sg_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def sync_vpc_vpn_gateway_from_remote(data_center_uuid, session_uuid=None):
action = api_actions.SyncVpcVpnGatewayFromRemoteAction()
action.dataCenterUuid = data_center_uuid
test_util.action_logger('Sync [Vpc Vpn Gateway From Remote:] %s' % (data_center_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def sync_vpc_user_vpn_gateway_from_remote(data_center_uuid, session_uuid=None):
action = api_actions.SyncVpcUserVpnGatewayFromRemoteAction()
action.dataCenterUuid = data_center_uuid
test_util.action_logger('Sync [Vpc User Vpn Gateway From Remote:] %s' % (data_center_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def sync_vpc_vpn_connection_from_remote(data_center_uuid, session_uuid=None):
action = api_actions.SyncVpcVpnConnectionFromRemoteAction()
action.dataCenterUuid = data_center_uuid
test_util.action_logger('Sync [Vpc Vpn Connection From Remote:] %s' % (data_center_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def del_ecs_security_group_in_local(uuid, session_uuid=None):
action = api_actions.DeleteEcsSecurityGroupInLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [ecs security group in local:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[ecs security group in local:] %s is deleted.' % uuid)
return evt
def del_ecs_security_group_rule_remote(uuid, session_uuid=None):
action = api_actions.DeleteEcsSecurityGroupRuleRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [Ecs Security Group Rule Remote:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs Security Group Rule Remote:] %s is deleted.' % (uuid))
return evt
def del_ecs_security_group_remote(uuid, session_uuid=None):
action = api_actions.DeleteEcsSecurityGroupRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [Ecs Security Group Remote:] %s ' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Ecs Security Group Remote:] %s is deleted.' % (uuid))
return evt
def create_ecs_image_from_local_image(bs_uuid, datacenter_uuid, image_uuid, name, session_uuid=None):
action = api_actions.CreateEcsImageFromLocalImageAction()
action.backupStorageUuid = bs_uuid
action.dataCenterUuid = datacenter_uuid
action.imageUuid = image_uuid
action.name = name
test_util.action_logger('Create Ecs Image from [Local image:] %s %s %s' % (bs_uuid, datacenter_uuid, image_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('Ecs Image is created from [Local image:] %s %s %s.' % (bs_uuid, datacenter_uuid, image_uuid))
return evt.inventory
def del_ecs_image_remote(uuid, session_uuid=None):
action = api_actions.DeleteEcsImageRemoteAction()
action.uuid = uuid
test_util.action_logger('Delete [ecs image remote:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[ecs image remote:] %s is deleted.' % uuid)
return evt
def del_ecs_image_in_local(uuid, session_uuid=None):
action = api_actions.DeleteEcsImageLocalAction()
action.uuid = uuid
test_util.action_logger('Delete [ecs image in local:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[ecs image in local:] %s is deleted.' % uuid)
return evt
def del_hybrid_eip_local(uuid, eip_type='aliyun', session_uuid=None):
action = api_actions.DeleteHybridEipFromLocalAction()
action.type = eip_type
action.uuid = uuid
test_util.action_logger('Delete [Hybrid Eip in local:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[Hybrid Eip in local:] %s is deleted.' % uuid)
return evt
def sync_ecs_image_from_remote(datacenter_uuid, image_type='self', session_uuid=None):
action = api_actions.SyncEcsImageFromRemoteAction()
action.dataCenterUuid = datacenter_uuid
action.type = image_type
test_util.action_logger('Sync [Ecs Image From Remote:] %s' % (datacenter_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def create_ecs_instance_from_ecs_image(ecs_root_password, image_uuid, ecs_vswitch_uuid, ecs_bandwidth, ecs_security_group_uuid, instance_offering_uuid=None, instance_type=None, private_ip_address=None, allocate_public_ip='false', name=None, ecs_console_password=None, session_uuid=None):
action = api_actions.CreateEcsInstanceFromEcsImageAction()
action.ecsRootPassword = ecs_root_password
action.ecsImageUuid = image_uuid
action.ecsVSwitchUuid = ecs_vswitch_uuid
action.instanceOfferingUuid = instance_offering_uuid
action.instanceType = instance_type
action.ecsBandWidth = ecs_bandwidth
action.ecsSecurityGroupUuid = ecs_security_group_uuid
action.privateIpAddress = private_ip_address
action.allocatePublicIp = allocate_public_ip
action.name = name
action.ecsConsolePassword = ecs_console_password
test_util.action_logger('Create Ecs Instance from [Ecs Image:] %s' % image_uuid)
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('Ecs Instance is created from [Ecs Image:] %s.' % image_uuid)
return evt.inventory
def del_ecs_instance(uuid, session_uuid=None):
action = api_actions.DeleteEcsInstanceAction()
action.uuid = uuid
test_util.action_logger('Delete [ecs instance:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[ecs instance:] %s is deleted.' % uuid)
return evt
def sync_ecs_instance_from_remote(datacenter_uuid, only_zstack=None, session_uuid=None):
action = api_actions.SyncEcsInstanceFromRemoteAction()
action.dataCenterUuid = datacenter_uuid
action.onlyZstack = only_zstack
test_util.action_logger('Sync [Ecs Instance From Remote:] %s' % (datacenter_uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt.inventories
def update_ecs_instance(uuid, name=None, description=None, password=None, session_uuid=None):
action = api_actions.UpdateEcsInstanceAction()
action.uuid = uuid
action.name = name
action.description = description
action.password = password
test_util.action_logger('Update [Ecs Instance: %s]' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def stop_ecs_instance(uuid, session_uuid=None):
action = api_actions.StopEcsInstanceAction()
action.uuid = uuid
test_util.action_logger('Stop [ecs instance:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[ecs instance:] %s is stopped.' % uuid)
return evt
def start_ecs_instance(uuid, session_uuid=None):
action = api_actions.StartEcsInstanceAction()
action.uuid = uuid
test_util.action_logger('Start [ecs instance:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[ecs instance:] %s is started.' % uuid)
return evt
def reboot_ecs_instance(uuid, session_uuid=None):
action = api_actions.RebootEcsInstanceAction()
action.uuid = uuid
test_util.action_logger('Reboot [ecs instance:] %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[ecs instance:] %s is rebooted.' % uuid)
return evt
def update_ecs_instance_vnc_password(uuid, password, session_uuid=None):
action = api_actions.UpdateEcsInstanceVncPasswordAction()
action.uuid = uuid
action.password = password
test_util.action_logger('Update [ecs instance:] vnc password %s' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
test_util.test_logger('[ecs instance:] %s vnc password is updated.' % uuid)
return evt
def update_image_guestOsType(uuid, guest_os_type, session_uuid=None):
action = api_actions.UpdateImageAction()
action.uuid = uuid
action.guestOsType = guest_os_type
test_util.action_logger('Update [image %s] guestOsType' % (uuid))
evt = account_operations.execute_action_with_session(action, session_uuid)
    test_util.test_logger('[image %s] guestOsType is updated to [%s]' % (uuid, guest_os_type))
return evt
def query_ecs_image_local(condition=[], session_uuid=None):
action = api_actions.QueryEcsImageFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Ecs image from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_ecs_vpc_local(condition=[], session_uuid=None):
action = api_actions.QueryEcsVpcFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Ecs Vpc from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_ecs_vswitch_local(condition=[], session_uuid=None):
action = api_actions.QueryEcsVSwitchFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Ecs vSwitch from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_ecs_instance_local(condition=[], session_uuid=None):
action = api_actions.QueryEcsInstanceFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Ecs Instance from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_aliyun_key_secret(condition=[], session_uuid=None):
action = api_actions.QueryAliyunKeySecretAction()
action.conditions = condition
test_util.action_logger('Query Aliyun Key Secret')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_datacenter_local(condition=[], session_uuid=None):
action = api_actions.QueryDataCenterFromLocalAction()
action.conditions = condition
test_util.action_logger('Query DataCenter from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_ecs_security_group_local(condition=[], session_uuid=None):
action = api_actions.QueryEcsSecurityGroupFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Ecs Security Group from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_ecs_security_group_rule_local(condition=[], session_uuid=None):
action = api_actions.QueryEcsSecurityGroupRuleFromLocalAction()
action.conditions = condition
    test_util.action_logger('Query Ecs Security Group Rule from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_hybrid_eip_local(condition=[], session_uuid=None):
action = api_actions.QueryHybridEipFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Hybrid Eip from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_vpc_vpn_gateway_local(condition=[], session_uuid=None):
action = api_actions.QueryVpcVpnGatewayFromLocalAction()
action.conditions = condition
    test_util.action_logger('Query Vpc Vpn Gateway from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_vpc_vpn_ike_config_local(condition=[], session_uuid=None):
action = api_actions.QueryVpcIkeConfigFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Vpc Vpn Ike Config from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_vpc_vpn_ipsec_config_local(condition=[], session_uuid=None):
action = api_actions.QueryVpcIpSecConfigFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Vpc Vpn IPsec Config from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_vpc_user_vpn_gateway_local(condition=[], session_uuid=None):
action = api_actions.QueryVpcUserVpnGatewayFromLocalAction()
action.conditions = condition
    test_util.action_logger('Query Vpc User Vpn Gateway from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_vpc_vpn_connection_local(condition=[], session_uuid=None):
action = api_actions.QueryVpcVpnConnectionFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Vpc Vpn Connection from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_aliyun_virtual_router_local(condition=[], session_uuid=None):
action = api_actions.QueryAliyunVirtualRouterFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Aliyun Virtual Router from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_aliyun_route_entry_local(condition=[], session_uuid=None):
action = api_actions.QueryAliyunRouteEntryFromLocalAction()
action.conditions = condition
test_util.action_logger('Query Aliyun Route Entry from local')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_oss_bucket_file_name(condition=[], session_uuid=None):
action = api_actions.QueryOssBucketFileNameAction()
action.conditions = condition
test_util.action_logger('Query Oss Bucket File Name')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def query_ipsec_connection(condition=[], session_uuid=None):
action = api_actions.QueryIPSecConnectionAction()
action.conditions = condition
test_util.action_logger('Query IPsec Connection')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def get_ecs_instance_vnc_url(uuid, session_uuid=None):
action = api_actions.GetEcsInstanceVncUrlAction()
action.uuid = uuid
    test_util.action_logger('Get Ecs Instance Vnc Url')
evt = account_operations.execute_action_with_session(action, session_uuid)
return evt
def get_create_ecs_image_progress(data_center_uuid, image_uuid, session_uuid=None):
action = api_actions.GetCreateEcsImageProgressAction()
action.dataCenterUuid = data_center_uuid
action.imageUuid = image_uuid
test_util.action_logger('Get Create ECS Image Progress')
evt = account_operations.execute_action_with_session(action, session_uuid)
    return evt
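Every helper in this module follows the same shape: instantiate an `api_actions.*Action`, set its fields, log the intent with `test_util.action_logger`, execute via `account_operations.execute_action_with_session`, and return the event (or its `inventory`/`inventories`). A self-contained sketch of that shared pattern — the stubs below stand in for the real `api_actions`, `account_operations`, and `test_util` modules and are not part of the SDK:

```python
class StubAction(object):
    """Stand-in for an api_actions.*Action class (illustrative only)."""
    pass

def execute_action_with_session(action, session_uuid=None):
    # Stand-in for account_operations.execute_action_with_session:
    # echo the action's fields back as the event "inventory".
    return {'inventory': dict(vars(action))}

def run_action(action_cls, log_msg, session_uuid=None, **fields):
    # Build the action, copy the fields onto it, log, execute, return event.
    action = action_cls()
    for key, value in fields.items():
        setattr(action, key, value)
    print(log_msg)  # stand-in for test_util.action_logger
    return execute_action_with_session(action, session_uuid)

evt = run_action(StubAction, 'Delete [Example:] some-uuid', uuid='some-uuid')
```

Collapsing the per-function boilerplate this way is only a sketch; the real helpers keep their fields explicit so each API call stays greppable.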
# File: datasets_example/populate_elastic.py (repo: aleksbobic/csx, license: MIT)
import requests
import sys
requests.put(f"http://localhost:9200/{sys.argv[1]}?pretty")
headers = {"Content-Type": "application/x-ndjson"}
with open(sys.argv[2], "rb") as f:
    data = f.read()
requests.post(
f"http://localhost:9200/{sys.argv[1]}/_bulk?pretty", headers=headers, data=data
)
| 22.538462 | 84 | 0.665529 | 42 | 293 | 4.619048 | 0.571429 | 0.108247 | 0.14433 | 0.185567 | 0.268041 | 0.268041 | 0.268041 | 0 | 0 | 0 | 0 | 0.043137 | 0.129693 | 293 | 12 | 85 | 24.416667 | 0.717647 | 0 | 0 | 0 | 0 | 0 | 0.441281 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# File: retired/example_process_discharge_simulation.py (repo: changliao1025/pyswat, license: MIT)
from swaty.simulation.swat_main import swat_main
from swaty.swaty_read_model_configuration_file import swat_read_model_configuration_file
from swaty.classes.pycase import swaty
from swaty.postprocess.extract.swat_extract_stream_discharge import swat_extract_stream_discharge
sFilename_configuration_in = '/global/homes/l/liao313/workspace/python/swaty/swaty/shared/swat_simulation.xml'
#step 1
aConfig = swat_read_model_configuration_file(sFilename_configuration_in)
# iCase_index_in=iCase_index_in, sJob_in=sJob_in, iFlag_mode_in=iFlag_mode_in)
aConfig['sFilename_model_configuration'] = sFilename_configuration_in
oModel = swaty(aConfig)
swat_extract_stream_discharge(oModel)
# File: ThinkPython/chap9/ex9.py (repo: sokolowskik/Tutorials, license: MIT)
def is_reverse(i, j):
"""
Convert 2-digit numbers to strings and check if they are palindromic.
If one of the numbers has less then 2 digits, fill with zeros.
"""
str_i = str(i)
str_j = str(j)
if len(str_i) < 2:
str_i = str_i.zfill(2)
if len(str_j) < 2:
str_j = str_j.zfill(2)
return str_j[::-1] == str_i
age_diff = 15
while age_diff <= 50:
reversible = 0
for d_age in range(0,80):
m_age = d_age + age_diff
if is_reverse(d_age, m_age):
reversible += 1
if reversible == 6:
            print('The daughter is %s years old' % d_age)
if reversible == 8:
            print('At the 8th time the daughter will be %s years old and the mother will be %s years old' % (d_age, m_age))
break
age_diff += 1
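The reversible-age search above leans entirely on `is_reverse`; a minimal standalone version with spot checks (re-defined here so the snippet runs on its own):

```python
def is_reverse(i, j):
    # Zero-pad each number to two digits, then compare j reversed with i.
    return str(j).zfill(2)[::-1] == str(i).zfill(2)

assert is_reverse(12, 21)      # '21' reversed is '12'
assert is_reverse(2, 20)       # '20' reversed is '02', which matches '02'
assert not is_reverse(13, 21)
```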
a7cd3abce5d928c3da35821e7b78b76d44e1ec29 | 2,465 | py | Python | trial_inputs_pb2.py | adeandrade/bayesian-optimizer | 30427943d69130179f7ccb32f63a08a1c57462f8 | [
"Apache-2.0"
] | null | null | null | trial_inputs_pb2.py | adeandrade/bayesian-optimizer | 30427943d69130179f7ccb32f63a08a1c57462f8 | [
"Apache-2.0"
] | null | null | null | trial_inputs_pb2.py | adeandrade/bayesian-optimizer | 30427943d69130179f7ccb32f63a08a1c57462f8 | [
"Apache-2.0"
] | null | null | null | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: trial_inputs.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='trial_inputs.proto',
package='com.wattpad.bayesian_optimizer',
syntax='proto3',
serialized_options=None,
serialized_pb=_b('\n\x12trial_inputs.proto\x12\x1e\x63om.wattpad.bayesian_optimizer\".\n\x0bTrialInputs\x12\x0f\n\x07version\x18\x01 \x01(\t\x12\x0e\n\x06inputs\x18\x02 \x03(\x01\x62\x06proto3')
)
_TRIALINPUTS = _descriptor.Descriptor(
name='TrialInputs',
full_name='com.wattpad.bayesian_optimizer.TrialInputs',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='version', full_name='com.wattpad.bayesian_optimizer.TrialInputs.version', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='inputs', full_name='com.wattpad.bayesian_optimizer.TrialInputs.inputs', index=1,
number=2, type=1, cpp_type=5, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=54,
serialized_end=100,
)
DESCRIPTOR.message_types_by_name['TrialInputs'] = _TRIALINPUTS
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
TrialInputs = _reflection.GeneratedProtocolMessageType('TrialInputs', (_message.Message,), dict(
DESCRIPTOR = _TRIALINPUTS,
__module__ = 'trial_inputs_pb2'
# @@protoc_insertion_point(class_scope:com.wattpad.bayesian_optimizer.TrialInputs)
))
_sym_db.RegisterMessage(TrialInputs)
# @@protoc_insertion_point(module_scope)
| 32.012987 | 196 | 0.76146 | 307 | 2,465 | 5.833876 | 0.374593 | 0.031267 | 0.080402 | 0.075377 | 0.261307 | 0.240089 | 0.204355 | 0.127303 | 0.127303 | 0.127303 | 0 | 0.027176 | 0.11927 | 2,465 | 76 | 197 | 32.434211 | 0.797789 | 0.096146 | 0 | 0.280702 | 1 | 0.017544 | 0.20162 | 0.154365 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.087719 | 0 | 0.087719 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a7d11260063260bb345e5b5925deed0ee559e5c2 | 725 | py | Python | ffmpeg-3.2.5/tools/zmqshell.py | huyu0415/FFmpeg | 7a3f75791cb3255805bf17126d4074a328f46c8c | [
"Apache-2.0"
] | 3,645 | 2016-08-25T09:31:17.000Z | 2022-03-25T06:28:34.000Z | ffmpeg-3.2.5/tools/zmqshell.py | huyu0415/FFmpeg | 7a3f75791cb3255805bf17126d4074a328f46c8c | [
"Apache-2.0"
] | 395 | 2020-04-18T08:22:18.000Z | 2021-12-08T13:04:49.000Z | ffmpeg-3.2.5/tools/zmqshell.py | huyu0415/FFmpeg | 7a3f75791cb3255805bf17126d4074a328f46c8c | [
"Apache-2.0"
] | 764 | 2016-08-26T09:19:00.000Z | 2022-03-22T12:07:16.000Z | #!/usr/bin/env python2
import sys, zmq, cmd
class LavfiCmd(cmd.Cmd):
prompt = 'lavfi> '
def __init__(self, bind_address):
context = zmq.Context()
self.requester = context.socket(zmq.REQ)
self.requester.connect(bind_address)
cmd.Cmd.__init__(self)
def onecmd(self, cmd):
if cmd == 'EOF':
sys.exit(0)
print 'Sending command:[%s]' % cmd
self.requester.send(cmd)
message = self.requester.recv()
print 'Received reply:[%s]' % message
try:
bind_address = sys.argv[1] if len(sys.argv) > 1 else "tcp://localhost:5555"
LavfiCmd(bind_address).cmdloop('FFmpeg libavfilter interactive shell')
except KeyboardInterrupt:
pass
| 26.851852 | 79 | 0.627586 | 91 | 725 | 4.868132 | 0.571429 | 0.099323 | 0.036117 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014545 | 0.241379 | 725 | 26 | 80 | 27.884615 | 0.790909 | 0.028966 | 0 | 0 | 0 | 0 | 0.14936 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.05 | 0.05 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a7d2785f99402cef40bc5312be1781d2a6eaf683 | 3,843 | py | Python | qinhaifang/src/evalTools/script/convert_label_map_to_geojson.py | SpaceNetChallenge/BuildingFootprintDetectors | 3def3c44b5847c744cd2f3356182892d92496579 | [
"Apache-2.0"
] | 161 | 2017-02-03T05:33:01.000Z | 2022-03-31T02:11:21.000Z | qinhaifang/src/evalTools/script/convert_label_map_to_geojson.py | SpaceNetChallenge/BuildingFootprintDetectors | 3def3c44b5847c744cd2f3356182892d92496579 | [
"Apache-2.0"
] | 5 | 2017-02-03T05:51:38.000Z | 2019-06-18T18:54:00.000Z | qinhaifang/src/evalTools/script/convert_label_map_to_geojson.py | SpaceNetChallenge/BuildingFootprintDetectors | 3def3c44b5847c744cd2f3356182892d92496579 | [
"Apache-2.0"
] | 76 | 2017-03-23T23:15:46.000Z | 2022-02-10T21:58:18.000Z | #!/usr/bin/env python
# encoding=gbk
"""
Convert mask to geojson format
"""
import os
import os.path
import re
import logging
import logging.config
from multiprocessing import Pool
import skimage.io as sk
import numpy as np
import scipy.io as sio
import setting
from spaceNet import geoTools as gT
import spaceNet.image_util as img_util
def process_convert_mask_to_geojson():
"""docstring for process_convert_mask_to_geojson"""
if setting.CONVERT_RES == 1:
label_map_file_list = os.listdir(setting.PREDICT_LABEL_MAP_DIR)
else:
label_map_file_list = os.listdir(setting.LABEL_MAP_DIR_4X)
pool_size = 8
pool = Pool(pool_size)
case = 0
for convert_res in pool.imap_unordered(convert_worker, label_map_file_list):
case += 1
if case % 100 == 0:
logging.info('Convert {}'.format(case))
image_id, msg = convert_res
pool.close()
pool.join()
def convert_worker(mat_file):
"""docstring for convert_worker"""
    image_id = None  # bound up front so the except clause can always log it
    try:
if setting.CONVERT_RES == 1:
image_id = '_'.join(mat_file.split('.')[0].split('_')[1:])
print('image_id:{}'.format(image_id))
mat_file = os.path.join(setting.PREDICT_LABEL_MAP_DIR, mat_file)
mat = sio.loadmat(mat_file)
#print(mat.keys())
#exit(0)
label_map = mat['inst_img']
building_list = img_util.create_buildinglist_from_label_map(image_id, label_map)
geojson_file = os.path.join(setting.PREDICT_PIXEL_GEO_JSON_DIR, '{}_predict.geojson'.format(image_id))
else:
#print('{}'.format(mat_file))
image_id = '_'.join(mat_file.split('.')[0].split('_')[:])
#print('{}'.format(image_id))
mat_file = os.path.join(setting.LABEL_MAP_DIR_4X, mat_file)
mat = sio.loadmat(mat_file)
label_map = mat['GTinst']['Segmentation'][0][0]
building_list = img_util.create_buildinglist_from_label_map(image_id, label_map)
geojson_file = os.path.join(setting.PIXEL_GEO_JSON_DIR_4X, '{}_Pixel.geojson'.format(image_id))
gT.exporttogeojson(geojson_file, building_list)
return image_id, 'Done'
except Exception as e:
logging.warning('Convert Exception[{}] image_id[{}]'.format(e, image_id))
return image_id, e
def test_geojson():
"""docstring for test_geojson"""
label_map_file_list = os.listdir(setting.PREDICT_LABEL_MAP_DIR)
for mat_file in label_map_file_list:
image_id = '_'.join(mat_file.split('.')[0].split('_')[1:])
predict_geojson_file = os.path.join(setting.PREDICT_PIXEL_GEO_JSON_DIR, '{}_predict.geojson'.format(image_id))
image_name = os.path.join(setting.PIC_3BAND_DIR, '3band_{}.tif'.format(image_id))
img = sk.imread(image_name, True)
label_map = np.zeros(img.shape, dtype=np.uint8)
label_map = img_util.create_label_map_from_polygons(gT.importgeojson(predict_geojson_file),
label_map)
label_img = img_util.create_label_img(img, label_map)
save_file = os.path.join(setting.TMP_DIR, '{}_predict.png'.format(image_id))
sk.imsave(save_file, label_img)
truth_geojson_file = os.path.join(setting.PIXEL_GEO_JSON_DIR, '{}_Pixel.geojson'.format(image_id))
print('{}'.format(truth_geojson_file))
label_map = np.zeros(img.shape, dtype=np.uint8)
print('label_map shape{}'.format(label_map.shape))
label_map = img_util.create_label_map_from_polygons(gT.importgeojson(truth_geojson_file), label_map)
label_img = img_util.create_label_img(img, label_map)
save_file = os.path.join(setting.TMP_DIR, '{}_Pixel.png'.format(image_id))
sk.imsave(save_file, label_img)
if __name__ == '__main__':
process_convert_mask_to_geojson()
#test_geojson()
| 40.03125 | 118 | 0.674993 | 542 | 3,843 | 4.424354 | 0.197417 | 0.093411 | 0.048791 | 0.063803 | 0.581735 | 0.486239 | 0.483319 | 0.447456 | 0.435363 | 0.350292 | 0 | 0.007818 | 0.201145 | 3,843 | 95 | 119 | 40.452632 | 0.77329 | 0.033047 | 0 | 0.25 | 0 | 0 | 0.063872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.194444 | null | null | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
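Both branches of `convert_worker` recover the image id by splitting the `.mat` filename; a small stdlib sketch of that parsing (the sample filename below is hypothetical):

```python
def image_id_from_mat(mat_file, drop_prefix=True):
    """Rebuild the image id embedded in a .mat filename, the way
    convert_worker does with split('.') and split('_')."""
    stem = mat_file.split('.')[0]
    parts = stem.split('_')
    # drop_prefix mirrors the CONVERT_RES == 1 branch, which discards the
    # leading token (e.g. a 'predict' prefix) before re-joining the rest
    return '_'.join(parts[1:] if drop_prefix else parts)
```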
a7d285c6e1ae9ac1ca025fdba430e5dba345f5fd | 412 | py | Python | core/migrations/0008_touristspot_photo.py | isnardsilva/django-attractions-api | feade087d840b72b603d2a4bf538b8c362aa91bd | [
"MIT"
] | 1 | 2021-12-31T12:59:49.000Z | 2021-12-31T12:59:49.000Z | core/migrations/0008_touristspot_photo.py | isnardsilva/django-attractions-api | feade087d840b72b603d2a4bf538b8c362aa91bd | [
"MIT"
] | null | null | null | core/migrations/0008_touristspot_photo.py | isnardsilva/django-attractions-api | feade087d840b72b603d2a4bf538b8c362aa91bd | [
"MIT"
] | null | null | null | # Generated by Django 3.0.7 on 2020-07-19 03:55
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core', '0007_auto_20200614_0254'),
]
operations = [
migrations.AddField(
model_name='touristspot',
name='photo',
field=models.ImageField(blank=True, null=True, upload_to='core'),
),
]
| 21.684211 | 77 | 0.604369 | 46 | 412 | 5.304348 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103679 | 0.274272 | 412 | 18 | 78 | 22.888889 | 0.712375 | 0.109223 | 0 | 0 | 1 | 0 | 0.128767 | 0.063014 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a7dbae6b6e0c89662cba5d9864585c9b7e89ef3a | 444 | py | Python | tools/create_transmit_grouped_command_cron.py | Vayel/GUCEM-BVC | e5645dec332756d3c9db083abf2c8f3625a10d4d | [
"WTFPL"
] | 2 | 2016-09-23T18:02:40.000Z | 2017-04-28T18:35:59.000Z | tools/create_transmit_grouped_command_cron.py | Vayel/GUCEM-BVC | e5645dec332756d3c9db083abf2c8f3625a10d4d | [
"WTFPL"
] | 82 | 2016-09-26T14:38:31.000Z | 2018-02-12T18:47:12.000Z | tools/create_transmit_grouped_command_cron.py | Vayel/GUCEM-BVC | e5645dec332756d3c9db083abf2c8f3625a10d4d | [
"WTFPL"
] | null | null | null | import os
from cron_helper import create
JOB_COMMENT = 'BVC transmit grouped command reminder'
HERE = os.path.dirname(os.path.abspath(__file__))
def create_job(cron):
job = cron.new(
command=os.path.join(HERE, 'manage.sh transmit_grouped_command_reminder'),
comment=JOB_COMMENT,
)
job.day.every(1)
job.hour.on(2)
job.minute.on(10)
if __name__ == '__main__':
create(create_job, JOB_COMMENT)
| 21.142857 | 82 | 0.684685 | 63 | 444 | 4.492063 | 0.539683 | 0.095406 | 0.155477 | 0.212014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011268 | 0.20045 | 444 | 20 | 83 | 22.2 | 0.785915 | 0 | 0 | 0 | 0 | 0 | 0.198198 | 0.074324 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
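`cron.new(...)` is the python-crontab API used through `cron_helper`. As a rough illustration, the schedule configured above (`day.every(1)`, `hour.on(2)`, `minute.on(10)`) corresponds to a crontab line like the one this hypothetical stdlib formatter builds:

```python
def format_cron_line(minute, hour, command, comment):
    # day/month/weekday stay as '*', i.e. the job runs every day
    return '%d %d * * * %s # %s' % (minute, hour, command, comment)


line = format_cron_line(
    10, 2,
    'manage.sh transmit_grouped_command_reminder',
    'BVC transmit grouped command reminder',
)
```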
38eb4d628bf96b1cec0ba5a9060d8732e87f164b | 276 | py | Python | runme.py | AndreWohnsland/Cocktailmaker_AW | 30efdcb85d7fb58ac2980c873c611d7b9c2b37b1 | [
"MIT"
] | 37 | 2019-07-06T11:54:08.000Z | 2022-01-21T12:26:16.000Z | runme.py | AndreWohnsland/Cocktailmaker_AW | 30efdcb85d7fb58ac2980c873c611d7b9c2b37b1 | [
"MIT"
] | 5 | 2019-12-09T07:44:08.000Z | 2022-02-01T12:00:24.000Z | runme.py | AndreWohnsland/Cocktailmaker_AW | 30efdcb85d7fb58ac2980c873c611d7b9c2b37b1 | [
"MIT"
] | 4 | 2019-07-06T12:45:01.000Z | 2021-12-29T17:09:44.000Z | import sys
from PyQt5.QtWidgets import QApplication
import src_ui.setup_mainwindow as setupui
if __name__ == "__main__":
app = QApplication(sys.argv)
w = setupui.MainScreen()
w.showFullScreen()
w.setFixedSize(800, 480)
sys.exit(app.exec_())
| 21.230769 | 42 | 0.684783 | 34 | 276 | 5.235294 | 0.735294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.213768 | 276 | 12 | 43 | 23 | 0.788018 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
38eda790aa9bd3615e42c068cced417ca94aa56a | 2,099 | py | Python | tools/database_tool.py | noahzhy/qumaishou | f776e5c750b350ca3b741fccf3e5dfd199c1296b | [
"Apache-2.0"
] | null | null | null | tools/database_tool.py | noahzhy/qumaishou | f776e5c750b350ca3b741fccf3e5dfd199c1296b | [
"Apache-2.0"
] | null | null | null | tools/database_tool.py | noahzhy/qumaishou | f776e5c750b350ca3b741fccf3e5dfd199c1296b | [
"Apache-2.0"
] | null | null | null | import os
import pandas as pd
import sys
import glob
# allow importing modules from sibling directories
sys.path.append("./")
db_dir_path = 'database'
def db_save(db_name, df):
    # index=False: do not write the row index column (pandas default is True)
    df = remove_repetition(df)
    # to_csv returns None when given a path, so its return value cannot
    # signal success; report True only when the write raises no error
    try:
        df.to_csv(os.path.join(db_dir_path, '{}.csv'.format(db_name)), index=False, sep=',')
        return True
    except OSError:
        return False
def remove_repetition(df, key=None):
return df.drop_duplicates(subset=key, keep='first', inplace=False)
def db_brand(db_name, df):
    # the dict keys become the CSV column names
df = remove_repetition(df)
print('db_brand:', df.shape[0])
db_save(db_name, df)
return df
def db_brand_product(db_name, df):
dataframe = pd.DataFrame(df)
print('brand product:', dataframe.shape[0])
db_save('brand_product/brand_product_{}'.format(db_name), df)
return df
def merge_brand_product_in_one():
# print(os.getcwd())
frames = []
# print(glob.glob(r'database/brand_product_*.csv'))
for i in glob.glob('database/brand_product/brand_product_*.csv'):
df = pd.read_csv(i)
frames.append(df)
result = pd.concat(frames)
# result = remove_repetition(result, 'product_No')
db_save('db_total_product', result)
pass
def intersection_db_brand():
    '''merge the brand databases into the final English version'''
d1 = pd.read_csv(os.path.join(db_dir_path, 'db_brand_eng.csv'))
d2 = pd.read_csv(os.path.join(db_dir_path, 'db_brand_chn.csv'))
df = pd.merge(d1, d2, how='left', on='brand_name')
df = remove_repetition(df, 'brand_name')
df = df.loc[:, ['dispShopNo_x', 'brand_name', 'brand_url_x']]
db_save('db_brand_final', df)
print('df_merged:', df.shape[0])
return df
def get_FileSize(filePath):
# filePath = unicode(filePath,'utf8')
fsize = os.path.getsize(filePath)
fsize = fsize / float(1024)
return round(fsize, 2)
def check_dir_with_brand_final():
('database/brand_product/brand_product_{}.csv')
pass
def main():
# db_brand_eng()
# db_brand_merge()
# intersection_db_brand()
merge_brand_product_in_one()
pass
if __name__ == "__main__":
main() | 23.852273 | 92 | 0.666508 | 310 | 2,099 | 4.225806 | 0.3 | 0.100763 | 0.030534 | 0.045802 | 0.201527 | 0.152672 | 0.070229 | 0.053435 | 0.053435 | 0.053435 | 0 | 0.007634 | 0.188661 | 2,099 | 88 | 93 | 23.852273 | 0.761597 | 0.159123 | 0 | 0.150943 | 0 | 0 | 0.16546 | 0.064067 | 0 | 0 | 0 | 0 | 0 | 1 | 0.169811 | false | 0.056604 | 0.075472 | 0.018868 | 0.377358 | 0.056604 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
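`remove_repetition` above wraps `DataFrame.drop_duplicates(keep='first')`. A pure-stdlib sketch of that keep-the-first-occurrence behaviour over a list of dict rows, to make the semantics concrete:

```python
def drop_duplicates(rows, key=None):
    """Keep the first occurrence of each full row, or of each value of
    a single key column when one is given."""
    seen = set()
    kept = []
    for row in rows:
        marker = row[key] if key is not None else tuple(sorted(row.items()))
        if marker not in seen:
            seen.add(marker)
            kept.append(row)
    return kept
```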
38f0f18dc070774e4c59dd082f508779d0e46e34 | 940 | py | Python | root/tpd_near_trainstops_per_line.py | transitanalystisarel/TransitAnalystIsrael | 341de9272b352c18333ff136a00de0b97cd82216 | [
"MIT"
] | null | null | null | root/tpd_near_trainstops_per_line.py | transitanalystisarel/TransitAnalystIsrael | 341de9272b352c18333ff136a00de0b97cd82216 | [
"MIT"
] | null | null | null | root/tpd_near_trainstops_per_line.py | transitanalystisarel/TransitAnalystIsrael | 341de9272b352c18333ff136a00de0b97cd82216 | [
"MIT"
] | 3 | 2019-05-08T04:36:03.000Z | 2020-11-23T19:46:52.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# collect a set of trip_id s at all stops in a GTFS file over the selected week of the service period starting at serviceweekstartdate
# filter stops near trainstations based on input txt file - stopsneartrainstop_post_edit
# merge sets of trips at stops near each trainstation to count trips per hour and per day
#
#
import transitanalystisrael_config as cfg
import process_date
import trip_ids_at_stops_merge_near_trainstops_perday_v3
import stopswtrainstopidsandtpdperline_v1
import time
#
print("Local current time :", time.asctime( time.localtime(time.time()) ))
#
processdate = process_date.get_date_now()
trip_ids_at_stops_merge_near_trainstops_perday_v3.main(processdate, cfg.gtfspath, cfg.gtfsdirbase, cfg.processedpath, processdate)
stopswtrainstopidsandtpdperline_v1.main(processdate, cfg.processedpath)
print("Local current time :", time.asctime( time.localtime(time.time()) )) | 40.869565 | 134 | 0.811702 | 134 | 940 | 5.5 | 0.559701 | 0.043419 | 0.024423 | 0.037992 | 0.255088 | 0.255088 | 0.255088 | 0.255088 | 0.255088 | 0.143826 | 0 | 0.005988 | 0.111702 | 940 | 23 | 135 | 40.869565 | 0.876647 | 0.37234 | 0 | 0.2 | 0 | 0 | 0.068847 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
38f5e5531d57aca7c42b9394241ccc224319e068 | 310 | py | Python | tests/unit/helpers_test/test_password.py | alefeans/flask-base | e3daa4ce1020ba3711908c3ba5ef88b0cc599dfe | [
"MIT"
] | 11 | 2019-10-03T18:47:49.000Z | 2022-02-01T10:42:02.000Z | tests/unit/helpers_test/test_password.py | alefeans/flask-base | e3daa4ce1020ba3711908c3ba5ef88b0cc599dfe | [
"MIT"
] | null | null | null | tests/unit/helpers_test/test_password.py | alefeans/flask-base | e3daa4ce1020ba3711908c3ba5ef88b0cc599dfe | [
"MIT"
] | 8 | 2019-10-03T18:47:53.000Z | 2021-06-07T14:47:51.000Z | import pytest
from app.helpers import check_password, encrypt_password
@pytest.mark.parametrize('sent', [
('test'),
('changeme'),
('1234123'),
])
def test_if_check_password_and_encrypt_password_works_properly(sent):
expected = encrypt_password(sent)
assert check_password(sent, expected)
| 23.846154 | 69 | 0.745161 | 37 | 310 | 5.918919 | 0.567568 | 0.178082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026217 | 0.13871 | 310 | 12 | 70 | 25.833333 | 0.794007 | 0 | 0 | 0 | 0 | 0 | 0.074194 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.1 | false | 0.4 | 0.2 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
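The `app.helpers` implementation of `encrypt_password`/`check_password` is not shown in this chunk; one plausible stdlib sketch of such a pair (salted PBKDF2 with a constant-time compare) that would satisfy the test above:

```python
import hashlib
import hmac
import os


def encrypt_password(password, salt=None):
    # 16-byte random salt stored in front of the derived key
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt + digest


def check_password(password, stored):
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```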
ac01a8777ade5c22566c19425f199dbb6101a624 | 8,700 | py | Python | testing/scipy_distutils-0.3.3_34.586/command/build_clib.py | fireballpoint1/fortranTOpy | 55843a62c6f0a2f8e2a777ef70193940d3d2d141 | [
"Apache-2.0"
] | 1 | 2018-08-26T05:10:56.000Z | 2018-08-26T05:10:56.000Z | testing/scipy_distutils-0.3.3_34.586/command/build_clib.py | fireballpoint1/fortranTOpy | 55843a62c6f0a2f8e2a777ef70193940d3d2d141 | [
"Apache-2.0"
] | null | null | null | testing/scipy_distutils-0.3.3_34.586/command/build_clib.py | fireballpoint1/fortranTOpy | 55843a62c6f0a2f8e2a777ef70193940d3d2d141 | [
"Apache-2.0"
] | 1 | 2018-06-26T18:06:44.000Z | 2018-06-26T18:06:44.000Z | """ Modified version of build_clib that handles fortran source files.
"""
import os
import string
import sys
import re
from glob import glob
from types import *
from distutils.command.build_clib import build_clib as old_build_clib
from distutils.command.build_clib import show_compilers
from scipy_distutils import log, misc_util
from distutils.dep_util import newer_group
from scipy_distutils.misc_util import filter_sources, \
has_f_sources, has_cxx_sources
def get_headers(directory_list):
# get *.h files from list of directories
headers = []
for dir in directory_list:
head = glob(os.path.join(dir,"*.h")) #XXX: *.hpp files??
headers.extend(head)
return headers
def get_directories(list_of_sources):
# get unique directories from list of sources.
direcs = []
for file in list_of_sources:
dir = os.path.split(file)
if dir[0] != '' and not dir[0] in direcs:
direcs.append(dir[0])
return direcs
class build_clib(old_build_clib):
description = "build C/C++/F libraries used by Python extensions"
user_options = old_build_clib.user_options + [
('fcompiler=', None,
"specify the Fortran compiler type"),
]
def initialize_options(self):
old_build_clib.initialize_options(self)
self.fcompiler = None
return
def finalize_options(self):
old_build_clib.finalize_options(self)
self.set_undefined_options('build_ext',
('fcompiler', 'fcompiler'))
#XXX: This is hackish and probably unnecessary,
# could we get rid of this?
from scipy_distutils import misc_util
extra_includes = misc_util.get_environ_include_dirs()
if extra_includes:
print "XXX: are you sure you'll need PYTHONINCLUDES env. variable??"
self.include_dirs.extend(extra_includes)
return
def have_f_sources(self):
for (lib_name, build_info) in self.libraries:
if has_f_sources(build_info.get('sources',[])):
return 1
return 0
def have_cxx_sources(self):
for (lib_name, build_info) in self.libraries:
if has_cxx_sources(build_info.get('sources',[])):
return 1
return 0
def run(self):
if not self.libraries:
return
# Make sure that library sources are complete.
for (lib_name, build_info) in self.libraries:
if not misc_util.all_strings(build_info.get('sources',[])):
raise TypeError,'Library "%s" sources contains unresolved'\
' items (call build_src before built_clib).' % (lib_name)
from distutils.ccompiler import new_compiler
self.compiler = new_compiler(compiler=self.compiler,
dry_run=self.dry_run,
force=self.force)
self.compiler.customize(self.distribution,need_cxx=self.have_cxx_sources())
libraries = self.libraries
self.libraries = None
self.compiler.customize_cmd(self)
self.libraries = libraries
self.compiler.show_customization()
if self.have_f_sources():
from scipy_distutils.fcompiler import new_fcompiler
self.fcompiler = new_fcompiler(compiler=self.fcompiler,
verbose=self.verbose,
dry_run=self.dry_run,
force=self.force)
self.fcompiler.customize(self.distribution)
libraries = self.libraries
self.libraries = None
self.fcompiler.customize_cmd(self)
self.libraries = libraries
self.fcompiler.show_customization()
self.build_libraries(self.libraries)
return
def get_source_files(self):
from build_ext import is_local_src_dir
self.check_library_list(self.libraries)
filenames = []
def visit_func(filenames,dirname,names):
if os.path.basename(dirname) in ['CVS','.svn']:
names[:] = []
return
for name in names:
if name[-1] in "#~":
continue
fullname = os.path.join(dirname,name)
if os.path.isfile(fullname):
filenames.append(fullname)
for (lib_name, build_info) in self.libraries:
sources = build_info.get('sources',[])
sources = filter(lambda s:type(s) is StringType,sources)
filenames.extend(sources)
filenames.extend(get_headers(get_directories(sources)))
depends = build_info.get('depends',[])
for d in depends:
if is_local_src_dir(d):
os.path.walk(d,visit_func,filenames)
elif os.path.isfile(d):
filenames.append(d)
return filenames
def build_libraries(self, libraries):
compiler = self.compiler
fcompiler = self.fcompiler
for (lib_name, build_info) in libraries:
sources = build_info.get('sources')
if sources is None or type(sources) not in (ListType, TupleType):
raise DistutilsSetupError, \
("in 'libraries' option (library '%s'), " +
"'sources' must be present and must be " +
"a list of source filenames") % lib_name
sources = list(sources)
lib_file = compiler.library_filename(lib_name,
output_dir=self.build_clib)
depends = sources + build_info.get('depends',[])
if not (self.force or newer_group(depends, lib_file, 'newer')):
log.debug("skipping '%s' library (up-to-date)", lib_name)
continue
else:
log.info("building '%s' library", lib_name)
macros = build_info.get('macros')
include_dirs = build_info.get('include_dirs')
extra_postargs = build_info.get('extra_compiler_args') or []
c_sources, cxx_sources, f_sources, fmodule_sources \
= filter_sources(sources)
if self.compiler.compiler_type=='msvc':
# this hack works around the msvc compiler attributes
# problem, msvc uses its own convention :(
c_sources += cxx_sources
cxx_sources = []
if fmodule_sources:
print 'XXX: Fortran 90 module support not implemented or tested'
f_sources.extend(fmodule_sources)
objects = []
if c_sources:
log.info("compiling C sources")
objects = compiler.compile(c_sources,
output_dir=self.build_temp,
macros=macros,
include_dirs=include_dirs,
debug=self.debug,
extra_postargs=extra_postargs)
if cxx_sources:
log.info("compiling C++ sources")
old_compiler = self.compiler.compiler_so[0]
self.compiler.compiler_so[0] = self.compiler.compiler_cxx[0]
cxx_objects = compiler.compile(cxx_sources,
output_dir=self.build_temp,
macros=macros,
include_dirs=include_dirs,
debug=self.debug,
extra_postargs=extra_postargs)
objects.extend(cxx_objects)
self.compiler.compiler_so[0] = old_compiler
if f_sources:
log.info("compiling Fortran sources")
f_objects = fcompiler.compile(f_sources,
output_dir=self.build_temp,
macros=macros,
include_dirs=include_dirs,
debug=self.debug,
extra_postargs=[])
objects.extend(f_objects)
self.compiler.create_static_lib(objects, lib_name,
output_dir=self.build_clib,
debug=self.debug)
return
| 38.666667 | 83 | 0.545057 | 912 | 8,700 | 4.994518 | 0.218202 | 0.029638 | 0.026345 | 0.016465 | 0.262788 | 0.24764 | 0.200659 | 0.150604 | 0.12865 | 0.105818 | 0 | 0.002581 | 0.376552 | 8,700 | 224 | 84 | 38.839286 | 0.837205 | 0.036207 | 0 | 0.227273 | 0 | 0 | 0.077803 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.085227 | null | null | 0.011364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
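`filter_sources` splits the library sources by language before dispatching them to the C/C++ or Fortran compiler. A simplified, extension-only stand-in (the real `scipy_distutils.misc_util.filter_sources` also inspects file contents to detect Fortran 90 modules, which this sketch approximates by extension alone):

```python
import os


def filter_sources(sources):
    c_sources, cxx_sources, f_sources, fmodule_sources = [], [], [], []
    for src in sources:
        ext = os.path.splitext(src)[1].lower()
        if ext == '.c':
            c_sources.append(src)
        elif ext in ('.cc', '.cpp', '.cxx'):
            cxx_sources.append(src)
        elif ext in ('.f90', '.f95'):
            # crude assumption: treat every .f90/.f95 file as a module source
            fmodule_sources.append(src)
        elif ext in ('.f', '.for', '.f77'):
            f_sources.append(src)
    return c_sources, cxx_sources, f_sources, fmodule_sources
```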
ac0406d097b2c425817270a16cec9aaa0dab57d1 | 425 | py | Python | events/migrations/0003_invitation_detail.py | ebar0n/mishteh | dd025add9b80dff2253c1ee976fc656dff3abc03 | [
"MIT"
] | null | null | null | events/migrations/0003_invitation_detail.py | ebar0n/mishteh | dd025add9b80dff2253c1ee976fc656dff3abc03 | [
"MIT"
] | null | null | null | events/migrations/0003_invitation_detail.py | ebar0n/mishteh | dd025add9b80dff2253c1ee976fc656dff3abc03 | [
"MIT"
] | null | null | null | # Generated by Django 2.2 on 2019-10-13 19:09
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [("events", "0002_auto_20191013_1712")]
operations = [
migrations.AddField(
model_name="invitation",
name="detail",
field=models.TextField(default="", verbose_name="detail"),
preserve_default=False,
)
]
| 23.611111 | 70 | 0.623529 | 45 | 425 | 5.755556 | 0.777778 | 0.07722 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095541 | 0.261176 | 425 | 17 | 71 | 25 | 0.729299 | 0.101176 | 0 | 0 | 1 | 0 | 0.134211 | 0.060526 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac045942a364b8e9223f019c563338e05ffde92d | 1,954 | py | Python | pygitscrum/print.py | thib1984/pygitscrum | 4f5f726e5b3c95f93da33c21da51953657bd0349 | [
"MIT"
] | 2 | 2021-04-23T11:10:32.000Z | 2021-04-23T11:10:41.000Z | pygitscrum/print.py | thib1984/pygitscrum | 4f5f726e5b3c95f93da33c21da51953657bd0349 | [
"MIT"
] | 2 | 2021-11-23T09:26:50.000Z | 2021-11-23T09:27:02.000Z | pygitscrum/print.py | thib1984/pygitscrum | 4f5f726e5b3c95f93da33c21da51953657bd0349 | [
"MIT"
] | null | null | null | """
print scripts
"""
from termcolor import colored
from pygitscrum.args import compute_args
import colorama

colorama.init()  # enable ANSI escape handling on Windows consoles
def print_resume_list(list_to_print, message):
"""
print list summary
"""
if len(list_to_print) > 0:
print("")
print(
my_colored(
message + " : ",
"green",
)
)
print(
my_colored(
"\n".join(map(str, list_to_print)),
"yellow",
)
)
print(
my_colored(
"total : " + str(len(list_to_print)),
"green",
)
)
def print_resume_map(dict_to_print, message):
"""
print dict summary
"""
if len(dict_to_print) > 0:
print("")
print(my_colored(message + " : ", "green"))
for key in dict_to_print:
print(
my_colored(
key
+ " --> "
+ str(dict_to_print[key])
+ " elements",
"yellow",
)
)
print(
my_colored(
"total : "
+ str(len(dict_to_print))
+ " --> "
+ str(sum(dict_to_print.values()))
+ " elements ",
"green",
)
)
def print_debug(message):
"""
print debug message
"""
if compute_args().debug:
print("debug : " + message)
def print_y(message):
"""
print yellow message
"""
print(my_colored(message, "yellow"))
def print_g(message):
"""
print green message
"""
print(my_colored(message, "green"))
def print_r(message):
"""
print red message
"""
print(my_colored(message, "red"))
def my_colored(message, color):
    """
    color message unless --nocolor is set
    """
    if compute_args().nocolor:
        return message
    return colored(message, color)
| 20.14433 | 53 | 0.449335 | 180 | 1,954 | 4.65 | 0.233333 | 0.083632 | 0.150538 | 0.125448 | 0.273596 | 0.167264 | 0.167264 | 0.09319 | 0.09319 | 0 | 0 | 0.0018 | 0.431423 | 1,954 | 96 | 54 | 20.354167 | 0.751575 | 0.06653 | 0 | 0.344262 | 0 | 0 | 0.061993 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114754 | false | 0 | 0.04918 | 0 | 0.196721 | 0.42623 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
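`my_colored` defers to termcolor; a dependency-free sketch of the same idea using raw ANSI escapes (the exact codes termcolor emits may differ, and `nocolor` here stands in for the `--nocolor` flag read from `compute_args()`):

```python
ANSI = {'red': '\033[31m', 'green': '\033[32m', 'yellow': '\033[33m'}
RESET = '\033[0m'


def my_colored(message, color, nocolor=False):
    # fall back to plain text when coloring is disabled or unknown
    if nocolor or color not in ANSI:
        return message
    return ANSI[color] + message + RESET
```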
ac062225c63cd5c3323bbc8f4dcab95e8e43641a | 261 | py | Python | test/_test_compute_m.py | yiruiliu110/eegnn | 253773c301681bb00b4789c34f48c82468ad16da | [
"MIT"
] | null | null | null | test/_test_compute_m.py | yiruiliu110/eegnn | 253773c301681bb00b4789c34f48c82468ad16da | [
"MIT"
] | null | null | null | test/_test_compute_m.py | yiruiliu110/eegnn | 253773c301681bb00b4789c34f48c82468ad16da | [
"MIT"
] | null | null | null | import torch
from estimation import compute_m
i = [[0, 1, 1, 2],
[2, 0, 2, 1]]
v_z = [3, 4, 5, 2]
v_c = [0, 1, 1, 0]
z = torch.sparse_coo_tensor(i, v_z, (3, 3))
c = torch.sparse_coo_tensor(i, v_c, (3, 3))
max_K = 10
m = compute_m(z, c, max_K)
print(m) | 15.352941 | 43 | 0.578544 | 60 | 261 | 2.316667 | 0.383333 | 0.115108 | 0.043165 | 0.28777 | 0.316547 | 0.316547 | 0 | 0 | 0 | 0 | 0 | 0.108374 | 0.222222 | 261 | 17 | 44 | 15.352941 | 0.576355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac07adb5420f61816fe1726ed429cadf16e37902 | 505 | py | Python | simfile/_private/serializable.py | garcia/simfile | 4e15660c964d8d3c0e6d1f69431138e7eb4db288 | [
"MIT"
] | 22 | 2017-04-24T05:37:13.000Z | 2022-03-08T00:41:37.000Z | simfile/_private/serializable.py | garcia/simfile | 4e15660c964d8d3c0e6d1f69431138e7eb4db288 | [
"MIT"
] | 10 | 2021-05-31T01:21:56.000Z | 2022-03-17T04:26:54.000Z | simfile/_private/serializable.py | garcia/simfile | 4e15660c964d8d3c0e6d1f69431138e7eb4db288 | [
"MIT"
] | 3 | 2019-06-05T15:23:53.000Z | 2021-09-11T02:39:36.000Z | from abc import ABCMeta, abstractmethod
from io import StringIO
from typing import TextIO
class Serializable(metaclass=ABCMeta):
@abstractmethod
def serialize(self, file: TextIO) -> None:
"""
Write the object to provided text file object as MSD.
"""
pass
def __str__(self) -> str:
"""
Convert the object to an MSD string.
"""
serialized = StringIO()
self.serialize(serialized)
return serialized.getvalue() | 24.047619 | 61 | 0.615842 | 54 | 505 | 5.685185 | 0.611111 | 0.136808 | 0.071661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.30297 | 505 | 21 | 62 | 24.047619 | 0.872159 | 0.178218 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0.090909 | 0.272727 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
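A hypothetical concrete subclass shows how `__str__` rides on `serialize`; `Serializable` is repeated inside the sketch so it runs on its own:

```python
from abc import ABCMeta, abstractmethod
from io import StringIO
from typing import TextIO


class Serializable(metaclass=ABCMeta):
    # same contract as the class above, repeated so this sketch is standalone
    @abstractmethod
    def serialize(self, file: TextIO) -> None: ...

    def __str__(self) -> str:
        serialized = StringIO()
        self.serialize(serialized)
        return serialized.getvalue()


class Tag(Serializable):
    """Hypothetical key/value pair written as one MSD parameter."""

    def __init__(self, key: str, value: str) -> None:
        self.key = key
        self.value = value

    def serialize(self, file: TextIO) -> None:
        file.write('#%s:%s;' % (self.key, self.value))
```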
ac0b33a69404bee3fc2c70f72e63ffeda7d74b5d | 746 | py | Python | generate_hamming_command.py | zoeleeee/mnist_challenge | 8a98f7dde35ee1d7a1fb77e85ca931000fb71631 | [
"MIT"
] | null | null | null | generate_hamming_command.py | zoeleeee/mnist_challenge | 8a98f7dde35ee1d7a1fb77e85ca931000fb71631 | [
"MIT"
] | null | null | null | generate_hamming_command.py | zoeleeee/mnist_challenge | 8a98f7dde35ee1d7a1fb77e85ca931000fb71631 | [
"MIT"
] | null | null | null | import numpy as np
import os
path = 'preds'
files = os.listdir(path)
lst = []
for f in files:
    if f.find('_0_HASH') == -1:
        continue
    if f.find('CW') == -1:
        continue
    if f.find('low') == -1 and f.find('high') == -1 and f.find('mix') == -1:
        continue
    if f.endswith('show.npy'):
        lst.append(f)

for f in lst:
    strs = f.split('_0_HASH_')
    print(strs)
    a = np.load(os.path.join(path, strs[0]+'_0_HASH_'+strs[1]))
    b = np.load(os.path.join(path, strs[0]+'_20_HASH_'+strs[1]))
    c = np.load(os.path.join(path, strs[0]+'_40_HASH_'+strs[1]))
    d = np.load(os.path.join(path, strs[0]+'_60_HASH_'+strs[1]))
    np.save(os.path.join(path, strs[0]+'_80_HASH_'+strs[1]), np.hstack((a, b, c, d)))
| 25.724138 | 82 | 0.567024 | 135 | 746 | 2.985185 | 0.318519 | 0.08933 | 0.124069 | 0.173697 | 0.37469 | 0.295285 | 0.248139 | 0.248139 | 0 | 0 | 0 | 0.044143 | 0.210456 | 746 | 28 | 83 | 26.642857 | 0.640068 | 0 | 0 | 0.136364 | 0 | 0 | 0.112903 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
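The script above merges four prediction arrays per experiment with `np.hstack`, which concatenates along the second axis. A toy sketch of that step (the shapes here are illustrative, not taken from the original experiment):

```python
import numpy as np

# Stand-ins for the four arrays loaded from the _0/_20/_40/_60 files.
a = np.zeros((5, 2))
b = np.ones((5, 2))
c = np.full((5, 2), 2.0)
d = np.full((5, 2), 3.0)

# hstack keeps the row count and concatenates the columns.
stacked = np.hstack((a, b, c, d))
print(stacked.shape)  # (5, 8)
```

The row dimension must match across all inputs; only the column counts may differ.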
ac0d30f40fdb142e0b5b6ff9a6caa98ff58e125e | 1,257 | py | Python | app/__init__.py | annerachael/fifth_year_project | 3843b4e6315e9a5374f80a2aabc0bcb8423fd0d9 | [
"Apache-2.0"
] | null | null | null | app/__init__.py | annerachael/fifth_year_project | 3843b4e6315e9a5374f80a2aabc0bcb8423fd0d9 | [
"Apache-2.0"
] | null | null | null | app/__init__.py | annerachael/fifth_year_project | 3843b4e6315e9a5374f80a2aabc0bcb8423fd0d9 | [
"Apache-2.0"
] | null | null | null | # app/__init__.py
from flask import Flask
from redis import Redis
from rq_scheduler import Scheduler
from flask_migrate import Migrate
from flask_login import LoginManager
from flask_bootstrap import Bootstrap
from flask_sqlalchemy import SQLAlchemy
"""
This file shall contain configurations for the web app
"""
# create app
app = Flask(__name__)
db = SQLAlchemy()
migrate = Migrate()
bootstrap = Bootstrap()
# Handles login functionality eg creating and removing login sessions
login = LoginManager()
def create_app():
    global app, db, migrate, login, bootstrap
    import instance.config as cfg

    app.config['DEBUG'] = cfg.DEBUG
    app.config['SECRET_KEY'] = 'secretkey'
    # database set up
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///Info.db'
    app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    # Initialize Redis and RQ
    app.config['REDIS_URL'] = 'redis://'
    app.redis = Redis.from_url(app.config['REDIS_URL'])
    # The queue where periodic tasks are submitted
    queue_name = 'ann_tasks'
    app.scheduler = Scheduler(queue_name, connection=app.redis)

    db.init_app(app)
    login.init_app(app)
    migrate.init_app(app, db)
    bootstrap.init_app(app)

    from app import models, views
    return app
| 25.14 | 69 | 0.731106 | 169 | 1,257 | 5.266272 | 0.378698 | 0.060674 | 0.044944 | 0.038202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176611 | 1,257 | 49 | 70 | 25.653061 | 0.859903 | 0.142403 | 0 | 0 | 0 | 0 | 0.127849 | 0.052527 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.310345 | 0 | 0.37931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
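The module above follows the common Flask extension pattern: extensions are created at import time as module-level singletons and only bound to a concrete application inside `create_app` via `init_app`. A minimal dependency-free sketch of that pattern (the `Extension` and `App` classes are stand-ins, not real Flask objects):

```python
class Extension:
    """Minimal stand-in for a Flask extension such as SQLAlchemy."""

    def __init__(self):
        self.app = None  # created unbound at import time

    def init_app(self, app):
        # Late binding: attach the shared extension to a concrete app
        # only inside the application factory.
        self.app = app


class App:
    """Minimal stand-in for flask.Flask."""

    def __init__(self, name):
        self.name = name
        self.config = {}


db = Extension()  # module-level singleton, like db/migrate/login above


def create_app():
    app = App(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///Info.db'
    db.init_app(app)  # bind the shared extension to this app instance
    return app


app = create_app()
print(db.app is app)  # True
```

Keeping the extension objects at module level lets other modules import them (`from app import db`) while the binding itself stays inside the factory.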
ac1751b7ad47eb3e921543a2e5f6b1310543b55f | 1,377 | py | Python | encode_morse.py | cmanagoli/github-upload | 9759b8ee905e1fd37b169231d2150de31e143191 | [
"MIT"
] | null | null | null | encode_morse.py | cmanagoli/github-upload | 9759b8ee905e1fd37b169231d2150de31e143191 | [
"MIT"
] | 4 | 2020-10-14T21:30:35.000Z | 2020-10-14T21:43:06.000Z | encode_morse.py | cmanagoli/github-upload | 9759b8ee905e1fd37b169231d2150de31e143191 | [
"MIT"
] | null | null | null | # Author: Chinmai Managoli
import sys as sys
# Morse code dictionary
char_to_dots = {
'A': '.-', 'B': '-...', 'C': '-.-.', 'D': '-..', 'E': '.', 'F': '..-.',
'G': '--.', 'H': '....', 'I': '..', 'J': '.---', 'K': '-.-', 'L': '.-..',
'M': '--', 'N': '-.', 'O': '---', 'P': '.--.', 'Q': '--.-', 'R': '.-.',
'S': '...', 'T': '-', 'U': '..-', 'V': '...-', 'W': '.--', 'X': '-..-',
'Y': '-.--', 'Z': '--..', ' ': ' ', '0': '-----',
'1': '.----', '2': '..---', '3': '...--', '4': '....-', '5': '.....',
'6': '-....', '7': '--...', '8': '---..', '9': '----.',
'&': '.-...', "'": '.----.', '@': '.--.-.', ')': '-.--.-', '(': '-.--.',
':': '---...', ',': '--..--', '=': '-...-', '!': '-.-.--', '.': '.-.-.-',
'-': '-....-', '+': '.-.-.', '"': '.-..-.', '?': '..--..', '/': '-..-.'
}
def encode_morse(message):
    message = str(message)
    message = message.upper()
    try:
        for x in message:
            print(char_to_dots[x], end=" ")
        print("\nMessage was encoded successfully")
    # Exceptions
    except KeyError:
        print("\n" + x + " is an invalid character")
    except:
        print("\nThere was an error")


if __name__ == "__main__":
    print("This program will encode a string into Morse. Unicode characters are not supported.")
    string = input("Enter the message to be encoded: ")
    encode_morse(string)
    sys.exit()
| 34.425 | 96 | 0.336964 | 121 | 1,377 | 3.719008 | 0.735537 | 0.093333 | 0.044444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009285 | 0.217865 | 1,377 | 39 | 97 | 35.307692 | 0.408542 | 0.041394 | 0 | 0 | 0 | 0 | 0.361217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.034483 | 0 | 0.068966 | 0.172414 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
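The script prints each code directly; a small variation that returns the encoded string instead is easier to test. The sketch below reuses only a few entries from the table above, which is enough to demonstrate the lookup-and-join idea:

```python
# A small subset of the Morse table above.
char_to_dots = {'S': '...', 'O': '---', ' ': ' '}


def encode(message: str) -> str:
    """Return the Morse encoding, space-separated; raises KeyError on unknown chars."""
    return ' '.join(char_to_dots[ch] for ch in message.upper())


print(encode('sos'))  # ... --- ...
```

Returning a value rather than printing makes the function reusable from other code, while the `__main__` block can still print the result for interactive use.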
ac1adabe581fd30a1766857374cb20cf0b69b1b2 | 2,362 | py | Python | OnStage/player_chair.py | IanDCarroll/xox | 38feac84e81e8c00a397f7f976efee15756cd3ac | [
"MIT"
] | null | null | null | OnStage/player_chair.py | IanDCarroll/xox | 38feac84e81e8c00a397f7f976efee15756cd3ac | [
"MIT"
] | 30 | 2016-11-25T05:34:34.000Z | 2017-02-11T00:10:17.000Z | OnStage/player_chair.py | IanDCarroll/tik-tak-toe | 38feac84e81e8c00a397f7f976efee15756cd3ac | [
"MIT"
] | 1 | 2016-11-26T01:41:37.000Z | 2016-11-26T01:41:37.000Z | import sys
from Training.observer_abilities import *
from Training.cortex_3x3_caddy import *
class Player(Observer):
    def __init__(self, marker_code):
        self.ui = None
        self.marker_code = marker_code

    def get_enemy_code(self):
        if self.marker_code == 10:
            return 1
        return 10

    def move(self, table_top):
        choice = self.choose(table_top)
        table_top.board[choice] = self.marker_code
        return table_top.board

    def choose(self, table_top):
        options = self.get_legal_moves(table_top.board)
        return options[0]

    def get_legal_moves(self, board):
        legal_moves = []
        for i in range(0, len(board)):
            if board[i] != 1 and board[i] != 10:
                legal_moves.append(i)
        return legal_moves


class Human(Player):
    name = 'human'
    strikes = 0

    def choose(self, table_top):
        choice = self.get_good_input(table_top)
        if self.check_conscience(choice, table_top.board):
            return self.redo_move(table_top)
        else:
            self.reset_strikes()
            return choice

    def get_good_input(self, table_top):
        # The caller passes the table_top, which redo_move also expects.
        try:
            return int(self.ui.ask_human()) - 1
        except ValueError:
            return self.redo_move(table_top)

    def check_conscience(self, choice, board):
        if choice not in self.get_legal_moves(board):
            return True

    def redo_move(self, table_top):
        self.add_a_strike(table_top)
        table_top.error = True
        self.ui.refresh()
        return self.choose(table_top)

    def add_a_strike(self, table_top):
        self.strikes += 1
        if self.strikes == 3:
            table_top.exit = True
            self.ui.refresh()
            sys.exit()

    def reset_strikes(self):
        self.strikes = 0


class Computer(Player):
    name = 'computer'
    cortex = Cortex_3x3()

    def choose(self, table_top):
        intel = self.get_intelligence(table_top.board)
        choice = self.cortex.direct_move(intel)
        return choice

    def get_intelligence(self, board):
        return {'board': board,
                'options': self.get_legal_moves(board),
                'analysis': self.scan_board(board),
                'marker_code': self.marker_code,
                'enemy_code': self.get_enemy_code()}
| 27.149425 | 58 | 0.595682 | 299 | 2,362 | 4.474916 | 0.237458 | 0.107623 | 0.053812 | 0.040359 | 0.161435 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011635 | 0.308637 | 2,362 | 86 | 59 | 27.465116 | 0.807716 | 0 | 0 | 0.102941 | 0 | 0 | 0.022862 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.191176 | false | 0 | 0.044118 | 0.014706 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
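In these classes a board is a flat list where `1` and `10` mark occupied cells and anything else counts as free. A standalone sketch of the `get_legal_moves` rule on a sample 3x3 board (the sample values are illustrative):

```python
def get_legal_moves(board):
    """Indices of cells not taken by either marker code (1 or 10)."""
    return [i for i in range(len(board)) if board[i] != 1 and board[i] != 10]


# A 3x3 board flattened to 9 cells: squares 0 and 4 are already taken.
board = [1, 0, 0,
         0, 10, 0,
         0, 0, 0]

print(get_legal_moves(board))  # [1, 2, 3, 5, 6, 7, 8]
```

The base `Player.choose` simply picks `options[0]` from this list, so subclasses like `Human` and `Computer` override `choose` rather than the legality check.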
ac1eac77532b97e37684d7282cd7c2a9da13f188 | 1,332 | py | Python | src/config/contents.py | miloszowi/everyone-mention-telegram-bot | a6b441b197b743f57e089dbe32d262b87a155140 | [
"MIT"
] | 13 | 2021-09-20T17:04:28.000Z | 2022-03-15T09:27:25.000Z | src/config/contents.py | miloszowi/everyone-mention-telegram-bot | a6b441b197b743f57e089dbe32d262b87a155140 | [
"MIT"
] | null | null | null | src/config/contents.py | miloszowi/everyone-mention-telegram-bot | a6b441b197b743f57e089dbe32d262b87a155140 | [
"MIT"
] | null | null | null | # markdownv2 python-telegram-bot specific
joined = '{} joined group `{}`'
not_joined = '{} is already in group `{}`'
left = '{} left group `{}`'
not_left = '{} did not join group `{}` before'
mention_failed = 'There are no users to mention'
no_groups = 'There are no groups for this chat'
# html python-telegram-bot specific
start_text = """
Hello!
@everyone_mention_bot here.
I am here to help you with multiple user mentions.
<b>Usage</b>:
Users that joined the group by <code>/join</code> command,
can be mentioned after typing one of those in your message:
<code>@all</code>, <code>@channel</code>, <code>@chat</code>, <code>@everyone</code>, <code>@group</code> or <code>@here</code>.
If you have created a group named <code>gaming</code>, simply use <code>@gaming</code> to mention users from that group.
You can also use <code>/everyone</code> command.
<b>Commands</b>:
<pre>/join {group-name}</pre>
Joins (or creates if group did not exist before) group.
<pre>/leave {group-name}</pre>
Leaves (or deletes if no other users are left) the group
<pre>/everyone {group-name}</pre>
Mentions everyone that joined the group.
<pre>/groups</pre>
Show all created groups in this chat.
<pre>/start</pre>
Show start & help text
<b>Please note</b>
<code>{group-name}</code> is not required, <code>default</code> if not given.
"""
| 30.976744 | 128 | 0.701201 | 214 | 1,332 | 4.331776 | 0.406542 | 0.03452 | 0.038835 | 0.053937 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000876 | 0.143393 | 1,332 | 42 | 129 | 31.714286 | 0.811569 | 0.054805 | 0 | 0 | 0 | 0.1 | 0.91242 | 0.089968 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
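The MarkdownV2 templates at the top of this module use positional `{}` placeholders, so the handlers presumably fill in the user mention and the group name with `str.format` before sending:

```python
# Same template string as `joined` above.
joined = '{} joined group `{}`'

# Hypothetical handler step: the real arguments come from the update.
text = joined.format('@alice', 'gaming')
print(text)  # @alice joined group `gaming`
```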
ac2ed7f7134d3ec9fcd2d668ed386c6b314f071b | 1,306 | py | Python | FusionIIIT/applications/counselling_cell/migrations/0002_auto_20210501_1036.py | sabhishekpratap5/sonarcubeTest2 | 9bd8105e457f6feb8c38fa94b335e54783fca99e | [
"bzip2-1.0.6"
] | 1 | 2021-08-05T10:31:35.000Z | 2021-08-05T10:31:35.000Z | FusionIIIT/applications/counselling_cell/migrations/0002_auto_20210501_1036.py | sabhishekpratap5/sonarcubeTest2 | 9bd8105e457f6feb8c38fa94b335e54783fca99e | [
"bzip2-1.0.6"
] | 1 | 2021-05-05T09:50:22.000Z | 2021-05-05T09:50:22.000Z | FusionIIIT/applications/counselling_cell/migrations/0002_auto_20210501_1036.py | sabhishekpratap5/sonarcubeTest2 | 9bd8105e457f6feb8c38fa94b335e54783fca99e | [
"bzip2-1.0.6"
] | 4 | 2021-03-16T08:11:42.000Z | 2021-05-06T11:03:44.000Z | # Generated by Django 3.1.5 on 2021-05-01 10:36
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    dependencies = [
        ('globals', '0003_auto_20191024_1242'),
        ('counselling_cell', '0001_initial'),
    ]

    operations = [
        migrations.RenameField(
            model_name='counsellingfaq',
            old_name='counseliing_category',
            new_name='counselling_category',
        ),
        migrations.RenameField(
            model_name='studentmeetingrequest',
            old_name='requested_faculty_invitie',
            new_name='requested_faculty_invitee',
        ),
        migrations.RenameField(
            model_name='studentmeetingrequest',
            old_name='requested_student_invitie',
            new_name='requested_student_invitee',
        ),
        migrations.AlterField(
            model_name='counsellingissuecategory',
            name='category_id',
            field=models.CharField(max_length=40, unique=True),
        ),
        migrations.AlterField(
            model_name='counsellingmeeting',
            name='meeting_host',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='globals.extrainfo'),
        ),
    ]
| 31.853659 | 128 | 0.624043 | 122 | 1,306 | 6.434426 | 0.532787 | 0.057325 | 0.099363 | 0.11465 | 0.170701 | 0.170701 | 0.170701 | 0.170701 | 0 | 0 | 0 | 0.038906 | 0.271822 | 1,306 | 40 | 129 | 32.65 | 0.78654 | 0.034456 | 0 | 0.352941 | 1 | 0 | 0.266878 | 0.150119 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.147059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac3afa06674bd280c370406d538274f60a4acaa0 | 2,330 | py | Python | ex01_search.py | tbaptista/pacman | f30213e1104b794996204fa0a4ac90c583f8a2e4 | [
"Apache-2.0"
] | 1 | 2019-01-10T05:37:10.000Z | 2019-01-10T05:37:10.000Z | ex01_search.py | tbaptista/pacman | f30213e1104b794996204fa0a4ac90c583f8a2e4 | [
"Apache-2.0"
] | null | null | null | ex01_search.py | tbaptista/pacman | f30213e1104b794996204fa0a4ac90c583f8a2e4 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
# -----------------------------------------------------------------------------
# Copyright (c) 2015 Tiago Baptista
# All rights reserved.
# -----------------------------------------------------------------------------
"""
Path-finding exercise using the pac-man game. Using the mouse, choose a target
location for the pac-man agent. Given this target the agent should compute the
path to that location.
"""
from __future__ import division
__docformat__ = 'restructuredtext'
__author__ = 'Tiago Baptista'
__version__ = '1.0'
import pacman
import pyafai
from pyglet.window import mouse
class SearchAgent(pacman.PacmanAgent):
    def __init__(self, x, y, cell):
        super(SearchAgent, self).__init__(x, y, cell)

        self._target = None
        self._path = []

    @property
    def target(self):
        return self._target

    @target.setter
    def target(self, value):
        self._target = value
        # We are changing destination, so invalidate current path
        if value is not None and self._path:
            self._path = []

    def _think(self, delta):
        # If a target has been set
        if self._target is not None:
            self._path = []  # TODO: execute the search algorithm
            self._target = None

        # If we have a non empty path
        if self._path:
            # Execute the next action on the path
            next_action = self._path.pop(0)
            return [self._actions[next_action]]


class SearchDisplay(pacman.PacmanDisplay):
    def on_mouse_release(self, x, y, button, modifiers):
        super(SearchDisplay, self).on_mouse_release(x, y, button, modifiers)

        if button == mouse.LEFT:
            x1, y1 = self.world.get_cell(x, y)
            # send agent to x1, y1
            if isinstance(self.world.player, SearchAgent):
                self.world.player.target = (x1, y1)
        elif button == mouse.RIGHT:
            x1, y1 = self.world.get_cell(x, y)
            print("Cell: ({}, {})".format(x1, y1))
            print("Valid neighbours:", self.world.graph.get_connections((x1, y1)))


def setup():
    world = pacman.PacmanWorld(20, 'levels/pacman.txt')
    display = SearchDisplay(world)

    # create pacman agent
    world.spawn_player(SearchAgent)


if __name__ == '__main__':
    setup()
pyafai.run() | 28.072289 | 82 | 0.584979 | 278 | 2,330 | 4.705036 | 0.435252 | 0.009174 | 0.013761 | 0.025994 | 0.033639 | 0.033639 | 0.033639 | 0.033639 | 0 | 0 | 0 | 0.012636 | 0.25279 | 2,330 | 83 | 83 | 28.072289 | 0.738656 | 0.269099 | 0 | 0.155556 | 0 | 0 | 0.05285 | 0 | 0 | 0 | 0 | 0.012048 | 0 | 1 | 0.133333 | false | 0 | 0.088889 | 0.022222 | 0.311111 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
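The `TODO` in `_think` is where the exercise expects a path-finding algorithm. One minimal way to fill it in is breadth-first search over the cell graph that `SearchDisplay` already exposes through `get_connections`. The sketch below is a standalone, assumption-laden version: `grid_neighbours` stands in for the real `world.graph.get_connections`, and an open 4-connected grid replaces the actual maze:

```python
from collections import deque


def bfs_path(start, goal, get_connections):
    """Return a list of cells from start to goal (inclusive), or [] if unreachable."""
    frontier = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk the parent links back to the start, then reverse.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for neighbour in get_connections(current):
            if neighbour not in came_from:
                came_from[neighbour] = current
                frontier.append(neighbour)
    return []


# Open 4-connected grid as a stand-in for world.graph.get_connections.
def grid_neighbours(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]


path = bfs_path((0, 0), (2, 1), grid_neighbours)
print(len(path))  # 4 cells: a shortest route from (0, 0) to (2, 1)
```

Inside `_think`, the returned cell sequence would then be translated into the agent's action keys before being stored in `self._path`.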
ac3bc1d1f68c8f2adb204c9c5f0374180c3d4c1e | 3,867 | py | Python | site_search/tests/test_permissions.py | AccentDesign/djangocms-site-search | 90ed1e5ab5fe96be8f1a4a74994f18164a7363aa | [
"MIT"
] | 1 | 2019-06-06T12:56:30.000Z | 2019-06-06T12:56:30.000Z | site_search/tests/test_permissions.py | AccentDesign/djangocms-site-search | 90ed1e5ab5fe96be8f1a4a74994f18164a7363aa | [
"MIT"
] | null | null | null | site_search/tests/test_permissions.py | AccentDesign/djangocms-site-search | 90ed1e5ab5fe96be8f1a4a74994f18164a7363aa | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from datetime import datetime, timedelta
from django.contrib.auth.models import User
from django.test import TestCase
from cms.api import assign_user_to_page, create_page
from ..helpers import get_request
from ..views import SearchResultsView
class PermissionsTestCase(TestCase):

    def setUp(self):
        self.view = SearchResultsView()
        self.request = get_request('en')
        self.request.GET = self.request.GET.copy()
        self.request.GET['q'] = 'test_page'
        self.view.request = self.request
        self.user = User.objects.create_user(
            username='jacob', email='jacob@…', password='top_secret')
        self.other_user = User.objects.create_user(
            username='fred', email='fred@…', password='top_secret')

    def _create_page(self, **data):
        return create_page(
            title='test_page',
            reverse_id='testpage',
            template='test.html',
            language='en',
            **data
        )

    ####################################################################
    # login_required                                                   #
    ####################################################################

    def test_not_included_when_login_required_and_user_anonymous(self):
        page = self._create_page(login_required=True)
        page.publish('en')
        self.assertEqual(len(self.view.get_queryset()), 0)

    def test_included_when_login_required_when_user_logged_in(self):
        self.view.request.user = self.user
        page = self._create_page(login_required=True)
        page.publish('en')
        self.assertEqual(len(self.view.get_queryset()), 1)

    ####################################################################
    # page permissions                                                 #
    ####################################################################

    def test_included_when_perm_set_and_this_user_included(self):
        self.view.request.user = self.user
        page = self._create_page(login_required=True)
        page.publish('en')
        assign_user_to_page(page, self.user, can_view=True)
        self.assertEqual(len(self.view.get_queryset()), 1)

    def test_not_included_when_perm_set_and_this_user_not_included(self):
        self.view.request.user = self.user
        page = self._create_page(login_required=True)
        page.publish('en')
        assign_user_to_page(page, self.other_user, can_view=True)
        self.assertEqual(len(self.view.get_queryset()), 0)

    def test_included_when_no_perm_set(self):
        self.view.request.user = self.user
        page = self._create_page(login_required=True)
        page.publish('en')
        self.assertEqual(len(self.view.get_queryset()), 1)

    ####################################################################
    # ensure perms still valid when login_required was not ticked      #
    ####################################################################

    def test_included_when_perm_set_and_this_user_included_2(self):
        self.view.request.user = self.user
        page = self._create_page(login_required=False)
        page.publish('en')
        assign_user_to_page(page, self.user, can_view=True)
        self.assertEqual(len(self.view.get_queryset()), 1)

    def test_not_included_when_perm_set_and_this_user_not_included_2(self):
        self.view.request.user = self.user
        page = self._create_page(login_required=False)
        page.publish('en')
        assign_user_to_page(page, self.other_user, can_view=True)
        self.assertEqual(len(self.view.get_queryset()), 0)

    def test_included_when_no_perm_set_2(self):
        self.view.request.user = self.user
        page = self._create_page(login_required=False)
        page.publish('en')
        self.assertEqual(len(self.view.get_queryset()), 1)
| 40.28125 | 75 | 0.58495 | 451 | 3,867 | 4.731707 | 0.184035 | 0.06373 | 0.044986 | 0.067479 | 0.67104 | 0.660731 | 0.629803 | 0.629803 | 0.629803 | 0.629803 | 0 | 0.003946 | 0.213602 | 3,867 | 95 | 76 | 40.705263 | 0.695824 | 0.055081 | 0 | 0.514706 | 0 | 0 | 0.030322 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.147059 | false | 0.029412 | 0.088235 | 0.014706 | 0.264706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac3bc471644b6e8784c772369a7f273ad6a22e32 | 12,179 | py | Python | FSO_Comm_Demo.py | MansourM61/FSO-Comm-GnuRadio-Module | 44bfefaa95fb9af19f9817029f663892b0f84417 | [
"MIT"
] | 6 | 2019-10-31T10:02:49.000Z | 2022-03-03T21:42:19.000Z | FSO_Comm_Demo.py | MansourM61/FSO-Comm-GnuRadio-Module | 44bfefaa95fb9af19f9817029f663892b0f84417 | [
"MIT"
] | null | null | null | FSO_Comm_Demo.py | MansourM61/FSO-Comm-GnuRadio-Module | 44bfefaa95fb9af19f9817029f663892b0f84417 | [
"MIT"
] | 2 | 2022-01-03T07:59:44.000Z | 2022-01-30T11:25:21.000Z | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
##################################################
# GNU Radio Python Flow Graph
# Title: FSO Communication Block Modules Test
# Author: M Mansour Abadi
# Description: Modules from FSO_Comm are used in a simple FSO comunication link including various channel effects.
# Generated: Tue Oct 29 17:50:38 2019
##################################################
from distutils.version import StrictVersion
if __name__ == '__main__':
import ctypes
import sys
if sys.platform.startswith('linux'):
try:
x11 = ctypes.cdll.LoadLibrary('libX11.so')
x11.XInitThreads()
except:
print "Warning: failed to XInitThreads()"
from PyQt5 import Qt
from PyQt5 import Qt, QtCore
from gnuradio import blocks
from gnuradio import eng_notation
from gnuradio import gr
from gnuradio import qtgui
from gnuradio.eng_option import eng_option
from gnuradio.filter import firdes
from optparse import OptionParser
import FSO_Comm
import numpy
import sip
import sys
from gnuradio import qtgui
class FSO_Comm_Demo(gr.top_block, Qt.QWidget):
def __init__(self):
gr.top_block.__init__(self, "FSO Communication Block Modules Test")
Qt.QWidget.__init__(self)
self.setWindowTitle("FSO Communication Block Modules Test")
qtgui.util.check_set_qss()
try:
self.setWindowIcon(Qt.QIcon.fromTheme('gnuradio-grc'))
except:
pass
self.top_scroll_layout = Qt.QVBoxLayout()
self.setLayout(self.top_scroll_layout)
self.top_scroll = Qt.QScrollArea()
self.top_scroll.setFrameStyle(Qt.QFrame.NoFrame)
self.top_scroll_layout.addWidget(self.top_scroll)
self.top_scroll.setWidgetResizable(True)
self.top_widget = Qt.QWidget()
self.top_scroll.setWidget(self.top_widget)
self.top_layout = Qt.QVBoxLayout(self.top_widget)
self.top_grid_layout = Qt.QGridLayout()
self.top_layout.addLayout(self.top_grid_layout)
self.settings = Qt.QSettings("GNU Radio", "FSO_Comm_Demo")
if StrictVersion(Qt.qVersion()) < StrictVersion("5.0.0"):
self.restoreGeometry(self.settings.value("geometry").toByteArray())
else:
self.restoreGeometry(self.settings.value("geometry", type=QtCore.QByteArray))
##################################################
# Variables
##################################################
self.wavelength = wavelength = 850e-9
self.vis = vis = 1000
self.samp_rate = samp_rate = 32000
self.link_len = link_len = 100
self.jitter = jitter = 0.05
self.Z_0 = Z_0 = 50
self.Tx_Dia = Tx_Dia = 3e-3
self.Theta_0 = Theta_0 = 0.05
self.T_0 = T_0 = 50e-3
self.Sample_Offset = Sample_Offset = 0
self.Rx_Dia = Rx_Dia = 50e-3
self.Resp = Resp = 0.7
self.P_n = P_n = 1e-6
self.P_0 = P_0 = 1e-3
self.Gain = Gain = 1e3
self.Ext_Ratio = Ext_Ratio = 10
self.Cn2 = Cn2 = 5e-12
self.Chunck_Size = Chunck_Size = 200
self.BW = BW = 2.5e3
##################################################
# Blocks
##################################################
self.qtgui_time_sink_x_0 = qtgui.time_sink_f(
1024*20, #size
samp_rate, #samp_rate
"", #name
1 #number of inputs
)
self.qtgui_time_sink_x_0.set_update_time(0.10)
self.qtgui_time_sink_x_0.set_y_axis(-0.1e-3, 2.5e-3)
self.qtgui_time_sink_x_0.set_y_label('Amplitude', "")
self.qtgui_time_sink_x_0.enable_tags(-1, True)
self.qtgui_time_sink_x_0.set_trigger_mode(qtgui.TRIG_MODE_FREE, qtgui.TRIG_SLOPE_POS, 0.0, 0, 0, "")
self.qtgui_time_sink_x_0.enable_autoscale(False)
self.qtgui_time_sink_x_0.enable_grid(False)
self.qtgui_time_sink_x_0.enable_axis_labels(True)
self.qtgui_time_sink_x_0.enable_control_panel(False)
self.qtgui_time_sink_x_0.enable_stem_plot(False)
if not True:
self.qtgui_time_sink_x_0.disable_legend()
labels = ['', '', '', '', '',
'', '', '', '', '']
widths = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
colors = ["blue", "red", "green", "black", "cyan",
"magenta", "yellow", "dark red", "dark green", "blue"]
styles = [1, 1, 1, 1, 1,
1, 1, 1, 1, 1]
markers = [-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1]
alphas = [1.0, 1.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0, 1.0]
for i in xrange(1):
if len(labels[i]) == 0:
self.qtgui_time_sink_x_0.set_line_label(i, "Data {0}".format(i))
else:
self.qtgui_time_sink_x_0.set_line_label(i, labels[i])
self.qtgui_time_sink_x_0.set_line_width(i, widths[i])
self.qtgui_time_sink_x_0.set_line_color(i, colors[i])
self.qtgui_time_sink_x_0.set_line_style(i, styles[i])
self.qtgui_time_sink_x_0.set_line_marker(i, markers[i])
self.qtgui_time_sink_x_0.set_line_alpha(i, alphas[i])
self._qtgui_time_sink_x_0_win = sip.wrapinstance(self.qtgui_time_sink_x_0.pyqwidget(), Qt.QWidget)
self.top_layout.addWidget(self._qtgui_time_sink_x_0_win)
self.blocks_throttle_0 = blocks.throttle(gr.sizeof_char*1, samp_rate,True)
self.blocks_repeat_0 = blocks.repeat(gr.sizeof_float*1, 4)
self.blocks_char_to_float_0 = blocks.char_to_float(1, 1)
self.analog_random_source_x_0 = blocks.vector_source_b(map(int, numpy.random.randint(0, 2, 1000)), True)
self.FSO_Comm_Turbulence_ff_0 = FSO_Comm.Turbulence_ff(Cn2, wavelength, link_len, Rx_Dia, T_0, samp_rate)
self.FSO_Comm_Pointing_Errors_ff_0 = FSO_Comm.Pointing_Errors_ff(jitter, link_len, Tx_Dia, Theta_0, Rx_Dia, T_0, samp_rate)
self.FSO_Comm_Optical_Receiver_ff_0 = FSO_Comm.Optical_Receiver_ff(Resp, Gain, Z_0, P_n)
self.FSO_Comm_Laser_ff_0 = FSO_Comm.Laser_ff(P_0, wavelength, Ext_Ratio)
self.FSO_Comm_Geometric_Loss_ff_0 = FSO_Comm.Geometric_Loss_ff(Tx_Dia, Theta_0, link_len, Rx_Dia)
self.FSO_Comm_FogSmoke_Loss_ff_0 = FSO_Comm.FogSmoke_Loss_ff(wavelength, link_len, vis)
##################################################
# Connections
##################################################
self.connect((self.FSO_Comm_FogSmoke_Loss_ff_0, 0), (self.FSO_Comm_Optical_Receiver_ff_0, 0))
self.connect((self.FSO_Comm_Geometric_Loss_ff_0, 0), (self.FSO_Comm_Turbulence_ff_0, 0))
self.connect((self.FSO_Comm_Laser_ff_0, 0), (self.FSO_Comm_Geometric_Loss_ff_0, 0))
self.connect((self.FSO_Comm_Optical_Receiver_ff_0, 0), (self.qtgui_time_sink_x_0, 0))
self.connect((self.FSO_Comm_Pointing_Errors_ff_0, 0), (self.FSO_Comm_FogSmoke_Loss_ff_0, 0))
self.connect((self.FSO_Comm_Turbulence_ff_0, 0), (self.FSO_Comm_Pointing_Errors_ff_0, 0))
self.connect((self.analog_random_source_x_0, 0), (self.blocks_throttle_0, 0))
self.connect((self.blocks_char_to_float_0, 0), (self.blocks_repeat_0, 0))
self.connect((self.blocks_repeat_0, 0), (self.FSO_Comm_Laser_ff_0, 0))
self.connect((self.blocks_throttle_0, 0), (self.blocks_char_to_float_0, 0))
def closeEvent(self, event):
self.settings = Qt.QSettings("GNU Radio", "FSO_Comm_Demo")
self.settings.setValue("geometry", self.saveGeometry())
event.accept()
def get_wavelength(self):
return self.wavelength
def set_wavelength(self, wavelength):
self.wavelength = wavelength
self.FSO_Comm_Turbulence_ff_0.set_Wavelen(self.wavelength)
self.FSO_Comm_Laser_ff_0.set_Wavelen(self.wavelength)
self.FSO_Comm_FogSmoke_Loss_ff_0.set_Wavelen(self.wavelength)
def get_vis(self):
return self.vis
def set_vis(self, vis):
self.vis = vis
self.FSO_Comm_FogSmoke_Loss_ff_0.set_Vis(self.vis)
def get_samp_rate(self):
return self.samp_rate
def set_samp_rate(self, samp_rate):
self.samp_rate = samp_rate
self.qtgui_time_sink_x_0.set_samp_rate(self.samp_rate)
self.blocks_throttle_0.set_sample_rate(self.samp_rate)
self.FSO_Comm_Turbulence_ff_0.set_SampRate(self.samp_rate)
self.FSO_Comm_Pointing_Errors_ff_0.set_SampRate(self.samp_rate)
def get_link_len(self):
return self.link_len
def set_link_len(self, link_len):
self.link_len = link_len
self.FSO_Comm_Turbulence_ff_0.set_LinkLen(self.link_len)
self.FSO_Comm_Pointing_Errors_ff_0.set_LinkLen(self.link_len)
self.FSO_Comm_Geometric_Loss_ff_0.set_LinkLen(self.link_len)
self.FSO_Comm_FogSmoke_Loss_ff_0.set_LinkLen(self.link_len)
def get_jitter(self):
return self.jitter
def set_jitter(self, jitter):
self.jitter = jitter
self.FSO_Comm_Pointing_Errors_ff_0.set_Jitter(self.jitter)
def get_Z_0(self):
return self.Z_0
def set_Z_0(self, Z_0):
self.Z_0 = Z_0
self.FSO_Comm_Optical_Receiver_ff_0.set_Imp(self.Z_0)
def get_Tx_Dia(self):
return self.Tx_Dia
def set_Tx_Dia(self, Tx_Dia):
self.Tx_Dia = Tx_Dia
self.FSO_Comm_Pointing_Errors_ff_0.set_Tx_Dia(self.Tx_Dia)
self.FSO_Comm_Geometric_Loss_ff_0.set_Tx_Dia(self.Tx_Dia)
def get_Theta_0(self):
return self.Theta_0
def set_Theta_0(self, Theta_0):
self.Theta_0 = Theta_0
self.FSO_Comm_Pointing_Errors_ff_0.set_Tx_Theta(self.Theta_0)
self.FSO_Comm_Geometric_Loss_ff_0.set_Tx_DivAng(self.Theta_0)
def get_T_0(self):
return self.T_0
def set_T_0(self, T_0):
self.T_0 = T_0
self.FSO_Comm_Turbulence_ff_0.set_TempCorr(self.T_0)
self.FSO_Comm_Pointing_Errors_ff_0.set_TempCorr(self.T_0)
def get_Sample_Offset(self):
return self.Sample_Offset
def set_Sample_Offset(self, Sample_Offset):
self.Sample_Offset = Sample_Offset
def get_Rx_Dia(self):
return self.Rx_Dia
def set_Rx_Dia(self, Rx_Dia):
self.Rx_Dia = Rx_Dia
self.FSO_Comm_Turbulence_ff_0.set_Rx_Dia(self.Rx_Dia)
self.FSO_Comm_Pointing_Errors_ff_0.set_Rx_Dia(self.Rx_Dia)
self.FSO_Comm_Geometric_Loss_ff_0.set_Rx_Dia(self.Rx_Dia)
def get_Resp(self):
return self.Resp
def set_Resp(self, Resp):
self.Resp = Resp
self.FSO_Comm_Optical_Receiver_ff_0.set_Resp(self.Resp)
def get_P_n(self):
return self.P_n
def set_P_n(self, P_n):
self.P_n = P_n
self.FSO_Comm_Optical_Receiver_ff_0.set_P_n(self.P_n)
def get_P_0(self):
return self.P_0
def set_P_0(self, P_0):
self.P_0 = P_0
self.FSO_Comm_Laser_ff_0.set_P_avg(self.P_0)
def get_Gain(self):
return self.Gain
def set_Gain(self, Gain):
self.Gain = Gain
self.FSO_Comm_Optical_Receiver_ff_0.set_G_TIA(self.Gain)
def get_Ext_Ratio(self):
return self.Ext_Ratio
def set_Ext_Ratio(self, Ext_Ratio):
self.Ext_Ratio = Ext_Ratio
self.FSO_Comm_Laser_ff_0.set_ExtRatio(self.Ext_Ratio)
def get_Cn2(self):
return self.Cn2
def set_Cn2(self, Cn2):
self.Cn2 = Cn2
def get_Chunck_Size(self):
return self.Chunck_Size
def set_Chunck_Size(self, Chunck_Size):
self.Chunck_Size = Chunck_Size
def get_BW(self):
return self.BW
def set_BW(self, BW):
self.BW = BW
def main(top_block_cls=FSO_Comm_Demo, options=None):
if StrictVersion("4.5.0") <= StrictVersion(Qt.qVersion()) < StrictVersion("5.0.0"):
style = gr.prefs().get_string('qtgui', 'style', 'raster')
Qt.QApplication.setGraphicsSystem(style)
qapp = Qt.QApplication(sys.argv)
tb = top_block_cls()
tb.start()
tb.show()
def quitting():
tb.stop()
tb.wait()
qapp.aboutToQuit.connect(quitting)
qapp.exec_()
if __name__ == '__main__':
main()
| 36.247024 | 131 | 0.642417 | 1,842 | 12,179 | 3.873507 | 0.159609 | 0.05494 | 0.067835 | 0.057183 | 0.506097 | 0.401822 | 0.34548 | 0.243588 | 0.162579 | 0.039383 | 0 | 0.033986 | 0.224485 | 12,179 | 335 | 132 | 36.355224 | 0.72144 | 0.028738 | 0 | 0.062992 | 0 | 0 | 0.027946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.003937 | 0.066929 | null | null | 0.003937 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac3e3b4c92edc26c0c53ec7942b32aed32b778c8 | 1,165 | py | Python | fapistrano/signal.py | liwushuo/fapistrano | 2a31aad01a04d7ea9108dc6f95aee9a53290459f | [
"MIT"
] | 18 | 2016-03-25T09:40:20.000Z | 2022-02-23T02:09:50.000Z | fapistrano/signal.py | liwushuo/fapistrano | 2a31aad01a04d7ea9108dc6f95aee9a53290459f | [
"MIT"
] | null | null | null | fapistrano/signal.py | liwushuo/fapistrano | 2a31aad01a04d7ea9108dc6f95aee9a53290459f | [
"MIT"
] | 3 | 2016-03-22T07:41:15.000Z | 2021-02-25T04:27:53.000Z | # -*- coding: utf-8 -*-
from functools import wraps

from .utils import run_function


class Signal(object):

    def __init__(self, name, doc=''):
        self.name = name
        self.doc = doc
        self.receivers = {}


class Namespace(dict):

    def signal(self, name, doc=None):
        try:
            return self[name]
        except KeyError:
            return self.setdefault(name, Signal(name, doc))


namespace = Namespace()


def clear():
    namespace.clear()


def emit(event, **data):
    if event not in namespace:
        return
    for _id, func in namespace[event].receivers.items():
        run_function(func, **data)


def register(event, function):
    assert callable(function), 'Function must be callable.'
    namespace.signal(event).receivers[id(function)] = function


def listen(event):
    def decorator(f):
        @wraps(f)
        def deco(*args, **kwargs):
            register(event, f)
            return f(*args, **kwargs)
        return deco
    return decorator


if __name__ == '__main__':
    def handle_hello(**data):
        print('received data:', data)

    register('hello', handle_hello)
    emit('hello', keyword='world')
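The `__main__` demo above only exercises `register`/`emit`. The `listen` decorator has a quirk worth showing: registration happens inside the wrapper, so a handler is only registered after its first direct call. This standalone sketch inlines a minimal registry (the real module's `run_function` helper from `.utils` is not available here, so the receiver is called directly; names are otherwise illustrative):

```python
from functools import wraps

# Minimal stand-ins for the module's registry.
namespace = {}

def register(event, function):
    namespace.setdefault(event, {})[id(function)] = function

def emit(event, **data):
    for func in namespace.get(event, {}).values():
        func(**data)

def listen(event):
    def decorator(f):
        @wraps(f)
        def deco(*args, **kwargs):
            register(event, f)  # registration is deferred to the first call
            return f(*args, **kwargs)
        return deco
    return decorator

received = []

@listen('deploy.finished')
def on_finished(**data):
    received.append(data)

on_finished(stage='manual')           # first direct call registers the handler
emit('deploy.finished', host='web1')  # only now does the emit reach it
```

An `emit('deploy.finished', ...)` issued before that first direct call would find no receivers, which is easy to trip over with this design.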
# ==== src/normalizer.py (lucassouzamatos/water-potability-ai, MIT) ====
import pandas as pd
class Normalizer:
    csv_data = 'dataset/water_potability.csv'  # file with the working data

    def __init__(self) -> None:
        self.dataset = pd.read_csv(self.csv_data)
        self.__normalize_data__()
        self.__separate__()

    def __normalize_data__(self) -> None:
        '''
        Convert all columns to numeric, then fill missing values with the
        mean of the same Potability group.
        '''
        self.dataset = self.dataset.apply(pd.to_numeric)

        self.dataset['ph'] = self.dataset['ph'].fillna(self.dataset.groupby('Potability')['ph'].transform('mean'))
        self.dataset['Sulfate'] = self.dataset['Sulfate'].fillna(self.dataset.groupby('Potability')['Sulfate'].transform('mean'))
        self.dataset['Trihalomethanes'] = self.dataset['Trihalomethanes'].fillna(self.dataset.groupby('Potability')['Trihalomethanes'].transform('mean'))

    def __separate__(self):
        '''
        Split the dataset into potable and unpotable subsets.
        '''
        self.dataset_potable = self.dataset.loc[self.dataset['Potability'] == 1]
        self.dataset_unpotable = self.dataset.loc[self.dataset['Potability'] == 0]

        self.dataset_potable = self.dataset_potable.reset_index()
        self.dataset_unpotable = self.dataset_unpotable.reset_index()


if __name__ == '__main__':
    normalizer = Normalizer()
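The imputation strategy used in `__normalize_data__` — fill each missing value with the mean of the rows sharing the same `Potability` label — can be illustrated without pandas. This is a standalone sketch of the same group-mean idea on toy data, not the module's own API:

```python
import math

# Toy rows: some 'ph' readings are missing.
rows = [
    {'Potability': 1, 'ph': 7.0},
    {'Potability': 1, 'ph': math.nan},
    {'Potability': 1, 'ph': 8.0},
    {'Potability': 0, 'ph': 5.0},
    {'Potability': 0, 'ph': math.nan},
]

# Per-group mean of observed values: what groupby('Potability')['ph'].transform('mean') computes.
sums, counts = {}, {}
for row in rows:
    if not math.isnan(row['ph']):
        g = row['Potability']
        sums[g] = sums.get(g, 0.0) + row['ph']
        counts[g] = counts.get(g, 0) + 1
group_mean = {g: sums[g] / counts[g] for g in sums}

# The fillna step: replace each NaN with its group's mean.
for row in rows:
    if math.isnan(row['ph']):
        row['ph'] = group_mean[row['Potability']]

print([row['ph'] for row in rows])  # [7.0, 7.5, 8.0, 5.0, 5.0]
```

Group 1's observed values (7.0, 8.0) average to 7.5 and group 0's single value is 5.0, so the two gaps are filled differently — the point of grouping before imputing.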
# ==== mrl/g_models/generative_base.py (DarkMatterAI/mrl, MIT) ====
# AUTOGENERATED! DO NOT EDIT! File to edit: nbs/09_generative_models.generative_base.ipynb (unless otherwise specified).
__all__ = ['GenerativeModel', 'beam_search']

# Cell
from ..imports import *
from ..torch_imports import *
from ..torch_core import *
from ..layers import *

# Cell
class GenerativeModel(nn.Module):
    '''
    GenerativeModel - base generative model class
    '''
    def __init__(self):
        super().__init__()

    def forward(self, x):
        raise NotImplementedError

    def x_to_latent(self, x):
        '''
        x_to_latent - convert `x` to a latent vector

        Inputs:

        - `x`: `x` comes from a Dataloader. The specific
          form of `x` depends on the dataloader used

        Returns:

        If the model in question makes use of latent vectors
        for sampling or reconstruction, the function should
        return a batch of latent vectors. If latent vectors
        are not compatible, the function should return None
        '''
        raise NotImplementedError

    def sample(self, **sample_kwargs):
        '''
        sample - sample items from the model
        '''
        raise NotImplementedError

    def sample_no_grad(self, **sample_kwargs):
        'no grad wrapper for sample'
        with torch.no_grad():
            return self.sample(**sample_kwargs)

    def get_rl_tensors(self):
        '''
        get_rl_tensors - generate tensors needed in the training loop
        '''
        raise NotImplementedError

# Cell
def beam_search(model, seed_ints, k, beam_size, sl, temperature, pad_idx=None):
    '''
    beam_search - perform beam search using `model`

    Inputs:

    - `model nn.Module`: model

    - `seed_ints torch.Longtensor`: seed sequence

    - `k int`: top k beam sampling

    - `beam_size int`: maximum number of beams to retain

    - `sl int`: max sequence length

    - `temperature float`: sample temperature

    - `pad_idx Optional[int]`: pad index if applicable
    '''
    # currently only works for LSTM_LM. TODO: work for all generative models
    current_device = next(model.parameters()).device

    if seed_ints.ndim == 1:
        seed_ints = seed_ints.unsqueeze(0)

    preds = seed_ints.repeat(k, 1)
    preds = to_device(preds, current_device)

    idxs = preds[:, -1].unsqueeze(-1)
    lps = idxs.new_zeros((k, 1)).float()
    hiddens = None  # no recurrent state before the first step

    with torch.no_grad():
        for i in range(sl):
            x, hiddens, encoded = model._forward(idxs, hiddens)
            x.div_(temperature)

            log_probs = F.log_softmax(x, -1)
            values, indices = log_probs.topk(k, dim=-1)

            lps = torch.cat([lps.unsqueeze(-1).repeat(1, 1, values.shape[-1]), -values], 1)
            current_sl = lps.shape[1]
            lps = lps.permute(0, 2, 1).reshape(-1, current_sl)

            preds = torch.cat([preds[:, None].expand(preds.size(0), k, preds.size(1)),
                               indices.squeeze(1)[:, :, None].expand(preds.size(0), k, 1)], dim=2)
            preds = preds.view(-1, preds.size(2))

            scores = lps.sum(-1)
            indices_idx = torch.arange(0, preds.size(0))[:, None].expand(preds.size(0), k).contiguous().view(-1)
            sort_idx = scores.argsort()[:beam_size]

            preds = preds[sort_idx]
            lps = lps[sort_idx]
            idxs = preds[:, -1].unsqueeze(-1)

            hiddens = [(h[0][:, indices_idx[sort_idx], :],
                        h[1][:, indices_idx[sort_idx], :]) for h in hiddens]

            if pad_idx is not None:
                if (preds[:, -1] == pad_idx).all():
                    break
    return preds, -lps
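Independent of the torch machinery above, the core beam-search bookkeeping — expand every surviving beam by the top-k candidates, score the expansions by cumulative log-probability, keep only the best `beam_size` — can be shown with a toy model. The fixed next-character distribution below is purely illustrative:

```python
import math

# Toy 'language model': fixed next-char log-probs, independent of context.
LOG_PROBS = {'a': math.log(0.6), 'b': math.log(0.3), 'c': math.log(0.1)}

def beam_search(seed, beam_size, steps):
    beams = [(seed, 0.0)]  # (sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for seq, lp in beams:
            # Expand every beam by every candidate symbol.
            for ch, ch_lp in LOG_PROBS.items():
                candidates.append((seq + ch, lp + ch_lp))
        # Keep the beam_size highest-scoring expansions.
        candidates.sort(key=lambda pair: pair[1], reverse=True)
        beams = candidates[:beam_size]
    return beams

best = beam_search('x', beam_size=2, steps=3)
print(best[0][0])  # 'xaaa' -- with a context-free model the greedy path dominates
```

The torch version does the same expand/score/prune cycle on tensors, with the extra step of gathering the LSTM hidden states (`indices_idx[sort_idx]`) so each surviving beam keeps the recurrent state it was extended from.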
# ==== webargs/__init__.py (vendored virtualenv copy, mrunix1998/booking-flights-system, MIT) ====
# -*- coding: utf-8 -*-
from distutils.version import LooseVersion
from marshmallow.utils import missing
# Make marshmallow's validation functions importable from webargs
from marshmallow import validate
from webargs.core import dict2schema, ValidationError
from webargs import fields
__version__ = "5.3.2"
__version_info__ = tuple(LooseVersion(__version__).version)
__author__ = "Steven Loria"
__license__ = "MIT"
__all__ = ("dict2schema", "ValidationError", "fields", "missing", "validate")
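For a plain numeric version string like `"5.3.2"`, the `__version_info__` line above reduces to a tuple of ints. Since `distutils` is deprecated (and removed in Python 3.12), a minimal stand-in for this one case looks like the following — note that `LooseVersion` also tolerates letters and mixed separators, which this sketch deliberately does not cover:

```python
def version_info(version):
    """Parse a dotted numeric version string into a tuple of ints.

    Minimal stand-in for tuple(LooseVersion(version).version); only the
    plain 'X.Y.Z' case used by this package is handled.
    """
    return tuple(int(part) for part in version.split('.'))

print(version_info("5.3.2"))  # (5, 3, 2)
```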
# ==== scripts/imageio_remove_bin-script.py (shfkdroal/Robot-Learning-in-Mixed-Adversarial-and-Collaborative-Settings, bzip2-1.0.6) ====
#!C:\Users\stpny\Downloads\grasp_public-master\grasp_public-master\Scripts\python.exe
# EASY-INSTALL-ENTRY-SCRIPT: 'imageio==2.5.0','console_scripts','imageio_remove_bin'
__requires__ = 'imageio==2.5.0'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('imageio==2.5.0', 'console_scripts', 'imageio_remove_bin')()
    )
# ==== pythonweb/user/views.py (onwebbe/rasiberryPiWebManager, Apache-2.0) ====
from django.shortcuts import render
from django.http import HttpResponse
import json
# Create your views here.
def info(request):
userInfo = {
'id': '4291d7da9005377ec9aec4a71ea837f',
'name': '天野远子',
'username': 'admin',
'password': '',
'avatar': '/avatar2.jpg',
'status': 1,
'telephone': '',
'lastLoginIp': '27.154.74.117',
'lastLoginTime': 1534837621348,
'creatorId': 'admin',
'createTime': 1497160610259,
'merchantCode': 'TLif2btpzg079h15bk',
'deleted': 0,
'roleId': 'admin',
'role': {}
}
roleObj = {
'id': 'admin',
'name': '管理员',
'describe': '拥有所有权限',
'status': 1,
'creatorId': 'system',
'createTime': 1497160610259,
'deleted': 0,
'permissions': [{
'roleId': 'admin',
'permissionId': 'dashboard',
'permissionName': '仪表盘',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"query","defaultCheck":false,"describe":"查询"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'query',
'describe': '查询',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'exception',
'permissionName': '异常页面权限',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"query","defaultCheck":false,"describe":"查询"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'query',
'describe': '查询',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'result',
'permissionName': '结果权限',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"query","defaultCheck":false,"describe":"查询"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'query',
'describe': '查询',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'profile',
'permissionName': '详细页权限',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"query","defaultCheck":false,"describe":"查询"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'query',
'describe': '查询',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'table',
'permissionName': '表格权限',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"import","defaultCheck":false,"describe":"导入"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'import',
'describe': '导入',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'form',
'permissionName': '表单权限',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"query","defaultCheck":false,"describe":"查询"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'query',
'describe': '查询',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'order',
'permissionName': '订单管理',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"query","defaultCheck":false,"describe":"查询"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'query',
'describe': '查询',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'permission',
'permissionName': '权限管理',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'role',
'permissionName': '角色管理',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'table',
'permissionName': '桌子管理',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"query","defaultCheck":false,"describe":"查询"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'query',
'describe': '查询',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'user',
'permissionName': '用户管理',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"import","defaultCheck":false,"describe":"导入"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"},{"action":"export","defaultCheck":false,"describe":"导出"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'import',
'describe': '导入',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}, {
'action': 'export',
'describe': '导出',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}],
'permissions': [{
'roleId': 'admin',
'permissionId': 'support',
'permissionName': '超级模块',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"import","defaultCheck":false,"describe":"导入"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"},{"action":"export","defaultCheck":false,"describe":"导出"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'import',
'describe': '导入',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}, {
'action': 'export',
'describe': '导出',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}, {
'roleId': 'admin',
'permissionId': 'pioverview',
'permissionName': 'pioverview',
'actions': '[{"action":"add","defaultCheck":false,"describe":"新增"},{"action":"query","defaultCheck":false,"describe":"查询"},{"action":"get","defaultCheck":false,"describe":"详情"},{"action":"update","defaultCheck":false,"describe":"修改"},{"action":"delete","defaultCheck":false,"describe":"删除"}]',
'actionEntitySet': [{
'action': 'add',
'describe': '新增',
'defaultCheck': False
}, {
'action': 'query',
'describe': '查询',
'defaultCheck': False
}, {
'action': 'get',
'describe': '详情',
'defaultCheck': False
}, {
'action': 'update',
'describe': '修改',
'defaultCheck': False
}, {
'action': 'delete',
'describe': '删除',
'defaultCheck': False
}],
'actionList': None,
'dataAccess': None
}]
}
userInfo = {
'result': {
'role': roleObj
}
}
return HttpResponse(json.dumps(userInfo, indent=4))
def nav(request):
nav = [
# // dashboard
{
'name': 'dashboard',
'parentId': -1,
'id': 1,
'meta': {
'icon': 'dashboard',
'title': '仪表盘',
'show': True
},
'component': 'RouteView',
'redirect': '/dashboard/workplace'
},
{
'name': 'workplace',
'parentId': 1,
'id': 7,
'meta': {
'title': '工作台',
'show': True
},
'component': 'Workplace'
},
{
'name': 'monitor',
'path': 'https://www.baidu.com/',
'parentId': 1,
'id': 3,
'meta': {
'title': '监控页(外部)',
'target': '_blank',
'show': True
}
},
{
'name': 'analysis',
'parentId': 1,
'id': 2,
'meta': {
'title': '分析页',
'show': True
},
'component': 'Analysis'
},
{
'name': 'tests',
'parentId': 1,
'id': 8,
'meta': {
'title': '测试功能',
'show': True
},
'component': 'TestWork'
},
# //pi overview
{
'name': 'pioverview',
'parentId': -1,
'id': 100,
'meta': {
'icon': 'dashboard',
'title': 'Pi Overview',
'show': True
},
'component': 'RouteView',
'redirect': '/pioverview/gpioOverview'
},
{
'name': 'gpioOverview',
'parentId': 100,
'id': 6,
'meta': {
'title': 'GPIO Overview'
},
'component': 'PiGPIOStatus'
},
{
'name': 'workingOverview',
'parentId': 100,
'id': 7,
'meta': {
'title': 'Woring Overview'
},
'component': 'PiWorkingStatus'
},
# // form
{
'name': 'form',
'parentId': -1,
'id': 10,
'meta': {
'icon': 'form',
'title': '表单页'
},
'redirect': '/form/base-form',
'component': 'PageView'
},
{
'name': 'basic-form',
'parentId': 10,
'id': 6,
'meta': {
'title': '基础表单'
},
'component': 'BasicForm'
},
{
'name': 'step-form',
'parentId': 10,
'id': 5,
'meta': {
'title': '分步表单'
},
'component': 'StepForm'
},
{
'name': 'advanced-form',
'parentId': 10,
'id': 4,
'meta': {
'title': '高级表单'
},
'component': 'AdvanceForm'
},
# // list
{
'name': 'list',
'parentId': -1,
'id': 10010,
'meta': {
'icon': 'table',
'title': '列表页',
'show': True
},
'redirect': '/list/table-list',
'component': 'PageView'
},
{
'name': 'table-list',
'parentId': 10010,
'id': 10011,
'path': '/list/table-list/:pageNo([1-9]\\d*)?',
'meta': {
'title': '查询表格',
'show': True
},
'component': 'TableList'
},
{
'name': 'basic-list',
'parentId': 10010,
'id': 10012,
'meta': {
'title': '标准列表',
'show': True
},
'component': 'StandardList'
},
{
'name': 'card',
'parentId': 10010,
'id': 10013,
'meta': {
'title': '卡片列表',
'show': True
},
'component': 'CardList'
},
{
'name': 'search',
'parentId': 10010,
'id': 10014,
'meta': {
'title': '搜索列表',
'show': True
},
'redirect': '/list/search/article',
'component': 'SearchLayout'
},
{
'name': 'article',
'parentId': 10014,
'id': 10015,
'meta': {
'title': '搜索列表(文章)',
'show': True
},
'component': 'SearchArticles'
},
{
'name': 'project',
'parentId': 10014,
'id': 10016,
'meta': {
'title': '搜索列表(项目)',
'show': True
},
'component': 'SearchProjects'
},
{
'name': 'application',
'parentId': 10014,
'id': 10017,
'meta': {
'title': '搜索列表(应用)',
'show': True
},
'component': 'SearchApplications'
},
# // profile
{
'name': 'profile',
'parentId': -1,
'id': 10018,
'meta': {
'title': '详情页',
'icon': 'profile',
'show': True
},
'redirect': '/profile/basic',
'component': 'RouteView'
},
{
'name': 'basic',
'parentId': 10018,
'id': 10019,
'meta': {
'title': '基础详情页',
'show': True
},
'component': 'ProfileBasic'
},
{
'name': 'advanced',
'parentId': 10018,
'id': 10020,
'meta': {
'title': '高级详情页',
'show': True
},
'component': 'ProfileAdvanced'
},
# // result
{
'name': 'result',
'parentId': -1,
'id': 10021,
'meta': {
'title': '结果页',
'icon': 'check-circle-o',
'show': True
},
'redirect': '/result/success',
'component': 'PageView'
},
{
'name': 'success',
'parentId': 10021,
'id': 10022,
'meta': {
'title': '成功',
'hiddenHeaderContent': True,
'show': True
},
'component': 'ResultSuccess'
},
{
'name': 'fail',
'parentId': 10021,
'id': 10023,
'meta': {
'title': '失败',
'hiddenHeaderContent': True,
'show': True
},
'component': 'ResultFail'
},
# // Exception
{
'name': 'exception',
'parentId': -1,
'id': 10024,
'meta': {
'title': '异常页',
'icon': 'warning',
'show': True
},
'redirect': '/exception/403',
'component': 'RouteView'
},
{
'name': '403',
'parentId': 10024,
'id': 10025,
'meta': {
'title': '403',
'show': True
},
'component': 'Exception403'
},
{
'name': '404',
'parentId': 10024,
'id': 10026,
'meta': {
'title': '404',
'show': True
},
'component': 'Exception404'
},
{
'name': '500',
'parentId': 10024,
'id': 10027,
'meta': {
'title': '500',
'show': True
},
'component': 'Exception500'
},
# // account
{
'name': 'account',
'parentId': -1,
'id': 10028,
'meta': {
'title': '个人页',
'icon': 'user',
'show': True
},
'redirect': '/account/center',
'component': 'RouteView'
},
{
'name': 'center',
'parentId': 10028,
'id': 10029,
'meta': {
'title': '个人中心',
'show': True
},
'component': 'AccountCenter'
},
# // 特殊三级菜单
{
'name': 'settings',
'parentId': 10028,
'id': 10030,
'meta': {
'title': '个人设置',
'hideHeader': True,
'hideChildren': True,
'show': True
},
'redirect': '/account/settings/base',
'component': 'AccountSettings'
},
{
'name': 'BaseSettings',
'path': '/account/settings/base',
'parentId': 10030,
'id': 10031,
'meta': {
'title': '基本设置',
'show': False
},
'component': 'BaseSettings'
},
{
'name': 'SecuritySettings',
'path': '/account/settings/security',
'parentId': 10030,
'id': 10032,
'meta': {
'title': '安全设置',
'show': False
},
'component': 'SecuritySettings'
},
{
'name': 'CustomSettings',
'path': '/account/settings/custom',
'parentId': 10030,
'id': 10033,
'meta': {
'title': '个性化设置',
'show': False
},
'component': 'CustomSettings'
},
{
'name': 'BindingSettings',
'path': '/account/settings/binding',
'parentId': 10030,
'id': 10034,
'meta': {
'title': '账户绑定',
'show': False
},
'component': 'BindingSettings'
},
{
'name': 'NotificationSettings',
'path': '/account/settings/notification',
'parentId': 10030,
'id': 10034,
'meta': {
'title': '新消息通知',
'show': False
},
'component': 'NotificationSettings'
}
]
navResult = {
'result': nav
}
return HttpResponse(json.dumps(navResult, indent=4))
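The `nav()` payload is a flat list in which every entry points at its parent through `parentId` (with `-1` marking roots). A consuming frontend typically nests it into a tree; a minimal sketch of that nesting step follows (the helper name is illustrative, not part of this project). Note that the payload above reuses some `id` values (e.g. `7` and `10034` each appear twice), which would merge unrelated children under a scheme like this:

```python
def build_tree(items, root_id=-1):
    """Nest a flat parentId-linked menu list into a tree of dicts."""
    by_parent = {}
    for item in items:
        by_parent.setdefault(item['parentId'], []).append(item)

    def attach(parent_id):
        children = []
        for item in by_parent.get(parent_id, []):
            node = dict(item)                      # copy so input stays untouched
            node['children'] = attach(item['id'])  # recurse into this node's children
            children.append(node)
        return children

    return attach(root_id)

# Tiny subset of the real payload:
nav = [
    {'name': 'dashboard', 'parentId': -1, 'id': 1},
    {'name': 'workplace', 'parentId': 1, 'id': 7},
    {'name': 'analysis',  'parentId': 1, 'id': 2},
]
tree = build_tree(nav)
print(tree[0]['name'], [c['name'] for c in tree[0]['children']])
# dashboard ['workplace', 'analysis']
```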
# ==== WHI_long_term_size_distr_including_fresh_emissions_plotting.py (annahs/atmos_research, MIT) ====
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit
from matplotlib import dates
import os
import pickle
from datetime import datetime
from pprint import pprint
import sys
import math
import traceback
import time
os.chdir('C:/Users/Sarah Hanna/Documents/Data/WHI long term record/coatings/')

file = open('raw size and number distributions by air mass for 69.76nm to 220.11nm.binpickl', 'rb')  # binary mode for pickle
distr_data = pickle.load(file)
file.close()

modified_distr_data = {}
interval_length = 5.0
fit_bins = []
for x in range(30, 800, 5):
    fit_bins.append(x + 2)

def lognorm(x_vals, A, w, xc):
    return A/(np.sqrt(2*math.pi)*w*x_vals)*np.exp(-(np.log(x_vals/xc))**2/(2*w**2))

for air_mass, distribution_data in distr_data.items():
    print(air_mass)
    #distribution_data.pop(70, None)
    distr_bins_p = []
    mass_distr_values = []
    numb_distr_values = []
    for bin, distr_values in distribution_data.items():  # normalize
        n_mass_val = distr_values[0]/(math.log(bin+interval_length)-math.log(bin))  # dM/dlog(VED)
        mass_distr_values.append(n_mass_val)
        n_numb_val = distr_values[1]/(math.log(bin+interval_length)-math.log(bin))  # dN/dlog(VED)
        numb_distr_values.append(n_numb_val)
        distr_bins_p.append(bin+interval_length/2.0)  # correction for our binning code recording bin starts as keys instead of midpoints

    norm_mass_distr_values_p = []
    for mass in mass_distr_values:
        norm_mass = mass/np.max(mass_distr_values)
        norm_mass_distr_values_p.append(norm_mass)
    norm_mass_distr_values = np.array(norm_mass_distr_values_p)

    norm_numb_distr_values_p = []
    for numb in numb_distr_values:
        norm_numb = numb/np.max(numb_distr_values)
        norm_numb_distr_values_p.append(norm_numb)
    norm_numb_distr_values = np.array(norm_numb_distr_values_p)

    distr_bins = np.array(distr_bins_p)

    fit_failure = False
    try:
        popt, pcov = curve_fit(lognorm, distr_bins, norm_numb_distr_values)
        perr = np.sqrt(np.diag(pcov))  # from docs: to compute one standard deviation errors on the parameters use perr = np.sqrt(np.diag(pcov))
        err_variables = [popt[0]-perr[0], popt[1]-perr[1], popt[2]-perr[2]]
    except Exception:
        print('fit_failure')
        fit_failure = True

    fit_y_vals = []
    for bin in fit_bins:
        if fit_failure == True:
            fit_val = np.nan
        else:
            fit_val = lognorm(bin, popt[0], popt[1], popt[2])
        fit_y_vals.append(fit_val)

    err_fit_y_vals = []
    for bin in fit_bins:
        if fit_failure == True:
            err_fit_val = np.nan
        else:
            err_fit_val = lognorm(bin, err_variables[0], err_variables[1], err_variables[2])
        err_fit_y_vals.append(err_fit_val)

    modified_distr_data[air_mass] = [distr_bins, norm_numb_distr_values, fit_bins, fit_y_vals]

pprint(modified_distr_data['GBPS'])

#plotting
fig = plt.figure()
ax1 = fig.add_subplot(111)

colors = ['magenta', 'red', 'green', 'cyan', 'blue', 'black']
i = 0
for air_mass, distr in modified_distr_data.items():
    bins = modified_distr_data[air_mass][0]
    data = modified_distr_data[air_mass][1]
    fit_bins = modified_distr_data[air_mass][2]
    fits = modified_distr_data[air_mass][3]
    m_distr = ax1.scatter(bins, data, label=air_mass, color=colors[i])
    f_distr = ax1.semilogx(fit_bins, fits, color=colors[i])
    ax1.set_xlim(40, 500)
    ax1.set_ylim(0, 1.1)
    i += 1

plt.legend()
plt.show()
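A quick sanity check on the `lognorm` shape fitted above: because of the `1/x` prefactor in `A/(sqrt(2*pi)*w*x) * exp(-ln(x/xc)**2/(2*w**2))`, the curve does not peak at `xc` but at the lognormal mode `xc*exp(-w**2)`. This standalone check uses only the math module (the parameter values are arbitrary examples, not fitted WHI values):

```python
import math

def lognorm(x, A, w, xc):
    return A / (math.sqrt(2 * math.pi) * w * x) * math.exp(-(math.log(x / xc)) ** 2 / (2 * w ** 2))

A, w, xc = 1.0, 0.5, 150.0
mode = xc * math.exp(-w ** 2)  # analytic mode of this form: xc * e^(-w^2)

# The density at the mode beats neighbouring points on either side.
assert lognorm(mode, A, w, xc) > lognorm(mode * 1.05, A, w, xc)
assert lognorm(mode, A, w, xc) > lognorm(mode * 0.95, A, w, xc)
print(round(mode, 2))  # 116.82
```

This is worth keeping in mind when reading `popt[2]` off a fit: `xc` is the geometric median-like location parameter, not the position of the curve's maximum.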
# ==== pos_multie_print/config/docs.py (ashish-greycube/pos_multie_print, MIT) ====
"""
Configuration for docs
"""
# source_link = "https://github.com/[org_name]/pos_multie_print"
# docs_base_url = "https://[org_name].github.io/pos_multie_print"
# headline = "App that does everything"
# sub_heading = "Yes, you got that right the first time, everything"
def get_context(context):
    context.brand_html = "POS Multiple Print"
# --- pe3.py (repo: ChrisCalderon/project-euler, license: MIT) ---
PRIMES = [3]
def next_prime():
    num = PRIMES[-1] + 2  # odd + 2 is the next odd, don't check evens for primality
    is_prime = False
    while True:
        lim = num**0.5  # don't check for prime factors larger than this
        for p in PRIMES:
            if p > lim:
                is_prime = True
                break
            elif num % p == 0:
                is_prime = False
                break
            else:
                continue
        if is_prime:
            PRIMES.append(num)
            return num
        else:
            num += 2


def largest_prime_factor(n):
    largest = 2
    while not n & 1:
        n >>= 1  # divide out the twos (bare `n >> 1` would discard the result and loop forever)
    if n % 3 == 0:
        largest = 3
        while n % 3 == 0:
            n /= 3
    while n > 1:
        p = next_prime()
        if n % p == 0:
            largest = p
            while n % p == 0:
                n /= p
    return largest


def main():
    # testing prime finding
    # print 2, 3,
    # for i in range(100):
    #     print next_prime(),
    print largest_prime_factor(600851475143)


if __name__ == '__main__':
    main()
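The trial-division idea above can be sanity-checked against a compact Python 3 re-implementation (a sketch; the original script is Python 2 and keeps a global prime cache instead):

```python
def largest_prime_factor(n):
    # Strip factors of 2, then try odd candidates up to sqrt(n).
    largest = 1
    while n % 2 == 0:
        largest = 2
        n //= 2
    p = 3
    while p * p <= n:
        while n % p == 0:
            largest = p
            n //= p
        p += 2
    if n > 1:
        largest = n  # whatever remains is itself prime
    return largest

print(largest_prime_factor(600851475143))  # → 6857
```

The factorisation 600851475143 = 71 × 839 × 1471 × 6857 makes 6857 the expected answer for Project Euler problem 3.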
# --- cryptoquant/api/okex/futures_api.py (repo: studyquant/StudyQuant, license: Apache-2.0) ---
from .client import Client
from .consts import *


class FutureAPI(Client):

    def __init__(self, api_key, api_secret_key, passphrase, use_server_time=False, first=False):
        Client.__init__(self, api_key, api_secret_key, passphrase, use_server_time, first)

    # query position
    def get_position(self):
        return self._request_without_params(GET, FUTURE_POSITION)

    # query specific position
    def get_specific_position(self, instrument_id):
        return self._request_without_params(GET, FUTURE_SPECIFIC_POSITION + str(instrument_id) + '/position')

    # query accounts info
    def get_accounts(self):
        return self._request_without_params(GET, FUTURE_ACCOUNTS)

    # query coin account info
    def get_coin_account(self, underlying):
        return self._request_without_params(GET, FUTURE_COIN_ACCOUNT + str(underlying))

    # query leverage
    def get_leverage(self, underlying):
        return self._request_without_params(GET, FUTURE_GET_LEVERAGE + str(underlying) + '/leverage')

    # set leverage
    def set_leverage(self, underlying, leverage, instrument_id='', direction=''):
        params = {'leverage': leverage}
        if instrument_id:
            params['instrument_id'] = instrument_id
        if direction:
            params['direction'] = direction
        return self._request_with_params(POST, FUTURE_SET_LEVERAGE + str(underlying) + '/leverage', params)

    # query ledger
    def get_ledger(self, underlying, after='', before='', limit='', type=''):
        params = {}
        if after:
            params['after'] = after
        if before:
            params['before'] = before
        if limit:
            params['limit'] = limit
        if type:
            params['type'] = type
        return self._request_with_params(GET, FUTURE_LEDGER + str(underlying) + '/ledger', params, cursor=True)

    # take order (place an order)
    # def take_order(self, instrument_id, type, price, size, client_oid='', order_type='0', match_price='0'):
    #     params = {'client_oid': client_oid, 'instrument_id': instrument_id, 'type': type, 'order_type': order_type, 'price': price, 'size': size, 'match_price': match_price}
    #     return self._request_with_params(POST, FUTURE_ORDER, params)
    def take_order(self, client_oid, instrument_id, otype, price, size, leverage, order_type, match_price):
        params = {'client_oid': client_oid, 'instrument_id': instrument_id, 'type': otype, 'price': price, 'size': size, 'leverage': leverage, 'order_type': order_type, 'match_price': match_price}
        return self._request_with_params(POST, FUTURE_ORDER, params)

    # take orders
    def take_orders(self, instrument_id, orders_data):
        params = {'instrument_id': instrument_id, 'orders_data': orders_data}
        return self._request_with_params(POST, FUTURE_ORDERS, params)

    # revoke order
    def revoke_order(self, instrument_id, order_id='', client_oid=''):
        if order_id:
            return self._request_without_params(POST, FUTURE_REVOKE_ORDER + str(instrument_id) + '/' + str(order_id))
        elif client_oid:
            return self._request_without_params(POST, FUTURE_REVOKE_ORDER + str(instrument_id) + '/' + str(client_oid))

    # revoke orders
    def revoke_orders(self, instrument_id, order_ids='', client_oids=''):
        params = {}
        if order_ids:
            params = {'order_ids': order_ids}
        elif client_oids:
            params = {'client_oids': client_oids}
        return self._request_with_params(POST, FUTURE_REVOKE_ORDERS + str(instrument_id), params)

    # query order list
    def get_order_list(self, state, instrument_id, after='', before='', limit=''):
        params = {'state': state}
        if after:
            params['after'] = after
        if before:
            params['before'] = before
        if limit:
            params['limit'] = limit
        return self._request_with_params(GET, FUTURE_ORDERS_LIST + str(instrument_id), params, cursor=True)

    # query order info
    def get_order_info(self, instrument_id, order_id='', client_oid=''):
        if order_id:
            return self._request_without_params(GET, FUTURE_ORDER_INFO + str(instrument_id) + '/' + str(order_id))
        elif client_oid:
            return self._request_without_params(GET, FUTURE_ORDER_INFO + str(instrument_id) + '/' + str(client_oid))

    # query fills
    def get_fills(self, instrument_id, order_id='', after='', before='', limit=''):
        params = {'instrument_id': instrument_id}
        if order_id:
            params['order_id'] = order_id
        if after:
            params['after'] = after
        if before:
            params['before'] = before
        if limit:
            params['limit'] = limit
        return self._request_with_params(GET, FUTURE_FILLS, params, cursor=True)

    # set margin_mode
    def set_margin_mode(self, underlying, margin_mode):
        params = {'underlying': underlying, 'margin_mode': margin_mode}
        return self._request_with_params(POST, FUTURE_MARGIN_MODE, params)

    # close_position
    def close_position(self, instrument_id, direction):
        params = {'instrument_id': instrument_id, 'direction': direction}
        return self._request_with_params(POST, FUTURE_CLOSE_POSITION, params)

    # cancel_all
    def cancel_all(self, instrument_id, direction):
        params = {'instrument_id': instrument_id, 'direction': direction}
        return self._request_with_params(POST, FUTURE_CANCEL_ALL, params)

    # take order_algo
    def take_order_algo(self, instrument_id, type, order_type, size, trigger_price='', algo_price='', callback_rate='', algo_variance='', avg_amount='', price_limit='', sweep_range='', sweep_ratio='', single_limit='', time_interval=''):
        params = {'instrument_id': instrument_id, 'type': type, 'order_type': order_type, 'size': size}
        if order_type == '1':  # take-profit/stop-loss parameters (at most 10 concurrent orders)
            params['trigger_price'] = trigger_price
            params['algo_price'] = algo_price
        elif order_type == '2':  # trailing-stop parameters (at most 10 concurrent orders)
            params['callback_rate'] = callback_rate
            params['trigger_price'] = trigger_price
        elif order_type == '3':  # iceberg-order parameters (at most 6 concurrent orders)
            params['algo_variance'] = algo_variance
            params['avg_amount'] = avg_amount
            params['price_limit'] = price_limit
        elif order_type == '4':  # time-weighted (TWAP) parameters (at most 6 concurrent orders)
            params['sweep_range'] = sweep_range
            params['sweep_ratio'] = sweep_ratio
            params['single_limit'] = single_limit
            params['price_limit'] = price_limit
            params['time_interval'] = time_interval
        return self._request_with_params(POST, FUTURE_ORDER_ALGO, params)

    # cancel_algos
    def cancel_algos(self, instrument_id, algo_ids, order_type):
        params = {'instrument_id': instrument_id, 'algo_ids': algo_ids, 'order_type': order_type}
        return self._request_with_params(POST, FUTURE_CANCEL_ALGOS, params)

    # get order_algos
    def get_order_algos(self, instrument_id, order_type, status='', algo_id='', before='', after='', limit=''):
        params = {'order_type': order_type}
        if status:
            params['status'] = status
        elif algo_id:
            params['algo_id'] = algo_id
        if before:
            params['before'] = before
        if after:
            params['after'] = after
        if limit:
            params['limit'] = limit
        return self._request_with_params(GET, FUTURE_GET_ORDER_ALGOS + str(instrument_id), params)

    def get_trade_fee(self):
        return self._request_without_params(GET, FUTURE_TRADE_FEE)

    # get products info
    def get_products(self):
        return self._request_without_params(GET, FUTURE_PRODUCTS_INFO)

    # get depth
    def get_depth(self, instrument_id, size='', depth=''):
        params = {'size': size, 'depth': depth}
        return self._request_with_params(GET, FUTURE_DEPTH + str(instrument_id) + '/book', params)

    # get ticker
    def get_ticker(self):
        return self._request_without_params(GET, FUTURE_TICKER)

    # get specific ticker
    def get_specific_ticker(self, instrument_id):
        return self._request_without_params(GET, FUTURE_SPECIFIC_TICKER + str(instrument_id) + '/ticker')

    # query trades
    def get_trades(self, instrument_id, after='', before='', limit=''):
        params = {}
        if after:
            params['after'] = after
        if before:
            params['before'] = before
        if limit:
            params['limit'] = limit
        return self._request_with_params(GET, FUTURE_TRADES + str(instrument_id) + '/trades', params, cursor=True)

    # query k-line
    def get_kline(self, instrument_id, granularity='', start='', end=''):
        params = {'granularity': granularity, 'start': start, 'end': end}
        # reverse chronological order, i.e. from end time back to start time
        return self._request_with_params(GET, FUTURE_KLINE + str(instrument_id) + '/candles', params)
        # chronological order, i.e. from start time to end time:
        # data = self._request_with_params(GET, FUTURE_KLINE + str(instrument_id) + '/candles', params)
        # return list(reversed(data))

    # query index
    def get_index(self, instrument_id):
        return self._request_without_params(GET, FUTURE_INDEX + str(instrument_id) + '/index')

    # query rate
    def get_rate(self):
        return self._request_without_params(GET, FUTURE_RATE)

    # query estimate price
    def get_estimated_price(self, instrument_id):
        return self._request_without_params(GET, FUTURE_ESTIMAT_PRICE + str(instrument_id) + '/estimated_price')

    # query the total open interest of the platform
    def get_holds(self, instrument_id):
        return self._request_without_params(GET, FUTURE_HOLDS + str(instrument_id) + '/open_interest')

    # query limit price
    def get_limit(self, instrument_id):
        return self._request_without_params(GET, FUTURE_LIMIT + str(instrument_id) + '/price_limit')

    # query liquidation orders
    def get_liquidation(self, instrument_id, status, limit='', froms='', to=''):
        params = {'status': status}
        if limit:
            params['limit'] = limit
        if froms:
            params['from'] = froms
        if to:
            params['to'] = to
        return self._request_with_params(GET, FUTURE_LIQUIDATION + str(instrument_id) + '/liquidation', params)

    # query holds amount
    def get_holds_amount(self, instrument_id):
        return self._request_without_params(GET, HOLD_AMOUNT + str(instrument_id) + '/holds')

    # query mark price
    def get_mark_price(self, instrument_id):
        return self._request_without_params(GET, FUTURE_MARK + str(instrument_id) + '/mark_price')
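Most of the query methods above build their request dict with the same if-chain over optional string arguments, keeping only the filters the caller actually supplied. The pattern can be sketched in isolation (`build_params` is a hypothetical helper, not part of the library):

```python
def build_params(**kwargs):
    # Drop arguments left at their '' default, mirroring the if-chains
    # used by methods such as get_ledger, get_fills and get_trades.
    return {key: value for key, value in kwargs.items() if value != ''}

print(build_params(after='42', before='', limit='10'))
# → {'after': '42', 'limit': '10'}
```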
# --- src/expand_mnist.py (repo: whalsey/misc, license: Unlicense) ---
"""expand_mnist.py
~~~~~~~~~~~~~~~~~~

Take the 50,000 MNIST training images, and create an expanded set of
250,000 images, by displacing each training image up, down, left and
right, by one pixel. Save the resulting file to
../data/mnist_expanded.pkl.gz.

Note that this program is memory intensive, and may not run on small
systems.
"""

from __future__ import print_function

#### Libraries

# Standard library
import cPickle
import gzip
import os.path
import random

# Third-party libraries
import numpy as np
import scipy.ndimage.interpolation
import matplotlib.pyplot as plt


def sign(a):
    return -1 if a < 0 else 1


print("Expanding the MNIST training set")

if os.path.exists("../data/mnist_expanded.pkl.gz"):
    print("The expanded training set already exists. Exiting.")
else:
    f = gzip.open("../data/mnist.pkl.gz", 'rb')
    training_data, validation_data, test_data = cPickle.load(f)
    f.close()
    expanded_training_pairs = []
    j = 0  # counter
    # for each image in the training data
    for x, y in zip(training_data[0], training_data[1]):
        expanded_training_pairs.append((x, y))
        image = np.reshape(x, (-1, 28))
        j += 1
        if j % 1000 == 0:
            print("Expanding image number ", j)
        # create four new images with shifts and rotations
        for _ in range(4):
            # calculate x shift
            shift_x = random.randint(-3, 3)
            # calculate y shift
            shift_y = random.randint(-3, 3)
            new_img = np.roll(image, shift_x, 0)
            new_img = np.roll(new_img, shift_y, 1)
            # pad the shifted area with 0's
            # todo - will add this later (though it does not seem necessary)
            # if sign(shift_x) == 1:
            #     new_img[:shift_x][:] = np.zeros((shift_x, 28))
            # else:
            #     new_img[28-shift_x:][:] = np.zeros((shift_x, 28))
            #
            # if sign(shift_y) == 1:
            #     new_img[:][:shift_y] = np.zeros((28, shift_y))
            # else:
            #     new_img[:][28-shift_y:] = np.zeros((28, shift_y))
            # calculate degree of rotation
            degree = (random.random() - 0.5) * 90
            new_img = scipy.ndimage.interpolation.rotate(new_img, degree, reshape=False)
            # plt.imshow(new_img)
            # plt.pause(0.01)
            # plt.clf()
            expanded_training_pairs.append((np.reshape(new_img, 784), y))
    random.shuffle(expanded_training_pairs)
    expanded_training_data = [list(d) for d in zip(*expanded_training_pairs)]
    print("Saving expanded data. This may take a few minutes.")
    f = gzip.open("../data/mnist_expanded.pkl.gz", "w")
    cPickle.dump((expanded_training_data, validation_data, test_data), f)
    f.close()
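The shift trick used in the expansion loop above can be checked in isolation: `np.roll` moves whole rows or columns with wrap-around, which is why the script considers padding the wrapped border with zeros afterwards.

```python
import numpy as np

# A tiny 3x3 "image" with distinct pixel values.
img = np.arange(9).reshape(3, 3)

# Shift one row down along axis 0; the bottom row wraps to the top.
shifted = np.roll(img, 1, 0)
print(shifted)
# [[6 7 8]
#  [0 1 2]
#  [3 4 5]]
```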
# --- randconv/coordinator_factory.py (repo: jm-begon/randconv, license: BSD-3-Clause) ---
# -*- coding: utf-8 -*-
"""
A set of factory function to help create usual cases of coordinator
"""
__author__ = "Begon Jean-Michel <jm.begon@gmail.com>"
__copyright__ = "3-clause BSD License"
__date__ = "20 January 2015"
import math
from .image import *
from .util import (OddUniformGenerator, NumberGenerator,
CustomDiscreteNumberGenerator, GaussianNumberGenerator)
from .feature_extractor import ImageLinearizationExtractor, DepthCompressorILE
from .coordinator import (RandConvCoordinator, PyxitCoordinator)
class Const:
RND_RU = "RND_RU" # -1 (real uniform)
RND_SET = "RND_SET" # -2 (Discrete set with predifined probabilities)
RND_GAUSS = "RND_GAUSS" # (Gaussian distribution)
FGEN_ORDERED = "FGEN_ORDERED" # Ordered combination of others
FGEN_CUSTOM = "FGEN_CUSTOM" # Custom filters
FGEN_ZEROPERT = "FGEN_ZEROPERT" # Perturbation around origin
FGEN_IDPERT = "FGEN_IDPERT" # Perturbation around id filter
FGEN_IDDIST = "FGEN_IDDIST" # Maximum distance around id filter
FGEN_STRAT = "FGEN_STRAT" # Stratified scheme
POOLING_NONE = "POOLING_NONE" # 0
POOLING_AGGREG_MIN = "POOLING_AGGREG_MIN" # 1
POOLING_AGGREG_AVG = "POOLING_AGGREG_AVG" # 2
POOLING_AGGREG_MAX = "POOLING_AGGREG_MAX" # 3
POOLING_CONV_MIN = "POOLING_MW_MIN" # 4
POOLING_CONV_AVG = "POOLING_MW_AVG" # 5
POOLING_CONV_MAX = "POOLING_MW_MAX" # 6
POOLING_MORPH_OPENING = "POOLING_MORPH_OPENING" # 7
POOLING_MORPH_CLOSING = "POOLING_MORPH_CLOSING" # 8
FEATEXT_ALL = "FEATEXTRACT_ALL"
FEATEXT_SPASUB = "FEATEXTRACT_SPASUB"
def pyxit_factory(
nb_subwindows=10,
sw_min_size_ratio=0.5, sw_max_size_ratio=1.,
sw_target_width=16, sw_target_height=16,
fixed_size=False,
sw_interpolation=SubWindowExtractor.INTERPOLATION_BILINEAR,
n_jobs=-1, verbosity=10, temp_folder=None,
random=True):
"""
Factory method to create :class:`PyxitCoordinator`
Parameters
----------
nb_subwindows : int >= 0 (default : 10)
The number of subwindow to extract
sw_min_size_ratio : float > 0 (default : 0.5)
The minimum size of a subwindow expressed as the ratio of the size
of the original image
sw_max_size_ratio : float : sw_min_size_ratio
<= sw_max_size_ratio <= 1 (default : 1.)
The maximim size of a subwindow expressed as the ratio of the size
of the original image
sw_target_width : int > 0 (default : 16)
The width of the subwindows after reinterpolation
sw_target_height : int > 0 (default : 16)
The height of the subwindows after reinterpolation
fixed_size : boolean (default : False)
Whether to use fixe size subwindow. If False, subwindows are drawn
randomly. If True, the target size is use as the subwindow size and
only the position is drawn randomly
sw_interpolation : int (default :
SubWindowExtractor.INTERPOLATION_BILINEAR)
The subwindow reinterpolation algorithm. For more information, see
:class:`SubWindowExtractor`
n_jobs : int >0 or -1 (default : -1)
The number of process to spawn for parallelizing the computation.
If -1, the maximum number is selected. See also :mod:`Joblib`.
verbosity : int >= 0 (default : 10)
The verbosity level
temp_folder : string (directory path) (default : None)
The temporary folder used for memmap. If none, some default folder
will be use (see the :class:`ParallelCoordinator`)
random : bool (default : True)
Whether to use randomness or use a predefined seed
Return
------
coordinator : :class:`Coordinator`
The PyxitCoordinator (possibly decorated) corresponding to the set
of parameters
Notes
-----
- Subwindow random generator
The subwindow random generator is a :class:`NumberGenerator` base
instance (generate real nubers uniformely).
- Feature extractor
Base instance of :class:`ImageLinearizationExtractor`
"""
swngSeed = 0
#Randomness
if random:
swngSeed = None
#SubWindowExtractor
swNumGenerator = NumberGenerator(seed=swngSeed)
if fixed_size:
sw_extractor = FixTargetSWExtractor(sw_target_width,
sw_target_height,
sw_interpolation,
swNumGenerator)
else:
sw_extractor = SubWindowExtractor(sw_min_size_ratio,
sw_max_size_ratio,
sw_target_width,
sw_target_height,
sw_interpolation,
swNumGenerator)
multi_sw_extractor = MultiSWExtractor(sw_extractor, nb_subwindows, True)
#FEATURE EXTRACTOR
feature_extractor = ImageLinearizationExtractor()
#LOGGER
autoFlush = verbosity >= 45
logger = ProgressLogger(StandardLogger(autoFlush=autoFlush,
verbosity=verbosity))
#COORDINATOR
coordinator = PyxitCoordinator(multi_sw_extractor, feature_extractor, logger,
verbosity)
if n_jobs != 1:
coordinator.parallelize(n_jobs, temp_folder)
return coordinator
def get_multi_poolers(poolings, finalHeight, finalWidth):
#Aggregator
poolers = []
for height, width, policy in poolings:
if policy is Const.POOLING_NONE:
poolers.append(IdentityPooler())
elif policy is Const.POOLING_AGGREG_AVG:
poolers.append(AverageAggregator(width, height,
finalWidth,
finalHeight))
elif policy is Const.POOLING_AGGREG_MAX:
poolers.append(MaximumAggregator(width, height,
finalWidth,
finalHeight))
elif policy is Const.POOLING_AGGREG_MIN:
poolers.append(MinimumAggregator(width, height,
finalWidth,
finalHeight))
elif policy is Const.POOLING_CONV_MIN:
poolers.append(FastMWMinPooler(height, width))
elif policy is Const.POOLING_CONV_AVG:
poolers.append(FastMWAvgPooler(height, width))
elif policy is Const.POOLING_CONV_MAX:
poolers.append(FastMWMaxPooler(height, width))
elif policy is Const.POOLING_MORPH_OPENING:
poolers.append(MorphOpeningPooler(height, width))
elif policy is Const.POOLING_MORPH_CLOSING:
poolers.append(MorphClosingPooler(height, width))
return MultiPooler(poolers)
def get_number_generator(genType, min_value, max_value, seed, **kwargs):
if genType is Const.RND_RU:
value_generatorerator = NumberGenerator(min_value, max_value, seed)
elif genType is Const.RND_SET:
probLaw = kwargs["probLaw"]
value_generatorerator = CustomDiscreteNumberGenerator(probLaw, seed)
elif genType is Const.RND_GAUSS:
if "outRange" in kwargs:
outRange = kwargs["outRange"]
value_generatorerator = GaussianNumberGenerator(min_value, max_value, seed,
outRange)
else:
value_generatorerator = GaussianNumberGenerator(min_value, max_value, seed)
return value_generatorerator
def get_filter_generator(policy, parameters, nb_filterss, random=False):
if policy == Const.FGEN_ORDERED:
#Parameters is a list of tuples (policy, parameters)
ls = []
subNbFilters = int(math.ceil(nb_filterss/len(parameters)))
for subPolicy, subParameters in parameters:
ls.append(get_filter_generator(subPolicy, subParameters,
subNbFilters, random))
return OrderedMFF(ls, nb_filterss)
if policy is Const.FGEN_CUSTOM:
print "Custom filters"
return custom_finite_3_same_filter()
#Parameters is a dictionary
valSeed = None
sizeSeed = None
shuffling_seed = None
perturbationSeed = None
cell_seed = None
sparseSeed = 5
if random:
valSeed = 1
sizeSeed = 2
shuffling_seed = 3
perturbationSeed = 4
cell_seed = 5
sparseSeed = 6
min_size = parameters["min_size"]
max_size = parameters["max_size"]
size_generatorerator = OddUniformGenerator(min_size, max_size, seed=sizeSeed)
min_val = parameters["min_val"]
max_val = parameters["max_val"]
value_generator = parameters["value_generator"]
value_generatorerator = get_number_generator(value_generator, min_val, max_val,
valSeed, **parameters)
normalization = None
if "normalization" in parameters:
normalization = parameters["normalization"]
if policy is Const.FGEN_ZEROPERT:
print "Zero perturbation filters"
baseFilterGenerator = FilterGenerator(value_generatorerator, size_generatorerator,
normalisation=normalization)
elif policy is Const.FGEN_IDPERT:
print "Id perturbation filters"
baseFilterGenerator = IdPerturbatedFG(value_generatorerator, size_generatorerator,
normalisation=normalization)
elif policy is Const.FGEN_IDDIST:
print "Id distance filters"
max_dist = parameters["max_dist"]
baseFilterGenerator = IdMaxL1DistPerturbFG(value_generatorerator, size_generatorerator,
max_dist,
normalisation=normalization,
shuffling_seed=shuffling_seed)
elif policy is Const.FGEN_STRAT:
print "Stratified filters"
nb_cells = parameters["strat_nb_cells"]
minPerturbation = 0
if "minPerturbation" in parameters:
minPerturbation = parameters["minPerturbation"]
maxPerturbation = 1
if "maxPerturbation" in parameters:
maxPerturbation = parameters["maxPerturbation"]
perturbationGenerator = get_number_generator(value_generator,
minPerturbation,
maxPerturbation,
perturbationSeed)
baseFilterGenerator = StratifiedFG(min_val, max_val, nb_cells,
perturbationGenerator,
size_generatorerator,
normalisation=normalization,
cell_seed=cell_seed)
if "sparse_proba" in parameters:
print "Adding sparcity"
sparse_proba = parameters["sparse_proba"]
baseFilterGenerator = SparsityDecoratorFG(baseFilterGenerator,
sparse_proba,
sparseSeed)
print "Returning filters"
return Finite3SameFilter(baseFilterGenerator, nb_filterss)
def get_feature_extractor(policy, **kwargs):
if policy is Const.FEATEXT_SPASUB:
nbCol = kwargs.get("nbCol", 2)
return DepthCompressorILE(nbCol)
else: # Suupose Const.FEATEXT_ALL
return ImageLinearizationExtractor()
#TODO : include in randconv : (Const.FEATEXT_ALL, {}), (Const.FEATEXT_SPASUB, {"nbCol":2})
def randconv_factory(
nb_filters=5,
filter_policy=(Const.FGEN_ZEROPERT,
{"min_size": 2, "max_size": 32, "min_val": -1, "max_val": 1,
"value_generator": Const.RND_RU,
"normalization": FilterGenerator.NORMALISATION_MEANVAR}),
poolings=[(3, 3, Const.POOLING_AGGREG_AVG)],
extractor=(Const.FEATEXT_ALL, {}),
nb_subwindows=10,
sw_min_size_ratio=0.5, sw_max_size_ratio=1.,
sw_target_width=16, sw_target_height=16,
sw_interpolation=SubWindowExtractor.INTERPOLATION_BILINEAR,
include_original_img=False,
n_jobs=-1, verbosity=10, temp_folder=None,
random=True):
"""
Factory method to create :class:`RandConvCoordinator` tuned for RGB images
Parameters
----------
nb_filterss : int >= 0 (default : 5)
The number of filter
filter_policy : pair (policyType, parameters)
policyType : one of Const.FGEN_*
The type of filter generation policy to use
parameters : dict
The parameter dictionnary to forward to :func:`get_filter_generator`
poolings : iterable of triple (height, width, policy) (default :
[(3, 3, Const.POOLING_AGGREG_AVG)])
A list of parameters to instanciate the according :class:`Pooler`
height : int > 0
the height of the neighborhood window
width : int > 0
the width of the neighborhood window
policy : int in {Const.POOLING_NONE, Const.POOLING_AGGREG_MIN,
Const.POOLING_AGGREG_AVG, Const.POOLING_AGGREG_MAX,
Const.POOLING_CONV_MIN, Const.POOLING_CONV_AVG, Const.POOLING_CONV_MAX}
nb_subwindows : int >= 0 (default : 10)
The number of subwindow to extract
sw_min_size_ratio : float > 0 (default : 0.5)
The minimum size of a subwindow expressed as the ratio of the size
of the original image
sw_max_size_ratio : float : sw_min_size_ratio
<= sw_max_size_ratio <= 1 (default : 1.)
The maximim size of a subwindow expressed as the ratio of the size
of the original image
sw_target_width : int > 0 (default : 16)
The width of the subwindows after reinterpolation
sw_target_height : int > 0 (default : 16)
The height of the subwindows after reinterpolation
sw_interpolation : int (default :
SubWindowExtractor.INTERPOLATION_BILINEAR)
The subwindow reinterpolation algorithm. For more information, see
:class:`SubWindowExtractor`
include_original_img : boolean (default : False)
Whether or not to include the original image in the subwindow
extraction process
n_jobs : int >0 or -1 (default : -1)
The number of process to spawn for parallelizing the computation.
If -1, the maximum number is selected. See also :mod:`Joblib`.
verbosity : int >= 0 (default : 10)
The verbosity level
temp_folder : string (directory path) (default : None)
The temporary folder used for memmap. If none, some default folder
will be use (see the :class:`ParallelCoordinator`)
random : bool (default : True)
Whether to use randomness or use a predefined seed
Return
------
coordinator : :class:`Coordinator`
The RandConvCoordinator corresponding to the
set of parameters
Notes
-----
- Filter generator
Base instance of :class:`Finite3SameFilter` with a base instance of
:class:`NumberGenerator` for the values and
:class:`OddUniformGenerator` for the sizes
- Filter size
The filter are square (same width as height)
- Convolver
Base instance of :class:`RGBConvolver`
- Subwindow random generator
The subwindow random generator is a :class:`NumberGenerator` base
instance (generate real nubers uniformely).
- Feature extractor
Base instance of :class:`ImageLinearizationExtractor`
"""
#RANDOMNESS
swngSeed = None
if random is False:
swngSeed = 0
#CONVOLUTIONAL EXTRACTOR
#Filter generator
#Type/policy parameters, #filters, random
filter_policyType, filter_policyParam = filter_policy
filter_generator = get_filter_generator(filter_policyType, filter_policyParam,
nb_filters, random)
#Convolver
convolver = RGBConvolver()
#Aggregator
multi_pooler = get_multi_poolers(poolings, sw_target_height,
sw_target_width)
#SubWindowExtractor
swNumGenerator = NumberGenerator(seed=swngSeed)
sw_extractor = SubWindowExtractor(sw_min_size_ratio,
sw_max_size_ratio,
sw_target_width,
sw_target_height,
sw_interpolation, swNumGenerator)
multi_sw_extractor = MultiSWExtractor(sw_extractor, nb_subwindows, False)
#ConvolutionalExtractor
convolutional_extractor = ConvolutionalExtractor(filter_generator,
convolver,
multi_sw_extractor,
multi_pooler,
include_original_img)
#FEATURE EXTRACTOR
feature_extractor = get_feature_extractor(extractor[0], **extractor[1])
#COORDINATOR
coordinator = RandConvCoordinator(convolutional_extractor, feature_extractor)
if n_jobs != 1:
coordinator.parallelize(n_jobs, temp_folder)
return coordinator
# --- api/config/h5Template/tanmuContent.py (repo: jimbunny/wedding-invitation, license: MIT) ---
tanmuContent = '''
<style>
    .barrage-input-tip {
        z-index: 1999;
        position: absolute;
        left: 10px;
        width: 179.883px;
        height: 35.7422px;
        line-height: 35.7422px;
        border-radius: 35.7422px;
        box-sizing: border-box;
        color: rgb(255, 255, 255);
        margin-left: 45.7031px;
        background-color: {{ data.tanmuBtnColor }};
        opacity: 0.65;
        pointer-events: initial;
        padding: 0px 16.9922px;
        font-size: 14.0625px;
        display: block;
    }
    .data-box { display: none }
    .barrage_box_top { width: 100%; height: 160px; margin: 0px auto; }
    .barrage_box_top .barrage-row { margin-bottom: 20px; }
    .barrage_box_top .barrage-item {
        background-color: {{ data.tanmuColor }};
        margin-bottom: 10px;
        white-space: nowrap;
        color: {{ data.fontColor }};
        font-size: 12px;
        transform: scale(1);
        opacity: 1;
        transition: all 0.65s ease-in 0s;
        padding: 6px 8px 0px 8px;
        height: 32px;
        display: inline-block;
        border-radius: 25px;
    }
</style>
<div class="maka-barrage-dom" style="top: 0px; left: 0px; background-color: transparent; z-index: 1000;">
    <div class="barrage-content" style="position: fixed; box-sizing: border-box; padding: 11.7188px; right: 0px; bottom: 0px; z-index: 1000; width: 100%; pointer-events: none; background: linear-gradient(rgba(0, 0, 0, 0) 0%, rgba(0, 0, 0, 0.2) 100%);">
        <div class="barrage-words row" style="margin-top: 11.7188px; height: 212.695px;">
            <div class="barrage-word" style="min-height: 32.2266px; line-height: 32.2266px; font-size: 12.8906px; padding: 4.10156px; border-radius: 22.8516px; bottom: 94.3359px; max-width: 310.547px; background-color: rgba(47, 50, 52, 0.6); transform: scale(1); opacity: 0; transition: bottom 2s ease-out 0s, opacity 0.75s linear 0.75s;">
            </div>
        </div>
        <div class="barrage-bottom row" id="barrageBtn" style="padding-bottom: env(safe-area-inset-bottom); margin-top: 14.0625px; position: fixed; left: 11.7188px; bottom: 47px; pointer-events: initial;">
            <div class="barrage-input-tip" data-toggle="modal" data-target="#myModal" style="background:{{ data.tanmuColor }}; width: 179.883px; height: 35.7422px; line-height: 35.7422px; border-radius: 35.7422px; box-sizing: border-box; color: rgb(255, 255, 255); margin-left: 45.7031px; background-color: rgb(47, 50, 52); opacity: 0.65; pointer-events: initial; padding: 0px 16.9922px; font-size: 14.0625px;">ฝากคำอวยพร...</div>
        </div>
        <div class="backdrop" style="position: fixed; width: 100%; height: 100%; background-color: rgba(0, 0, 0, 0); z-index: 999; display: none; top: 0px; left: 0px; pointer-events: initial;"></div>
<div class="barrage-btn tanBtn" style="padding-bottom: env(safe-area-inset-bottom); margin-top: 14.0625px; position: fixed; left: 11.7188px; bottom: 11.7188px; pointer-events: initial;">
<div class="correct-icon" id="tanmuOpen" style="background: url('https://i.ibb.co/1QmGHWV/danmu-open1.png') 0% 0% / contain no-repeat; border-radius: 100%; width: 35.7422px; height: 35.7422px;"></div>
<div class="close-icon" id="tanmuClose" style="background: url('https://i.ibb.co/QNwcxLx/danmu-close1.png') 0% 0% / contain no-repeat; border-radius: 100%; width: 35.7422px; height: 35.7422px; display: none;">
<b style="position: absolute; color: rgb(255, 255, 255); top: 2.92969px; left: 19.9219px; font-weight: 600; font-size: 8.78906px; transform: scale(0.8);">{{ data.greetings | length }}</b>
</div>
</div>
<div id="j-barrage-top" class="barrage_box barrage_box_top" style="position: fixed; box-sizing: border-box; padding: 0px; right: 0px; bottom: 0px; z-index: 1000; width: 100%; pointer-events: none;"></div>
</div>
<div class="barrage-input-wrap" id="modalShow" style="display: none; position: fixed; left: 0px; bottom: 0px;height: 0px; width: 100%; background-color:transparent; padding: 9.375px 11.7188px; box-sizing: border-box; z-index: 2000; pointer-events: initial;">
<!-- Modal dialog -->
<div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true">
<div style="width:100%;" class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" style="cursor: pointer;" data-dismiss="modal" aria-hidden="true">×</button>
<h4 class="modal-title" id="myModalLabel">อวยพร</h4>
</div>
<div class="modal-body">
<form action="" id="form" class="form-horizontal">
<div class="form-group">
<div class="col-md-24" style="padding-left:10px;padding-right: 10px;">
<input type="text" class="form-control" style="width:100% !important;" name="name" placeholder="ชื่อ-นามสกุล" />
</div>
</div>
<div class="form-group">
<div class="col-md-24" style="padding-left:10px;padding-right: 10px;">
<input type="text" class="form-control" style="width:100% !important;" name="greetings" placeholder="คำอวยพร" />
</div>
</div>
<div class="form-group">
<div class="col-md-24 col-md-offset-2" style="padding-left:10px;padding-right: 10px;">
<button id="subBtn" type="submit" class="btn btn-primary" style="width:100%;">ส่ง</button>
</div>
</div>
</form>
</div>
</div><!-- /.modal-content -->
</div><!-- /.modal-dialog -->
</div>
<!-- /.modal -->
</div>
</div>
<div class="alert alert-danger hide">ส่งคำอวยพรล้มเหลว!</div>
<div class="alert alert-success hide">ส่งคำอวยพรสำเร็จ!</div>
<script src="/static/js/bootstrap.min.js"></script>
<script src="/static/js/bootstrapValidator.min.js"></script>
<script type="text/javascript" src="/static/js/index.js"></script>
<style type="text/css">
*{
padding:0;
margin:0;
}
a{
text-decoration: none;
}
.form-control{
display: inline-block;
width: auto;
padding: 6px 12px;
font-size: 14px;
line-height: 1.42857143;
color: #555;
background-color: #fff;
background-image: none;
border: 1px solid #ccc;
border-radius: 4px;
-webkit-box-shadow: inset 0 1px 1px rgba(0,0,0,.075);
box-shadow: inset 0 1px 1px rgba(0,0,0,.075);
-webkit-transition: border-color ease-in-out .15s,-webkit-box-shadow ease-in-out .15s;
-o-transition: border-color ease-in-out .15s,box-shadow ease-in-out .15s;
transition: border-color ease-in-out .15s,box-shadow ease-in-out .15s;
}
.btn{
display: inline-block;
padding: 6px 12px;
margin-bottom: 0;
font-size: 14px;
font-weight: 400;
line-height: 1.42857143;
text-align: center;
white-space: nowrap;
vertical-align: middle;
-ms-touch-action: manipulation;
touch-action: manipulation;
cursor: pointer;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
background-image: none;
border: 1px solid transparent;
border-radius: 4px;
}
.btn-primary {
color: #fff;
background-color: #337ab7;
border-color: #2e6da4;
}
/* main component styles */
.overflow-text{
display: block;
white-space:nowrap;
overflow:hidden;
text-overflow:ellipsis;
opacity:0;
clear: both;
padding:0 10px;
border-radius: 10px;
box-sizing: border-box;
max-width: 100%;
color:#fff;
animation:colorchange 3s infinite alternate;
-webkit-animation:colorchange 3s infinite alternate; /*Safari and Chrome*/
}
@keyframes colorchange{
0%{
color:red;
}
50%{
color:green;
}
100%{
color:#6993f9;
}
}
/* main component styles */
.alert{
position: fixed;
width: 50%;
margin-left: 20%;
z-index: 2000;
}
</style>
<script type="text/javascript">
var Obj;
$.ajax({
    // a few parameters worth noting
    type: "GET", // HTTP method
    dataType: "json", // expected data type of the server response
    url: "/api/v1/h5/greetings/"+{{ data.id }}, // url
    success: function (result) {
        console.log(result); // log the server response (for debugging)
        if (result.code == 0) {
            // initialize the barrage data
            Obj = $('#j-barrage-top').barrage({
                data : result.data, // data list
                row : 1, // number of rows to display
                time : 2500, // interval between items
                gap : 100, // gap between each item
                position : 'fixed', // positioning
                direction : 'bottom left', // direction
                ismoseoverclose : true, // pause while hovering
                height : 30, // height of each item div
            })
            Obj.start();
        } else {
            alert("tanmu Error");
        };
    },
    error : function() {
        alert("tanmu Error");
    }
});
</script>
<script>
$("#barrageBtn").click(function() {
    var modalShowDiv = document.getElementById('modalShow');
    modalShowDiv.style.display = 'block';
})
var kg = true; // toggle flag used for the if/else branching below
$(".tanBtn").click(function() { // click handler for the barrage toggle button
    if (kg) { // check the flag
        var tanmuOpenDiv = document.getElementById('tanmuOpen');
        tanmuOpenDiv.style.display = 'block';
        var tanmuCloseDiv = document.getElementById('tanmuClose');
        tanmuCloseDiv.style.display = 'none';
        Obj.start();
        var barrageBtnDiv = document.getElementById('barrageBtn');
        barrageBtnDiv.style.display = 'block';
    } else {
        var tanmuOpenDiv = document.getElementById('tanmuOpen');
        tanmuOpenDiv.style.display = 'none';
        var tanmuCloseDiv = document.getElementById('tanmuClose');
        tanmuCloseDiv.style.display = 'block';
        Obj.close();
        var barrageBtnDiv = document.getElementById('barrageBtn');
        barrageBtnDiv.style.display = 'none';
    }
    kg = !kg; // negate the flag; without this, clicking to toggle back would have no effect
})
$('#myModal').on('hidden.bs.modal', function (e) {
    // Reset the form and clear its validation state
    document.getElementById("form").reset();
    $('#form').bootstrapValidator("resetForm", true);
})
$('form').bootstrapValidator({
    // default message
    message: 'This value is not valid',
    // icons shown on the right side of the form fields
    feedbackIcons: {
        valid: 'glyphicon glyphicon-ok',
        invalid: 'glyphicon glyphicon-remove',
        validating: 'glyphicon glyphicon-refresh'
    },
    excluded: [':disabled'],
    submitHandler: function (validator, form, submitButton) {
        // called when the form is submitted successfully
        // validator: the form validator instance
        // form: jQuery object for the form
        // submitButton: jQuery object for the submit button
    },
    fields: {
        name: {
            message: 'โปรดกรอกชื่อ, ความยาวไม่เกิน 20 ตัวอักษร',
            validators: {
                notEmpty: { // must not be empty
                    message: 'โปรดกรอกชื่อ'
                },
                stringLength: {
                    max: 20,
                    message: 'ความยาวไม่เกิน 20 ตัวอักษร'
                },
            }
        },
        greetings: {
            message: 'โปรดกรอกคำอวยพร, ความยาวไม่เกิน 40 ตัวอักษร',
            validators: {
                notEmpty: {
                    message: 'โปรดกรอกคำอวยพร'
                },
                stringLength: {
                    max: 40,
                    message: 'ความยาวไม่เกิน 40 ตัวอักษร'
                },
            }
        },
    }
});
var that = this;
$("#subBtn").click(function () { // a non-submit button needs explicit validation; a real submit button validates automatically
    $("form").bootstrapValidator('validate'); // run validation
    if ($("form").data('bootstrapValidator').isValid()) { // if validation passed, run the code below
        $.ajax({
            // a few parameters worth noting
            type: "POST", // HTTP method
            dataType: "json", // expected data type of the server response
            url: "/api/v1/h5/greetings/"+{{ data.id }}, // url
            data: $('#form').serialize(),
            success: function (result) {
                console.log(result); // log the server response (for debugging)
                if (result.code == 0) {
                    $("#myModal").modal('hide');
                    // add the new greeting
                    // this format must match the data format in dataa.js
                    var addVal = {
                        text : result.data
                    }
                    // push it into the data array
                    Obj.data.unshift(addVal);
                    $(".alert-success").addClass("show");
                    window.setTimeout(function(){
                        $(".alert-success").removeClass("show");
                    }, 1000); // display duration
                } else {
                    $(".alert-danger").addClass("show");
                    window.setTimeout(function(){
                        $(".alert-danger").removeClass("show");
                    }, 1000); // display duration
                };
            },
            error : function() {
                {#alert("Error!");#}
                $(".alert-danger").addClass("show");
                window.setTimeout(function(){
                    $(".alert-danger").removeClass("show");
                }, 1000); // display duration
            }
        });
    }
});
</script>
''' | 39.017857 | 427 | 0.564607 | 1,499 | 13,110 | 4.955971 | 0.274183 | 0.024768 | 0.014807 | 0.014538 | 0.390362 | 0.353076 | 0.330192 | 0.316059 | 0.237448 | 0.237448 | 0 | 0.061124 | 0.272464 | 13,110 | 336 | 428 | 39.017857 | 0.713986 | 0 | 0 | 0.316614 | 0 | 0.07837 | 0.998322 | 0.2054 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.00627 | 0 | 0.00627 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac622bca39310127b42776aefdbd9c65467abc04 | 871 | py | Python | example/example2.py | xrloong/xrSolver | 4f36660b78456840f65215ffce0481cdc280f980 | [
"Apache-2.0"
] | null | null | null | example/example2.py | xrloong/xrSolver | 4f36660b78456840f65215ffce0481cdc280f980 | [
"Apache-2.0"
] | null | null | null | example/example2.py | xrloong/xrSolver | 4f36660b78456840f65215ffce0481cdc280f980 | [
"Apache-2.0"
] | null | null | null | from xrsolver import Problem
import solver
# This example is the second case from https://www.youtube.com/watch?v=WJEZh7GWHnw
s = solver.Solver()
p = Problem()
x1 = p.generateVariable("x1", lb=0, ub=3)
x2 = p.generateVariable("x2", lb=0, ub=3)
x3 = p.generateVariable("x3", lb=0, ub=3)
x4 = p.generateVariable("x4", lb=0, ub=3)
x5 = p.generateVariable("x5", lb=0, ub=3)
p.addVariable(x1)
p.addVariable(x2)
p.addVariable(x3)
p.addVariable(x4)
p.addVariable(x5)
p.appendConstraint(x1 + x2 <= 5)
p.appendConstraint(x2 <= 0.5 * (x1 + x2))
p.appendConstraint(x5 >= 0.4 * (x3 + x4))
p.appendConstraint(x1 + x2 + x3 + x4 + x5 == 10)
p.appendObjective(8.1 * x1 + 10.5 * x2 + 6.4 * x3 + 7.5 * x4 + 5.0 * x5)
s.solveProblem(p)
print("x1 =", x1.getValue())
print("x2 =", x2.getValue())
print("x3 =", x3.getValue())
print("x4 =", x4.getValue())
print("x5 =", x5.getValue())
| 23.540541 | 82 | 0.64868 | 147 | 871 | 3.843537 | 0.292517 | 0.150442 | 0.044248 | 0.053097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096515 | 0.143513 | 871 | 36 | 83 | 24.194444 | 0.660858 | 0.091848 | 0 | 0 | 1 | 0 | 0.038071 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.08 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac67224e0a480ab178264f670f037b9c677d4fdc | 358 | py | Python | paint/migrations/0007_auto_20200405_1748.py | atulk17/Paint-App | 4b56455596d140cee4a9b19c71fe82364c3f3b7c | [
"BSD-2-Clause"
] | null | null | null | paint/migrations/0007_auto_20200405_1748.py | atulk17/Paint-App | 4b56455596d140cee4a9b19c71fe82364c3f3b7c | [
"BSD-2-Clause"
] | null | null | null | paint/migrations/0007_auto_20200405_1748.py | atulk17/Paint-App | 4b56455596d140cee4a9b19c71fe82364c3f3b7c | [
"BSD-2-Clause"
] | 1 | 2020-05-31T11:37:48.000Z | 2020-05-31T11:37:48.000Z | # Generated by Django 3.0.4 on 2020-04-05 12:18
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('paint', '0006_auto_20200405_1746'),
]
operations = [
migrations.AlterModelTable(
name='office_expense',
table='Office_Expense',
),
]
| 19.888889 | 48 | 0.578212 | 36 | 358 | 5.611111 | 0.833333 | 0.128713 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127049 | 0.318436 | 358 | 17 | 49 | 21.058824 | 0.70082 | 0.125698 | 0 | 0 | 1 | 0 | 0.190476 | 0.078231 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac6f2dbc609bab1cd3af2ace2bafd614f0610168 | 10,226 | py | Python | sample_facemesh.py | swipswaps/mediapipe-python | 00700129ced41dcdab174cd46454f5e7e3d9e25b | [
"Apache-2.0"
] | 92 | 2021-03-09T08:27:17.000Z | 2022-03-09T08:20:48.000Z | sample_facemesh.py | swipswaps/mediapipe-python | 00700129ced41dcdab174cd46454f5e7e3d9e25b | [
"Apache-2.0"
] | 1 | 2021-12-23T05:15:26.000Z | 2022-02-21T20:35:21.000Z | sample_facemesh.py | swipswaps/mediapipe-python | 00700129ced41dcdab174cd46454f5e7e3d9e25b | [
"Apache-2.0"
] | 46 | 2021-03-08T10:24:54.000Z | 2021-12-20T07:12:48.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import copy
import argparse
import cv2 as cv
import numpy as np
import mediapipe as mp
from utils import CvFpsCalc
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument("--device", type=int, default=0)
parser.add_argument("--width", help='cap width', type=int, default=960)
parser.add_argument("--height", help='cap height', type=int, default=540)
parser.add_argument("--max_num_faces", type=int, default=1)
parser.add_argument("--min_detection_confidence",
help='min_detection_confidence',
type=float,
default=0.7)
    parser.add_argument("--min_tracking_confidence",
                        help='min_tracking_confidence',
                        type=float,
                        default=0.5)
parser.add_argument('--use_brect', action='store_true')
args = parser.parse_args()
return args
def main():
    # Parse arguments #######################################################
args = get_args()
cap_device = args.device
cap_width = args.width
cap_height = args.height
max_num_faces = args.max_num_faces
min_detection_confidence = args.min_detection_confidence
min_tracking_confidence = args.min_tracking_confidence
use_brect = args.use_brect
    # Prepare camera ########################################################
cap = cv.VideoCapture(cap_device)
cap.set(cv.CAP_PROP_FRAME_WIDTH, cap_width)
cap.set(cv.CAP_PROP_FRAME_HEIGHT, cap_height)
    # Load model ############################################################
mp_face_mesh = mp.solutions.face_mesh
face_mesh = mp_face_mesh.FaceMesh(
max_num_faces=max_num_faces,
min_detection_confidence=min_detection_confidence,
min_tracking_confidence=min_tracking_confidence,
)
    # FPS measurement module ################################################
cvFpsCalc = CvFpsCalc(buffer_len=10)
while True:
display_fps = cvFpsCalc.get()
        # Capture frame #####################################################
ret, image = cap.read()
if not ret:
break
        image = cv.flip(image, 1)  # mirror display
debug_image = copy.deepcopy(image)
        # Run detection #####################################################
image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
results = face_mesh.process(image)
        # Drawing ###########################################################
if results.multi_face_landmarks is not None:
for face_landmarks in results.multi_face_landmarks:
                # compute the bounding rectangle
brect = calc_bounding_rect(debug_image, face_landmarks)
                # draw
debug_image = draw_landmarks(debug_image, face_landmarks)
debug_image = draw_bounding_rect(use_brect, debug_image, brect)
cv.putText(debug_image, "FPS:" + str(display_fps), (10, 30),
cv.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2, cv.LINE_AA)
        # Key handling (ESC: quit) ##########################################
key = cv.waitKey(1)
if key == 27: # ESC
break
        # Show the frame ####################################################
cv.imshow('MediaPipe Face Mesh Demo', debug_image)
cap.release()
cv.destroyAllWindows()
def calc_bounding_rect(image, landmarks):
image_width, image_height = image.shape[1], image.shape[0]
landmark_array = np.empty((0, 2), int)
for _, landmark in enumerate(landmarks.landmark):
landmark_x = min(int(landmark.x * image_width), image_width - 1)
landmark_y = min(int(landmark.y * image_height), image_height - 1)
landmark_point = [np.array((landmark_x, landmark_y))]
landmark_array = np.append(landmark_array, landmark_point, axis=0)
x, y, w, h = cv.boundingRect(landmark_array)
return [x, y, x + w, y + h]
def draw_landmarks(image, landmarks):
image_width, image_height = image.shape[1], image.shape[0]
landmark_point = []
for index, landmark in enumerate(landmarks.landmark):
if landmark.visibility < 0 or landmark.presence < 0:
continue
landmark_x = min(int(landmark.x * image_width), image_width - 1)
landmark_y = min(int(landmark.y * image_height), image_height - 1)
landmark_point.append((landmark_x, landmark_y))
cv.circle(image, (landmark_x, landmark_y), 1, (0, 255, 0), 1)
if len(landmark_point) > 0:
        # Reference: https://github.com/tensorflow/tfjs-models/blob/master/facemesh/mesh_map.jpg
        # Left eyebrow (55: inner end, 46: outer end)
cv.line(image, landmark_point[55], landmark_point[65], (0, 255, 0), 2)
cv.line(image, landmark_point[65], landmark_point[52], (0, 255, 0), 2)
cv.line(image, landmark_point[52], landmark_point[53], (0, 255, 0), 2)
cv.line(image, landmark_point[53], landmark_point[46], (0, 255, 0), 2)
        # Right eyebrow (285: inner end, 276: outer end)
cv.line(image, landmark_point[285], landmark_point[295], (0, 255, 0),
2)
cv.line(image, landmark_point[295], landmark_point[282], (0, 255, 0),
2)
cv.line(image, landmark_point[282], landmark_point[283], (0, 255, 0),
2)
cv.line(image, landmark_point[283], landmark_point[276], (0, 255, 0),
2)
        # Left eye (133: inner corner, 246: outer corner)
cv.line(image, landmark_point[133], landmark_point[173], (0, 255, 0),
2)
cv.line(image, landmark_point[173], landmark_point[157], (0, 255, 0),
2)
cv.line(image, landmark_point[157], landmark_point[158], (0, 255, 0),
2)
cv.line(image, landmark_point[158], landmark_point[159], (0, 255, 0),
2)
cv.line(image, landmark_point[159], landmark_point[160], (0, 255, 0),
2)
cv.line(image, landmark_point[160], landmark_point[161], (0, 255, 0),
2)
cv.line(image, landmark_point[161], landmark_point[246], (0, 255, 0),
2)
cv.line(image, landmark_point[246], landmark_point[163], (0, 255, 0),
2)
cv.line(image, landmark_point[163], landmark_point[144], (0, 255, 0),
2)
cv.line(image, landmark_point[144], landmark_point[145], (0, 255, 0),
2)
cv.line(image, landmark_point[145], landmark_point[153], (0, 255, 0),
2)
cv.line(image, landmark_point[153], landmark_point[154], (0, 255, 0),
2)
cv.line(image, landmark_point[154], landmark_point[155], (0, 255, 0),
2)
cv.line(image, landmark_point[155], landmark_point[133], (0, 255, 0),
2)
        # Right eye (362: inner corner, 466: outer corner)
cv.line(image, landmark_point[362], landmark_point[398], (0, 255, 0),
2)
cv.line(image, landmark_point[398], landmark_point[384], (0, 255, 0),
2)
cv.line(image, landmark_point[384], landmark_point[385], (0, 255, 0),
2)
cv.line(image, landmark_point[385], landmark_point[386], (0, 255, 0),
2)
cv.line(image, landmark_point[386], landmark_point[387], (0, 255, 0),
2)
cv.line(image, landmark_point[387], landmark_point[388], (0, 255, 0),
2)
cv.line(image, landmark_point[388], landmark_point[466], (0, 255, 0),
2)
cv.line(image, landmark_point[466], landmark_point[390], (0, 255, 0),
2)
cv.line(image, landmark_point[390], landmark_point[373], (0, 255, 0),
2)
cv.line(image, landmark_point[373], landmark_point[374], (0, 255, 0),
2)
cv.line(image, landmark_point[374], landmark_point[380], (0, 255, 0),
2)
cv.line(image, landmark_point[380], landmark_point[381], (0, 255, 0),
2)
cv.line(image, landmark_point[381], landmark_point[382], (0, 255, 0),
2)
cv.line(image, landmark_point[382], landmark_point[362], (0, 255, 0),
2)
        # Mouth (308: right corner, 78: left corner)
cv.line(image, landmark_point[308], landmark_point[415], (0, 255, 0),
2)
cv.line(image, landmark_point[415], landmark_point[310], (0, 255, 0),
2)
cv.line(image, landmark_point[310], landmark_point[311], (0, 255, 0),
2)
cv.line(image, landmark_point[311], landmark_point[312], (0, 255, 0),
2)
cv.line(image, landmark_point[312], landmark_point[13], (0, 255, 0), 2)
cv.line(image, landmark_point[13], landmark_point[82], (0, 255, 0), 2)
cv.line(image, landmark_point[82], landmark_point[81], (0, 255, 0), 2)
cv.line(image, landmark_point[81], landmark_point[80], (0, 255, 0), 2)
cv.line(image, landmark_point[80], landmark_point[191], (0, 255, 0), 2)
cv.line(image, landmark_point[191], landmark_point[78], (0, 255, 0), 2)
cv.line(image, landmark_point[78], landmark_point[95], (0, 255, 0), 2)
cv.line(image, landmark_point[95], landmark_point[88], (0, 255, 0), 2)
cv.line(image, landmark_point[88], landmark_point[178], (0, 255, 0), 2)
cv.line(image, landmark_point[178], landmark_point[87], (0, 255, 0), 2)
cv.line(image, landmark_point[87], landmark_point[14], (0, 255, 0), 2)
cv.line(image, landmark_point[14], landmark_point[317], (0, 255, 0), 2)
cv.line(image, landmark_point[317], landmark_point[402], (0, 255, 0),
2)
cv.line(image, landmark_point[402], landmark_point[318], (0, 255, 0),
2)
cv.line(image, landmark_point[318], landmark_point[324], (0, 255, 0),
2)
cv.line(image, landmark_point[324], landmark_point[308], (0, 255, 0),
2)
return image
def draw_bounding_rect(use_brect, image, brect):
if use_brect:
        # bounding rectangle
cv.rectangle(image, (brect[0], brect[1]), (brect[2], brect[3]),
(0, 255, 0), 2)
return image
if __name__ == '__main__':
main()
| 38.588679 | 88 | 0.552415 | 1,319 | 10,226 | 4.089462 | 0.172858 | 0.28198 | 0.05469 | 0.064516 | 0.436782 | 0.410085 | 0.346682 | 0.346682 | 0.346682 | 0.063033 | 0 | 0.097927 | 0.264033 | 10,226 | 264 | 89 | 38.734848 | 0.618788 | 0.030413 | 0 | 0.26455 | 0 | 0 | 0.02265 | 0.01047 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026455 | false | 0 | 0.031746 | 0 | 0.079365 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
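The `calc_bounding_rect` helper in `sample_facemesh.py` above reduces to a min/max over the scaled landmark coordinates. A minimal, dependency-free sketch of the same idea (the sample points are made up; OpenCV's `cv.boundingRect` additionally uses an inclusive width/height convention, so results can differ by one pixel on the right/bottom edge):

```python
def bounding_rect(landmarks, image_width, image_height):
    """Return [left, top, right, bottom] for normalized (x, y) landmarks."""
    xs = [min(int(x * image_width), image_width - 1) for x, _ in landmarks]
    ys = [min(int(y * image_height), image_height - 1) for _, y in landmarks]
    # mirrors the [x, y, x + w, y + h] shape returned by calc_bounding_rect
    return [min(xs), min(ys), max(xs), max(ys)]


points = [(0.25, 0.5), (0.5, 0.25), (0.75, 0.75)]  # assumed sample data
print(bounding_rect(points, 100, 100))  # → [25, 25, 75, 75]
```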
ac6f3083292de976db6a89e3601228fd50986b48 | 1,413 | py | Python | andromeda/modules/loans/views/inventory_loans.py | sango09/andromeda_api_rest | b4a3267146f4f9a985fb3f512e652d4ff354bba2 | [
"MIT"
] | 1 | 2021-09-08T18:58:16.000Z | 2021-09-08T18:58:16.000Z | andromeda/modules/loans/views/inventory_loans.py | sango09/andromeda_api_rest | b4a3267146f4f9a985fb3f512e652d4ff354bba2 | [
"MIT"
] | null | null | null | andromeda/modules/loans/views/inventory_loans.py | sango09/andromeda_api_rest | b4a3267146f4f9a985fb3f512e652d4ff354bba2 | [
"MIT"
] | null | null | null | """Inventory view for the technology loans module."""
# Django REST Framework
from rest_framework import viewsets, mixins
# Permissions
from rest_framework.permissions import IsAuthenticated
from andromeda.modules.inventory.permissions import IsAdmin, IsStaff
# Models
from andromeda.modules.loans.models import InventoryLoans
# Serializers
from andromeda.modules.loans.serializers import InventoryLoansSerializer, CreateInventoryLoansSerializer
class InventoryLoansViewSet(mixins.RetrieveModelMixin,
mixins.CreateModelMixin,
mixins.ListModelMixin,
mixins.UpdateModelMixin,
viewsets.GenericViewSet):
    """View set for the inventory of the technology loans module."""
queryset = InventoryLoans.objects.all()
def get_permissions(self):
        """Assign permissions based on the action."""
permissions = [IsAuthenticated]
if self.action in ['destroy']:
permissions.append(IsAdmin)
elif self.action in ['update', 'partial_update']:
permissions.append(IsStaff)
return (p() for p in permissions)
def get_serializer_class(self):
        """Assign the serializer based on the action."""
if self.action == 'create':
return CreateInventoryLoansSerializer
return InventoryLoansSerializer
| 36.230769 | 104 | 0.685775 | 132 | 1,413 | 7.295455 | 0.5 | 0.040498 | 0.062305 | 0.060228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242038 | 1,413 | 38 | 105 | 37.184211 | 0.89916 | 0.184006 | 0 | 0 | 0 | 0 | 0.029229 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.227273 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ac7e4bb5b2639e3ce5fc5958996d880251838bdd | 317 | py | Python | setup.py | algerbrex/plex | 0d7096634d13ee4d695b580892894910eba6a4eb | [
"MIT"
] | 2 | 2018-02-15T16:26:54.000Z | 2021-11-08T12:26:12.000Z | setup.py | algerbrex/plex | 0d7096634d13ee4d695b580892894910eba6a4eb | [
"MIT"
] | null | null | null | setup.py | algerbrex/plex | 0d7096634d13ee4d695b580892894910eba6a4eb | [
"MIT"
] | null | null | null | from distutils.core import setup
setup(
name='plex',
version='0.1.0',
author='Christian Dean',
author_email='c1dea2n@gmail.com',
packages=['plex'],
license='MIT',
platforms='any',
    description='Generic, lightweight regex based lexer.',
long_description=open('README.md').read(),
)
| 21.133333 | 57 | 0.649842 | 38 | 317 | 5.368421 | 0.868421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019305 | 0.182965 | 317 | 14 | 58 | 22.642857 | 0.76834 | 0 | 0 | 0 | 0 | 0 | 0.305994 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ac85b889248a7fe66df90411b1896a2b3cc25961 | 131 | py | Python | Codeforces/problems/0799/A/799A.py | object-oriented-human/competitive | 9e761020e887d8980a39a64eeaeaa39af0ecd777 | [
"MIT"
] | 2 | 2021-07-27T10:46:47.000Z | 2021-07-27T10:47:57.000Z | Codeforces/problems/0799/A/799A.py | foooop/competitive | 9e761020e887d8980a39a64eeaeaa39af0ecd777 | [
"MIT"
] | null | null | null | Codeforces/problems/0799/A/799A.py | foooop/competitive | 9e761020e887d8980a39a64eeaeaa39af0ecd777 | [
"MIT"
] | null | null | null | import math
n, t, k, d = map(int, input().split())  # cakes needed, minutes per batch, cakes per batch, build time
x = math.ceil(n / k) * t  # time for a single oven to bake all required batches
if (d + t) < x:
print("YES")
else:
print("NO") | 13.1 | 38 | 0.503817 | 25 | 131 | 2.64 | 0.68 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.251908 | 131 | 10 | 39 | 13.1 | 0.673469 | 0 | 0 | 0 | 0 | 0 | 0.037879 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.285714 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
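The one-liner in `799A.py` encodes: a single oven needs `ceil(n/k)` batches of `t` minutes each, and building a second oven (`d` minutes) only pays off if it can finish at least one batch before the single oven would already be done. The same check in function form (the sample numbers below are made up, not official test data):

```python
import math


def second_oven_helps(n, t, k, d):
    # time for one oven to bake all ceil(n/k) batches of k cakes
    x = math.ceil(n / k) * t
    # the second oven takes d minutes to build plus t to bake its first batch
    return (d + t) < x


print("YES" if second_oven_helps(8, 6, 4, 5) else "NO")  # 5 + 6 < 12 → YES
print("YES" if second_oven_helps(8, 6, 4, 6) else "NO")  # 6 + 6 == 12 → NO
```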
ac8b7fc77b1b29feaa2f6078b42fbbccbd054d3d | 1,971 | py | Python | tests/bugs/core_1894_test.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2022-02-05T11:37:13.000Z | 2022-02-05T11:37:13.000Z | tests/bugs/core_1894_test.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2021-09-03T11:47:00.000Z | 2021-09-03T12:42:10.000Z | tests/bugs/core_1894_test.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2021-06-30T14:14:16.000Z | 2021-06-30T14:14:16.000Z | #coding:utf-8
#
# id: bugs.core_1894
# title: Circular dependencies between computed fields crashs the engine
# decription:
# Checked on LI-T4.0.0.419 after commit 19.10.2016 18:26
# https://github.com/FirebirdSQL/firebird/commit/6a00b3aee6ba17b2f80a5b00def728023e347707
# -- all OK.
#
# tracker_id: CORE-1894
# min_versions: ['3.0.2']
# versions: 3.0.2
# qmid: None
import pytest
from firebird.qa import db_factory, isql_act, Action
# version: 3.0.2
# resources: None
substitutions_1 = []
init_script_1 = """"""
db_1 = db_factory(sql_dialect=3, init=init_script_1)
test_script_1 = """
recreate table t (
n integer,
n1 computed by (n),
n2 computed by (n1)
);
recreate table t2 (
n integer,
c1 computed by (1),
c2 computed by (c1)
);
alter table t alter n1 computed by (n2);
commit;
set autoddl off;
alter table t2 drop c1;
alter table t2 add c1 computed by (c2);
commit;
select * from t;
select * from t2; -- THIS LEAD SERVER CRASH (checked on WI-T4.0.0.399)
"""
act_1 = isql_act('db_1', test_script_1, substitutions=substitutions_1)
expected_stderr_1 = """
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-Cannot have circular dependencies with computed fields
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-cannot delete
-COLUMN T2.C1
-there are 1 dependencies
Statement failed, SQLSTATE = 42000
Cannot have circular dependencies with computed fields
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-cannot delete
-COLUMN T2.C1
-there are 1 dependencies
"""
@pytest.mark.version('>=3.0.2')
def test_1(act_1: Action):
act_1.expected_stderr = expected_stderr_1
act_1.execute()
assert act_1.clean_stderr == act_1.clean_expected_stderr
| 24.036585 | 106 | 0.650431 | 264 | 1,971 | 4.723485 | 0.409091 | 0.048115 | 0.009623 | 0.089816 | 0.275862 | 0.275862 | 0.275862 | 0.275862 | 0.232558 | 0.232558 | 0 | 0.089041 | 0.259259 | 1,971 | 81 | 107 | 24.333333 | 0.765068 | 0.23998 | 0 | 0.4375 | 0 | 0.020833 | 0.673414 | 0 | 0 | 0 | 0 | 0 | 0.020833 | 1 | 0.020833 | false | 0 | 0.041667 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3ba741e941c63d039ed8b54e0f39f036cca0c01c | 1,735 | py | Python | tests/widgets/test_error_dialog.py | sisoe24/NukeServerSocket | fbb95a609fcaf462aeb349597fae23dda67bf49b | [
"MIT"
] | 12 | 2021-08-01T09:41:24.000Z | 2021-12-03T02:53:10.000Z | tests/widgets/test_error_dialog.py | sisoe24/NukeServerSocket | fbb95a609fcaf462aeb349597fae23dda67bf49b | [
"MIT"
] | 5 | 2021-09-11T16:51:01.000Z | 2022-02-18T16:20:29.000Z | tests/widgets/test_error_dialog.py | sisoe24/NukeServerSocket | fbb95a609fcaf462aeb349597fae23dda67bf49b | [
"MIT"
] | 2 | 2021-08-03T16:02:27.000Z | 2021-08-06T07:51:54.000Z | """Test module for the Error dialog widget."""
import os
import logging
import pytest
from PySide2.QtGui import QClipboard
from src.widgets import error_dialog
from src.about import about_to_string
@pytest.fixture()
def error_log_path(_package):
    """Get the error log file path."""
yield os.path.join(_package, 'src', 'log', 'errors.log')
@pytest.fixture(name='report')
def create_report(qtbot, error_log_path):
"""Initialize the ErrorDialog class and create an error report.
After tests, will clean the error.logs file.
Yields:
Report: a namedtuple with the link and the port attributes.
"""
widget = error_dialog.ErrorDialog('Test Error')
qtbot.addWidget(widget)
yield widget.prepare_report()
with open(error_log_path, 'w') as _:
pass
def test_report_return_value(report):
"""Check if prepare report return is a tuple."""
assert isinstance(report, tuple)
def test_prepare_report_link(report):
"""Check if error dialog returns the issues link when clicking Report."""
assert report.link == 'https://github.com/sisoe24/NukeServerSocket/issues'
def test_prepare_report_clipboard(report):
"""Check if report gets copied into clipboard."""
assert 'NukeServerSocket' in QClipboard().text()
def test_prepare_report_file(report, error_log_path):
"""Check if the report file has the about to string information."""
with open(error_log_path) as file:
assert about_to_string() in file.read()
def test_get_critical_logger():
"""Check if method returns the critical logger file handler."""
logger = error_dialog._get_critical_logger()
assert logger.name == 'Critical'
assert isinstance(logger, logging.FileHandler)
| 27.539683 | 78 | 0.725648 | 237 | 1,735 | 5.14346 | 0.375527 | 0.045119 | 0.049221 | 0.049221 | 0.032814 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002094 | 0.174063 | 1,735 | 62 | 79 | 27.983871 | 0.848569 | 0.301441 | 0 | 0 | 0 | 0 | 0.092721 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 1 | 0.241379 | false | 0.034483 | 0.206897 | 0 | 0.448276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
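The test module in this record leans on pytest's yield-fixture pattern: code before `yield` is setup, the yielded value is handed to the test, and code after `yield` runs as teardown (here, truncating `errors.log`). A minimal dependency-free sketch of that same generator-based setup/teardown flow — all names and the temp-file path below are illustrative, not from the record:

```python
import os
import tempfile

def report_fixture(log_path):
    # Setup: create a log file with a fake error report in it.
    with open(log_path, 'w') as fp:
        fp.write('traceback: Test Error\n')
    # Hand the resource to the "test"; execution pauses here.
    yield log_path
    # Teardown: truncate the log, mirroring the record's cleanup step.
    open(log_path, 'w').close()

log_file = os.path.join(tempfile.gettempdir(), 'errors.log')
fixture = report_fixture(log_file)
path = next(fixture)                       # setup phase runs
assert 'Test Error' in open(path).read()   # the test body would go here
next(fixture, None)                        # teardown phase runs
assert open(path).read() == ''             # log was cleaned, as in the real fixture
```

pytest drives exactly this generator protocol for you: it advances the fixture once before the test and once after, so cleanup runs even though the fixture body reads top-to-bottom.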
3ba806cb9a29badf3e7a080781be0d67fc995823 | 1,017 | py | Python | seqlib.py | rvenkatesh99/sequence_alignment | 107c262ef25ddbf025e054339bdd29efd728033a | [
"MIT"
] | null | null | null | seqlib.py | rvenkatesh99/sequence_alignment | 107c262ef25ddbf025e054339bdd29efd728033a | [
"MIT"
] | null | null | null | seqlib.py | rvenkatesh99/sequence_alignment | 107c262ef25ddbf025e054339bdd29efd728033a | [
"MIT"
] | null | null | null | import gzip


def read_fasta(filename):
    name = None
    seqs = []
    fp = None
    if filename.endswith('.gz'):
        fp = gzip.open(filename, 'rt')
    else:
        fp = open(filename)
    for line in fp.readlines():
        line = line.rstrip()
        if line.startswith('>'):
            if len(seqs) > 0:
                seq = ''.join(seqs)
                yield(name, seq)
                name = line[1:]
                seqs = []
            else:
                name = line[1:]
        else:
            seqs.append(line)
    yield(name, ''.join(seqs))
    fp.close()


def read_fastq(filename):
    name = None
    seqs = []
    quals = []
    fp = None
    if filename.endswith('.gz'):
        fp = gzip.open(filename, 'rt')
    else:
        fp = open(filename)
    for line in fp.readlines():
        line = line.rstrip()
        if line.startswith('@'):
            if len(seqs) > 0:
                seq = ''.join(seqs)
                qual = ''.join(quals)
                yield(name, seq, qual)
                name = line[1:]
                seqs = []
                quals = []
            else:
                name = line[1:]
        elif line.startswith('+'):
            continue
        else:
            seqs.append(line)
            quals.append(line)
    yield(name, ''.join(seqs), ''.join(quals))
    fp.close()
| 17.237288 | 43 | 0.573255 | 139 | 1,017 | 4.179856 | 0.251799 | 0.082616 | 0.061962 | 0.068847 | 0.557659 | 0.557659 | 0.464716 | 0.464716 | 0.464716 | 0.464716 | 0 | 0.007752 | 0.238938 | 1,017 | 58 | 44 | 17.534483 | 0.742894 | 0 | 0 | 0.745098 | 0 | 0 | 0.012783 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0 | 0.019608 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
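The record's `read_fasta` is a generator that accumulates sequence lines until the next `>` header, then yields a `(name, sequence)` pair. A lightly condensed restatement of it with a round-trip usage example — the demo file path and contents are illustrative:

```python
import gzip
import os
import tempfile

def read_fasta(filename):
    """Yield (name, sequence) pairs from a FASTA file (plain or gzipped)."""
    name = None
    seqs = []
    fp = gzip.open(filename, 'rt') if filename.endswith('.gz') else open(filename)
    for line in fp.readlines():
        line = line.rstrip()
        if line.startswith('>'):
            if len(seqs) > 0:
                # A new header closes the previous record.
                yield (name, ''.join(seqs))
                seqs = []
            name = line[1:]
        else:
            seqs.append(line)
    # Emit the final record once the file is exhausted.
    yield (name, ''.join(seqs))
    fp.close()

# Write a tiny two-record FASTA file and parse it back.
fasta = os.path.join(tempfile.gettempdir(), 'demo.fa')
with open(fasta, 'w') as fp:
    fp.write('>seq1\nACGT\nACGT\n>seq2\nTTTT\n')
records = list(read_fasta(fasta))
assert records == [('seq1', 'ACGTACGT'), ('seq2', 'TTTT')]
```

Note that multi-line sequences are joined into one string, which is why the parser buffers lines in `seqs` rather than yielding per line.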
3ba9c357940e99f10b2151b5ccc410817c1d8e70 | 10,559 | py | Python | test/lda/createGraphFeatures.py | bekou/graph-topic-model | 7bd99aede6c22675f738166e690174ae0917b9eb | [
"MIT"
] | 6 | 2020-01-17T13:23:35.000Z | 2022-01-15T22:49:34.000Z | learn/lda/createGraphFeatures.py | bekou/graph-topic-model | 7bd99aede6c22675f738166e690174ae0917b9eb | [
"MIT"
] | null | null | null | learn/lda/createGraphFeatures.py | bekou/graph-topic-model | 7bd99aede6c22675f738166e690174ae0917b9eb | [
"MIT"
] | 1 | 2019-05-26T15:57:35.000Z | 2019-05-26T15:57:35.000Z | import networkx as nx
import string
import numpy as np
import math


def degree_centrality(G):
    centrality={}
    s=1.0
    centrality=dict((n,d*s) for n,d in G.degree_iter())
    return centrality


def in_degree_centrality(G):
    if not G.is_directed():
        raise nx.NetworkXError("in_degree_centrality() not defined for undirected graphs.")
    centrality={}
    s=1.0
    centrality=dict((n,d*s) for n,d in G.in_degree_iter())
    return centrality


def out_degree_centrality(G):
    if not G.is_directed():
        raise nx.NetworkXError("out_degree_centrality() not defined for undirected graphs.")
    centrality={}
    s=1.0
    centrality=dict((n,d*s) for n,d in G.out_degree_iter())
    return centrality


def weighted_centrality(G):
    centrality={}
    s=1.0
    centrality=dict((n,d*s) for n,d in G.degree_iter(weight='weight'))
    return centrality


def createGraphFeatures(num_documents,clean_train_documents,unique_words,sliding_window,b,idf_par,centrality_par,centrality_col_par):
    features = np.zeros((num_documents,len(unique_words)))
    term_num_docs = {}
    print "Creating the graph of words for collection..."
    if centrality_col_par=="pagerank_centrality" or centrality_col_par=="out_degree_centrality" or centrality_col_par=="in_degree_centrality" or centrality_col_par=="betweenness_centrality_directed" or centrality_col_par=="closeness_centrality_directed":
        dGcol = nx.DiGraph()
    else:
        dGcol = nx.Graph()
    totalLen = 0
    for i in range(0,num_documents):
        # dG = nx.Graph()
        found_unique_words = []
        wordList1 = clean_train_documents[i].split(None)
        wordList2 = [string.rstrip(x.lower(), ',.!?;') for x in wordList1]
        docLen = len(wordList2)
        totalLen += docLen
        # print clean_train_documents[i]
        for k, word in enumerate(wordList2):
            if word not in found_unique_words:
                found_unique_words.append(word)
                if word not in term_num_docs:
                    term_num_docs[word] = 1
                else:
                    term_num_docs[word] += 1
            for j in xrange(1,sliding_window):
                try:
                    next_word = wordList2[k + j]
                    if not dGcol.has_node(word):
                        dGcol.add_node(word)
                        dGcol.node[word]['count'] = 1
                    else:
                        dGcol.node[word]['count'] += 1
                    if not dGcol.has_node(next_word):
                        dGcol.add_node(next_word)
                        dGcol.node[next_word]['count'] = 0
                    if not dGcol.has_edge(word, next_word):
                        dGcol.add_edge(word, next_word, weight = 1)
                    else:
                        dGcol.edge[word][next_word]['weight'] += 1
                except IndexError:
                    if not dGcol.has_node(word):
                        dGcol.add_node(word)
                        dGcol.node[word]['count'] = 1
                    else:
                        dGcol.node[word]['count'] += 1
                except:
                    raise
    avgLen = float(totalLen)/num_documents
    print "Number of nodes in collection graph:"+str(dGcol.number_of_nodes())
    print "Number of edges in collection graph:"+str(dGcol.number_of_edges())
    print "Average document length:"+str(avgLen)
    print "Number of self-loops for collection graph:"+str(dGcol.number_of_selfloops())
    if idf_par=="icw":
        icw_col = {}
        dGcol.remove_edges_from(dGcol.selfloop_edges())
        nx.write_edgelist(dGcol, "test.edgelist")
        if centrality_col_par=="degree_centrality":
            centrality_col = nx.degree_centrality(dGcol)
        elif centrality_col_par=="pagerank_centrality":
            centrality_col = pg.pagerank(dGcol)
            # centrality_col = nx.pagerank(dGcol)
        elif centrality_col_par=="eigenvector_centrality":
            centrality_col = nx.eigenvector_centrality(dGcol,max_iter=10000,weight="weight")
        elif centrality_col_par=="katz_centrality":
            centrality_col = nx.katz_centrality(dGcol)
        elif centrality_col_par=="betweenness_centrality" or centrality_col_par=="betweenness_centrality_directed":
            centrality_col = nx.betweenness_centrality(dGcol)
        elif centrality_col_par=="triangles":
            centrality_col = nx.triangles(dGcol)
        elif centrality_col_par=="clustering_coefficient":
            centrality_col = nx.clustering(dGcol)
        elif centrality_col_par=="in_degree_centrality":
            centrality_col = nx.in_degree_centrality(dGcol)
        elif centrality_col_par=="out_degree_centrality":
            centrality_col = nx.out_degree_centrality(dGcol)
        elif centrality_col_par=="core_number":
            centrality_col = nx.core_number(dGcol)
        elif centrality_col_par=="closeness_centrality" or centrality_col_par=="closeness_centrality_directed":
            centrality_col = nx.closeness_centrality(dGcol,normalized=False)
        elif centrality_col_par=="communicability_centrality":
            centrality_col = nx.communicability_centrality(dGcol)
        centr_sum = sum(centrality_col.values())
        for k, g in enumerate(dGcol.nodes()):
            if centrality_col[g]>0:
                icw_col[g] = math.log10((float(centr_sum)) / (centrality_col[g]))
            else:
                icw_col[g] = 0
    idf_col = {}
    for x in term_num_docs:
        idf_col[x] = math.log10((float(num_documents)+1.0) / (term_num_docs[x]))
    print "Creating the graph of words for each document..."
    totalNodes = 0
    totalEdges = 0
    for i in range( 0,num_documents ):
        if centrality_par=="pagerank_centrality" or centrality_par=="out_degree_centrality" or centrality_par=="in_degree_centrality" or centrality_par=="betweenness_centrality_directed" or centrality_par=="closeness_centrality_directed":
            dG = nx.DiGraph()
        else:
            dG = nx.Graph()
        wordList1 = clean_train_documents[i].split(None)
        wordList2 = [string.rstrip(x.lower(), ',.!?;') for x in wordList1]
        docLen = len(wordList2)
        if docLen==2 :
            print wordList2
        if docLen>1 and wordList2[0]!=wordList2[1] :
            # print clean_train_documents[i]
            for k, word in enumerate(wordList2):
                for j in xrange(1,sliding_window):
                    try:
                        next_word = wordList2[k + j]
                        if not dG.has_node(word):
                            dG.add_node(word)
                            dG.node[word]['count'] = 1
                        else:
                            dG.node[word]['count'] += 1
                        if not dG.has_node(next_word):
                            dG.add_node(next_word)
                            dG.node[next_word]['count'] = 0
                        if not dG.has_edge(word, next_word):
                            dG.add_edge(word, next_word, weight = 1)
                        else:
                            dG.edge[word][next_word]['weight'] += 1
                    except IndexError:
                        if not dG.has_node(word):
                            dG.add_node(word)
                            dG.node[word]['count'] = 1
                        else:
                            dG.node[word]['count'] += 1
                    except:
                        raise
            dG.remove_edges_from(dG.selfloop_edges())
            if centrality_par=="degree_centrality":
                # centrality = nx.degree_centrality(dG)
                centrality=degree_centrality(dG)
            elif centrality_par=="clustering_coefficient":
                centrality = nx.clustering(dG)
            elif centrality_par=="pagerank_centrality":
                # centrality = pg.pagerank(dG,max_iter=10000)
                centrality = nx.pagerank(dG)
            elif centrality_par=="eigenvector_centrality":
                centrality=nx.eigenvector_centrality(dG,max_iter=10000)
            elif centrality_par=="katz_centrality":
                centrality = nx.katz_centrality(dG,normalized=False)
            elif centrality_par=="betweenness_centrality" or centrality_par=="betweenness_centrality_directed":
                centrality = nx.betweenness_centrality(dG,normalized=False)
            elif centrality_par=="triangles":
                centrality = nx.triangles(dG)
            elif centrality_par=="in_degree_centrality":
                centrality = in_degree_centrality(dG)
            elif centrality_par=="out_degree_centrality":
                centrality = out_degree_centrality(dG)
            elif centrality_par=="core_number":
                centrality = nx.core_number(dG)
            elif centrality_par=="weighted_centrality":
                centrality = weighted_centrality(dG)
            elif centrality_par=="closeness_centrality" or centrality_par=="closeness_centrality_directed":
                centrality = nx.closeness_centrality(dG,normalized=False)
            elif centrality_par=="communicability_centrality":
                centrality = nx.communicability_centrality(dG)
            totalNodes += dG.number_of_nodes()
            totalEdges += dG.number_of_edges()
            # print "Number of self-loops:"+str(dG.number_of_selfloops())
            # centrality = nx.out_degree_centrality(dG)
            # centrality = nx.katz_centrality(dG,max_iter=10000)
            for k, g in enumerate(dG.nodes()):
                # Degree centrality (local feature)
                if g in unique_words:
                    # features[i,unique_words.index(g)] = dG.degree(nbunch=g,weight='weight') * idf_col[g]
                    if idf_par=="no":
                        features[i,unique_words.index(g)] = centrality[g]  # centrality[g]/(1-b+(b*(float(docLen)/avgLen)))
                    elif idf_par=="idf":
                        features[i,unique_words.index(g)] = (centrality[g]/(1-b+(b*(float(docLen)/avgLen)))) * idf_col[g]
                    elif idf_par=="icw":
                        features[i,unique_words.index(g)] = (centrality[g]/(1-b+(b*(float(docLen)/avgLen)))) * icw_col[g]
    print "Average number of nodes:"+str(float(totalNodes)/num_documents)
    print "Average number of edges:"+str(float(totalEdges)/num_documents)
    return features
| 43.632231 | 254 | 0.58604 | 1,207 | 10,559 | 4.887324 | 0.115162 | 0.079335 | 0.054246 | 0.037294 | 0.587049 | 0.457366 | 0.371419 | 0.26513 | 0.221054 | 0.21563 | 0 | 0.011148 | 0.311867 | 10,559 | 241 | 255 | 43.813278 | 0.800716 | 0.04991 | 0 | 0.321244 | 0 | 0 | 0.132635 | 0.05519 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.020725 | null | null | 0.046632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
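The `createGraphFeatures` record builds a graph of words: each term is a node, and an edge links two terms that co-occur within `sliding_window` positions, with edge weights counting co-occurrences. The record itself is Python 2 and targets the old networkx 1.x API (`degree_iter`, `G.node[...]`, `G.edge[...]`), which was removed in networkx 2. A simplified, dependency-free sketch of just the windowed co-occurrence counting — undirected, dict-based, and not a drop-in replacement for the record's function:

```python
from collections import defaultdict

def graph_of_words(text, sliding_window=3):
    """Count term co-occurrences within `sliding_window` positions.

    Returns (node_counts, edge_weights): edge_weights maps an unordered
    term pair to how often the pair fell inside the window -- the same
    statistic the record accumulates in its dGcol/dG graphs.
    """
    # Mirror the record's normalization: lowercase, strip punctuation.
    words = [w.lower().rstrip(',.!?;') for w in text.split()]
    node_counts = defaultdict(int)
    edge_weights = defaultdict(int)
    for k, word in enumerate(words):
        node_counts[word] += 1
        for j in range(1, sliding_window):
            if k + j < len(words):  # replaces the record's IndexError handling
                pair = tuple(sorted((word, words[k + j])))
                edge_weights[pair] += 1
    return dict(node_counts), dict(edge_weights)

nodes, edges = graph_of_words('the quick fox saw the quick dog', sliding_window=2)
assert nodes['the'] == 2 and nodes['quick'] == 2
assert edges[('quick', 'the')] == 2  # 'the quick' appears twice
```

The record then rescales each node's centrality by the pivoted length normalization `1 - b + b * (docLen / avgLen)` and multiplies by an idf or icw factor; that weighting step is orthogonal to the co-occurrence counting shown here.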