hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
00738c81acf3d21f96cf5541abd3f0a6c48b917d | 420 | py | Python | 875. Koko Eating Bananas.py | joshlyman/Josh-LeetCode | cc9e2cc406d2cbd5a90ee579efbcaeffb842c5ed | [
"MIT"
] | null | null | null | 875. Koko Eating Bananas.py | joshlyman/Josh-LeetCode | cc9e2cc406d2cbd5a90ee579efbcaeffb842c5ed | [
"MIT"
] | null | null | null | 875. Koko Eating Bananas.py | joshlyman/Josh-LeetCode | cc9e2cc406d2cbd5a90ee579efbcaeffb842c5ed | [
"MIT"
] | null | null | null | import math
from typing import List
class Solution:
def minEatingSpeed(self, piles: List[int], H: int) -> int:
l, r = 1, max(piles)
while l < r:
m = l + (r-l) // 2
time = sum([math.ceil(i/m) for i in piles])
if time > H:
l = m + 1
else:
r = m
return l
Time: O(N log W), where N is the number of piles and W is the maximum size of a pile.
Space:O(1) | 30 | 84 | 0.464286 | 67 | 420 | 2.910448 | 0.61194 | 0.030769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016327 | 0.416667 | 420 | 14 | 85 | 30 | 0.779592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
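The `content` cell above stores a binary-search solution to LeetCode 875 (Koko Eating Bananas). The same pattern can be exercised standalone as follows — a minimal sketch; the function name `min_eating_speed` and the sample inputs are illustrative, not taken from the record:

```python
import math

def min_eating_speed(piles, hours):
    # Binary-search the smallest speed k such that all piles fit in `hours`.
    lo, hi = 1, max(piles)
    while lo < hi:
        mid = lo + (hi - lo) // 2
        if sum(math.ceil(p / mid) for p in piles) > hours:
            lo = mid + 1   # too slow: need a higher speed
        else:
            hi = mid       # fast enough: try a lower speed
    return lo

print(min_eating_speed([3, 6, 7, 11], 8))  # 4
```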
0073b0003e3ebb1ae040dc6019c2244dd1bfce9f | 562 | py | Python | cli/setup.py | Stanford-IoT-Lab/thingengine-platform-cloud | de8610ab21198edb16d215d611e0c0b1df302e2c | [
"Apache-2.0"
] | 63 | 2018-07-14T12:01:45.000Z | 2019-04-09T05:47:50.000Z | cli/setup.py | Stanford-Mobisocial-IoT-Lab/almond-cloud | de8610ab21198edb16d215d611e0c0b1df302e2c | [
"Apache-2.0"
] | 227 | 2018-06-04T16:29:48.000Z | 2019-04-10T18:22:22.000Z | cli/setup.py | Stanford-IoT-Lab/thingengine-platform-cloud | de8610ab21198edb16d215d611e0c0b1df302e2c | [
"Apache-2.0"
] | 6 | 2021-11-25T13:43:57.000Z | 2022-03-03T16:38:19.000Z | import setuptools
setuptools.setup(
name="almond-cloud-cli",
version="0.0.0",
author="Stanford OVAL",
author_email="thingpedia-admins@lists.stanford.edu",
description="Command Line Interface (CLI) for Almond Cloud development and deployment",
url="https://github.com/stanford-oval/almond-cloud",
packages=setuptools.find_packages(),
python_requires=">=3,<4",
install_requires=[
"clavier==0.1.3a3",
"kubernetes>=19.15.0,<20",
"pyyaml>=6.0,<7",
],
scripts=[
"bin/almond-cloud",
],
)
| 26.761905 | 91 | 0.633452 | 68 | 562 | 5.176471 | 0.705882 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042129 | 0.197509 | 562 | 20 | 92 | 28.1 | 0.738359 | 0 | 0 | 0.105263 | 0 | 0 | 0.466192 | 0.104982 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0078a79651ad4a37ba07af625d7f0ed91de39f9f | 650 | py | Python | setup.py | JoshPiper/Python-InterWorkshop | 1148b927afe27273f7daf25b09a3c55977c641be | [
"MIT"
] | null | null | null | setup.py | JoshPiper/Python-InterWorkshop | 1148b927afe27273f7daf25b09a3c55977c641be | [
"MIT"
] | null | null | null | setup.py | JoshPiper/Python-InterWorkshop | 1148b927afe27273f7daf25b09a3c55977c641be | [
"MIT"
] | null | null | null | from setuptools import setup
setup(
name='InterWorkshop',
version='1.0.0.dev1',
packages=['gmad', 'workshop'],
url='doctor-internet.dev',
license='MIT',
author='John Internet',
author_email='jonjon1234.github@gmail.com',
description='A Python binding for the Steam Workshop API',
install_requires=[
'certifi==2019.6.16',
'chardet==3.0.4',
'discord-webhook==0.4.1',
'environs==5.2.1',
'idna==2.8',
'marshmallow==2.20.1',
'protobuf==3.9.1',
'python-dotenv==0.10.3',
'requests==2.22.0',
'six==1.12.0',
'urllib3==1.26.5'
]
)
| 25 | 62 | 0.550769 | 86 | 650 | 4.139535 | 0.72093 | 0.011236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103093 | 0.253846 | 650 | 25 | 63 | 26 | 0.630928 | 0 | 0 | 0 | 0 | 0 | 0.484615 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
008a818651448e80ede7444c904ec64b41cdf9dc | 2,763 | py | Python | tools/dockerize/webportal/usr/share/openstack-dashboard/openstack_dashboard/backend.py | foruy/openflow-multiopenstack | 74140b041ac25ed83898ff3998e8dcbed35572bb | [
"Apache-2.0"
] | 1 | 2019-09-11T11:56:19.000Z | 2019-09-11T11:56:19.000Z | tools/dockerize/webportal/usr/share/openstack-dashboard/openstack_dashboard/backend.py | foruy/openflow-multiopenstack | 74140b041ac25ed83898ff3998e8dcbed35572bb | [
"Apache-2.0"
] | null | null | null | tools/dockerize/webportal/usr/share/openstack-dashboard/openstack_dashboard/backend.py | foruy/openflow-multiopenstack | 74140b041ac25ed83898ff3998e8dcbed35572bb | [
"Apache-2.0"
] | null | null | null | import logging
from django.conf import settings
from django.utils.translation import ugettext_lazy as _
from keystoneclient import exceptions as keystone_exceptions
from .exceptions import KeystoneAuthException
from .user import Token
from .user import AuthUser
from .utils import get_keystone_client
from .utils import get_client_addr
from .utils import get_client_type
from openstack_dashboard import api
LOG = logging.getLogger(__name__)
class KeystoneBackend(object):
"""Django authentication backend class for use with
``django.contrib.auth``.
"""
def get_user(self, user_id):
"""Returns the current user (if authenticated) based on the user ID
and session data.
Note: this required monkey-patching the ``contrib.auth`` middleware
to make the ``request`` object available to the auth backend class.
"""
if (hasattr(self, 'request') and
user_id == self.request.session["user_id"]):
username = self.request.session['username']
token = self.request.session['token']
user = AuthUser(user_id, username, token)
return user
else:
return None
def authenticate(self, request=None, username=None,
password=None, auth_url=None):
"""Authenticates a user via the Keystone Identity API. """
LOG.debug('Beginning user authentication for user "%s".' % username)
try:
auth_user = api.proxy.authenticate(request, username, password,
user_addr=get_client_addr(request),
user_type=get_client_type(request))
        except Exception:
msg = _("Invalid user name or password.")
raise KeystoneAuthException(msg)
user = AuthUser(auth_user.id, auth_user.username, auth_user.id)
if request is not None:
request.user = user
LOG.debug('Authentication completed for user "%s".' % username)
return user
def get_all_permissions(self, user, obj=None):
"""Returns a set of permission strings that this user has through
his/her Keystone "roles".
The permissions are returned as ``"openstack.{{ role.name }}"``.
"""
if user.is_anonymous() or obj is not None:
return set()
role_perms = set(["openstack.roles.%s" % role['name'].lower()
for role in user.roles])
return role_perms
def has_perm(self, user, perm, obj=None):
"""Returns True if the given user has the specified permission. """
if not user.is_active:
return False
return perm in self.get_all_permissions(user, obj)
| 36.355263 | 82 | 0.626855 | 329 | 2,763 | 5.142857 | 0.340426 | 0.024823 | 0.026596 | 0.031915 | 0.028369 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.287007 | 2,763 | 75 | 83 | 36.84 | 0.858883 | 0.203764 | 0 | 0.042553 | 0 | 0 | 0.077143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085106 | false | 0.06383 | 0.234043 | 0 | 0.489362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
008dea6297a00a2a7baea385c4d7777b91f522e7 | 3,385 | py | Python | container-applications/classified/app.py | emerginganalytics/ualr-cyber-gym | 1156bc2c85c17af02da048f40b2be875f89db0ce | [
"MIT"
] | 3 | 2020-09-02T19:18:03.000Z | 2021-04-29T20:23:01.000Z | container-applications/classified/app.py | emerginganalytics/ualr-cyber-gym | 1156bc2c85c17af02da048f40b2be875f89db0ce | [
"MIT"
] | 26 | 2021-12-23T19:37:27.000Z | 2022-03-28T04:03:41.000Z | container-applications/classified/app.py | emerginganalytics/cyberarena | 311d179a30017285571f65752eaa91b78c7097aa | [
"MIT"
] | 4 | 2020-11-20T20:38:49.000Z | 2021-04-29T20:23:12.000Z | import base64
import onetimepass
import os
from flask import abort, Flask, redirect
from flask_bootstrap import Bootstrap
from flask_login import LoginManager, UserMixin
from flask_sqlalchemy import SQLAlchemy
from globals import ds_client
from werkzeug.security import check_password_hash, generate_password_hash
# App Blueprint imports
from arena_snake.routes import arena_snake_bp
from inspect_workout.routes import inspect_bp
from sql_injection.routes import sql_injection_bp
from twofactorauth.routes import twofactorauth_bp
from wireshark.routes import wireshark_bp
from xss.routes import xss_bp
# Application instance
app = Flask(__name__)
app.secret_key = os.urandom(12)
bootstrap = Bootstrap(app)
# db Config: set connection options before binding extensions to the app
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', 'sqlite:///db.sqlite')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)
lm = LoginManager(app)
class User(UserMixin, db.Model):
"""User model."""
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(64), index=True)
password_hash = db.Column(db.String(128))
otp_secret = db.Column(db.String(16))
def __init__(self, **kwargs):
super(User, self).__init__(**kwargs)
if self.otp_secret is None:
# generate a random secret
self.otp_secret = base64.b32encode(os.urandom(10)).decode('utf-8')
@property
def password(self):
raise AttributeError('password is not a readable attribute')
@password.setter
def password(self, password):
self.password_hash = generate_password_hash(password)
def verify_password(self, password):
return check_password_hash(self.password_hash, password)
def get_totp_uri(self):
return 'otpauth://totp/CyberGym:{0}?secret={1}&issuer=CyberGym2FA'.format(self.username, self.otp_secret)
def verify_totp(self, token):
return onetimepass.valid_totp(token, self.otp_secret)
@lm.user_loader
def load_user(user_id):
"""User loader callback for Flask-Login."""
return User.query.get(int(user_id))
# Register app blueprints
app.register_blueprint(arena_snake_bp)
app.register_blueprint(inspect_bp)
app.register_blueprint(sql_injection_bp)
app.register_blueprint(twofactorauth_bp)
app.register_blueprint(wireshark_bp)
app.register_blueprint(xss_bp)
@app.route('/<workout_id>')
def loader(workout_id):
key = ds_client.key('cybergym-workout', workout_id)
workout = ds_client.get(key)
# Verify that request is for a valid workout and redirect based on type
valid_types = {
'wireshark': '/%s' % workout_id,
'xss': '/xss/xss_d/%s' % workout_id,
'2fa': '/tfh/%s' % workout_id,
'inspect': '/inspect/%s' % workout_id,
'sql_injection': '/sql_injection/%s' % workout_id,
'arena_snake': '/arena_snake/%s' % workout_id
}
    if workout:
        workout_type = workout['type']
        if workout_type in valid_types:
            # Any route-specific logic is handled at the individual blueprint
            # level; return a redirect to the matching blueprint.
            return redirect(valid_types[workout_type])
    # Unknown workout id, or a workout with an unrecognized type
    return abort(404)
# Create database if none exist
db.create_all()
if __name__ == "__main__":
# app.run(debug=True, host='0.0.0.0', port=4000)
app.run(debug=True)
| 31.055046 | 113 | 0.718168 | 459 | 3,385 | 5.061002 | 0.320261 | 0.034869 | 0.051657 | 0.047353 | 0.027551 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011819 | 0.175185 | 3,385 | 108 | 114 | 31.342593 | 0.820201 | 0.120532 | 0 | 0 | 0 | 0 | 0.11502 | 0.037212 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0.148649 | 0.202703 | 0.040541 | 0.472973 | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
0091802426c2077628c7833c451ff0614032fab3 | 1,278 | py | Python | blog/migrations/0001_initial.py | alireza-fm/MySite | 7c73500d893a563c6dde6b9e8286123cb676f608 | [
"MIT"
] | null | null | null | blog/migrations/0001_initial.py | alireza-fm/MySite | 7c73500d893a563c6dde6b9e8286123cb676f608 | [
"MIT"
] | null | null | null | blog/migrations/0001_initial.py | alireza-fm/MySite | 7c73500d893a563c6dde6b9e8286123cb676f608 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.2 on 2021-04-12 09:40
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Author',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('date_of_birth', models.DateField()),
('authors_biography', models.TextField(max_length=500)),
],
),
migrations.CreateModel(
name='Post',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=100)),
('slug', models.CharField(max_length=50)),
('text', models.TextField()),
('date_of_publish', models.DateTimeField()),
('date_of_last_edit', models.DateTimeField()),
('author', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='blog.author')),
],
),
]
| 34.540541 | 121 | 0.567293 | 129 | 1,278 | 5.465116 | 0.496124 | 0.051064 | 0.076596 | 0.102128 | 0.300709 | 0.224113 | 0.224113 | 0.224113 | 0.224113 | 0.224113 | 0 | 0.028761 | 0.292645 | 1,278 | 36 | 122 | 35.5 | 0.751106 | 0.035211 | 0 | 0.344828 | 1 | 0 | 0.092608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.068966 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
009ab7ef9e163b1812e8dc1afeeb044f72fb0def | 2,599 | py | Python | linux_plex_updater/api/PlexClient.py | amickael/Linux-Plex-Updater | a74dc480d374daf52ee4cc09b40ea34b9e6ffcd4 | [
"MIT"
] | null | null | null | linux_plex_updater/api/PlexClient.py | amickael/Linux-Plex-Updater | a74dc480d374daf52ee4cc09b40ea34b9e6ffcd4 | [
"MIT"
] | null | null | null | linux_plex_updater/api/PlexClient.py | amickael/Linux-Plex-Updater | a74dc480d374daf52ee4cc09b40ea34b9e6ffcd4 | [
"MIT"
] | null | null | null | import os
import uuid
import logging
import requests
import linux_plex_updater
class PlexClient:
def __init__(
self, username: str, password: str, host: str, port: int,
):
self.identifier = uuid.uuid4()
self.username = username
self.password = password
self.auth_token = self.login()
self.location = f"{host}:{port}"
def login(self) -> str:
# Try to load from cache
cache_file = os.path.join(
os.path.dirname(os.path.abspath(linux_plex_updater.__file__)), ".cache"
)
# Check if cached token is still valid
if os.path.isfile(cache_file):
with open(cache_file, "r") as f:
auth_token = f.read().strip()
req = requests.get(
f"{self.location}/system", params={"X-Plex-Token": auth_token}
)
if req.ok:
return auth_token
# Try to fetch auth token
req = requests.post(
"https://plex.tv/users/sign_in.json",
params={
"X-Plex-Client-Identifier": str(self.identifier),
"X-Plex-Device-Name": "Plex auto-updater",
},
auth=(self.username, self.password),
)
# If request is successful set auth token and cache for later
if req.ok:
auth_token = req.json().get("user", {}).get("authToken")
with open(cache_file, "w") as f:
f.write(auth_token)
return auth_token
# Otherwise display an error message
else:
status_codes = {401: "Invalid username or password"}
logging.error(status_codes.get(req.status_code, "Authorization failed"))
    def check_updates(self) -> "str | None":
# Try to update information, retry once to refresh token if invalid
retry = 0
while True:
req = requests.get(
f"{self.location}/updater/status",
headers={"Accept": "application/json"},
params={"X-Plex-Token": self.auth_token},
)
if req.ok:
break
elif retry > 1:
return None
else:
retry += 1
self.login()
# Fetch update information, terminate if no download URL
download_url = req.json().get("MediaContainer", {}).get("downloadURL")
if not download_url:
return None
# Return true URL
req = requests.head(download_url, allow_redirects=True,)
return req.url
| 31.695122 | 84 | 0.543286 | 298 | 2,599 | 4.627517 | 0.395973 | 0.065265 | 0.02393 | 0.024656 | 0.062364 | 0.039159 | 0 | 0 | 0 | 0 | 0 | 0.004147 | 0.350519 | 2,599 | 81 | 85 | 32.08642 | 0.812796 | 0.1212 | 0 | 0.177419 | 0 | 0 | 0.130989 | 0.033407 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048387 | false | 0.064516 | 0.080645 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
009bac159bc1f3df812657b87e17ca1b95154961 | 1,198 | py | Python | 2020/02/day02.py | GeoffRiley/AdventOfCode | 27fe8670a1923cb3b0675784f5e855ad18c29c93 | [
"Unlicense"
] | 2 | 2020-12-12T03:18:45.000Z | 2021-12-17T00:35:33.000Z | 2020/02/day02.py | GeoffRiley/AdventOfCode | 27fe8670a1923cb3b0675784f5e855ad18c29c93 | [
"Unlicense"
] | null | null | null | 2020/02/day02.py | GeoffRiley/AdventOfCode | 27fe8670a1923cb3b0675784f5e855ad18c29c93 | [
"Unlicense"
] | null | null | null | from typing import Tuple
def process_line(line: str) -> Tuple[int, int, str, str]:
parts, pwd = line.split(':')
rng, letter = parts.split()
lo, hi = map(int, rng.split('-'))
return lo, hi, letter, pwd
def verify_passwords_part1(text: str) -> int:
password_list = map(process_line, text.splitlines(keepends=False))
result = sum(lo <= pwd.count(letter) <= hi
for lo, hi, letter, pwd in password_list)
return result
def verify_passwords_part2(text: str) -> int:
password_list = map(process_line, text.splitlines(keepends=False))
result = sum((pwd[lo] == letter) ^ (pwd[hi] == letter)
for lo, hi, letter, pwd in password_list)
return result
if __name__ == '__main__':
with open('input.txt') as in_file:
example = """1-3 a: abcde
1-3 b: cdefg
2-9 c: ccccccccc"""
assert verify_passwords_part1(example) == 2
assert verify_passwords_part2(example) == 1
original = in_file.read()
part1 = verify_passwords_part1(original)
print(f'Part1: {part1}')
part2 = verify_passwords_part2(original)
print(f'Part2: {part2}')
# Part1: 528
# Part2: 497
| 29.219512 | 70 | 0.621035 | 161 | 1,198 | 4.440994 | 0.372671 | 0.125874 | 0.041958 | 0.054545 | 0.318881 | 0.318881 | 0.318881 | 0.318881 | 0.318881 | 0.318881 | 0 | 0.031042 | 0.247078 | 1,198 | 40 | 71 | 29.95 | 0.761641 | 0.017529 | 0 | 0.214286 | 0 | 0 | 0.075809 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.107143 | false | 0.357143 | 0.035714 | 0 | 0.25 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
00a4a64635433c1cf08ad0404b8152fdfe3d99b0 | 1,135 | py | Python | solutions/014_longest_common_prefix.py | abawchen/leetcode | 41d3b172a7694a46a860fbcb0565a3acccd000f2 | [
"MIT"
] | null | null | null | solutions/014_longest_common_prefix.py | abawchen/leetcode | 41d3b172a7694a46a860fbcb0565a3acccd000f2 | [
"MIT"
] | null | null | null | solutions/014_longest_common_prefix.py | abawchen/leetcode | 41d3b172a7694a46a860fbcb0565a3acccd000f2 | [
"MIT"
] | null | null | null | # Write a function to find the longest common prefix string amongst an array of strings.
class Solution:
# @param {string[]} strs
# @return {string}
def longestCommonPrefix(self, strs):
if not strs:
return ""
lcp = ""
base = strs[0]
for i in range(len(base)):
for s in strs[1:]:
if i > len(s) - 1:
return lcp
if base[i] != s[i]:
return lcp
lcp += base[i]
return lcp
# lcp = ""
# for z in zip(*strs):
# bag = set(z);
# if len(bag) == 1:
# lcp += bag.pop()
# else:
# break
# return lcp
# 84 ms
# lcp = strs[0]
# for s in strs[1:]:
# i, j, local = 0, 0, ""
# while i < len(lcp) and j < len(s):
# if lcp[i] != s[j]:
# break
# local += lcp[j]
# i, j = i+1, j+1
# if not local:
# return ""
# lcp = local
# return lcp
| 24.673913 | 89 | 0.370925 | 130 | 1,135 | 3.238462 | 0.369231 | 0.149644 | 0.038005 | 0.047506 | 0.052257 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021583 | 0.510132 | 1,135 | 45 | 90 | 25.222222 | 0.735612 | 0.456388 | 0 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
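The commented-out `zip` variant in the cell above can be written out in full. This is my reconstruction of that alternative, not code from the record itself:

```python
def longest_common_prefix(strs):
    """Return the longest common prefix shared by every string in `strs`."""
    prefix = []
    # zip(*strs) yields one tuple per column of characters; it stops at the
    # shortest string, so no explicit length check is needed.
    for chars in zip(*strs):
        if len(set(chars)) != 1:  # mismatch in this column
            break
        prefix.append(chars[0])
    return "".join(prefix)

print(longest_common_prefix(["flower", "flow", "flight"]))  # fl
```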
00aba24aee4c502be06d405dcba8e80da329d725 | 689 | py | Python | apps/website/models/comments.py | shubham-thakare/tech-blog | d83cc9ad685f038e249d8ba6e54b6a2c07159e8b | [
"Apache-2.0"
] | null | null | null | apps/website/models/comments.py | shubham-thakare/tech-blog | d83cc9ad685f038e249d8ba6e54b6a2c07159e8b | [
"Apache-2.0"
] | 8 | 2020-12-30T09:23:00.000Z | 2022-03-12T00:53:09.000Z | apps/website/models/comments.py | shubham-thakare/tech-blog | d83cc9ad685f038e249d8ba6e54b6a2c07159e8b | [
"Apache-2.0"
] | null | null | null | from django.db import models
from apps.website.models.article import Article
STATUS_CHOICES = (
("SH", "Show"),
("HD", "Hide"),
)
class Comments(models.Model):
article = models.ForeignKey(Article, on_delete=models.CASCADE)
name = models.CharField(max_length=50, null=False)
email = models.CharField(max_length=50, null=False)
comments = models.TextField(max_length=2500, null=False)
submitted_on = models.DateTimeField(auto_now_add=True)
status = models.CharField(choices=STATUS_CHOICES, max_length=2,
default='SH')
def __str__(self):
return self.name
class Meta:
verbose_name_plural = "Comments"
| 27.56 | 67 | 0.677794 | 85 | 689 | 5.305882 | 0.541176 | 0.079823 | 0.079823 | 0.10643 | 0.155211 | 0.155211 | 0.155211 | 0 | 0 | 0 | 0 | 0.016484 | 0.207547 | 689 | 24 | 68 | 28.708333 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0.03193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0.055556 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
00b576f4bc9ac7848df5c272c6d7d4dc76746aa2 | 775 | py | Python | Python3/32.longest-valid-parentheses.py | 610yilingliu/leetcode | 30d071b3685c2131bd3462ba77c6c05114f3f227 | [
"MIT"
] | null | null | null | Python3/32.longest-valid-parentheses.py | 610yilingliu/leetcode | 30d071b3685c2131bd3462ba77c6c05114f3f227 | [
"MIT"
] | null | null | null | Python3/32.longest-valid-parentheses.py | 610yilingliu/leetcode | 30d071b3685c2131bd3462ba77c6c05114f3f227 | [
"MIT"
] | null | null | null | #
# @lc app=leetcode id=32 lang=python3
#
# [32] Longest Valid Parentheses
#
# @lc code=start
class Solution:
def longestValidParentheses(self, s):
if not s:
return 0
l = 0
r = len(s)
while s[r - 1] == '(' and r > 0:
r -= 1
while s[l] == ')' and l < r - 2:
l += 1
s = s[l : r]
if len(s) < 2:
return 0
stack = [-1]
mx = 0
for i, p in enumerate(s):
if p == '(':
stack.append(i)
elif p == ')':
stack.pop()
if not stack:
stack.append(i)
else:
mx = max(mx, i - stack[-1])
return mx
# @lc code=end
| 21.527778 | 47 | 0.370323 | 95 | 775 | 3.021053 | 0.442105 | 0.041812 | 0.083624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044041 | 0.501935 | 775 | 35 | 48 | 22.142857 | 0.699482 | 0.12129 | 0 | 0.16 | 0 | 0 | 0.005952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
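The core of the solution above is an index stack with a `-1` base marker. A self-contained sketch of just that stack idea, without the string-trimming optimization the record's version performs first (function name is mine):

```python
def longest_valid_parentheses(s):
    # Stack holds indices; the sentinel -1 (or the index of the last
    # unmatched ')') marks the position just before a valid span.
    stack = [-1]
    best = 0
    for i, ch in enumerate(s):
        if ch == '(':
            stack.append(i)
        else:
            stack.pop()
            if not stack:
                stack.append(i)  # new base after an unmatched ')'
            else:
                best = max(best, i - stack[-1])
    return best

print(longest_valid_parentheses(")()())"))  # 4
```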
00bfc4b5c3d5225f29b3444a9081819c3c897810 | 300 | py | Python | tests/test_engines.py | whitegreyblack/Spaceship | 84bb4e64753b7957639d100d157adcaadcf0269a | [
"MIT"
] | 1 | 2018-03-29T14:32:14.000Z | 2018-03-29T14:32:14.000Z | tests/test_engines.py | whitegreyblack/Spaceship | 84bb4e64753b7957639d100d157adcaadcf0269a | [
"MIT"
] | null | null | null | tests/test_engines.py | whitegreyblack/Spaceship | 84bb4e64753b7957639d100d157adcaadcf0269a | [
"MIT"
] | null | null | null | from bearlibterminal import terminal as term
from spaceship.engine import Engine
from spaceship.menus.main import Main
def test_engine_init():
e = Engine()
assert isinstance(e.scene, Main)
def test_engine_run():
e = Engine()
e.run()
if __name__ == "__main__":
test_engine_run() | 21.428571 | 44 | 0.72 | 42 | 300 | 4.809524 | 0.47619 | 0.148515 | 0.108911 | 0.168317 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183333 | 300 | 14 | 45 | 21.428571 | 0.82449 | 0 | 0 | 0.181818 | 0 | 0 | 0.026578 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.181818 | false | 0 | 0.272727 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
00bff7fe5160a73a6f66ed15b5bf44572080d140 | 550 | py | Python | src/models/__init__.py | andrew-chang-dewitt/hoops-api | 3530c5127c35742aad84df8d6a5286b9f5ad3608 | [
"MIT"
] | null | null | null | src/models/__init__.py | andrew-chang-dewitt/hoops-api | 3530c5127c35742aad84df8d6a5286b9f5ad3608 | [
"MIT"
] | 10 | 2021-11-02T23:31:56.000Z | 2021-12-07T03:41:12.000Z | src/models/__init__.py | andrew-chang-dewitt/hoops | 3530c5127c35742aad84df8d6a5286b9f5ad3608 | [
"MIT"
] | null | null | null | """Data model objects."""
from .account import (
AccountChanges,
AccountIn,
AccountModel,
AccountNew,
AccountOut,
)
from .balance import (
Balance,
BalanceModel
)
from .envelope import (
EnvelopeChanges,
EnvelopeIn,
EnvelopeModel,
EnvelopeNew,
EnvelopeOut,
)
from .transaction import (
TransactionChanges,
TransactionIn,
TransactionModel,
TransactionOut,
)
from .token import (
Token,
TokenData,
)
from .user import (
UserChanges,
UserIn,
UserModel,
UserOut,
)
| 14.864865 | 26 | 0.654545 | 43 | 550 | 8.372093 | 0.72093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261818 | 550 | 36 | 27 | 15.277778 | 0.8867 | 0.034545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.176471 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
00c9aa9dfbb43b837e29a51f07d81915a2932e4c | 1,169 | py | Python | crawler_exercises/login_and_post_douban.py | Andrewpqc/Python_Ex | 6f1770f2c209d4510696bcbca92ee9f31b9d169b | [
"MIT"
] | null | null | null | crawler_exercises/login_and_post_douban.py | Andrewpqc/Python_Ex | 6f1770f2c209d4510696bcbca92ee9f31b9d169b | [
"MIT"
] | null | null | null | crawler_exercises/login_and_post_douban.py | Andrewpqc/Python_Ex | 6f1770f2c209d4510696bcbca92ee9f31b9d169b | [
"MIT"
] | null | null | null | from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from time import sleep
# NOTE: PhantomJS support was removed from recent Selenium releases;
# headless Chrome/Firefox is the usual replacement today.
driver = webdriver.PhantomJS(executable_path="/usr/local/bin/phantomjs")
# driver = webdriver.Chrome()
print("go!")
try:
driver.get("http://www.douban.com")
driver.get_screenshot_as_file("douban.png")
    username = driver.find_element_by_name("form_email")
    password = driver.find_element_by_name("form_password")
    submit = driver.find_element_by_class_name("bn-submit")
    # Fill in your own Douban credentials here.
    username.send_keys("YOUR_ACCOUNT")
    password.send_keys("YOUR_PASSWORD")
    submit.click()
    # Explicit wait until the status editor element is present
    WebDriverWait(driver, 5).until(
        EC.presence_of_element_located((By.CLASS_NAME, "editor"))
    )
print("entered douban!")
    text = driver.find_element_by_id("isay-cont")
text.click()
text.send_keys("this is a small test for selenium")
driver.find_element_by_id("isay-submit").click()
sleep(1)
driver.get_screenshot_as_file("douban.png")
print("down!")
finally:
driver.close()
# File: Backend/Common/GenPCCToBase.py (repo: Errare-humanum-est/HeteroGen, license: MIT)
# Copyright (c) 2021. Nicolai Oswald
# Copyright (c) 2021. University of Edinburgh
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from typing import List
from Debug.Monitor.ClassDebug import Debug
from Backend.Common.TemplateHandler.TemplateBase import TemplateBase
class GenPCCToBase(TemplateBase):
k_if = 'IF_'
k_else = 'ELSE_'
k_endif = 'ENDIF_'
k_state = 'STATE_'
set_func_sel = {
"add": "gen_set_add_func",
"del": "gen_set_del_func",
"clear": "gen_set_clear_func",
"contains": "gen_set_contains_func",
"empty": "gen_set_empty_func",
"count": "gen_set_count_func"
}
def __init__(self):
TemplateBase.__init__(self)
    def __str__(self):
        # Instances have no __name__ attribute; use the class name instead.
        return type(self).__name__
''' General functions '''
## New line placeholder
@staticmethod
def gen_nl():
Debug.perror("NEWLINE function not defined")
## End statement placeholder
@staticmethod
def gen_end():
Debug.perror("END STATEMENT function not defined")
''' Variable declarations '''
## Variable declaration function place holder
@staticmethod
def gen_var_decl_func(var: str = '', var_type: str = '', val: str = ''):
Debug.perror("VARIABLE DECLARATION function not defined")
## Message constructor place holder
@staticmethod
def gen_message_obj(var: str = ''):
Debug.perror('MESSAGE OBJECT function not defined')
## Integer variable declaration
@staticmethod
def gen_int(var: str = '', int_range: str = ''):
Debug.perror('INT declaration function not defined')
## Boolean variable declaration
@staticmethod
def gen_bool(var: str = ''):
Debug.perror('BOOL declaration function not defined')
## Data variable declaration
@staticmethod
def gen_data(var: str = ''):
Debug.perror('DATA declaration function not defined')
## ID variable declaration
@staticmethod
def gen_id(var: str = ''):
Debug.perror('ID declaration function not defined')
## MULTISET variable declaration
@staticmethod
def gen_vector(var: str = ''):
Debug.perror('MULTISET declaration function not defined')
## MULTISET variable declaration
@staticmethod
def gen_multiset(var: str = ''):
Debug.perror('MULTISET declaration function not defined')
''' Basic object type constructor functions'''
## Message in message var place holder
@staticmethod
def gen_in_message_var(var: str = ''):
Debug.perror("IN-MESSAGE Variable function not defined")
@staticmethod
## Message object constructor place holder
# @param msg_obj Message object type (Constructor)
# @param msg_type Coherence message type/identifier
# @param msg_src Message source
# @param msg_dest Message destination
# @param payload A list of payload declarations
def gen_message_constr(msg_obj: str = '', msg_type: str = '', msg_src: str = '', msg_dest: str = '',
payload: List[str] = None):
Debug.perror("MESSAGE CONSTRUCTOR function not defined")
## Access to message object variable function place holder
@staticmethod
def gen_in_message_obj_var_access(var: str = ''):
Debug.perror("MESSAGE OBJECT VAR ACCESS function not defined")
## Get local machine id function place holder
@staticmethod
def gen_get_machine_id_func(var: str = ''):
Debug.perror("GET MACHINE ID function not defined")
## Get remote machine id function place holder
@staticmethod
def gen_get_remote_machine_id_func(var: str = ''):
Debug.perror("GET REMOTE MACHINE ID function not defined")
''' Assignment function place holder declaration '''
## Function to generate assignment for variables and values
# @param left Variable to which value is assigned
# @param right Value or Variable that get assigned
@staticmethod
def gen_assignment(left: str = '', right: str = ''):
Debug.perror("ASSIGNMENT function not defined")
## Function to generate access to the value of a global variable
@staticmethod
def gen_global_var_val_access(var: str = ''):
Debug.perror("GLOBAL VAR ACCESS function not defined")
## Function to generate access to the value of a global variable
@staticmethod
def gen_local_var_val_access(var: str = ''):
Debug.perror("LOCAL VAR ACCESS function not defined")
## Undefine variable function place holder
@staticmethod
def gen_undefine_var(var: str = ''):
Debug.perror("VAR UNDEFINE function not defined")
## Assign next state variables
@staticmethod
def gen_next_state(arch_str: str = '', state_str: str = ''):
Debug.perror("STATE ASSIGNMENT function not defined")
''' Conditional function place holder declarations'''
## IF function place holder
# @param self The object pointer.
# @param left Left condition arg
# @param op Operator arg
# @param right Right condition arg
@staticmethod
def gen_if_func(left: str = '', op: str = '', right: str = ''):
Debug.perror("IF function not defined")
## IF NOT function place holder (Some languages do not have uniform negation pattern)
@staticmethod
def gen_if_not_func(left: str = '', op: str = '', right: str = ''):
Debug.perror("IF NOT function not defined")
@staticmethod
## ELSE function place holder
def gen_else_func(left: str = '', op: str = '', right: str = ''):
Debug.perror("ELSE function not defined")
@staticmethod
## ELSE NOT function place holder (Some languages do not have else statement so negation required)
def gen_else_not_func(left: str = '', op: str = '', right: str = ''):
Debug.perror("ELSE NOT function not defined")
## ENDIF function place holder
@staticmethod
def gen_end_if_func():
Debug.perror("ENDIF function not defined")
# Legacy
@staticmethod
def gen_cond_end_func():
GenPCCToBase.gen_end_if_func()
''' Send functions'''
## SEND function place holder
@staticmethod
def gen_send_func(vc: str, msg_var: str):
Debug.perror("SEND function not defined")
## MCAST function place holder
@staticmethod
def gen_mcast_func(vc: str, multiset: str, msg_var: str, multiset_var: str):
Debug.perror("MCAST function not defined")
## BCAST function place holder
@staticmethod
def gen_bcast_func(vc: str, cluster: str, msg_var: str):
Debug.perror("BCAST function not defined")
## ISSUE EVENT function place holder
@staticmethod
def gen_event_issue_func(arch: str, event: str):
Debug.perror("EVENT CALL function not defined")
@staticmethod
def gen_event_handle_func():
Debug.perror("EVENT HANDLE function not defined")
@staticmethod
def gen_event_wait_handled_func():
Debug.perror("EVENT WAIT HANDLED function not defined")
@staticmethod
def gen_access_func():
Debug.perror("ACCESS function not defined")
@staticmethod
def gen_break_func():
Debug.perror("BREAK function not defined")
@staticmethod
def gen_undef_func():
Debug.perror("UNDEFINE function not defined")
# Legacy
@staticmethod
def gen_end_proc_func():
Debug.perror("ENDPROCESS function not defined")
@staticmethod
def gen_end_when_func():
Debug.perror("ENDPROCESS function not defined")
''' Set function place holder declarations'''
## SET function place holder
@staticmethod
def gen_set_func():
Debug.perror("ASSIGNMENT function not defined")
## SET ADD function place holder
@staticmethod
def gen_set_add_func(set_id: str = '', set_op: str = '', var: str = ''):
Debug.perror("SET ADD function not defined")
## SET DEL function place holder
@staticmethod
def gen_set_del_func(set_id: str = '', set_op: str = '', var: str = ''):
Debug.perror("SET DEL function not defined")
## SET CLEAR function place holder
@staticmethod
def gen_set_clear_func(set_id: str = '', set_op: str = ''):
Debug.perror("SET CLEAR function not defined")
## SET CONTAINS function place holder
@staticmethod
def gen_set_contains_func(set_id: str = '', set_op: str = '', var: str = ''):
Debug.perror("SET CONTAINS function not defined")
## SET EMPTY function place holder
@staticmethod
def gen_set_empty_func(set_id: str = '', set_op: str = '', var: str = ''):
Debug.perror("SET EMPTY function not defined")
## SET COUNT function place holder
@staticmethod
def gen_set_count_func(set_id: str = '', set_op: str = ''):
Debug.perror("SET COUNT function not defined")
# File: setup.py (repo: DSAutomations/py-parsehub, license: BSD-2-Clause)
from setuptools import find_packages, setup
setup(name="py-parsehub",
version="0.1",
description="Python3 module for interaction with Parsehub API",
author="Viktor Hronec",
author_email='zamr666@gmail.com',
platforms=["linux"],
license="BSD",
url="https://github.com/hronecviktor/py-parsehub",
packages=find_packages(), install_requires=['urllib3']
)
# File: define.py (repo: qinflying/kivy_2048, license: MIT)
# -*- coding: utf-8 -*-
# Macro definitions
# Fonts
FONT_COMMON = "Roboto"
FONT_HEI = "FontHei"
# Screens
START_MENU = "startmenu"
PLAY_MENU = "playmenu"
# File: django/BankAccount/base/migrations/0003_auto_20220309_2048.py (repo: akrysmalski/BankAccount, license: MIT)
# Generated by Django 3.2 on 2022-03-09 19:48
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('base', '0002_auto_20220309_1545'),
]
operations = [
migrations.AlterField(
model_name='user',
name='date_of_birth',
field=models.DateField(null=True, verbose_name='Date of birth'),
),
migrations.AlterField(
model_name='user',
name='identity_number',
field=models.CharField(max_length=32, null=True, verbose_name='Identity number'),
),
migrations.AlterUniqueTogether(
name='user',
unique_together=set(),
),
]
# File: src/test/test_FPGAVisual.py (repo: ComputerArchitectureGroupPWr/Simulio, license: MIT)
from unittest import TestCase
from src.main.fpgavisio import FPGAVisual
__author__ = 'pawel'
class TestFPGAVisual(TestCase):
def test_make_simulation_movie(self):
visualisation = FPGAVisual('../data/final.csv')
visualisation.make_simulation_movie('../data/content.gif')
def test_make_simulation_movie_frame(self):
visualisation = FPGAVisual('../data/final2.csv')
        visualisation.make_simulation_movie_frame(1, '../data/lol_content.gif', (35, 45))
# File: dbms/util.py (repo: CanburakTumer/youtube_examples, license: Unlicense)
import logging
import os
import json
from errors import throw_input_data_is_corrupted, throw_table_does_not_exist
DATA_FILE_SUFFIX = '.db'
def generate_data_file_name(table):
return table+DATA_FILE_SUFFIX
def get_table_name_from_data_file(data_file):
return data_file.replace(DATA_FILE_SUFFIX,'')
def split_data(data):
splitted = data.split(",")
if len(splitted) != 2:
throw_input_data_is_corrupted()
return splitted[0], splitted[1]
def check_if_table_exists(table):
data_file_name = generate_data_file_name(table)
if os.path.exists(data_file_name):
return True
return False
def read_table_into_json(table):
if not check_if_table_exists(table):
throw_table_does_not_exist(table)
data_file = generate_data_file_name(table)
with open(data_file, 'r') as f:
file_data = f.read()
return json.loads(file_data) if not file_data == "" else {}
def write_json_into_table(table, json_data):
if not check_if_table_exists(table):
throw_table_does_not_exist(table)
data_file = generate_data_file_name(table)
with open(data_file, 'w') as f:
        f.write(json.dumps(json_data))
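The helpers above implement a tiny JSON-file-per-table store: each table lives in `<table>.db`, and an empty file is treated as an empty dict. A self-contained round-trip sketch of that idea (stdlib only, independent of the repo's `errors` module):

```python
import json
import os
import tempfile

DATA_FILE_SUFFIX = '.db'

def generate_data_file_name(table):
    return table + DATA_FILE_SUFFIX

# Write a table's data as JSON into "<table>.db", then read it back,
# treating an empty file as an empty dict (as read_table_into_json does).
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, generate_data_file_name('users'))
with open(path, 'w') as f:
    f.write(json.dumps({'1': 'alice'}))
with open(path, 'r') as f:
    raw = f.read()
data = json.loads(raw) if raw != "" else {}
print(data)  # {'1': 'alice'}
```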
# File: recsys/models.py (repo: JustinL42/ResuMe, license: FSFAP)
from django.db import models
from django.conf import settings
from django.contrib.postgres.search import SearchVectorField
from django.contrib.auth.models import AbstractUser
from django.utils import timezone
# BOOK-RELATED MODELS
class Books(models.Model):
title = models.CharField(max_length=5125)
year = models.IntegerField()
authors = models.CharField(max_length=5125, null=True)
book_type = models.CharField(max_length=13)
isbn = models.CharField(max_length=13, null=True)
pages = models.IntegerField()
editions = models.IntegerField()
alt_titles = models.CharField(max_length=5125, null=True)
series_str_1 = models.CharField(max_length=5125, null=True)
series_str_2 = models.CharField(max_length=5125, null=True)
original_lang = models.CharField(max_length=40)
original_title = models.CharField(max_length=5125, null=True)
original_year = models.IntegerField()
isfdb_rating = models.FloatField()
award_winner = models.BooleanField()
juvenile = models.BooleanField()
stand_alone = models.BooleanField()
inconsistent = models.BooleanField()
virtual = models.BooleanField()
cover_image = models.CharField(max_length=5125, null=True)
wikipedia = models.CharField(max_length=20000, null=True)
synopsis = models.CharField(max_length=20000, null=True)
note = models.CharField(max_length=5125, null=True)
general_search = SearchVectorField(null=True)
options = {
'managed' : False,
}
def __str__(self):
return self.title
class Isbns(models.Model):
isbn = models.CharField(max_length=13)
title_id = models.IntegerField()
options = {
'managed' : False,
}
def __str__(self):
return self.isbn
class Translations(models.Model):
lowest_title_id = models.IntegerField()
title = models.CharField(max_length=5125)
year = models.IntegerField()
note = models.CharField(max_length=20000)
options = {
'managed' : False,
}
def __str__(self):
return self.title
class Contents(models.Model):
book_title_id = models.IntegerField()
content_title_id = models.IntegerField()
options = {
'managed' : False,
}
def __str__(self):
return str(self.book_title_id)
class More_Images(models.Model):
title_id = models.IntegerField()
image = models.CharField(max_length=5125)
options = {
'managed' : False,
}
def __str__(self):
return self.image
class Words(models.Model):
word = models.CharField(primary_key=True, max_length=5125)
options = {
'managed' : False,
}
def __str__(self):
return self.word
# USER-RELATED MODELS
class User(AbstractUser):
location = models.CharField(max_length=250, null=True)
age = models.IntegerField(null=True)
virtual = models.BooleanField(null=False, default=False)
class Meta:
indexes = [
models.Index(fields=['username']),
models.Index(fields=['first_name']),
]
def __str__(self):
return str(self.id) + ": " + self.first_name + " " + self.last_name
class Book_Club(models.Model):
name = models.CharField(max_length=256, null=True)
members = models.ManyToManyField(
User,
related_name="book_clubs",
verbose_name="Members of the club"
)
virtual = models.BooleanField(null=False, default=False)
virtual_member = models.OneToOneField(
settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
related_name='virtual_member_of',
null=True
)
class Meta:
indexes = [
models.Index(fields=['name']),
models.Index(fields=['virtual']),
]
def __str__(self):
return self.name
class Meeting(models.Model):
book_club = models.ForeignKey(
Book_Club, on_delete=models.CASCADE, null=False)
book = models.ForeignKey(
Books, on_delete=models.DO_NOTHING, null=True, db_constraint=False,)
date = models.DateField(null=True)
class Meta:
indexes = [
models.Index(fields=['book_club']),
models.Index(fields=['book']),
models.Index(fields=['date']),
]
def __str__(self):
return self.book.title + '(' + str(self.date) + ')'
class Rating(models.Model):
book = models.ForeignKey(
Books, null=False, db_constraint=False, on_delete=models.DO_NOTHING)
user = models.ForeignKey(
settings.AUTH_USER_MODEL, null=False, on_delete=models.CASCADE)
rating = models.FloatField(null=True)
predicted_rating = models.FloatField(null=True)
original_rating = models.FloatField(null=True)
original_min = models.FloatField(null=True)
original_max = models.FloatField(null=True)
saved = models.BooleanField(null=False, default=False)
blocked = models.BooleanField(null=False, default=False)
last_updated = models.DateTimeField(default=timezone.now)
class Meta:
indexes = [
models.Index(fields=['book']),
models.Index(fields=['user']),
models.Index(fields=['rating']),
models.Index(models.Func('rating', function='FLOOR'),
name='floor_rating_idx'),
models.Index(fields=['predicted_rating']),
models.Index(models.Func('predicted_rating', function='FLOOR'),
name='floor_predicted_rating_idx'),
models.Index(fields=['saved']),
models.Index(fields=['blocked']),
models.Index(fields=['last_updated'])
]
constraints = [
models.UniqueConstraint(
fields=['book', 'user'], name='OneRatingPerBookAndUser'
),
models.CheckConstraint(check=models.Q(rating__gte=1),
name="RatingAtLeast1"
),
models.CheckConstraint(check=models.Q(rating__lte=10),
name="RatingAtMost10"
),
models.CheckConstraint(check=models.Q(original_rating__gte=models.F('original_min')),
name="OriginalRatingAtLeastMin"
),
models.CheckConstraint(check=models.Q(original_rating__lte=models.F('original_max')),
name="OriginalRatingAtMostMax"
),
]
def __str__(self):
        if self.rating is not None:
return self.user.first_name + " rates " + \
str(self.rating) + " to " + self.book.title
else:
return self.user.first_name + " hasn't rated " + self.book.title
class DataProblem(models.Model):
book = models.ForeignKey(
Books, null=False, on_delete=models.DO_NOTHING, db_constraint=False,)
user = models.ForeignKey(
settings.AUTH_USER_MODEL, null=False, on_delete=models.CASCADE)
problem = models.CharField(max_length=32768)
class Meta:
indexes = [
models.Index(fields=['book']),
models.Index(fields=['user']),
]
def __str__(self):
        return self.book.title
# File: lambda/service_call.py (repo: binxio/blog-lambda-python-37-runtime, license: Apache-2.0)
from __future__ import annotations
import requests
from requests.auth import HTTPBasicAuth
from dataclasses import dataclass, asdict
from mashumaro import DataClassJSONMixin
@dataclass
class Response(DataClassJSONMixin):
statusCode: int = 200
body: str = ''
@classmethod
def of(cls, status_code: int, msg: Message) -> Response:
return Response(status_code, msg.to_json())
def respond(self) -> dict:
return asdict(self)
@dataclass
class Message(DataClassJSONMixin):
message: str
def say_hello(msg: Message) -> dict:
    resp = requests.post(
        'https://httpbin.org/post',
        json=msg.to_dict(),
        auth=HTTPBasicAuth('username', 'password'),
        verify=False,  # TLS verification disabled for the demo; do not do this in production
        timeout=2)
try:
return resp.json()['json']
    except Exception as e:
        return {'msg': f'No body in response {e} -> {resp.text}'}
def handler(event: dict, context) -> dict:
try:
payload: dict = say_hello(Message("Hello World"))
payload.update({'message': f"Received from httpbin: {payload['message']}"})
msg: Message = Message.from_dict(payload)
return Response.of(200, msg).respond()
except Exception as e:
return Response.of(500, Message(str(e))).respond()
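The `Response`/`Message` pattern above leans on the third-party mashumaro mixin for JSON (de)serialization. A dependency-free sketch of the same respond-with-dict shape using only the stdlib (the names mirror the ones above, but this is an illustration, not the module itself):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Response:
    statusCode: int = 200
    body: str = ''

    def respond(self) -> dict:
        # asdict() turns the dataclass into the plain dict Lambda expects.
        return asdict(self)

resp = Response(200, json.dumps({'message': 'Hello World'}))
print(resp.respond())
# {'statusCode': 200, 'body': '{"message": "Hello World"}'}
```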
# File: core/classes/session_user.py (repo: taimaskhanov11/AsyncVkAccount, license: MIT)
import asyncio
from pydantic import BaseModel
# from base_user import BaseUser
class New(BaseModel):
name: str
# class SessionUser(BaseModel):
# user_id: int
# text: str = Field()
# overlord: New
#
# def __init__(self, **data):
# print(data)
# super().__init__(**data)
# # self.name = self.overlord.name
class SessionUser:
def __init__(self, user_id, text, overlord):
self._info = None
self.user_id = user_id
self.text = text
self.overlord = overlord
# def __str__(self):
# return self.user_id
def __eq__(self, other):
if isinstance(other, SessionUser):
return self.user_id == other.user_id
return self.user_id == other
# async def init(self):
# self._info = await self.get_self_info()
#
# @property
# def info(self):
# return self._info
#
# async def get_self_info(self) -> dict:
# # res = await self.vk.users.get(user_ids=user_id, fields=['bdate', 'sex', 'has_photo', 'city'])
# res = await self.overlord.api.users.get(user_ids=self.user_id,
# fields='sex, bdate, has_photo, city, photo_max_orig')
# return res[0]
# def __contains__(self, item):
# print(item)
class AuthenticatedUser:
pass
class TestUser(AuthenticatedUser):
pass
dt = {
1: SessionUser
}
if __name__ == '__main__':
s1 = SessionUser(1, 'hi', 'k')
s2 = SessionUser(1, 'hi', 'k')
# match s1:
# case dt[k]:
# print('True')
# case _:
# print('False')
# print(dir(AuthenticatedUser))
# print(s2 in lst)
# print(s2 in lst)
# asyncio.run(main())
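The `__eq__` defined on `SessionUser` above lets an instance compare equal both to another `SessionUser` and to a bare user id, which is what makes checks like `user_id in session_list` work. A minimal standalone sketch of that contract:

```python
# Standalone sketch of the __eq__ contract: equality against both another
# SessionUser and a raw user_id value.
class SessionUser:
    def __init__(self, user_id):
        self.user_id = user_id

    def __eq__(self, other):
        if isinstance(other, SessionUser):
            return self.user_id == other.user_id
        return self.user_id == other

print(SessionUser(1) == SessionUser(1), SessionUser(1) == 1)  # True True
```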
# File: searching/migrations/0005_search_bgetbuyitnows.py (repo: netvigator/auctions, license: MIT)
# Generated by Django 2.2.10 on 2020-04-19 21:02
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('searching', '0004_auto_20200104_2144'),
]
operations = [
migrations.AddField(
model_name='search',
name='bGetBuyItNows',
            field=models.NullBooleanField(default=False, help_text='You may get an avalanche of useless junk auctions if you turn this on -- be careful!', verbose_name='get BuyItNow auctions?'),
),
]
# File: Dataset/Leetcode/train/101/271.py (repo: kkcookies99/UAST, license: MIT)
class Solution:
def XXX(self, root: TreeNode) -> bool:
if not root:
return True
# First record the preorder traversal, then invert the binary tree and record the preorder traversal again. Append the marker 'a' where a child node is missing.
def dfs(res, root):
if not root:
res.append('a')
return
res.append(root.val)
dfs(res, root.left)
dfs(res, root.right)
res0 = []
dfs(res0, root)
# print(res0)
# Invert the binary tree
q = [root]
while q:
n = len(q)
for i in range(n):
node = q.pop(0)
tmp = node.left
node.left = node.right
node.right = tmp
if node.left:
q.append(node.left)
if node.right:
q.append(node.right)
res1 = []
dfs(res1, root)
# print(res1)
return res0 == res1
| 23.512821 | 49 | 0.40349 | 97 | 917 | 3.814433 | 0.391753 | 0.086486 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.496183 | 917 | 38 | 50 | 24.131579 | 0.777056 | 0.075245 | 0 | 0.071429 | 0 | 0 | 0.001189 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
daa95a28c8c759904d3dbb59300973720b82f5b8 | 4,698 | py | Python | tests/users_tests/test_users_views.py | TomerNewmanPrograms/ResuMe | 9d9b7369625fcb044f91cde86c85c91fdaf44ddd | [
"MIT"
] | null | null | null | tests/users_tests/test_users_views.py | TomerNewmanPrograms/ResuMe | 9d9b7369625fcb044f91cde86c85c91fdaf44ddd | [
"MIT"
] | null | null | null | tests/users_tests/test_users_views.py | TomerNewmanPrograms/ResuMe | 9d9b7369625fcb044f91cde86c85c91fdaf44ddd | [
"MIT"
] | null | null | null | import pytest
from django.contrib.auth.models import User
from pytest_django.asserts import assertTemplateUsed
from conftest import USERNAME, PASSWORD
def save_user(user):
try:
user = User.objects.get(username=USERNAME)
return user
except User.DoesNotExist:
user.save()
return user
@pytest.mark.django_db()
class TestProfileServerResponse:
def test_login_page_loaded(self, client):
response = client.get('/login/')
assert response.status_code == 200
assert 'users/login.html' in response.template_name
def test_valid_login_user(self, client, new_user):
new_user.save()
assert client.login(username=USERNAME, password=PASSWORD)
def test_invalid_login_user(self, client):
assert not client.login(username="Not_valid", password="Not_valid")
def test_redirection_to_main_page_after_login(self, client, new_user):
new_user.save()
response = client.post('/login/', {'username': USERNAME, 'password': PASSWORD})
assert response.status_code == 302
response = client.get(response.url)
assert response.status_code == 200
assert 'posts/feed.html', 'posts/post_list.html' in response.template_name
def test_redirection_to_main_page_after_logout(self, client, new_user):
new_user.save()
client.login(username=USERNAME, password=PASSWORD)
response = client.get('/logout/')
assert response.status_code == 302
response = client.get(response.url)
assert response.status_code == 200
assert 'posts/feed.html', 'posts/post_list.html' in response.template_name
@pytest.mark.parametrize('url', ['profile', 'edit-profile'])
def test_redirections_to_login_page_for_not_logget_in_user(self, client, url):
response = client.get(f'/{url}/')
assert response.status_code == 302
assert response.url == f'/login/?next=/{url}/'
response = client.get(response.url)
assert 'users/login.html' in response.template_name
@pytest.mark.parametrize('url', ['profile', 'edit-profile', 'users/admin'])
def test_redirect_unauthorized_user(self, client, url):
response = client.get(f'/{url}/')
assert response.status_code == 302
assert response.url == f'/login/?next=/{url}/'
response = client.get(response.url)
assert 'users/login.html' in response.template_name
@pytest.mark.parametrize('url, template_file', [('profile', 'users/profile.html'),
('edit-profile', 'users/edit-profile.html'),
('users/admin', 'users/profile_details.html')
]
)
def test_authorized_user_pages_access(self, client, url, template_file, new_user):
save_user(new_user)
client.login(username=USERNAME, password=PASSWORD)
response = client.get(f'/{url}/')
assert response.status_code == 200
assertTemplateUsed(response, template_file)
def test_access_to_valid_user_profile(self, client, persist_post):
client.login(username='MatanPeretz', password='matan1234')
response = client.get(f'/users/{persist_post.author.username}/')
assert response.status_code == 200
assertTemplateUsed(response, 'users/profile_details.html')
assert persist_post in response.context['posts_list']
@pytest.mark.parametrize('invalid_profile', ['Jonathan', '', '12', '1', '-1', ''])
def test_access_to_invalid_user_profile(self, client, invalid_profile, new_user):
save_user(new_user)
client.login(username=USERNAME, password=PASSWORD)
response = client.get(f'/users/{invalid_profile}/')
assert response.status_code == 404
def test_delete_post_from_user_profile(self, client, persist_post):
client.login(username='MatanPeretz', password='matan1234')
persist_post.delete()
response = client.get(f'/users/{persist_post.author.username}/')
assert response.status_code == 200
assertTemplateUsed(response, 'users/profile_details.html')
assert persist_post not in response.context['posts_list']
def test_post_is_not_in_other_user_profile(self, client, persist_post, persist_second_user):
client.login(username='TomerNewman', password='tomer1234')
response = client.get(f'/users/{persist_second_user.username}/')
assert response.status_code == 200
assertTemplateUsed(response, 'users/profile_details.html')
assert persist_post not in response.context['posts_list']
| 45.61165 | 97 | 0.668795 | 554 | 4,698 | 5.453069 | 0.15704 | 0.064879 | 0.073155 | 0.095333 | 0.66137 | 0.638861 | 0.595167 | 0.533598 | 0.519695 | 0.490897 | 0 | 0.014107 | 0.215411 | 4,698 | 102 | 98 | 46.058824 | 0.80548 | 0 | 0 | 0.505747 | 0 | 0 | 0.157514 | 0.05662 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.149425 | false | 0.114943 | 0.045977 | 0 | 0.229885 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
dab78077040f1edc2ea23fae35c8547666d9c63a | 614 | py | Python | examples/client.py | zaibon/tcprouter | 7e9d2590e1b1d9d984ac742bd82fcbcf3d42b3ef | [
"BSD-3-Clause"
] | 1 | 2020-02-01T22:45:55.000Z | 2020-02-01T22:45:55.000Z | examples/client.py | zaibon/tcprouter | 7e9d2590e1b1d9d984ac742bd82fcbcf3d42b3ef | [
"BSD-3-Clause"
] | 1 | 2020-03-29T10:43:36.000Z | 2020-03-31T20:28:46.000Z | examples/client.py | xmonader/eltcprouter | b3435733d102c2435e9f62aa469d34c475cc31bd | [
"BSD-3-Clause"
] | null | null | null | import time
from gevent import ssl, socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Require a certificate from the server. We used a self-signed certificate
# so here ca_certs must be the server certificate itself.
ssl_sock = ssl.wrap_socket(s,
ca_certs="server.crt",
cert_reqs=ssl.CERT_REQUIRED)
ssl_sock.connect(('localhost', 5500))
# ssl_sock.connect(('localhost', 9092))
ssl_sock.sendall(b'login superadmin password\n')
while True:
time.sleep(5)
ssl_sock.sendall(b'ping null\n')
print(ssl_sock.recv(4096))
ssl_sock.close()
| 26.695652 | 74 | 0.688925 | 91 | 614 | 4.494505 | 0.571429 | 0.119804 | 0.06846 | 0.112469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026585 | 0.203583 | 614 | 22 | 75 | 27.909091 | 0.809816 | 0.270358 | 0 | 0 | 0 | 0 | 0.128378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0.153846 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
dab79e408effc769352f1485a0d9ecff9c84bf00 | 456 | py | Python | channel-api/src/processors/callback_delivery/__init__.py | xcantera/demo-provide-baseline | 985f391973fa6ca0761104b55077fded28f152fc | [
"CC0-1.0"
] | 3 | 2020-11-17T23:19:20.000Z | 2021-03-29T15:08:56.000Z | channel-api/src/processors/callback_delivery/__init__.py | xcantera/demo-provide-baseline | 985f391973fa6ca0761104b55077fded28f152fc | [
"CC0-1.0"
] | null | null | null | channel-api/src/processors/callback_delivery/__init__.py | xcantera/demo-provide-baseline | 985f391973fa6ca0761104b55077fded28f152fc | [
"CC0-1.0"
] | 1 | 2020-12-11T00:26:33.000Z | 2020-12-11T00:26:33.000Z | from box import Box
from src import repos
from src.processors import SelfIteratingProcessor
from src.processors import use_cases
def CallbackDelivery(config: Box = None):
use_case = use_cases.DeliverCallbackUseCase(
delivery_outbox_repo=repos.DeliveryOutbox(config.DELIVERY_OUTBOX_REPO),
topic_base_self_url=config.TOPIC_BASE_SELF_URL,
channel_url=config.CHANNEL_URL
)
return SelfIteratingProcessor(use_case=use_case)
| 32.571429 | 79 | 0.796053 | 58 | 456 | 5.965517 | 0.431034 | 0.060694 | 0.098266 | 0.132948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149123 | 456 | 13 | 80 | 35.076923 | 0.891753 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
dab9892833bda504359a291b6b5f769751e715b8 | 1,156 | py | Python | app/models.py | Nasfame/Mobile-Wallet | 3925e4ecdf2c19aff248615cc580459c6360c542 | [
"MIT"
] | null | null | null | app/models.py | Nasfame/Mobile-Wallet | 3925e4ecdf2c19aff248615cc580459c6360c542 | [
"MIT"
] | null | null | null | app/models.py | Nasfame/Mobile-Wallet | 3925e4ecdf2c19aff248615cc580459c6360c542 | [
"MIT"
] | null | null | null | from datetime import datetime
from flask_login import UserMixin
from pytz import timezone
from . import db
def time_now():
IST = timezone('Asia/Kolkata')
return datetime.now(IST)
class User(db.Model, UserMixin):
__tablename__ = "user"
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(150), nullable=False, unique=True)
password = db.Column(db.String(150), nullable=False)
balance = db.Column(db.Float(44))
class Transaction(db.Model):
__tablename__ = "transaction"
id = db.Column(db.Integer, primary_key=True)
sender_id = db.Column(db.Integer, db.ForeignKey('user.id'))
receiver_id = db.Column(db.Integer, db.ForeignKey('user.id'))
amount = db.Column(db.Float(44), nullable=False)
date = db.Column(db.DateTime(timezone=True), default=time_now())
@property
def repr(self):
return {"id": self.id, "date": self.date.strftime('%d/%m/%y %H:%M:%S'), "sender_id": self.sender_id,
"receiver_id": self.receiver_id, "amount": self.amount}
# def as_dict(self):
# return {c.name: getattr(self, c.name) for c in self.__table__.columns}
| 31.243243 | 108 | 0.673875 | 166 | 1,156 | 4.548193 | 0.36747 | 0.095364 | 0.119205 | 0.063576 | 0.315232 | 0.270199 | 0.270199 | 0.18543 | 0.098013 | 0 | 0 | 0.010493 | 0.175606 | 1,156 | 36 | 109 | 32.111111 | 0.781742 | 0.08045 | 0 | 0.083333 | 0 | 0 | 0.084906 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.041667 | 0.166667 | 0.041667 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
dac1ffbe25721d0c8528d32f8490678e6e7037d9 | 3,916 | py | Python | ui/views.py | h2020-westlife-eu/VRE | a85d5370767939b1971415be48a551ae6b1edc5d | [
"MIT"
] | 1 | 2016-06-28T13:13:27.000Z | 2016-06-28T13:13:27.000Z | ui/views.py | h2020-westlife-eu/VRE | a85d5370767939b1971415be48a551ae6b1edc5d | [
"MIT"
] | 12 | 2016-06-28T11:19:46.000Z | 2017-05-05T14:24:14.000Z | ui/views.py | h2020-westlife-eu/VRE | a85d5370767939b1971415be48a551ae6b1edc5d | [
"MIT"
] | null | null | null | import os
from django.conf import settings
from django.contrib.auth import logout as django_logout
from django.contrib.sites.models import Site
from django.http import HttpResponse
from django.shortcuts import render, redirect
from django.views.generic import View, TemplateView
from luna_django_commons.app.mixins import get_login_context
from .forms import (
B2DropProviderForm,
DatafileForm,
DatafileUpdateForm,
DatasetAddFileForm,
DatasetForm,
DropboxProviderForm,
FolderForm,
ForgotPasswordForm,
GDriveProviderForm,
LoginForm,
PasswordChangeForm,
RegisterForm,
ResetPasswordForm,
S3ProviderForm,
WLWebdavProviderForm,
)
class Root(TemplateView):
def get_template_names(self):
"""
Returns a list of template names to be used for the request. Must return
a list. May not be called if render_to_response is overridden.
"""
domain = Site.objects.get_current().domain
if "west-life" in domain:
return ['static_pages/landing_westlife.html']
elif "pype" in domain:
return ['static_pages/landing_pype.html']
return ['static_pages/landing_westlife.html']
def westlife_services(request):
context = get_login_context(request)
return render(request, 'static_pages/westlife/services.html', context)
def legal(request):
context = get_login_context(request)
return render(request, 'static_pages/cgv.html', context)
def internet_explorer(request):
context = get_login_context(request)
return render(request, 'static_pages/internet_explorer.html', context)
def westlife_static_page(request, page_name='fweh.html', render_kwargs=None):
if render_kwargs is None:
render_kwargs = dict()
context = get_login_context(request)
return render(request, 'static_pages/westlife/%s' % page_name, context, **render_kwargs)
#
# Debug information
#
class BuildInfo(View):
def get(self, *args, **kwargs):
version_file_path = os.path.join(settings.BASE_DIR, 'build_info.txt')
try:
with open(version_file_path, 'r') as f:
data = f.read()
except IOError:
data = 'No build information found. Probably means we are in development mode.'
return HttpResponse(data, content_type='text/plain')
class MainPage(TemplateView):
template_name = 'main.html'
def get_context_data(self, **kwargs):
context = super(MainPage, self).get_context_data(**kwargs)
user = self.request.user
context.update({
'INTERCOM_APP_ID': settings.INTERCOM_APP_ID,
'b2dropprovider_form': B2DropProviderForm(),
'wlwebdavprovider_form': WLWebdavProviderForm(),
'change_password_form': PasswordChangeForm(user=user),
'datafile_form': DatafileForm(),
'datafile_update_form': DatafileUpdateForm(),
'dataset_add_file_form': DatasetAddFileForm(),
'dataset_form': DatasetForm(),
'dropboxprovider_form': DropboxProviderForm(),
'folder_form': FolderForm(),
'forgot_password_form': ForgotPasswordForm(),
'gdriveprovider_form': GDriveProviderForm(),
'login_form': LoginForm(),
'register_form': RegisterForm(),
'reset_password_form': ResetPasswordForm(),
's3provider_form': S3ProviderForm(),
})
return context
def whoami(request):
return HttpResponse(request.user.username)
def logout(request):
django_logout(request)
#return HttpResponse('Logged out!')
return redirect('home')
def switch_login(request):
next_url=request.GET.get('next', '/virtualfolder/')
if hasattr(settings, 'SAML_CONFIG'):
return redirect('/saml2/login/?next=%s' % next_url)
else:
return redirect('/accounts/login/?next=%s' % next_url)
| 29.666667 | 92 | 0.681052 | 423 | 3,916 | 6.111111 | 0.392435 | 0.029787 | 0.029014 | 0.034043 | 0.162089 | 0.148936 | 0.105609 | 0.105609 | 0.105609 | 0.105609 | 0 | 0.002289 | 0.219101 | 3,916 | 131 | 93 | 29.89313 | 0.843035 | 0.048008 | 0 | 0.065934 | 0 | 0 | 0.185757 | 0.081235 | 0 | 0 | 0 | 0 | 0 | 1 | 0.10989 | false | 0.065934 | 0.098901 | 0.010989 | 0.395604 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
dacf0cdfdc178f92920d8a19a952f18d560dafb4 | 21,968 | py | Python | digsby/src/plugins/facebook/facebookprotocol.py | ifwe/digsby | f5fe00244744aa131e07f09348d10563f3d8fa99 | [
"Python-2.0"
] | 35 | 2015-08-15T14:32:38.000Z | 2021-12-09T16:21:26.000Z | digsby/src/plugins/facebook/facebookprotocol.py | niterain/digsby | 16a62c7df1018a49eaa8151c0f8b881c7e252949 | [
"Python-2.0"
] | 4 | 2015-09-12T10:42:57.000Z | 2017-02-27T04:05:51.000Z | digsby/src/plugins/facebook/facebookprotocol.py | niterain/digsby | 16a62c7df1018a49eaa8151c0f8b881c7e252949 | [
"Python-2.0"
] | 15 | 2015-07-10T23:58:07.000Z | 2022-01-23T22:16:33.000Z | from util.callbacks import callsback
from util.threads.timeout_thread import Timer
from util.primitives import Storage as S
from traceback import print_exc
import util.primitives.structures as structures
from .fbutil import trim_profiles, extract_profile_ids
import traceback
import simplejson
import facebookapi
import hooks
from social.network import SocialFeed
from util.Events import EventMixin
import util.primitives.mapping as mapping
from logging import getLogger
from util.primitives.mapping import Ostorage
from gui import skin
import graphapi
log = getLogger("Facebook2.0")
POSTS_LIMIT = 100
FORCED_APPS = {'News Feed': 'nf',
'Status Updates': 'app_2915120374',
'Photos': 'app_2305272732',
'Links': 'app_2309869772'}
FORCED_KEYS = {
'__notification__':{'name':'Notifications',
'icon_url':'facebookicons.notifications_icon'}
}
KNOWN_APPS_LOOKUP = mapping.dictreverse(FORCED_APPS)
#COMMENTS_QUERY = "SELECT fromid, text, time, post_id, id FROM comment WHERE post_id IN (SELECT post_id FROM #posts)"
PROFILES_QUERY = """SELECT id, name, pic_square, url FROM profile WHERE id IN (SELECT viewer_id FROM #posts) OR id IN(SELECT actor_id FROM #posts) OR id in (SELECT target_id FROM #posts) OR id in (SELECT source_id FROM #posts) OR id IN (SELECT likes.sample FROM #posts) OR id IN (SELECT likes.friends FROM #posts) OR id IN (SELECT sender_id FROM #notifications)"""
#ALL_POSTS_QUERY = 'SELECT post_id, comments, permalink, created_time, updated_time, viewer_id, actor_id, target_id, source_id, message, attachment, action_links, likes FROM stream where filter_key="nf" and is_hidden=0 LIMIT 100'
BIRTHDAY_QUERY = 'select name, birthday_date, profile_url, uid from user where uid IN (select uid2 from friend where uid1=%d)'
NOW_QUERY = 'select now() from user where uid=%d'
EVENTS_QUERY = 'select eid from event where eid in (select eid from event_member where uid=me() and rsvp_status="not_replied") and start_time > now()'
STATUS_QUERY = 'select message, status_id, time, uid from status where uid=me() limit 1'
NOTIFICATIONS_QUERY = 'select notification_id,sender_id,created_time,updated_time,title_html,title_text,href,is_unread,app_id from notification where recipient_id=me()'
APP_QUERY = 'SELECT app_id,icon_url FROM application WHERE app_id IN (SELECT app_id from #notifications)'
POST_FILTER_KEY_QUERY = "select post_id, filter_key from stream where post_id in (select post_id from #latest_posts) and filter_key in (select filter_key from #filter_keys)"
FILTER_KEY_QUERY = "select filter_key, name, icon_url from stream_filter where uid=me() and ((is_visible=1 and type='application') or filter_key in ('" + "', '".join(FORCED_APPS.values()) + "')) ORDER BY rank ASC"
POST_QUERY = 'SELECT post_id, comments, permalink, created_time, updated_time, viewer_id, actor_id, target_id, source_id, message, attachment, action_links, likes FROM stream where post_id="%s"'
#UPDATED_POSTS_QUERY = 'SELECT post_id, comments, permalink, created_time, updated_time, viewer_id, actor_id, target_id, source_id, message, attachment, action_links, likes FROM stream where filter_key="nf" and is_hidden=0 and updated_time > %s LIMIT 100'
LATEST_POSTS_QUERY = ' '.join(x.strip() for x in '''
SELECT post_id, updated_time
FROM stream
WHERE filter_key="%%s" %%s ORDER BY created_time DESC
LIMIT %d
'''.strip().splitlines()) % POSTS_LIMIT
UPDATED_POSTS_QUERY = ' '.join(x.strip() for x in '''SELECT post_id, comments, permalink, created_time, updated_time, viewer_id, actor_id, target_id, source_id, message, attachment, action_links, likes
FROM stream
WHERE post_id in
(
SELECT post_id
FROM #latest_posts
WHERE updated_time > %s
) ORDER BY created_time DESC'''.strip().splitlines())
UPDATE_STREAM_QUERY = {
#'comments':COMMENTS_QUERY,
'profiles':PROFILES_QUERY}
from facebook.fbacct import FBIB
class FacebookProtocol(EventMixin):
events = EventMixin.events | set([
'stream_requested',
'not_logged_in',
'got_stream',
'status_updated',
'conn_error',
'infobox_dirty',
])
def __init__(self, acct):
self.stream_request_outstanding = True
self.acct = acct
self._init_apis()
self.last_stream = True
self.last_filter_key = self.filter_key
EventMixin.__init__(self)
self.social_feed = SocialFeed('facebook_' + self.acct.username,
'newsfeed',
self.get_post_feed,
self.htmlize_posts,
self.set_infobox_dirty)
def set_infobox_dirty(self):
self.event('infobox_dirty')
def htmlize_posts(self, posts, stream_context):
'''Convert one facebook newsfeed post into infobox HTML.'''
t = FBIB(self.acct)
#CAS: pull out the context stuff, the default FBIB grabs self.last_stream, not the one we have context for!
return t.get_html(None, set_dirty=False,
file='posts.py.xml',
dir=t.get_context()['app'].get_res_dir('base'),
context=S(posts=posts))
def get_post_feed(self):
# TODO bring back feed context.
return iter(self.last_stream.posts)
@property
def filter_key(self):
return ['nf', 'lf', 'h'][self.acct.preferred_filter_key]
@property
def hidden_posts(self):
return "and is_hidden=0" if self.acct.show_hidden_posts else ''
def get_stream(self):
self.stream_request_outstanding = True
self.do_get_stream()
def _init_apis(self):
self._init_digsby()
def _init_digsby(self, session_key='', secret='', uid=None):
access_token = getattr(self.acct, 'access_token', None)
uid = getattr(self.acct, 'uid', None),
self.digsby = graphapi.LegacyRESTAPI(access_token, uid=uid)
def do_get_stream(self, num_tries=0):
from util import default_timer
self.start_get_stream = default_timer()
if not self.digsby.logged_in:
return self.event('not_logged_in')
#refresh full stream if pref has changed
prev_filter_key, self.last_filter_key = self.last_filter_key, self.filter_key
if not isinstance(self.last_stream, dict) or prev_filter_key != self.filter_key:
kw = dict(success=lambda *a: self.get_stream_success(num_tries=num_tries, *a),
error = lambda *a: self.get_stream_error(num_tries, *a))
updated_time = 0
else:
kw = dict(success=self.update_stream_success,
error = lambda *a: self.get_stream_error(num_tries, *a))
updated_time = max(self.last_stream.posts + [S(updated_time=0)], key=lambda v: v.updated_time).updated_time
# query = self.digsby.multiquery(prepare=True,
self.last_run_multi = dict(
# birthdays = BIRTHDAY_QUERY % self.digsby.uid,
latest_posts = LATEST_POSTS_QUERY % (self.filter_key, self.hidden_posts),
posts = UPDATED_POSTS_QUERY % (('%d' % updated_time) + '+0'),
# now = NOW_QUERY % self.digsby.uid,
events = EVENTS_QUERY,
status = STATUS_QUERY,
notifications = NOTIFICATIONS_QUERY,
apps = APP_QUERY,
post_filter_keys = POST_FILTER_KEY_QUERY,
filter_keys = FILTER_KEY_QUERY,
**UPDATE_STREAM_QUERY)
self.digsby.fql.multiquery(queries=self.last_run_multi, **kw)
# alerts = self.digsby.notifications.get(prepare=True)
# self.digsby.batch.run(method_feed=[alerts, query], **kw)
def update_status(self):
self.digsby.query(STATUS_QUERY, success=self.status_updated)
def status_updated(self, status):
status = status[0]
if status is not None:
status['uid'] = self.digsby.uid
self.last_status = status
self.event('status_updated')
def update_stream_success(self, value):
return self.get_stream_success(value, update=True)
def get_stream_success(self, value, update=False, num_tries=0):
from util import default_timer
self.end_get_stream = default_timer()
log.debug('stream get took %f seconds', self.end_get_stream - self.start_get_stream)
stream = value
# v = []
# for val in value:
# v.append(simplejson.loads(val, object_hook=facebookapi.storageify))
# alerts, stream = v[:2]
self.last_alerts = Alerts(self.acct)
from facebookapi import simplify_multiquery
try:
# print stream
new_stream = simplify_multiquery(stream,keys={'posts':None,
# 'comments':None,
'latest_posts':None,
'profiles':'id',
# 'now':None,
'events':list,
'status':None,
'notifications': None,
'apps' : 'app_id',
'post_filter_keys':None,
'filter_keys':'filter_key'})# 'birthdays':'uid',})
import util.primitives.funcs as funcs
# new_stream['comments'] = dict(funcs.groupby(new_stream['comments'], lambda x: x['post_id']))
new_stream['comments'] = {}
new_stream['post_ids'] = post_ids = {}
for k, v in new_stream['filter_keys'].iteritems():
if not v.get('name'):
v['name'] = KNOWN_APPS_LOOKUP.get(k, v.get('name'))
new_stream['filter_keys'].update([(k, dict(name=d['name'],
icon_url=skin.get(d['icon_url']).path.url())) for k,d in FORCED_KEYS.items()])
new_stream['post_filter_keys'] = dict((post_id, structures.oset(p['filter_key'] for p in vals))
for post_id, vals in
funcs.groupby(new_stream['post_filter_keys'], lambda x: x['post_id']))
for post in new_stream['posts']:
post['comments']['count'] = int(post['comments']['count'])
new_stream['apps'], apps_str = {}, new_stream['apps']
for app_id, app_dict in apps_str.items():
new_stream['apps'][int(app_id)] = app_dict
try:
new_stream['now'] = new_stream['now'][0].values()[0]
except (IndexError, KeyError) as _e:
# print_exc()
import time
new_stream['now'] = time.time()
self.last_alerts.event_invites &= set(new_stream['events'])
self.last_status = (new_stream['status'][:1] or [Ostorage([('message', ''), ('status_id', 0), ('time', 0)])])[0]
self.last_status['uid'] = self.digsby.uid
if not isinstance(new_stream['posts'], list):
log.error('stream: %r', stream)
raise ValueError('Facebook returned type=%r of posts' % type(new_stream['posts']))
for post in new_stream['posts']: #get the new ones
post_ids[post['post_id']] = post
if 'notifications' in new_stream:
import lxml
for notification in new_stream['notifications']:
title_html = notification.get('title_html', None)
if title_html is None:
continue
s = lxml.html.fromstring(title_html)
s.make_links_absolute('http://www.facebook.com', resolve_base_href = False)
for a in s.findall('a'):
a.tag = 'span'
# _c = a.attrib.clear()
a.attrib['class'] = 'link notification_link'
[x.attrib.pop("data-hovercard", None) for x in s.findall(".//*[@data-hovercard]")]
notification['title_html'] = lxml.etree.tostring(s)
self.last_alerts.update_notifications(new_stream['notifications'])
if update:
latest_posts = filter(None, (post_ids.get(post_id, self.last_stream.post_ids.get(post_id)) for post_id in
structures.oset([post['post_id'] for post in new_stream['latest_posts']] +
[post['post_id'] for post in self.last_stream.posts])))[:POSTS_LIMIT]
new_stream['posts'] = latest_posts
for post in new_stream['posts']: #update the dict with the combined list
post_ids[post['post_id']] = post
for key in self.last_stream.comments:
if key in post_ids and key not in new_stream.comments:
new_stream.comments[key] = self.last_stream.comments[key]
for key in self.last_stream.profiles:
if key not in new_stream.profiles:
new_stream.profiles[key] = self.last_stream.profiles[key]
trim_profiles(new_stream)
for p in new_stream.posts: p.id = p.post_id # compatability hack for ads
self.last_stream = new_stream
self.social_feed.new_ids([p['post_id'] for p in self.last_stream.posts])
except Exception, e:
traceback.print_exc()
return self.get_stream_error(num_tries=num_tries, error=e)
self.event('got_stream')
def get_stream_error(self, num_tries, error=None, *a): #*a, **k for other kinds of errors.
if not_logged_in(error): #doesn't matter if it's really a facebook error; should fail this test if not
return self.event('not_logged_in')
elif num_tries < 2:
Timer(2, lambda: self.do_get_stream(num_tries + 1)).start()
else:
self.event('conn_error')
@callsback
def addComment(self, post_id, comment, callback=None):
self.digsby.stream.addComment(post_id=post_id, comment=comment,
success = lambda resp: self.handle_comment_resp(resp, post_id, comment, callback),
error = lambda resp: self.handle_comment_error(resp, post_id, comment, callback))
@callsback
def removeComment(self, comment_id, callback=None):
self.digsby.stream.removeComment(comment_id=comment_id,
success = lambda resp: self.handle_comment_remove_resp(resp, comment_id, callback),
error = lambda resp: self.handle_comment_remove_error(resp, comment_id, callback))
@callsback
def getComments(self, post_id, callback=None, limit=50, **k):
self.digsby.multiquery(
comments = 'SELECT fromid, text, time, post_id, id FROM comment WHERE post_id="%s" ORDER BY time DESC LIMIT %d' % (post_id, limit),
count = 'SELECT comments.count FROM stream where post_id="%s"' % post_id,
profiles = """SELECT id, name, pic_square, url FROM profile WHERE id IN (SELECT fromid FROM #comments)""",
success = lambda resp: self.handle_get_comments_resp(resp, post_id, callback),
error = lambda req, resp = None: self.handle_get_comments_error(resp or req, post_id, callback)
)
def handle_get_comments_resp(self, resp, post_id, callback):
from facebookapi import simplify_multiquery
resp = simplify_multiquery(resp,
{'comments':None,
'count':None,
'profiles':'id'}
)
resp['comments'].sort(key = lambda c: c['time'])
try:
count = resp['count'][0]['comments']['count']
try:
self.last_stream['post_ids'][post_id]['comments']['count'] = int(count)
except Exception:
traceback.print_exc()
except Exception:
num_comments = len(resp['comments'])
if num_comments >= 50:
count = -1
else:
count = num_comments
self.last_stream['comments'][post_id] = resp['comments']
self.last_stream['profiles'].update(resp['profiles'])
callback.success(post_id, count)
def handle_get_comments_error(self, resp, post_id, callback):
callback.error(resp)
def handle_comment_remove_resp(self, resp, comment_id, callback):
if resp:
for post_id, comments in self.last_stream['comments'].items():
for i, comment in enumerate(comments):
if comment['id'] == comment_id:
c = comments.pop(i)
post = self.last_stream['post_ids'][post_id]
post['comments']['count'] -= 1
callback.success(post_id)
hooks.notify('digsby.facebook.comment_removed', c)
return
    def handle_comment_remove_error(self, resp, comment_id, callback):
        callback.error()

    @callsback
    def addLike(self, post_id, callback):
        self.digsby.stream.addLike(post_id=str(post_id),
                success = (lambda resp: self.handle_like_resp(resp, post_id, callback)),
                error   = (lambda resp: self.handle_like_error(resp, post_id, callback)))

    @callsback
    def removeLike(self, post_id, callback):
        self.digsby.stream.removeLike(post_id=post_id,
                success = (lambda resp: self.handle_unlike_resp(resp, post_id, callback)),
                error   = (lambda resp: self.handle_unlike_error(resp, post_id, callback)))

    def handle_like_resp(self, resp, post_id, callback):
        post = self.last_stream['post_ids'][post_id]
        post['likes'].update(user_likes=True)
        post['likes']['count'] += 1
        callback.success(post_id)
        hooks.notify('digsby.facebook.like_added', post_id)

    def handle_unlike_resp(self, resp, post_id, callback):
        post = self.last_stream['post_ids'][post_id]
        post['likes'].update(user_likes=False)
        post['likes']['count'] -= 1
        callback.success(post_id)
        hooks.notify('digsby.facebook.like_removed', post_id)
        #regen likes block, regen likes link block, send to callback
        #regen cached post html

    def handle_comment_resp(self, response, post_id, comment, callback):
        comment_id = response
        post = self.last_stream['post_ids'][post_id]
        post['comments']['count'] += 1
        import time
        comment_dict = S({'fromid': post['viewer_id'],
                          'id': comment_id,
                          'post_id': post_id,
                          'text': comment,
                          'time': time.time()})
        self.last_stream['comments'].setdefault(post_id, []).append(comment_dict)
        callback.success(post_id, comment_dict)
        hooks.notify('digsby.facebook.comment_added', comment_dict)
        #regen comment, regen comment link block
        #regen cached post html

    def handle_comment_error(self, response, post_id, comment, callback):
        callback.error(response)

    def handle_like_error(self, response, post_id, callback):
        callback.error(response)

    def handle_unlike_error(self, response, post_id, callback):
        callback.error(response)

    @callsback
    def get_user_name_gender(self, callback=None):
        def success(info):
            try:
                info = info[0]
            except Exception:
                traceback.print_exc()
                callback.error(info)
            else:
                if isinstance(info, dict):
                    callback.success(info)
                else:
                    callback.error(info)

        self.digsby.query('SELECT first_name, last_name, sex FROM user WHERE uid=' + str(self.digsby.uid),
                          success=success, error=callback.error)
from .objects import Alerts
#not ready to mess with code that's 17000 revisions old.
#minimal subclass to get rid of the reference to a facebook object
#the only reason it is there is to grab the filters; not up to that point yet here.
#class Alerts(Alerts_Super):
# def __init__(self, notifications_get_xml=None):
# super(Alerts, self).__init__(None, notifications_get_xml)
# if hasattr(self, 'fb'):
# del self.fb
#
# def __sub__(self, other):
# ret = Alerts()
# for attr in self.stuff:
# setattr(ret, attr, getattr(self, attr) - getattr(other, attr))
# return ret
#
# def __getitem__(self, key):
# return getattr(self, key)
login_error_codes = frozenset(
    [100,  #no session key
     102,  #session invalid
     104,  #signature invalid (likely the secret is messed up)
     ] +
    range(450, 455 + 1) +  #session errors
    [612]                  #permission error
)

def not_logged_in(fb_error):
    return getattr(fb_error, 'code', None) in login_error_codes
| 49.927273 | 364 | 0.588219 | 2,644 | 21,968 | 4.660363 | 0.147504 | 0.03652 | 0.024996 | 0.014608 | 0.317887 | 0.243629 | 0.197614 | 0.144132 | 0.142672 | 0.127901 | 0 | 0.006524 | 0.309223 | 21,968 | 439 | 365 | 50.041002 | 0.80547 | 0.122269 | 0 | 0.158501 | 0 | 0.025937 | 0.185257 | 0.015025 | 0.002882 | 0 | 0 | 0.002278 | 0 | 0 | null | null | 0 | 0.07781 | null | null | 0.011527 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dacfc6edb6425f8b889403d0d303f723bb58b69a | 1,388 | py | Python | python/zp/primjer_12.03.py | jasarsoft/examples | d6fddfcb8c50c31fbfe170a3edd2b6c07890f13e | [
"MIT"
] | null | null | null | python/zp/primjer_12.03.py | jasarsoft/examples | d6fddfcb8c50c31fbfe170a3edd2b6c07890f13e | [
"MIT"
] | null | null | null | python/zp/primjer_12.03.py | jasarsoft/examples | d6fddfcb8c50c31fbfe170a3edd2b6c07890f13e | [
"MIT"
] | null | null | null | import os
import time

# the files and folders we want to back up are specified in the list
izvor = ['C:\\py', '"C:\\Documents and Settings\\xinjure\\Desktop\\"']
# note that we used double quotes inside the string, because of a name that contains spaces

# the backup will be stored in the main backup directory
ciljni_dir = "D:\\Resto"  # remember to change this; enter a location that suits you on your computer

# 3. The files are backed up into a zip file.
# the current date is the name of the subdirectory inside the main directory
danas = ciljni_dir + os.sep + time.strftime("%Y%m%d")
# the current time is the name of the zip archive
sada = time.strftime("%H%M%S")

# ask the user for a comment that is appended to the name of the zip archive
komentar = input("Attach a comment for the backup --> ")
if len(komentar) == 0:  # check whether a comment was entered
    cilj = danas + os.sep + sada + ".zip"
else:
    cilj = danas + os.sep + sada + "_" + komentar.replace(" ", "_") + ".zip"

# create the subdirectory if it does not exist
if not os.path.exists(danas):
    os.mkdir(danas)  # creates the directory
    print("Successfully created the directory", danas)

# use the zip command to put the files into a zip archive
zip_komanda = "zip -qr {0} {1}".format(cilj, " ".join(izvor))

# run the backup
if os.system(zip_komanda) == 0:
    print("Successfully created backups in", cilj)
else:
    print("Backup failed!")
| 36.526316 | 107 | 0.726945 | 203 | 1,388 | 4.940887 | 0.650246 | 0.014955 | 0.021934 | 0.027916 | 0.035892 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004303 | 0.162824 | 1,388 | 37 | 108 | 37.513514 | 0.858864 | 0.476225 | 0 | 0.105263 | 0 | 0 | 0.304348 | 0.039271 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.105263 | null | null | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dad88ca7f69b6249db59b9dd53934906b422453f | 5,157 | py | Python | utils/tileextractor/ExtractTiles.py | kurodenjiro/PokeRPG | ab6b76d22f2f2a81e9c6382a893e85c9bd52a20f | [
"MIT"
] | 1 | 2019-08-18T17:08:27.000Z | 2019-08-18T17:08:27.000Z | utils/tileextractor/ExtractTiles.py | kurodenjiro/PokeRPG | ab6b76d22f2f2a81e9c6382a893e85c9bd52a20f | [
"MIT"
] | null | null | null | utils/tileextractor/ExtractTiles.py | kurodenjiro/PokeRPG | ab6b76d22f2f2a81e9c6382a893e85c9bd52a20f | [
"MIT"
] | null | null | null | import Image, ImageDraw, ImageFont
import md5, sha
import sys, time, os
import re
tileSize = (16,16)
outputWidth = 8
outputSpare = 1
scale = 2
pathIn = 'input/'
pathOut = 'output/'
animatedTiles = []
class Tile :
    pixels = None
    hash = None
    number = None
    image = None

    def __init__ (self) :
        self.pixels = {}
        self.hash = ''
        self.number = 0
        self.image = None

    def putpixel (self, pixel, data) :
        self.pixels[pixel] = data

    def getpixel (self, pixel) :
        return self.pixels[pixel]

    def generateHash (self) :
        self.hash = md5.md5(str(self)).hexdigest()
        self.hash += sha.sha(str(self)).hexdigest()

    def save (self, filename) :
        img = Image.new("RGBA", tileSize)
        for j in range(tileSize[1]) :
            for i in range(tileSize[0]) :
                pixel = (i, j)
                img.putpixel(pixel, self.getpixel(pixel))
        img.save(filename)

    def __str__ (self) :
        retStr = ''
        for i in self.pixels :
            pixel = self.getpixel(i)
            retStr += str(pixel[0])
            retStr += str(pixel[1])
            retStr += str(pixel[2])
        return retStr

    def __repr__ (self) :
        return self.hash

class AnimatedTile :
    knownFrames = None

    def __init__ (self, frames) :
        self.knownFrames = frames

def createTile(firstPixel, image) :
    currentPixel = [firstPixel[0], firstPixel[1]]
    tile = Tile()
    replace = False
    for j in range(tileSize[1]) :
        for i in range(tileSize[0]) :
            data = image.getpixel((currentPixel[0], currentPixel[1]))
            if data[3] < 255 :
                data = (data[0], data[1], data[2], 255)
                replace = True
            tile.putpixel((i, j), data)
            currentPixel[0] += 1
        currentPixel[1] += 1
        currentPixel[0] = firstPixel[0]
    tile.generateHash()
    return tile, replace

def replaceTile(firstPixel, tile, image) :
    currentPixel = [firstPixel[0], firstPixel[1]]
    for j in range(tileSize[1]) :
        for i in range(tileSize[0]) :
            data = tile.getpixel((i, j))
            image.putpixel((currentPixel[0], currentPixel[1]), data)
            currentPixel[0] += 1
        currentPixel[1] += 1
        currentPixel[0] = firstPixel[0]
    return image

def writeTile(tile, out, firstPixel) :
    currentPixel = [firstPixel[0], firstPixel[1]]
    for i in range(tileSize[1]) :
        for j in range(tileSize[0]) :
            data = tile.getpixel((j, i))
            out.putpixel((currentPixel[0], currentPixel[1]), data)
            currentPixel[0] += 1
        currentPixel[1] += 1
        currentPixel[0] = firstPixel[0]

def isDuplicate(tile, hashes) :
    i = 0
    for hash in hashes :
        if hash == tile.hash :
            return True, i
        i += 1
    while i % outputWidth >= outputWidth - outputSpare :
        i += 1
        hashes.append(None)
    hashes.append(tile.hash)
    return False, i

def extractAnimations() :
    animationsImage = Image.open(pathIn + 'animations.png')
    animations = []
    print "loading animation frames . . ."
    tilesHigh = animationsImage.size[1] / tileSize[1]
    tilesWide = animationsImage.size[0] / tileSize[0]
    for j in range(tilesHigh) :
        percentDone = 100 / (tilesHigh / float(j+1))
        print str(percentDone)[0:5] + "%"
        row = []
        for i in range(tilesWide) :
            pixel = (i*tileSize[0], j*tileSize[1])
            if animationsImage.getpixel(pixel)[3] != 0 :
                tile = createTile(pixel, animationsImage)[0]
                row.append(tile)
        animations.append(AnimatedTile(row))
    return animations

def isAnimationFrame(tile) :
    for row in animatedTiles :
        for frame in row.knownFrames :
            if tile.hash == frame.hash :
                return row.knownFrames[0]
    return None

def extractTiles(filename, filetype) :
    map = re.sub("\.(png|jpg|gif)$", '', filename)
    fullImage = pathIn + filename
    outputImage = pathOut + map + "tiles" + filetype
    image = Image.open(fullImage).convert("RGBA")
    tiles = []
    tileOrder = []
    hashes = []
    currentPixel = [0,0]
    print "starting " + filename
    tilesHigh = image.size[1] / tileSize[1]
    tilesWide = image.size[0] / tileSize[0]
    for j in range(tilesHigh) :
        percentDone = 100 / (tilesHigh / float(j+1))
        print str(percentDone)[0:5] + "%"
        for i in range(tilesWide) :
            currentPixel = (i * tileSize[0], j * tileSize[1])
            if image.getpixel((currentPixel))[3] != 0 :
                tile, replace = createTile(currentPixel, image)
                af = isAnimationFrame(tile)
                if af != None :
                    tile = af
                    replace = True
                if replace :
                    replaceTile(currentPixel, tile, image)
                isDup, id = isDuplicate(tile, hashes)
                if not isDup :
                    tiles.append(tile)
                    tile.number = id
                tileOrder.append(id)
    outputSize = (tileSize[0] * outputWidth, tileSize[1] * (1 + len(tiles) / (outputWidth - outputSpare)))
    out = Image.new(image.mode, outputSize, (255, 255, 255, 0))
    i = 0
    j = 0
    for tile in tiles :
        writeTile(tile, out, (i*tileSize[0], j*tileSize[1]))
        i += 1
        if i == outputWidth - outputSpare + 1 :
            i = 0
            j += 1
    size = (out.size[0] * scale, out.size[1] * scale)
    print size
    print "saving. . ."
    out = out.resize(size, Image.NEAREST)
    out.save(outputImage)
    size = (image.size[0] * scale, image.size[1] * scale)
    image = image.resize(size, Image.NEAREST)
    image.save(pathOut + map + "noanim.png")

listing = os.listdir(pathIn)
for file in listing :
    m = re.search("\.(png|jpg|gif)$", file)
    if file == 'animations.png' :
        animatedTiles = extractAnimations()
    elif m != None :
extractTiles(file, m.group(0)) | 25.529703 | 103 | 0.651154 | 689 | 5,157 | 4.850508 | 0.174165 | 0.025135 | 0.035907 | 0.019749 | 0.258528 | 0.232795 | 0.186116 | 0.159785 | 0.159785 | 0.159785 | 0 | 0.029811 | 0.199922 | 5,157 | 202 | 104 | 25.529703 | 0.780175 | 0 | 0 | 0.196532 | 0 | 0 | 0.028693 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.023121 | null | null | 0.034682 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
daea143a9138eb46467d20a24e479bae7da4e02c | 5,333 | py | Python | ui/1.py | sarkarrajsingh1/Tensorflow-text-generator-with-GUI | 38cf5c9d80128272c5c8a9044ab59f47d87e03db | [
"MIT"
] | null | null | null | ui/1.py | sarkarrajsingh1/Tensorflow-text-generator-with-GUI | 38cf5c9d80128272c5c8a9044ab59f47d87e03db | [
"MIT"
] | null | null | null | ui/1.py | sarkarrajsingh1/Tensorflow-text-generator-with-GUI | 38cf5c9d80128272c5c8a9044ab59f47d87e03db | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file '1.ui'
#
# Created by: PyQt5 UI code generator 5.8.2
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(800, 600)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(230, 20, 331, 61))
        font = QtGui.QFont()
        font.setPointSize(15)
        font.setStrikeOut(False)
        self.label.setFont(font)
        self.label.setAlignment(QtCore.Qt.AlignCenter)
        self.label.setObjectName("label")
        self.label_2 = QtWidgets.QLabel(self.centralwidget)
        self.label_2.setGeometry(QtCore.QRect(200, 90, 381, 61))
        font = QtGui.QFont()
        font.setPointSize(15)
        font.setStrikeOut(False)
        self.label_2.setFont(font)
        self.label_2.setAlignment(QtCore.Qt.AlignCenter)
        self.label_2.setObjectName("label_2")
        self.label_3 = QtWidgets.QLabel(self.centralwidget)
        self.label_3.setGeometry(QtCore.QRect(130, 160, 531, 201))
        font = QtGui.QFont()
        font.setPointSize(15)
        font.setStrikeOut(False)
        self.label_3.setFont(font)
        self.label_3.setAlignment(QtCore.Qt.AlignCenter)
        self.label_3.setWordWrap(True)
        self.label_3.setObjectName("label_3")
        self.label_4 = QtWidgets.QLabel(self.centralwidget)
        self.label_4.setGeometry(QtCore.QRect(240, 380, 301, 21))
        font = QtGui.QFont()
        font.setPointSize(10)
        self.label_4.setFont(font)
        self.label_4.setAlignment(QtCore.Qt.AlignHCenter|QtCore.Qt.AlignTop)
        self.label_4.setWordWrap(True)
        self.label_4.setObjectName("label_4")
        self.label_5 = QtWidgets.QLabel(self.centralwidget)
        self.label_5.setGeometry(QtCore.QRect(240, 410, 301, 21))
        font = QtGui.QFont()
        font.setPointSize(10)
        self.label_5.setFont(font)
        self.label_5.setAlignment(QtCore.Qt.AlignHCenter|QtCore.Qt.AlignTop)
        self.label_5.setWordWrap(True)
        self.label_5.setObjectName("label_5")
        self.label_6 = QtWidgets.QLabel(self.centralwidget)
        self.label_6.setGeometry(QtCore.QRect(240, 440, 301, 21))
        font = QtGui.QFont()
        font.setPointSize(10)
        self.label_6.setFont(font)
        self.label_6.setAlignment(QtCore.Qt.AlignHCenter|QtCore.Qt.AlignTop)
        self.label_6.setWordWrap(True)
        self.label_6.setObjectName("label_6")
        self.label_7 = QtWidgets.QLabel(self.centralwidget)
        self.label_7.setGeometry(QtCore.QRect(240, 470, 301, 21))
        font = QtGui.QFont()
        font.setPointSize(10)
        self.label_7.setFont(font)
        self.label_7.setAlignment(QtCore.Qt.AlignHCenter|QtCore.Qt.AlignTop)
        self.label_7.setWordWrap(True)
        self.label_7.setObjectName("label_7")
        self.label_8 = QtWidgets.QLabel(self.centralwidget)
        self.label_8.setGeometry(QtCore.QRect(240, 500, 301, 21))
        font = QtGui.QFont()
        font.setPointSize(10)
        self.label_8.setFont(font)
        self.label_8.setAlignment(QtCore.Qt.AlignHCenter|QtCore.Qt.AlignTop)
        self.label_8.setWordWrap(True)
        self.label_8.setObjectName("label_8")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(340, 540, 93, 28))
        self.pushButton.setObjectName("pushButton")
        MainWindow.setCentralWidget(self.centralwidget)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "Welcome"))
        self.label.setText(_translate("MainWindow", "Welcome to TextGen V1.1"))
        self.label_2.setText(_translate("MainWindow", "The Next Gen ML Text Generator"))
        self.label_3.setText(_translate("MainWindow", "This Project is a proof of concept that machine learning can be used and is used for various purposes but at the same time require lots and lots of computing power this project is trained on Wikipedia Articles which were collected and tokenized by Metamind "))
        self.label_4.setText(_translate("MainWindow", "The Project is made by"))
        self.label_5.setText(_translate("MainWindow", "Sooraj Randhir Singh"))
        self.label_6.setText(_translate("MainWindow", "Sagar Verma"))
        self.label_7.setText(_translate("MainWindow", "Sajal Jain"))
        self.label_8.setText(_translate("MainWindow", "Saurabh Pradhan Cenation"))
        self.pushButton.setText(_translate("MainWindow", "Continue"))

if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
| 45.974138 | 315 | 0.689106 | 642 | 5,333 | 5.604361 | 0.257009 | 0.135075 | 0.058366 | 0.071151 | 0.320456 | 0.320456 | 0.195942 | 0.195942 | 0.195942 | 0.116732 | 0 | 0.042757 | 0.19745 | 5,333 | 115 | 316 | 46.373913 | 0.797897 | 0.032627 | 0 | 0.188119 | 1 | 0.009901 | 0.119588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019802 | false | 0 | 0.019802 | 0 | 0.049505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
daf6f9c7d30a34d3c94b6b8fe9ce4baebeac1d1a | 557 | py | Python | chapter05/selenium_search.py | Alchemy2011/spider | b98d1325fb15c05e2f377d324b52a963cbd281b7 | [
"Apache-2.0"
] | null | null | null | chapter05/selenium_search.py | Alchemy2011/spider | b98d1325fb15c05e2f377d324b52a963cbd281b7 | [
"Apache-2.0"
] | null | null | null | chapter05/selenium_search.py | Alchemy2011/spider | b98d1325fb15c05e2f377d324b52a963cbd281b7 | [
"Apache-2.0"
] | null | null | null | from selenium import webdriver
def main():
    driver = webdriver.Chrome()
    driver.get('http://127.0.0.1:8000/places/default/search')
    driver.find_element_by_id('search_term').send_keys('.')
    driver.execute_script("document.getElementById('page_size').options[1].text = '1000'")
    driver.find_element_by_id('search').click()
    driver.implicitly_wait(30)
    links = driver.find_elements_by_css_selector('#results a')
    countries = [link.text for link in links]
    driver.close()
    print countries

if __name__ == '__main__':
    main()
| 29.315789 | 90 | 0.70377 | 75 | 557 | 4.92 | 0.68 | 0.081301 | 0.092141 | 0.102981 | 0.146341 | 0.146341 | 0 | 0 | 0 | 0 | 0 | 0.035865 | 0.149013 | 557 | 18 | 91 | 30.944444 | 0.742616 | 0 | 0 | 0 | 0 | 0 | 0.251347 | 0.093357 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.071429 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dafbd40cad0eb57f596dddcee79aa818cb434531 | 3,650 | py | Python | src/car_fft.py | pradolucas/SSVEP-EEG-MLP-OLS | 13838520b361ae8b9a1eb1eb7310df87307b9e14 | [
"MIT"
] | null | null | null | src/car_fft.py | pradolucas/SSVEP-EEG-MLP-OLS | 13838520b361ae8b9a1eb1eb7310df87307b9e14 | [
"MIT"
] | null | null | null | src/car_fft.py | pradolucas/SSVEP-EEG-MLP-OLS | 13838520b361ae8b9a1eb1eb7310df87307b9e14 | [
"MIT"
] | null | null | null | import numpy as np
from scipy.fft import fft
def CAR(X, labels):
    N = X.shape
    N_classes = len(np.unique(labels))
    data10 = np.zeros((N[0], N[1], 1))
    data11 = np.zeros((N[0], N[1], 1))
    data12 = np.zeros((N[0], N[1], 1))
    data13 = np.zeros((N[0], N[1], 1))
    for trial in range(N[2]):  ## mean of each trial over all channels
        data = X[:,:,trial]
        X_med = np.mean(data, axis = 1).reshape(data.shape[0])
        data_car = data - X_med.reshape((X.shape[0],1))
        if (labels[trial] == 0):
            data10 = np.append(data10, data_car.reshape((data_car.shape[0], data_car.shape[1], 1)), axis = 2)  # append along the third (trial) dimension
        elif (labels[trial] == 1):
            data11 = np.append(data11, data_car.reshape((data_car.shape[0], data_car.shape[1], 1)), axis = 2)
        elif (labels[trial] == 2):
            data12 = np.append(data12, data_car.reshape((data_car.shape[0], data_car.shape[1], 1)), axis = 2)
        elif (labels[trial] == 3):
            data13 = np.append(data13, data_car.reshape((data_car.shape[0], data_car.shape[1], 1)), axis = 2)
    data10 = np.delete(data10, 0, axis=2)
    data11 = np.delete(data11, 0, axis=2)
    data12 = np.delete(data12, 0, axis=2)
    data13 = np.delete(data13, 0, axis=2)
    return data10,data11,data12,data13

def Ext_fft (N, fs, data10, data11, data12, data13, out_chans):
    """
    args:
        N -> x.shape
        fs -> sampling frequency
        dataX -> matrix (1536, 16, 12)
        out_chans -> channels to be excluded
    """
    N_class = 4; N_trials = 12; n_harmonicas = 2
    N_pos = ((N[0]/fs)*np.array([np.array([10,11,12,13])*i for i in range(1,n_harmonicas+1)])).ravel().astype(int)
    val_chans = np.array(range(1,17))
    val_chans = np.delete(val_chans, [np.where(val_chans == c) for c in out_chans])  # create array(1..16), then drop the channels listed in out_chans
    N_chans = val_chans.shape[0]
    n_features = N_pos.shape[0]
    F_dez = np.zeros((N_trials,N_chans*N_class*n_harmonicas))  # matrix of trials x (channels*classes)
    F_onze = np.zeros((N_trials,N_chans*N_class*n_harmonicas))
    F_doze = np.zeros((N_trials,N_chans*N_class*n_harmonicas))
    F_treze = np.zeros((N_trials,N_chans*N_class*n_harmonicas))
    for trial in range(0,N_trials):
        Chans_XY = 0
        for chans in val_chans-1:
            a = abs(fft(data10[:,chans,trial]))  # indexed at the N_pos positions for 10, 11, 12, 13 Hz
            b = abs(fft(data11[:,chans,trial]))
            c = abs(fft(data12[:,chans,trial]))
            d = abs(fft(data13[:,chans,trial]))
            F_dez[trial,Chans_XY+np.array(range(0,n_features))] = a[N_pos[range(0,n_features)]]
            F_onze[trial,Chans_XY+np.array(range(0,n_features))] = b[N_pos[range(0,n_features)]]
            F_doze[trial,Chans_XY+np.array(range(0,n_features))] = c[N_pos[range(0,n_features)]]
            F_treze[trial,Chans_XY+np.array(range(0,n_features))] = d[N_pos[range(0,n_features)]]
            Chans_XY += n_features
    return F_dez, F_onze, F_doze, F_treze

def CAR_FFT(X,labels, fs):
    # CAR filter
    d10, d11, d12, d13 = CAR(X,labels)
    # FFT feature extraction
    out_chans = []
    #out_chans = [1, 2, 3, 4, 10, 14, 15, 16]
    F_dez, F_onze, F_doze, F_treze = Ext_fft (*(X.shape, fs, d10, d11, d12, d13), out_chans = out_chans)
    F_all = np.vstack([F_dez, F_onze, F_doze, F_treze])
return F_all | 44.512195 | 149 | 0.605205 | 625 | 3,650 | 3.368 | 0.1792 | 0.013302 | 0.042755 | 0.057007 | 0.395249 | 0.395249 | 0.395249 | 0.372447 | 0.343943 | 0.279335 | 0 | 0.078551 | 0.236164 | 3,650 | 82 | 150 | 44.512195 | 0.676471 | 0.153973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 0 | 1 | 0.053571 | false | 0 | 0.035714 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
dafea2c686231896944f55b0c44273c242a20cee | 849 | py | Python | parser/osm_parser_mapking.py | justinxutianyu/comp90019 | 8814a9b83845a6fb79df2a6519ad58933adeda7b | [
"MIT"
] | 1 | 2018-03-28T05:38:33.000Z | 2018-03-28T05:38:33.000Z | parser/osm_parser_mapking.py | justinxutianyu/comp90019 | 8814a9b83845a6fb79df2a6519ad58933adeda7b | [
"MIT"
] | null | null | null | parser/osm_parser_mapking.py | justinxutianyu/comp90019 | 8814a9b83845a6fb79df2a6519ad58933adeda7b | [
"MIT"
] | 1 | 2020-09-20T12:16:24.000Z | 2020-09-20T12:16:24.000Z | import geog
import networkx as nx
import osmgraph
# By default any way with a highway tag will be loaded
g = osmgraph.parse_file('hawaii-latest.osm.bz2') # or .osm or .pbf
for n1, n2 in g.edges_iter():
    c1, c2 = osmgraph.tools.coordinates(g, (n1, n2))
    g[n1][n2]['length'] = geog.distance(c1, c2)
import random
start = random.choice(g.nodes())
end = random.choice(g.nodes())
path = nx.shortest_path(g, start, end, 'length')
coords = osmgraph.tools.coordinates(g, path)
# Find the sequence of roads to get from start to end
edge_names = [g[n1][n2].get('name') for n1, n2 in osmgraph.tools.pairwise(path)]
import itertools
names = [k for k, v in itertools.groupby(edge_names)]
print(names)
# Visualize the path using geojsonio.py
import geojsonio
import json
geojsonio.display(json.dumps({'type': 'LineString', 'coordinates': coords})) | 30.321429 | 80 | 0.718492 | 138 | 849 | 4.384058 | 0.521739 | 0.033058 | 0.024793 | 0.029752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020661 | 0.144876 | 849 | 28 | 81 | 30.321429 | 0.812672 | 0.186101 | 0 | 0 | 0 | 0 | 0.090247 | 0.030568 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.368421 | 0 | 0.368421 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
daffae650b4cac45f75d1dd51f783f6589ac9f78 | 727 | py | Python | StevenKyritsis_HW12.py | stevenkyritsis/NJIT-CS100 | 9466489c4de80610cadb5c1eac4d87fd81bfb6d3 | [
"MIT"
] | null | null | null | StevenKyritsis_HW12.py | stevenkyritsis/NJIT-CS100 | 9466489c4de80610cadb5c1eac4d87fd81bfb6d3 | [
"MIT"
] | null | null | null | StevenKyritsis_HW12.py | stevenkyritsis/NJIT-CS100 | 9466489c4de80610cadb5c1eac4d87fd81bfb6d3 | [
"MIT"
] | null | null | null | '''
Steven Kyritsis
CS100-031 Fall 2021
HW12 December 10, 2021
'''
#1
def safeOpen(inFile):
    try:
        file = open(inFile)
        return file
    except:
        return None

#2
def safeFloat(inFloat):
    try:
        newFloat = float(inFloat)
        return newFloat
    except ValueError:
        return 0.0

#3
def averageSpeed():
    count = 0
    while (count < 2):
        inFile = input('Please input a file: ')
        file = safeOpen(inFile)
        if file == None:
            count += 1
        else:
            break
    contents = file.read()
    lst = contents.split()
    accum = 0
    for item in lst:
        if safeFloat(item) > 2.0:
            accum += safeFloat(item)
    avg = accum / len(lst)
return avg | 18.175 | 47 | 0.543329 | 86 | 727 | 4.593023 | 0.534884 | 0.070886 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059957 | 0.357634 | 727 | 40 | 48 | 18.175 | 0.785867 | 0.004127 | 0 | 0.071429 | 0 | 0 | 0.032012 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
daffdb9455aa48dcd12cd0e1505bc6d5e7a7d07f | 1,916 | py | Python | examples/ball-example/main.py | zenvarlab/pyobjus | 7aee8c3c8a0f6a595813eeee9dd7555d15e22791 | [
"MIT"
] | null | null | null | examples/ball-example/main.py | zenvarlab/pyobjus | 7aee8c3c8a0f6a595813eeee9dd7555d15e22791 | [
"MIT"
] | null | null | null | examples/ball-example/main.py | zenvarlab/pyobjus | 7aee8c3c8a0f6a595813eeee9dd7555d15e22791 | [
"MIT"
] | null | null | null | from random import random
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.properties import NumericProperty, ReferenceListProperty, ObjectProperty
from kivy.vector import Vector
from kivy.clock import Clock
from kivy.graphics import Color
from pyobjus import autoclass
class Ball(Widget):
    velocity_x = NumericProperty(0)
    velocity_y = NumericProperty(0)
    h = NumericProperty(0)
    velocity = ReferenceListProperty(velocity_x, velocity_y)

    def move(self):
        self.pos = Vector(*self.velocity) + self.pos

class PyobjusGame(Widget):
    ball = ObjectProperty(None)
    screen = ObjectProperty(autoclass('UIScreen').mainScreen())
    bridge = ObjectProperty(autoclass('bridge').alloc().init())
    sensitivity = ObjectProperty(50)
    br_slider = ObjectProperty(None)

    def __init__(self, *args, **kwargs):
        super(PyobjusGame, self).__init__()
        self.bridge.startAccelerometer()

    def __dealloc__(self, *args, **kwargs):
        self.bridge.stopAccelerometer()
        super(PyobjusGame, self).__dealloc__()

    def reset_ball_pos(self):
        self.ball.pos = self.width / 2, self.height / 2

    def on_bright_slider_change(self):
        self.screen.brightness = self.br_slider.value

    def update(self, dt):
        self.ball.move()
        self.ball.velocity_x = self.bridge.ac_x * self.sensitivity
        self.ball.velocity_y = self.bridge.ac_y * self.sensitivity
        if (self.ball.y < 0) or (self.ball.top >= self.height):
            self.reset_ball_pos()
            self.ball.h = random()
        if (self.ball.x < 0) or (self.ball.right >= self.width):
            self.reset_ball_pos()
            self.ball.h = random()

class PyobjusBallApp(App):
    def build(self):
        game = PyobjusGame()
        Clock.schedule_interval(game.update, 1.0/60.0)
        return game

if __name__ == '__main__':
    PyobjusBallApp().run()
| 29.030303 | 82 | 0.673278 | 235 | 1,916 | 5.302128 | 0.310638 | 0.064205 | 0.035313 | 0.038523 | 0.049759 | 0.049759 | 0.049759 | 0.049759 | 0 | 0 | 0 | 0.00929 | 0.213466 | 1,916 | 65 | 83 | 29.476923 | 0.817518 | 0 | 0 | 0.083333 | 0 | 0 | 0.011482 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.145833 | false | 0 | 0.166667 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
970f6482050bf709cb922751a75043b07cb64396 | 167 | py | Python | ABC151-200/ABC188/abc188_c.py | billyio/atcoder | 9d16765f91f28deeb7328fcc6c19541ee790941f | [
"MIT"
] | 1 | 2021-02-01T08:48:07.000Z | 2021-02-01T08:48:07.000Z | ABC151-200/ABC188/abc188_c.py | billyio/atcoder | 9d16765f91f28deeb7328fcc6c19541ee790941f | [
"MIT"
] | null | null | null | ABC151-200/ABC188/abc188_c.py | billyio/atcoder | 9d16765f91f28deeb7328fcc6c19541ee790941f | [
"MIT"
] | null | null | null | # ac
N = int(input())
A = list(map(int,input().split()))
mid = int(2**N/2)
left, right = A[:mid], A[mid:]
second = min(max(left), max(right))
print(A.index(second)+1) | 20.875 | 35 | 0.598802 | 32 | 167 | 3.125 | 0.5625 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020548 | 0.125749 | 167 | 8 | 36 | 20.875 | 0.664384 | 0.011976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
971e11a191a2c90f19801ef91e07b01ad3290c0c | 466 | py | Python | pytoast/decorators/step.py | daniloster/pytoast | b0ceb02134aa4249888faeb8b5617e562aba6002 | [
"MIT"
] | null | null | null | pytoast/decorators/step.py | daniloster/pytoast | b0ceb02134aa4249888faeb8b5617e562aba6002 | [
"MIT"
] | null | null | null | pytoast/decorators/step.py | daniloster/pytoast | b0ceb02134aa4249888faeb8b5617e562aba6002 | [
"MIT"
] | null | null | null | from pytoast import output
steps = []
def step(expression=None):
    global steps
    if not expression:
        raise RuntimeError('A step must have a match expression')
    def decorator(f):
        steps.append((expression, f))
    return decorator

def collect_steps(runner):
    for (expression, f) in steps:
        output.write('* adding step definition: {} '.format(expression))
        runner.add_step_definition(expression, f)
| 21.181818 | 65 | 0.637339 | 54 | 466 | 5.444444 | 0.574074 | 0.112245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266094 | 466 | 21 | 66 | 22.190476 | 0.859649 | 0 | 0 | 0 | 0 | 0 | 0.075107 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9727e5083b9c6fda2a8ba98b81b03990d001d425 | 987 | py | Python | code/permuted_matrices/sol_559.py | bhavinjawade/project-euler-solutions | 56bf6a282730ed4b9b875fa081cf4509d9939d98 | [
"Apache-2.0"
] | 2 | 2020-07-16T08:16:32.000Z | 2020-10-01T07:16:48.000Z | code/permuted_matrices/sol_559.py | Psingh12354/project-euler-solutions | 56bf6a282730ed4b9b875fa081cf4509d9939d98 | [
"Apache-2.0"
] | null | null | null | code/permuted_matrices/sol_559.py | Psingh12354/project-euler-solutions | 56bf6a282730ed4b9b875fa081cf4509d9939d98 | [
"Apache-2.0"
] | 1 | 2021-05-07T18:06:08.000Z | 2021-05-07T18:06:08.000Z |
# -*- coding: utf-8 -*-
'''
File name: code\permuted_matrices\sol_559.py
Author: Vaidic Joshi
Date created: Oct 20, 2018
Python Version: 3.x
'''
# Solution to Project Euler Problem #559 :: Permuted Matrices
#
# For more information see:
# https://projecteuler.net/problem=559
# Problem Statement
'''
An ascent of a column j in a matrix occurs if the value of column j is smaller than the value of column j+1 in all rows.
Let P(k, r, n) be the number of r x n matrices with the following properties:
The rows are permutations of {1, 2, 3, ... , n}.
Numbering the first column as 1, a column ascent occurs at column j<n if and only if j is not a multiple of k.
For example, P(1, 2, 3) = 19, P(2, 4, 6) = 65508751 and P(7, 5, 30) mod 1000000123 = 161858102.
Let Q(n) =$\, \displaystyle \sum_{k=1}^n\,$ P(k, n, n).
For example, Q(5) = 21879393751 and Q(50) mod 1000000123 = 819573537.
Find Q(50000) mod 1000000123.
'''
# Solution
# Solution Approach
'''
'''
| 26.675676 | 120 | 0.668693 | 172 | 987 | 3.819767 | 0.552326 | 0.042618 | 0.030441 | 0.048706 | 0.05175 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140665 | 0.2077 | 987 | 36 | 121 | 27.416667 | 0.699488 | 0.309017 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9728a7f60627eb144fdd226ccd9b8be7db14bd8e | 207 | py | Python | 02 - Curso Em Video/Aula 14/E - 064.py | GabrielTrentino/Python_Basico | f13f6448c275c14896337d2018b04cbf5a54efd3 | [
"MIT"
] | null | null | null | 02 - Curso Em Video/Aula 14/E - 064.py | GabrielTrentino/Python_Basico | f13f6448c275c14896337d2018b04cbf5a54efd3 | [
"MIT"
] | null | null | null | 02 - Curso Em Video/Aula 14/E - 064.py | GabrielTrentino/Python_Basico | f13f6448c275c14896337d2018b04cbf5a54efd3 | [
"MIT"
] | null | null | null | soma = 0
valor = 0
cont = -1
while valor != 999:
soma += valor
cont += 1
    valor = int(input('Enter a value: [999 to stop] '))
print('The sum of {} terms equals {}'.format(cont, soma))

# ---- File: tests/scale_test.py (repo: StackStorm/search, license: Apache-2.0) ----
from argparse import ArgumentParser
from mock import MagicMock
from multiprocessing import Process
from search.datasource.session import DBInfo
from search.worker.computenode import ComputeNodeHandler
from search.worker.instance import InstanceHandler
from tests import scale_gen
from tests import faker
import subprocess
import time
import os
import random
#config = ConfigParser()
#config.read('../config/config.ini')
#logUtils.setup_logging(config)
# class ScaleTest(unittest.TestCase):
#
# def test_load_scale_data(self):
# self.instance.engine = scale_gen.ScaleDataProvider(InstanceHandlerScaleTest.data_files)
# self.instance.on_start()
SOLR_START_WAIT = 5
def _gen_random_memory(_):
return { 'used': random.randint(1, 32768) }
def _is_script_available(script, pass_exception=False):
    # Narrow the original bare except: only a missing/unreadable file should
    # be tolerated, not e.g. KeyboardInterrupt.
    try:
        with open(script):
            return True
    except IOError:
        if pass_exception:
            print '%s not found.' % script
        else:
            raise
    return False
def _start_solr(solr_scripts_location):
# kill old solr instance
kill_script = '%s/%s' % (solr_scripts_location, 'kill.sh')
if _is_script_available(kill_script):
subprocess.call(kill_script, shell=True)
# start a clean solr instance
dropindex_script = '%s/%s' % (solr_scripts_location, 'dropindex.sh')
if _is_script_available(dropindex_script):
subprocess.call(dropindex_script, shell=True)
# start a new instance of solr
run_script = '%s/%s' % (solr_scripts_location, 'run.sh')
if _is_script_available(run_script, True):
dev_null = open(os.devnull, 'w')
subprocess.Popen(run_script, stdout=dev_null, stderr=subprocess.STDOUT, shell=True)
#subprocess.Popen(run_script, shell=True)
def _run_computenode_test(computenode_files):
print 'Adding computenodes to Solr.'
start_time = time.time()
solr_url = 'http://localhost:8983/solr/'
db_info = DBInfo(host='fake', port='123', creds='foo:bar')
ampq_info = None
libvirt_info = {'user':'fake'}
computenode_handler = ComputeNodeHandler(solr_url, db_info, ampq_info, libvirt_info)
computenode_handler.engine = scale_gen.ScaleDataProvider(computenode_files)
computenode_handler._get_host_memory = MagicMock(side_effect=_gen_random_memory)
computenode_handler.on_start()
print 'computenodes added in %f.' % (time.time() - start_time)
def _run_instance_test(instance_files):
print 'Adding instances to Solr.'
start_time = time.time()
solr_url = 'http://localhost:8983/solr/'
db_info = DBInfo(host='fake', port='123', creds='foo:bar')
ampq_info = None
libvirt_info = {'user':'fake'}
instance_handler = InstanceHandler(solr_url, db_info, ampq_info, libvirt_info)
instance_handler._get_users = MagicMock(return_value = dict(zip(['u'+str(i) for i in range(0, 10)], faker.username(10))))
instance_handler._get_projects = MagicMock(return_value = dict(zip(['p'+str(i) for i in range(0, 10)], faker.projectname(10))))
instance_handler._get_images = MagicMock(return_value = dict(zip(['i'+str(i) for i in range(0, 10)], faker.imagename(10))))
instance_handler.engine = scale_gen.ScaleDataProvider(instance_files)
instance_handler.on_start()
print 'instances added in %f.' % (time.time() - start_time)
def main():
parser = ArgumentParser()
parser.add_argument('-l', '--location', help='location of the file.')
parser.add_argument('-n', '--number', help='number of resources.')
parser.add_argument('-s', '--solr', help='location of solr scripts like kill & run.')
args = parser.parse_args()
# generate scale data
computenode_files, instance_files = scale_gen.generate_data(args.location, int(args.number))
_start_solr(args.solr)
# give solr sometime to start.
# TODO(manask) - Implement a deterministic way.
time.sleep(SOLR_START_WAIT)
processes = []
p = Process(target=_run_computenode_test, args=(computenode_files,)) # without the trailing , a list with 1 element is not passed down as a list
p.start()
processes.append(p)
p = Process(target=_run_instance_test, args=(instance_files,))
p.start()
processes.append(p)
while len(processes) > 0:
processes.pop().join()
print 'Exit.'
if __name__ == '__main__':
    main()

# ---- File: app/Tools/auth.py (repo: andrerclaudio/agnes, license: MIT) ----
# Built-in modules
import configparser
import logging
import os
from functools import wraps
# from cryptography.fernet import Fernet
# from werkzeug.security import generate_password_hash, check_password_hash
from flask import jsonify, request
def authorization(f):
@wraps(f)
def decorated(*args, **kwargs):
if 'CLOUD' not in os.environ:
config = configparser.ConfigParser()
config.read_file(open('config.ini'))
# Agnes API key
# key = config['AGNES_KEY']['key']
# enc = Fernet(key)
# Agnes API secret
secret = config['AGNES_SECRET']['secret']
# token = f.encrypt(secret)
else:
# Agnes API key
# key = os.environ['AGNES_KEY']
# enc = Fernet(key)
# Agnes API secret
secret = os.environ['AGNES_SECRET']
# token = f.encrypt(secret)
try:
header = str(request.headers.get('Authorization')).split(' ')
if len(header) > 1:
token = header[1]
else:
raise Exception('API token not valid!')
if not token or token != secret:
raise Exception('API token not valid!')
else:
return f(*args, **kwargs)
except Exception as e:
logging.exception(e, exc_info=False)
message = {'message': 'Invalid Credentials.'}
# Making the message looks good
resp = jsonify(message)
# Sending OK response
resp.status_code = 401
return resp
return decorated

# ---- File: Apps/serializers.py (repo: gordiig/Un_RSOI_Curs_Auth, license: MIT) ----
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
from Apps.models import App
class AppSerializer(serializers.ModelSerializer):
"""
    Application serializer
"""
id = serializers.CharField(required=True, allow_null=False, allow_blank=False,
validators=[UniqueValidator(queryset=App.objects.all())])
secret = serializers.CharField(required=True, allow_null=False, allow_blank=False, write_only=True)
created_by = serializers.IntegerField(read_only=True, source='created_by.id')
is_internal = serializers.BooleanField(read_only=True)
class Meta:
model = App
fields = [
'id',
'secret',
'created_by',
'is_internal',
]
def create(self, validated_data):
new = App.objects.create(id=validated_data['id'], secret=validated_data['secret'])
return new
class AppForTokenSerializer(serializers.ModelSerializer):
"""
    Serializer for the token
"""
id = serializers.CharField(required=True, allow_null=False, allow_blank=False)
secret = serializers.CharField(required=True, allow_null=False, allow_blank=False)
class Meta:
model = App
fields = [
'id',
'secret',
]

# ---- File: _MOM/_Meta/__init__.py (repo: Tapyr/tapyr, license: BSD-3-Clause) ----
# -*- coding: utf-8 -*-
# Copyright (C) 2009-2010 Mag. Christian Tanzer. All rights reserved
# Glasauergasse 32, A--1130 Wien, Austria. tanzer@swing.co.at
# ****************************************************************************
# This package is part of the package _MOM.
#
# This module is licensed under the terms of the BSD 3-Clause License
# <http://www.c-tanzer.at/license/bsd_3c.html>.
# ****************************************************************************
#
#++
# Name
# MOM.Meta.__init__
#
# Purpose
# Initialize package `MOM.Meta`
#
# Revision Dates
# 17-Sep-2009 (CT) Creation (factored from TOM.Meta)
# ««revision-date»»···
#--
from _TFL.Package_Namespace import Derived_Package_Namespace
from _TFL import TFL
from _MOM import MOM
import _TFL._Meta
Meta = Derived_Package_Namespace (parent = TFL.Meta)
MOM._Export ("Meta")
del Derived_Package_Namespace
__doc__ = """
.. moduleauthor:: Christian Tanzer <tanzer@swing.co.at>
`MOM.Meta` provides meta classes for the definition and
implementation of essential object models (see :mod:`MOM<_MOM>`).
"""
### __END__ MOM.Meta.__init__

# ---- File: Knots/KnotCalc.py (repo: nborggren/Aleph, license: CC0-1.0) ----
from knots import *
from knot_analysis import Draw_Knot
from ROOT import TLine, TCanvas
globalvars = {} # We will store the calculator's variables here
def lookup(map, name):
for x,v in map:
if x==name: return v
if name not in globalvars.keys(): print 'Undefined:', name
return globalvars.get(name, 0)
from string import *
import re
from yappsrt import *
class CalculatorScanner(Scanner):
patterns = [
('"in"', re.compile('in')),
('"="', re.compile('=')),
('"let"', re.compile('let')),
('r"\\)"', re.compile('\\)')),
('"\\("', re.compile('\(')),
('"/"', re.compile('/')),
('"[*]"', re.compile('[*]')),
('"-"', re.compile('-')),
('"[+]"', re.compile('[+]')),
('"set"', re.compile('set')),
('[ \r\t\n]+', re.compile('[ \r\t\n]+')),
('END', re.compile('$')),
('NUM', re.compile('[0-9]+')),
('VAR', re.compile('[a-zA-Z_]+')),
]
def __init__(self, str):
Scanner.__init__(self,None,['[ \r\t\n]+'],str)
class Calculator(Parser):
def goal(self):
_token_ = self._peek('"set"', 'NUM', 'VAR', '"\\("', '"let"')
if _token_ != '"set"':
expr = self.expr([])
END = self._scan('END')
expr = tie(expr)
expr = scale(expr,.85)
knot = Draw_Knot(expr)
print '=', expr
return expr
else:# == '"set"'
self._scan('"set"')
VAR = self._scan('VAR')
expr = self.expr([])
END = self._scan('END')
globalvars[VAR] = expr
print VAR, '=', expr
return expr
def expr(self, V):
factor = self.factor(V)
n = factor
while self._peek('"[+]"', '"-"', 'END', 'r"\\)"', '"in"', '"[*]"', '"/"') in ['"[+]"', '"-"']:
_token_ = self._peek('"[+]"', '"-"')
if _token_ == '"[+]"':
self._scan('"[+]"')
factor = self.factor(V)
n = add(n,factor)
else:# == '"-"'
self._scan('"-"')
factor = self.factor(V)
n = n-factor
return n
def factor(self, V):
term = self.term(V)
v = term
while self._peek('"[*]"', '"/"', '"[+]"', '"-"', 'END', 'r"\\)"', '"in"') in ['"[*]"', '"/"']:
_token_ = self._peek('"[*]"', '"/"')
if _token_ == '"[*]"':
self._scan('"[*]"')
term = self.term(V)
v = multiply(v,term)
else:# == '"/"'
self._scan('"/"')
term = self.term(V)
v = v/term
return v
def term(self, V):
_token_ = self._peek('NUM', 'VAR', '"\\("', '"let"')
if _token_ == 'NUM':
NUM = self._scan('NUM')
if atoi(NUM)==1:
return tangle(0,0)
return sum([tangle(0,0) for i in range(atoi(NUM))])
elif _token_ == 'VAR':
VAR = self._scan('VAR')
return lookup(V, VAR)
elif _token_ == '"\\("':
self._scan('"\\("')
expr = self.expr(V)
self._scan('r"\\)"')
return expr
else:# == '"let"'
self._scan('"let"')
VAR = self._scan('VAR')
self._scan('"="')
expr = self.expr(V)
V = [(VAR, expr)] + V
self._scan('"in"')
expr = self.expr(V)
return expr
def parse(rule, text):
P = Calculator(CalculatorScanner(text))
return wrap_error_reporter(P, rule)
if __name__=='__main__':
print '?!'*21
print 'Welcome to the Knot Calculator'
print 'Nathan Borggren'
print 'SUNY Stony Brook, Physics'
print 'June 2009'
print '?!'*21
# We could have put this loop into the parser, by making the
# `goal' rule use (expr | set var expr)*, but by putting the
# loop into Python code, we can make it interactive (i.e., enter
# one expression, get the result, enter another expression, etc.)
while 1:
try: s = raw_input('>>> ')
except EOFError: break
if not strip(s): break
parse('goal', s)
print 'Bye.'

# ---- File: hw3/test.py (repo: ZhenghaoFei/DEEPRLHW, license: MIT) ----
import tensorflow as tf
m = tf.Variable([[2, 3], [1, 2], [2, 2], [4, 3]])
# m = [['a', 'b'], ['c', 'd']]
idx = tf.range(0, 32)
a = tf.range(0, 32)
act_idx = tf.stack([idx, a], axis=1)
k = act_idx
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
print(sess.run(k))

# ---- File: src/jk_mediawiki/lsfile/MediaWikiLocalSettingsArrayAppend.py (repo: jkpubsrc/python-module-jk-mediawiki, license: Apache-1.1) ----
import os
from jk_utils import *
from jk_utils.tokenizer import *
from ..impl.lang_support_php import *
class MediaWikiLocalSettingsArrayAppend(object):
# ================================================================================================================================
# ==== Constructor Methods
def __init__(self, changedFlag:ChangedFlag, lineNo:int, colNo:int, bIsActive:bool, varName:str, value):
assert isinstance(changedFlag, ChangedFlag)
assert isinstance(lineNo, int)
assert isinstance(colNo, int)
assert isinstance(bIsActive, bool)
assert isinstance(varName, str)
assert isinstance(value, TypedValue)
self.__changedFlag = changedFlag
self.__lineNo = lineNo
self.__colNo = colNo
self.__bIsActive = bIsActive
self.__varName = varName
self.__value = value
#
# ================================================================================================================================
# ==== Properties
@property
def lineNo(self) -> int:
return self.__lineNo
#
@property
def colNo(self) -> int:
return self.__colNo
#
@property
def varName(self) -> str:
return self.__varName
#
@property
def value(self):
return self.__value
#
@property
def isActive(self) -> bool:
return self.__bIsActive
#
@property
def isCommentedOut(self) -> bool:
return not self.__bIsActive
#
# ================================================================================================================================
# ==== Methods
def setValue(self, value):
assert isinstance(value, TypedValue)
self.__value = value
self.__changedFlag.setChanged(True)
#
def toPHP(self):
ret = "" if self.__bIsActive else "#=# "
ret += "$" + self.__varName
ret += "[] = "
ret += self.__value.toPHP()
ret += ";"
return ret
#
def __str__(self):
return self.toPHP()
#
def __repr__(self):
return self.toPHP()
#
def activate(self):
if not self.__bIsActive:
self.__bIsActive = True
self.__changedFlag.setChanged(True)
#
def deactivate(self):
if self.__bIsActive:
self.__bIsActive = False
self.__changedFlag.setChanged(True)
#
# ================================================================================================================================
# ==== Static Methods
@staticmethod
def parseFromDict(changedFlag:ChangedFlag, dataMap:dict):
assert isinstance(changedFlag, ChangedFlag)
assert isinstance(dataMap, dict)
lineNo = dataMap["lineNo"]
colNo = dataMap["colNo"]
bIsActive = dataMap["active"]
varName = dataMap["varName"]
varType = dataMap["varType"]
assert varType == "value"
value = dataMap["value"]
assert isinstance(value, TypedValue)
return MediaWikiLocalSettingsArrayAppend(changedFlag, lineNo, colNo, bIsActive, varName, value)
#
#

# ---- File: src/reference_book/migrations/0001_initial.py (repo: zmiterpimenau/PiLib, license: Apache-2.0) ----
# Generated by Django 3.1.2 on 2020-10-20 22:00
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Author',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('author', models.CharField(max_length=50, verbose_name='Author name')),
],
),
migrations.CreateModel(
name='Genre',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('genres', models.CharField(max_length=50, verbose_name='Book genre')),
],
),
migrations.CreateModel(
name='Publisher',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('publisher', models.CharField(max_length=50, verbose_name='Publisher')),
],
),
migrations.CreateModel(
name='Serie',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('series', models.CharField(max_length=50, verbose_name='Series number')),
],
),
]

# ---- File: jobs/migrations/0002_auto_20201219_0911.py (repo: Platz-Work/platzi-work-backend, license: MIT) ----
# Generated by Django 3.1.4 on 2020-12-19 14:11
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('jobs', '0001_initial'),
]
operations = [
migrations.AlterModelOptions(
name='category',
options={'ordering': ['-created_at'], 'verbose_name_plural': 'Categories'},
),
migrations.AlterField(
model_name='profile',
name='category',
field=models.ForeignKey(blank=True, error_messages={'blank': "The category_id field can't be blank.", 'invalid': 'The category_id field is invalid.', 'null': "The category_id field can't be null."}, null=True, on_delete=django.db.models.deletion.PROTECT, to='jobs.category'),
),
migrations.AlterField(
model_name='profile',
name='salary_end',
field=models.PositiveIntegerField(blank=True, error_messages={'blank': "The salary_end field can't be blank.", 'invalid': 'The salary_end field is invalid.', 'null': "The salary_end field can't be null."}, null=True, verbose_name='Salary up'),
),
migrations.AlterField(
model_name='profile',
name='salary_start',
field=models.PositiveIntegerField(blank=True, error_messages={'blank': "The salary_start field can't be blank.", 'invalid': 'The salary_start field is invalid.', 'null': "The salary_start field can't be null."}, null=True, verbose_name='Salary from'),
),
]

# ---- File: tests/experiment_client_test.py (repo: mljar/mljar-api-python, license: Apache-2.0) ----
'''
ExperimentClient tests.
'''
import os
import unittest
import pandas as pd
import time
from mljar.client.project import ProjectClient
from mljar.client.dataset import DatasetClient
from mljar.client.experiment import ExperimentClient
from .project_based_test import ProjectBasedTest, get_postfix
class ExperimentClientTest(ProjectBasedTest):
def setUp(self):
proj_title = 'Test project-01'+get_postfix()
proj_task = 'bin_class'
self.expt_title = 'Test experiment-01'
self.validation_kfolds = 5
self.validation_shuffle = True
self.validation_stratify = True
self.validation_train_split = None
self.algorithms = ['xgb']
self.metric = 'logloss'
self.tuning_mode = 'Normal'
self.time_constraint = 1
self.create_enseble = False
# setup project
self.project_client = ProjectClient()
self.project = self.project_client.create_project(title = proj_title, task = proj_task)
# add training data
df = pd.read_csv('tests/data/test_1.csv')
cols = ['sepal length', 'sepal width', 'petal length', 'petal width']
target = 'class'
dc = DatasetClient(self.project.hid)
self.dataset = dc.add_dataset_if_not_exists(df[cols], df[target])
def tearDown(self):
# wait before clean, to have time to initialize models
time.sleep(60)
# clean
self.project_client.delete_project(self.project.hid)
def test_create_with_kfold_cv(self):
        # Create experiment test with k-fold CV.
# add experiment
ec = ExperimentClient(self.project.hid)
self.assertNotEqual(ec, None)
# there should be none experiments
experiments = ec.get_experiments()
self.assertEqual(experiments, [])
# create new experiment
experiment = ec.add_experiment_if_not_exists(self.dataset, None, self.expt_title, self.project.task,
self.validation_kfolds, self.validation_shuffle,
self.validation_stratify, self.validation_train_split,
self.algorithms, self.metric,
self.tuning_mode, self.time_constraint, self.create_enseble)
self.assertNotEqual(experiment, None)
self.assertEqual(experiment.title, self.expt_title)
self.assertEqual(experiment.validation_scheme, "5-fold CV, Shuffle, Stratify")
self.assertEqual(experiment.metric, self.metric)
# get all experiments, should be only one
experiments = ec.get_experiments()
self.assertEqual(len(experiments), 1)
# get experiment by hid, there should be the same
experiment_2 = ec.get_experiment(experiment.hid)
self.assertEqual(experiment_2.hid, experiment.hid)
self.assertEqual(experiment_2.title, experiment.title)
self.assertEqual(experiment_2.metric, experiment.metric)
self.assertEqual(experiment_2.validation_scheme, experiment.validation_scheme)
self.assertTrue(experiment.equal(experiment_2))
# test __str__ method
self.assertTrue('id' in str(experiment_2))
self.assertTrue('title' in str(experiment_2))
self.assertTrue('metric' in str(experiment_2))
self.assertTrue('validation' in str(experiment_2))
def test_create_with_train_split(self):
        # Create experiment with validation by train split.
# add experiment
ec = ExperimentClient(self.project.hid)
self.assertNotEqual(ec, None)
# there should be none experiments
experiments = ec.get_experiments()
self.assertEqual(experiments, [])
# create new experiment
experiment = ec.add_experiment_if_not_exists(self.dataset, None, self.expt_title, self.project.task,
self.validation_kfolds, self.validation_shuffle,
self.validation_stratify, 0.72,
self.algorithms, self.metric,
self.tuning_mode, self.time_constraint, self.create_enseble)
self.assertNotEqual(experiment, None)
self.assertEqual(experiment.title, self.expt_title)
self.assertEqual(experiment.validation_scheme, "Split 72/28, Shuffle, Stratify")
def test_create_with_validation_dataset(self):
        # Create experiment with a validation dataset.
# add vald dataset
cols = ['sepal length', 'sepal width', 'petal length', 'petal width']
target = 'class'
df = pd.read_csv('tests/data/test_1_vald.csv')
dc = DatasetClient(self.project.hid)
vald_dataset = dc.add_dataset_if_not_exists(df[cols], df[target])
# add experiment
ec = ExperimentClient(self.project.hid)
self.assertNotEqual(ec, None)
# there should be none experiments
experiments = ec.get_experiments()
self.assertEqual(experiments, [])
# create new experiment
experiment = ec.add_experiment_if_not_exists(self.dataset, vald_dataset, self.expt_title, self.project.task,
self.validation_kfolds, self.validation_shuffle,
self.validation_stratify, 0.72,
self.algorithms, self.metric,
self.tuning_mode, self.time_constraint, self.create_enseble)
self.assertNotEqual(experiment, None)
self.assertEqual(experiment.title, self.expt_title)
self.assertEqual(experiment.validation_scheme, "With dataset")
def test_create_if_exists(self):
        # Create experiment when it already exists in the project.
# add experiment
ec = ExperimentClient(self.project.hid)
self.assertNotEqual(ec, None)
# there should be none experiments
experiments = ec.get_experiments()
self.assertEqual(experiments, [])
# create new experiment
experiment = ec.add_experiment_if_not_exists(self.dataset, None, self.expt_title, self.project.task,
self.validation_kfolds, self.validation_shuffle,
self.validation_stratify, self.validation_train_split,
self.algorithms, self.metric,
self.tuning_mode, self.time_constraint, self.create_enseble)
self.assertNotEqual(experiment, None)
# get all experiments, should be only one
experiments = ec.get_experiments()
self.assertEqual(len(experiments), 1)
# try to create the same experiment
experiment_2 = ec.add_experiment_if_not_exists(self.dataset, None, self.expt_title, self.project.task,
self.validation_kfolds, self.validation_shuffle,
self.validation_stratify, self.validation_train_split,
self.algorithms, self.metric,
self.tuning_mode, self.time_constraint, self.create_enseble)
self.assertNotEqual(experiment, None)
# get all experiments, should be only one
experiments = ec.get_experiments()
self.assertEqual(len(experiments), 1)
# both should be the same
self.assertEqual(experiment_2.hid, experiment.hid)
self.assertEqual(experiment_2.title, experiment.title)
self.assertEqual(experiment_2.metric, experiment.metric)
self.assertEqual(experiment_2.validation_scheme, experiment.validation_scheme)
self.assertTrue(experiment.equal(experiment_2))
if __name__ == "__main__":
unittest.main()

# File: examples/simpletest.py (repo: bbaumg/Python_TSL2561, license: MIT)
import time
import TSL2561
chip = TSL2561.TSL2561()
while True:
chip.power_on()
print("Raw Channel 0 = " + str(chip.read_channel0()))
print("Raw Channel 1 = " + str(chip.read_channel1()))
print("Lux Channel 0 = " + str(chip.calculate_lux(chip.read_channel0())))
print("Lux Channel 1 = " + str(chip.calculate_lux(chip.read_channel1())))
print("Full Spectrum Lux = " + str(chip.get_full_lux()))
print("IR Spectrum Lux = " + str(chip.get_ir_lux()))
print("Visible Lux = " + str(chip.get_visible_lux()))
print("")
chip.power_off()
time.sleep(1)

# File: microcosm_daemon/tests/test_state_machine.py (repo: globality-corp/microcosm-daemon, license: Apache-2.0)
"""
State machine tests.
"""
from unittest.mock import patch
from hamcrest import (
assert_that,
calling,
equal_to,
is_,
raises,
)
from microcosm.api import create_object_graph
from microcosm_daemon.error_policy import FatalError
from microcosm_daemon.sleep_policy import SleepNow
from microcosm_daemon.state_machine import StateMachine
def test_step_to_same_func():
"""
Test taking a step to the same function.
"""
graph = create_object_graph("example", testing=True)
def func(graph):
pass
state_machine = StateMachine(graph, initial_state=func)
next_func = state_machine.step()
assert_that(next_func, is_(equal_to(func)))
def test_step_to_different_func():
"""
Test taking a step to a different function.
"""
graph = create_object_graph("example", testing=True)
def func1(graph):
return func2
def func2(graph):
pass
state_machine = StateMachine(graph, initial_state=func1)
next_func = state_machine.step()
assert_that(next_func, is_(equal_to(func2)))
def test_step_to_sleep():
"""
Test taking a step to sleep.
"""
graph = create_object_graph("example", testing=True)
def func(graph):
raise SleepNow()
state_machine = StateMachine(graph, initial_state=func)
with patch.object(graph.sleep_policy, "sleep") as mocked_sleep:
next_func = state_machine.step()
assert_that(mocked_sleep.call_count, is_(equal_to(1)))
assert_that(next_func, is_(equal_to(func)))
def test_step_to_error_non_strict():
"""
Test taking a step to an error.
"""
graph = create_object_graph("example", testing=True)
def func(graph):
raise Exception()
state_machine = StateMachine(graph, initial_state=func)
assert_that(graph.error_policy.strict, is_(equal_to(False)))
next_func = state_machine.step()
assert_that(next_func, is_(equal_to(func)))
def test_step_to_fatal_non_strict():
"""
Test taking a step to an error.
"""
graph = create_object_graph("example", testing=True)
def func(graph):
raise FatalError()
state_machine = StateMachine(graph, initial_state=func)
assert_that(graph.error_policy.strict, is_(equal_to(False)))
assert_that(calling(state_machine.step), raises(FatalError))
def test_run_until_fatal_error():
"""
Running the state machine terminates on fatal error.
"""
graph = create_object_graph("example", testing=True)
def func(graph):
raise FatalError()
state_machine = StateMachine(graph, initial_state=func)
state_machine.run()

# File: Advanced Network Management/Assignment 1/A1.py (repo: Sahandfer/Tsinghua, license: CC-BY-4.0)
import glob, os
import matplotlib
import numpy as np
import pandas as pd
import datetime as dt
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
# %matplotlib inline  (IPython notebook magic; invalid in a plain .py script)
# Reading the files
csv_files = glob.glob("dataset/*.csv")
dataset = []
def sort_file_name(filename):
return int(os.path.basename(filename)[:-4])
csv_files.sort(key=sort_file_name)
# Creating the dataframe
for file in csv_files:
content = pd.read_csv(file)
dataset.append(content)
dataset = pd.concat(dataset, axis=0, ignore_index=True)
# Timezone
matplotlib.rcParams['timezone'] = 'Asia/Shanghai'
# Task 1 & 2
def get_dates():
start = pd.Timestamp('2014-09-22',tz='Asia/Shanghai')
end = pd.Timestamp('2014-10-06',tz='Asia/Shanghai')
dates = []
time_diff = end - start
for i in range(time_diff.days + 1):
new_date = start + pd.Timedelta(days=i)
dates.append(new_date)
return dates
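
# Equivalent one-liner (sketch, assuming a tz-aware pandas date_range):
# dates = pd.date_range('2014-09-22', '2014-10-06', freq='D', tz='Asia/Shanghai').tolist()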
d = dataset.copy()
start_time = d.iloc[0]['Timestamp']
time_diff = d['Timestamp'] - start_time
d['period'] = time_diff // 600
temp = d.groupby(['period'])[["Timestamp", "Tnet", "Tbrowser", "Tserver", "Tother", "SRT"]].mean()
temp['time'] = pd.to_datetime(temp['Timestamp'], unit='s')
temp['date'] = temp['time'].dt.date
# Express each SRT component as a percentage of the total SRT
# (must come after `temp` is built, not before it)
for col in ["Tnet", "Tbrowser", "Tserver", "Tother"]:
    temp[col + '_100'] = (temp[col] / temp['SRT']) * 100
fig, ax = plt.subplots()
dates = get_dates()
temp.plot( x='time', y='SRT', kind='line', legend=False, title="Average SRT every 10 minutes", figsize= (10, 5), ax=ax)
ax.set_xlabel("Date (Year 2014)")
ax.set_ylabel("Average SRT (ms)")
ax.get_xaxis().set_major_formatter(mdates.DateFormatter('%b %d'))
ax.set_xlim(pd.Timestamp('2014-09-21',tz='Asia/Shanghai'), pd.Timestamp('2014-10-07',tz='Asia/Shanghai'))
plt.xticks(dates, rotation = 90)
plt.show()
fig.savefig("figure_1", dpi=300)
fig, ax = plt.subplots()
temp.plot.area(x='time', y=["Tnet", "Tbrowser", "Tserver", "Tother"],title="SRT components stacked area chart", colormap="GnBu", figsize= (10, 5), ax=ax)
ax.set_xlabel("Date (Year 2014)")
ax.set_ylabel("Response time (ms)")
ax.get_xaxis().set_major_formatter(mdates.DateFormatter('%b %d'))
ax.set_xlim(pd.Timestamp('2014-09-21'), pd.Timestamp('2014-10-07'))
plt.xticks(dates, rotation = 90)
plt.show()
fig.savefig("figure_2", dpi=300)
fig, ax = plt.subplots()
temp.plot.area(x='time', y=["Tnet_100", "Tbrowser_100", "Tserver_100", "Tother_100"],title="SRT components 100% stacked area chart", colormap="GnBu", figsize= (10, 5), ax=ax)
ax.set_xlabel("Date (Year 2014)")
ax.set_ylabel("Percentage (%)")
ax.get_xaxis().set_major_formatter(mdates.DateFormatter('%b %d'))
ax.set_xlim(pd.Timestamp('2014-09-21'), pd.Timestamp('2014-10-07'))
plt.xticks(dates, rotation = 90)
plt.show()
fig.savefig("figure_3", dpi=300)
# Task 3
fig, ax = plt.subplots()
dataset.hist(column="SRT", cumulative=True, density=1, bins=1000, figsize=(10,8), ax=ax)
ax.set_title('CDF chart of SRT')
ax.set_xlabel("Search Response Time (ms)")
ax.set_ylabel("Probability")
ax.grid(False)
plt.show()
fig.savefig("figure_4", dpi=300)
# Task 4
fig, ax = plt.subplots()
dataset.hist(column="#Images", cumulative=True, density=1, bins=1000, figsize=(10,8), ax=ax)
ax.set_title('CDF chart of #Images')
ax.set_xlabel("Number of images embedded in the result page")
ax.set_ylabel("Probability")
ax.grid(False)
plt.show()
fig.savefig("figure_5", dpi=300)
# Task 5
d = dataset.copy()
start_time= d.iloc[0]['Timestamp']
time_diff = d['Timestamp'] - start_time
d['period'] = time_diff// 60
temp1 = d.groupby(['period']).agg(count=('period', 'size'), Timestamp =('Timestamp', 'mean')).reset_index()
temp1['time'] = pd.to_datetime(temp1['Timestamp'], unit='s').dt.tz_localize('UTC').dt.tz_convert('Asia/Shanghai')
temp1['date'] = temp1['time'].dt.date
fig, ax = plt.subplots()
temp1.plot(x='time', y='count', kind='line', legend=False, title="PVs per minute", figsize= (10, 5), ax=ax)
ax.set_xlabel("Date (Year 2014)")
ax.set_ylabel("Page views")
ax.get_xaxis().set_major_formatter(mdates.DateFormatter('%b %d'))
ax.set_xlim(pd.Timestamp('2014-09-21'), pd.Timestamp('2014-10-06'))
plt.xticks(dates, rotation = 90)
plt.show()
fig.savefig("figure_6", dpi=300)
# Task 6
fig, ax = plt.subplots()
mask = dataset['Province'].isin(['None'])
te = dataset[~mask]
tt=te.groupby(['Province']).agg(count=('Province', 'size'), province=("Province", "first")).sort_values('count', ascending=False)
tt.plot(x='province', y="count", kind="bar", legend=False, title="PVs per Province", figsize=(10, 10), width=0.75, ax=ax, align='edge')
ax.set_title('PVs per Province')
ax.set_xlabel("Province")
ax.set_ylabel("Total number of PVs")
ax.grid(False)
plt.xticks(rotation = 90)
plt.show()
fig.savefig("figure_7", dpi=300)
# Task 7
p = dataset.groupby(['UA']).agg(radius=('UA', 'count'), UA=('UA', 'first'))
pie = p.plot.pie(y='radius', figsize=(8,8), subplots=True, autopct='%1.1f%%', startangle=270, title='PVs of each UA', legend=False)
plt.show()
fig = pie[0].get_figure()
fig.savefig("figure_8.png", dpi=300)

# File: pyleecan/Methods/Simulation/VarSimu/gen_datakeeper_list.py (repo: carbon-drive/pyleecan, license: Apache-2.0)
from ....Classes.DataKeeper import DataKeeper
from ....Functions.Load.import_class import import_class
def gen_datakeeper_list(self, ref_simu):
"""Generate default DataKeepers according the reference simulation type"""
datakeeper_list = []
# To avoid adding twice a DataKeeper
symbol_list = [dk.symbol for dk in self.datakeeper_list]
# For multi-simulation of multi-simulation (different DataKeepe<r)
is_multi = ref_simu.var_simu is not None
# Dynamic import to avoid loop
InputCurrent = import_class("pyleecan.Classes", "InputCurrent", "")
# Save speed
if not is_multi and "N0" not in symbol_list:
datakeeper_list.append(
DataKeeper(
name="Speed",
symbol="N0",
unit="rpm",
keeper="lambda output: output.elec.N0",
)
)
# Get default datakeeper
if ref_simu.elec or isinstance(ref_simu.input, InputCurrent):
datakeeper_list.extend(self.get_elec_datakeeper(symbol_list, is_multi=is_multi))
if ref_simu.mag:
datakeeper_list.extend(self.get_mag_datakeeper(symbol_list, is_multi=is_multi))
if ref_simu.force:
datakeeper_list.extend(
self.get_force_datakeeper(symbol_list, is_multi=is_multi)
)
self.datakeeper_list.extend(datakeeper_list)

# File: Demo/sgi/al/playold.py (repo: 1byte2bytes/cpython, licenses: Unlicense/TCL/DOC/AAL/X11)
# Play old style sound files (Guido's private format)
import al, sys, time
import AL
BUFSIZE = 8000
def main():
if len(sys.argv) < 2:
f = sys.stdin
filename = sys.argv[0]
else:
if len(sys.argv) != 2:
sys.stderr.write('usage: ' + \
sys.argv[0] + ' filename\n')
sys.exit(2)
filename = sys.argv[1]
f = open(filename, 'r')
#
magic = f.read(4)
extra = ''
if magic == '0008':
rate = 8000
elif magic == '0016':
rate = 16000
elif magic == '0032':
rate = 32000
else:
sys.stderr.write('no magic header; assuming 8k samples/sec.\n')
rate = 8000
extra = magic
#
pv = [AL.OUTPUT_RATE, rate]
al.setparams(AL.DEFAULT_DEVICE, pv)
c = al.newconfig()
c.setchannels(AL.MONO)
c.setwidth(AL.SAMPLE_8)
port = al.openport(filename, 'w', c)
if extra:
port.writesamps(extra)
while 1:
buf = f.read(BUFSIZE)
if not buf: break
port.writesamps(buf)
while port.getfilled() > 0:
time.sleep(0.1)
try:
main()
except KeyboardInterrupt:
sys.exit(1)

# File: classifiers/pseudolabel_sents.py (repo: thinkmpink/police-fatalities-sample, license: MIT)
import argparse, functools as ft, getpass
import itertools as it, json, numpy as np, time
from spacy.en import English
from spacy.tokens.doc import Doc
from pathos.pp import ParallelPool
from pathos.threading import ThreadPool
FATAL_SYNSET = set(["dead", "death", "died", "die", "fatal", "killed",
                    "kill", "lethal", "murder", "shoot", "shot", "shooting",
                    "killing", "deaths", "shootings", "killings"])
"""
Returns 1 if the target function of a token returns true.
Recurses to the root, checking each head using the target function. If the root
is reached without hitting the target, return 0.
"""
def matches_head(tok, target):
if target(tok): return 1
elif tok.dep_ == 'ROOT': return 0
else: return matches_head(tok.head, target)
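# Illustrative example (hypothetical parse): for the sentence "He was shot",
# the head chain of the nsubjpass token "He" reaches the verb "shot", so
# matches_head(he_tok, lambda t: t.orth_ in FATAL_SYNSET) returns 1.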
#head_in_fv = ft.partial(matches_head, target=lambda x: x in FATAL_SYNSET)
"""
Extract features from some window of text: a char span or a sentence, etc.
Return a list containing the features
:param text: the text from which to extract the features
:param name: the victim name associated with the label
:param features: the features to extract, default is unigram counts
:param fatal_vbs: a synset (set) of fatality verbs
"""
def extract_features(text, name, features, fatal_vbs=FATAL_SYNSET):
feats = [0]*4
if 'fatal_verbs' in features:
num_vbs = len([word for word in text if word.orth_ in fatal_vbs])
## if we just pick this feature, can short circuit
feats[0] = 1 if num_vbs else 0
feats[1] = num_vbs
if 'parse' in features:
dobjs = [tok for tok in text if tok.dep_=='dobj'] #TODO: gens not listcomps
ct_fv_dobj = len([tok for tok in dobjs if tok.head.orth_ in fatal_vbs])
#ct_fv_dobj = len(filter(head_in_fv, filter(is_dobj, text)))
##ct_fv_dobj = sum(1 for _ in it.ifilter(head_in_fv, it.ifilter(is_dobj, text)))
feats[2] = ct_fv_dobj
nsps = [tok for tok in text if tok.dep_=='nsubjpass']
ct_fv_nsubjpass = len([tok for tok in nsps if tok.head.orth_ in fatal_vbs])
#ct_fv_nsubjpass = len(filter(head_in_fv, filter(is_nsubjpass, text)))
##ct_fv_nsubjpass = sum(1 for _ in it.ifilter(head_in_fv, it.ifilter(is_nsubjpass, text)))
feats[3] = ct_fv_nsubjpass
return feats
"""
Gets first instances where parts of a "first_name last_name" are substrings
in some context frame (e.g. sentence, token_span)
"""
def get_name_matches(name, text):
#matches = []
parts = name.split()
first = parts[0].capitalize()
last = parts[1].capitalize()
first_l = first.lower()
last_l = last.lower()
if first in text: return True #matches.append(first)
if last in text: return True #matches.append(last)
if first_l in text: return True #matches.append(first_l)
if last_l in text: return True #matches.append(last_l)
else: return False
#return matches
"""
Returns a sentence label with all demanded features. The label means:
the sentence contains the name of a victim.
The returned tuples contain the following:
- label: 1 if the sentence contains a victim of a police killing, -1
otherwise
- name: the name of the victim associated with the label
- matches: a boolean of whether the name was in the sentence
- feats: an array of integers representing the selected features
:param input_: an iterable of unicode documents
:param labels_: a list of labels (as dicts)
:nlp: a spaCy NLP object that can run a tokenizer, tagger, POS, dep parse
"""
def process_all_docs(input_, labels_, n_threads, batch_size, nlp):
docs = (doc.decode('utf-8') for doc in input_)
#pool = ParallelPool(nodes=n_threads)
pool = ThreadPool(nodes=n_threads)
doc_queue = nlp.pipe(docs, batch_size=batch_size, n_threads=n_threads)
return pool.imap(process_doc, doc_queue, it.repeat(labels_))
def process_doc(doc, labeled_tuples):
docid = doc[0].orth_
doc_labels = [label for label in labeled_tuples if label['docid']==docid]
for labeled_tuple in doc_labels:
name = labeled_tuple['name']
label = labeled_tuple['victim']
contexts = doc.sents
for context in contexts:
feats = extract_features(context, name, ['fatal_verbs', 'parse'])
matches = get_name_matches(name, context.text)
yield (label, name, matches, feats)
def main(n_threads=2, batch_size=4000):
parser = argparse.ArgumentParser(description="""Extract features from a
document containing docids in the beginning of each line. Use
(docid, name) pairs to locate the document, then extract relevant
features. Requires spaCy for NLP pipeline""")
parser.add_argument('-f', '--readfile', type=str, required=True,
help="Data filepath, e.g. lynx_docs/<dataset>_lx.tsv")
parser.add_argument('-l', '--labelfile', type=str, required=True,
help="Label filepath, e.g. classifiers/<dataset>_labels.json")
parser.add_argument('-t', '--nthreads', type=int,
help="Number of threads for processing docs. Recommend 30.")
parser.add_argument('-b', '--batchsize', type=int,
help="Number of documents to queue. Recommend 10000.")
parser.add_argument('-d', '--ndocs', type=int, required=True,
help="Number of docs to process. Try wc -l <READFILE>")
args = parser.parse_args()
if args.nthreads: n_threads = args.nthreads
if args.batchsize: batch_size = args.batchsize
with open(args.labelfile, 'r') as l:
    labels = [json.loads(line) for line in l]
f = open(args.readfile, 'r')
docs = (doc for doc in f)
print "Loading spaCy ..."
load_time = time.clock()
nlp = English()
load_time = time.clock() - load_time
print "Successfully loaded spaCy English in ", str(load_time), "s"
max_sent_per_doc = 10000
features_X = np.zeros((args.ndocs * max_sent_per_doc, 4), dtype=np.int8)
features_y = np.zeros((args.ndocs * max_sent_per_doc,), dtype=np.int8)
#for i, info in enumerate(sentence_feats):
i = 0
sentence_feats = process_all_docs(docs, labels, n_threads, batch_size, nlp)
for gen in sentence_feats:
for info in gen:
if not i%1000: print i, " docs processed"
feats = info[3]
name_label = info[0]
name_in_sent = info[2]
label = 1 if name_label==1 and name_in_sent else -1
features_X[i] = feats
features_y[i] = label
i += 1
print "Num sentences:", i
np.save("train_X_" + args.labelfile, features_X)
np.save("train_y_" + args.labelfile, features_y)
f.close()
if __name__ == "__main__":
main()
################## IN PROGRESS / DEPRECATED ###################
"""
Deserialize a serialized parse and try the same extraction logic as process_all_labels
"""
def process_all_labels_from_bytes(label_handle, logs, nlp):
labeled_tuples = [json.loads(line) for line in label_handle]
nlp_doc = Doc(nlp.vocab)
handles = (open(f, 'rb') for f in logs)
b_gens = (Doc.read_bytes(h) for h in handles)
docs = decode_docs(nlp_doc, b_gens)
for line in docs:
docid = line[0].orth_
doc_labels = filter(lambda x: x['docid']==docid, labeled_tuples)
for labeled_tuple in doc_labels:
name = labeled_tuple['name']
label = labeled_tuple['victim']
contexts = line.sents
for context in contexts:
feats = extract_features(context,name, ['fatal_verbs', 'parse'])
matches = get_name_matches(name, map(lambda x: x.orth_, context))
yield (label, name, matches, feats, context)
def decode_docs(nlp_doc, b_gens):
for b_gen in b_gens:
for b_doc in b_gen:
yield nlp_doc.from_bytes(b_doc)
"""
Returns all token spans of length `context param` in `doc`
Ex: doc = "Hi there dog", context_param=2
assert span_contexts(doc, context_param) == [u'Hi there', u'there dog']
:param spacy_doc: a spacy document object from which to pull the windows
:param context_param: number of words window
"""
def spacy_span(spacy_doc, context_param=10):
    # Fixed: the original zip-based version returned an empty list whenever
    # context_param exceeded the token count; fall back to a single span.
    tokens = [tok.orth_ for tok in spacy_doc]
    if len(tokens) <= context_param:
        return [" ".join(tokens)]
    return [" ".join(tokens[i:i + context_param])
            for i in xrange(len(tokens) - context_param + 1)]

#!/usr/bin/env python
# File: examples/protocols/http_server/file_serving/http_server_file_serving_test.py (repo: iPlon-org/esp-idf, license: Apache-2.0)
#
# Copyright 2021 Espressif Systems (Shanghai) CO LTD
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import hashlib
import http.client
import os
import re
import tiny_test_fw
import ttfw_idf
from idf_http_server_test import adder as client
from tiny_test_fw import Utility
@ttfw_idf.idf_example_test(env_tag='Example_WIFI_Protocols')
def test_examples_protocol_http_server_file_serving(env, extra_data): # type: (tiny_test_fw.Env.Env, None) -> None # pylint: disable=unused-argument
# Acquire DUT
dut1 = env.get_dut('http file_serving', 'examples/protocols/http_server/file_serving', dut_class=ttfw_idf.ESP32DUT)
# Get binary file
binary_file = os.path.join(dut1.app.binary_path, 'file_server.bin')
bin_size = os.path.getsize(binary_file)
ttfw_idf.log_performance('file_server_bin_size', '{}KB'.format(bin_size // 1024))
Utility.console_log('Erasing the flash on the chip')
# erase the flash
dut1.erase_flash()
# Upload binary and start testing
Utility.console_log('Starting http file serving simple test app')
dut1.start_app()
# Parse IP address of STA
Utility.console_log('Waiting to connect with AP')
got_ip = dut1.expect(re.compile(r'IPv4 address: (\d+\.\d+\.\d+\.\d+)'), timeout=30)[0]
# Expected logs
dut1.expect('Initializing SPIFFS', timeout=30)
got_port = dut1.expect(re.compile(r"Starting HTTP Server on port: '(\d+)'"), timeout=30)[0]
Utility.console_log('Got IP : ' + got_ip)
Utility.console_log('Got Port : ' + got_port)
# Run test script
conn = client.start_session(got_ip, got_port)
# upload a file onto the server
upload_data = 'Test data to be sent to the server'
upload_file_name = 'example.txt'
upload_file_hash = hashlib.md5(upload_data.encode('UTF-8'))
upload_file_digest = upload_file_hash.digest()
Utility.console_log('\nTesting the uploading of file on the file server')
client.postreq(conn, '/upload/' + str(upload_file_name), upload_data)
try:
dut1.expect('File reception complete', timeout=10)
except Exception:
Utility.console_log('Failed the test to upload file on the file server')
raise
Utility.console_log('Passed the test to uploaded file on the file server')
# Download the uploaded file from the file server
Utility.console_log("\nTesting for Download of \"existing\" file from the file server")
download_data = client.getreq(conn, '/' + str(upload_file_name))
try:
dut1.expect('File sending complete', timeout=10)
except Exception:
Utility.console_log('Failed the test to download existing file from the file server')
raise
Utility.console_log('Passed the test to downloaded existing file from the file server')
download_file_hash = hashlib.md5(download_data)
download_file_digest = download_file_hash.digest()
if download_file_digest != upload_file_digest:
raise RuntimeError('The md5 hash of the downloaded file does not match with that of the uploaded file')
# Upload existing file on the file server
Utility.console_log("\nTesting the upload of \"already existing\" file on the file server")
client.postreq(conn, '/upload/' + str(upload_file_name), data=None)
try:
dut1.expect('File already exists : /spiffs/' + str(upload_file_name), timeout=10)
except Exception:
Utility.console_log('Failed the test for uploading existing file on the file server')
raise
Utility.console_log('Passed the test for uploading existing file on the file server')
# Previous URI was an invalid URI so the server should have closed the connection.
# Trying to send request to the server
try:
client.getreq(conn, '/')
except http.client.RemoteDisconnected:
# It is correct behavior that the connection was closed by the server
pass
except Exception:
Utility.console_log('Connection was not closed successfully by the server after last invalid URI')
raise
conn = client.start_session(got_ip, got_port)
# Delete the existing file from the file server
Utility.console_log("\nTesting the deletion of \"existing\" file on the file server")
client.postreq(conn, '/delete/' + str(upload_file_name), data=None)
try:
dut1.expect('Deleting file : /' + str(upload_file_name), timeout=10)
except Exception:
Utility.console_log('Failed the test for deletion of existing file on the file server')
raise
Utility.console_log('Passed the test for deletion of existing file on the file server')
conn = client.start_session(got_ip, got_port)
# Try to delete non existing file from the file server
Utility.console_log("\nTesting the deletion of \"non existing\" file on the file server")
client.postreq(conn, '/delete/' + str(upload_file_name), data=None)
try:
dut1.expect('File does not exist : /' + str(upload_file_name), timeout=10)
except Exception:
Utility.console_log('Failed the test for deleting non existing file on the file server')
raise
Utility.console_log('Passed the test for deleting non existing file on the file server')
conn = client.start_session(got_ip, got_port)
# Try to download non existing file from the file server
Utility.console_log("\nTesting for Download of \"non existing\" file from the file server")
download_data = client.getreq(conn, '/' + str(upload_file_name))
try:
dut1.expect('Failed to stat file : /spiffs/' + str(upload_file_name), timeout=10)
except Exception:
Utility.console_log('Failed the test to download non existing file from the file server')
raise
Utility.console_log('Passed the test to downloaded non existing file from the file server')
if __name__ == '__main__':
test_examples_protocol_http_server_file_serving() # pylint: disable=no-value-for-parameter

# -*- coding: utf-8 -*-
# File: setup.py (repo: danicarrion/carto-python, license: BSD-3-Clause)
from setuptools import setup
try:
    with open('requirements.txt') as f:
        required = f.read().splitlines()
except IOError:
    required = ['requests>=2.7.0', 'pyrestcli>=0.6.4']

try:
    with open('test_requirements.txt') as f:
        test_required = f.read().splitlines()
except IOError:
    test_required = []

setup(name="carto",
      author="Daniel Carrión",
      author_email="daniel@carto.com",
      description="SDK around CARTO's APIs",
      version="1.8.1",
      url="https://github.com/CartoDB/carto-python",
      install_requires=required,
      packages=["carto"])
| 24.5 | 54 | 0.62415 | 76 | 588 | 4.776316 | 0.644737 | 0.038567 | 0.060606 | 0.099174 | 0.15978 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021413 | 0.205782 | 588 | 23 | 55 | 25.565217 | 0.755889 | 0.035714 | 0 | 0.210526 | 0 | 0 | 0.309735 | 0.037168 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.052632 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
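The carto `setup.py` record above reads `requirements.txt` and falls back to a pinned list when the file is missing. A minimal standalone sketch of that read-with-fallback pattern (the helper name `read_requirements` is hypothetical, not part of the original file):

```python
from pathlib import Path


def read_requirements(path, fallback=()):
    """Return non-empty requirement lines from *path*, or *fallback* if unreadable."""
    try:
        lines = Path(path).read_text().splitlines()
    except OSError:
        # File missing or unreadable: fall back to a known-good pinned list.
        return list(fallback)
    return [ln for ln in lines if ln.strip()]
```

Catching `OSError` (rather than a bare `except:`) keeps unrelated errors such as `KeyboardInterrupt` from being swallowed.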
97afb71c7e9794f54839dad5cb3620f7ebd2594c | 617 | py | Python | proc/updatepreprint/setup.py | paratiuid/search-journals | f49391fb68cbc057250895f4a127c1befae28cd2 | [
"BSD-2-Clause"
] | null | null | null | proc/updatepreprint/setup.py | paratiuid/search-journals | f49391fb68cbc057250895f4a127c1befae28cd2 | [
"BSD-2-Clause"
] | null | null | null | proc/updatepreprint/setup.py | paratiuid/search-journals | f49391fb68cbc057250895f4a127c1befae28cd2 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
from setuptools import setup
setup(
    name="UpdatePrePrint",
    version='0.1-beta',
    description="Update Pre-Print articles to Solr",
    author="SciELO",
    author_email="scielo-dev@googlegroups.com",
    license="BSD",
    url="https://github.com/scieloorg/search-journals/tree/beta/proc/updatepreprint",
    keywords='solr api lucene scielo',
    maintainer_email='jamil.atta@scielo.org',
    classifiers=[
        "Topic :: System",
        "Topic :: Utilities",
        "Programming Language :: Python",
        "Operating System :: POSIX :: Linux",
    ],
    test_suite='tests'
)
| 28.045455 | 85 | 0.649919 | 69 | 617 | 5.768116 | 0.811594 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004057 | 0.200972 | 617 | 21 | 86 | 29.380952 | 0.803245 | 0.032415 | 0 | 0 | 0 | 0 | 0.520134 | 0.080537 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.052632 | 0 | 0.052632 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
97b8b1daa6034510ae337b2235e381b62f1dfd4a | 924 | py | Python | python/easy/1304_Find_N_Unique_Integers_Sum_up_to_Zero.py | JackWang0107/leetcode | c02932190b639ef87a8d0fcd07d9cd6ec7344a67 | [
"MIT"
] | 1 | 2021-05-22T03:27:33.000Z | 2021-05-22T03:27:33.000Z | python/easy/1304_Find_N_Unique_Integers_Sum_up_to_Zero.py | JackWang0107/leetcode | c02932190b639ef87a8d0fcd07d9cd6ec7344a67 | [
"MIT"
] | null | null | null | python/easy/1304_Find_N_Unique_Integers_Sum_up_to_Zero.py | JackWang0107/leetcode | c02932190b639ef87a8d0fcd07d9cd6ec7344a67 | [
"MIT"
] | null | null | null | from typing import *
class Solution:
    # 24 ms, faster than 98.52% of Python3 online submissions for Find N Unique Integers Sum up to Zero.
    # 14.2 MB, less than 91.76% of Python3 online submissions for Find N Unique Integers Sum up to Zero.
    def sumZero(self, n: int) -> List[int]:
        ans = []
        if n % 2 == 1:
            ans = [0]
        for i in range(1, n//2+1):
            ans.append(i)
            ans.append(-i)
        return ans

    # 20 ms, faster than 99.71% of Python3 online submissions for Find N Unique Integers Sum up to Zero.
    # 14.2 MB, less than 91.76% of Python3 online submissions for Find N Unique Integers Sum up to Zero.
    # Intentional redefinition: this second attempt shadows the first.
    def sumZero(self, n: int) -> List[int]:  # noqa: F811
        result = [x for i in range(1, n//2+1) for x in {i, -i}]
        if n % 2:
            result.append(0)
        return result


if __name__ == "__main__":
    so = Solution()
    print(so.sumZero(5))
| 35.538462 | 105 | 0.587662 | 152 | 924 | 3.519737 | 0.355263 | 0.06729 | 0.11215 | 0.194393 | 0.654206 | 0.654206 | 0.654206 | 0.654206 | 0.598131 | 0.598131 | 0 | 0.065625 | 0.307359 | 924 | 25 | 106 | 36.96 | 0.770313 | 0.428571 | 0 | 0.111111 | 0 | 0 | 0.015326 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.055556 | 0 | 0.333333 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
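The pairing trick used by `sumZero` above generalizes cleanly: emit `(i, -i)` pairs that cancel exactly, then pad with `0` when `n` is odd. A standalone sketch together with the properties that make it correct:

```python
def sum_zero(n):
    """Return n unique integers that sum to zero."""
    # Each i in 1..n//2 is paired with -i, so the pairs cancel exactly.
    result = [x for i in range(1, n // 2 + 1) for x in (i, -i)]
    if n % 2:
        # Odd n leaves one slot; 0 fills it without changing the sum.
        result.append(0)
    return result
```

The invariants (length `n`, all distinct, total zero) hold for every positive `n`.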
97bfdf745212a8cd679a42ca0dc0da4fad2f527e | 343 | py | Python | setup.py | MorelLaw228/ocr-web-api-project | 12132daec38f29439d4accb2e70ba03c10d7b1c5 | [
"Apache-2.0"
] | null | null | null | setup.py | MorelLaw228/ocr-web-api-project | 12132daec38f29439d4accb2e70ba03c10d7b1c5 | [
"Apache-2.0"
] | null | null | null | setup.py | MorelLaw228/ocr-web-api-project | 12132daec38f29439d4accb2e70ba03c10d7b1c5 | [
"Apache-2.0"
] | null | null | null | from setuptools import setup
setup(
    name='ocr-web-api-project',
    version='1.1',
    packages=[''],
    url='https://github.com/MorelLaw228/ocr-web-api-project',
    license='',
    author='morellatel',
    author_email='morellatel@gmail.com',
    description='Medical data OCR project carried out as part of an M1 internship'
)
| 26.384615 | 89 | 0.676385 | 46 | 343 | 5.021739 | 0.73913 | 0.051948 | 0.077922 | 0.138528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021127 | 0.172012 | 343 | 12 | 90 | 28.583333 | 0.792254 | 0 | 0 | 0 | 0 | 0 | 0.323615 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
97c4fda75533de62a044626aecc15cb63af06671 | 835 | py | Python | pycolims/menus/Menu_Factory.py | daniel-avalos/pycolims | 74363595e5c6c795f235d9a6f7476093dd4dfefe | [
"MIT"
] | 1 | 2022-01-21T17:32:09.000Z | 2022-01-21T17:32:09.000Z | pycolims/menus/Menu_Factory.py | daniel-avalos/pycolims | 74363595e5c6c795f235d9a6f7476093dd4dfefe | [
"MIT"
] | 1 | 2019-02-17T06:53:38.000Z | 2019-02-17T07:27:35.000Z | pycolims/menus/Menu_Factory.py | daniel-avalos/pycolims | 74363595e5c6c795f235d9a6f7476093dd4dfefe | [
"MIT"
] | null | null | null | from pycolims.menus import _menu_single, _menu_multi
class SingleMenu(_menu_single.SelectSingle):
    """Given a list, prompt for selection of a single item\n
    Returns the selected item\n"""


class MultiMenu(_menu_multi.SelectMulti):
    """Given a list, prompt for selection of items in a list.\n
    Returns a list with nested booleans indicating if selected or not\n
    [item for [boolean, item] in menu.run(menu_in) if boolean]\n\n
    Once init, call run()"""


class _FactorySingle(_menu_single.SelectSingleFactory, ):
    """Importable single menu builder"""


class _FactoryMulti(_menu_multi.SelectMutliFactory, ):
    """Importable multi menu builder"""


build_single = _FactorySingle()
"""Externally callable pointer to init factory"""

build_multi = _FactoryMulti()
"""Externally callable pointer to init factory"""
| 28.793103 | 71 | 0.743713 | 111 | 835 | 5.423423 | 0.45045 | 0.033223 | 0.033223 | 0.053156 | 0.225914 | 0.225914 | 0.099668 | 0 | 0 | 0 | 0 | 0 | 0.158084 | 835 | 28 | 72 | 29.821429 | 0.85633 | 0.413174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
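The record above exposes module-level factory instances (`build_single`, `build_multi`) that callers invoke to get fresh menu objects. A dependency-free sketch of that callable-factory pattern (class names here are illustrative, not the pycolims API):

```python
class MenuFactory:
    """Callable factory: each call builds a fresh menu object."""

    def __init__(self, menu_cls):
        self.menu_cls = menu_cls

    def __call__(self, *args, **kwargs):
        # Construct a brand-new menu on every call.
        return self.menu_cls(*args, **kwargs)


class SingleMenu:
    def __init__(self, options=()):
        self.options = list(options)


# Module-level, importable entry point, mirroring build_single above.
build_single = MenuFactory(SingleMenu)
```

Keeping the factory as a module-level callable lets importers write `build_single(options)` without knowing which concrete class backs it.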
97ca3d957d6ee83b3587051eff22ef6e1b093626 | 471 | py | Python | scraper1830/scraper_cli.py | SiddharthNVenkatesh/1830-game-log-scraper | 5b446bec0715e8d7954bf6c0ff2e3f8619be9213 | [
"MIT"
] | null | null | null | scraper1830/scraper_cli.py | SiddharthNVenkatesh/1830-game-log-scraper | 5b446bec0715e8d7954bf6c0ff2e3f8619be9213 | [
"MIT"
] | null | null | null | scraper1830/scraper_cli.py | SiddharthNVenkatesh/1830-game-log-scraper | 5b446bec0715e8d7954bf6c0ff2e3f8619be9213 | [
"MIT"
] | null | null | null | """
Created on Sat Oct 30 19:29:30 2021
@author: siddharthvenkatesh
This is a command line interface for scraper1830.
"""
import click
from .scraper1830 import Scraper1830
@click.group()
def cli_entry():
    pass


@cli_entry.command()
@click.option(
    "--id", prompt="Enter Game ID", help="The id for the 1830 game on 18xx.games"
)
def plot_history(id):
    scraper = Scraper1830(id)
    scraper.plot_player_history()


if __name__ == "__main__":
    cli_entry()
| 16.241379 | 81 | 0.70276 | 67 | 471 | 4.731343 | 0.641791 | 0.07571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087855 | 0.178344 | 471 | 28 | 82 | 16.821429 | 0.731266 | 0.244161 | 0 | 0 | 0 | 0 | 0.181034 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.071429 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
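The scraper CLI above uses click's group/command structure. The same command shape can be sketched with only the standard library's `argparse` (subcommand and option names mirror the record; `build_parser` is a hypothetical helper, not part of scraper1830):

```python
import argparse


def build_parser():
    parser = argparse.ArgumentParser(prog="scraper1830")
    sub = parser.add_subparsers(dest="command")

    # Equivalent of the click "plot_history" command with its --id option.
    plot = sub.add_parser("plot-history", help="Plot player history for a game")
    plot.add_argument("--id", required=True,
                      help="The id for the 1830 game on 18xx.games")
    return parser
```

click adds prompting (`prompt="Enter Game ID"`) for free; with argparse you would have to call `input()` yourself when the option is absent.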
97d2a229fccfdef7f56f9b50b19d282624ff6376 | 418 | py | Python | pex/tools/command.py | ShellAddicted/pex | f1060b784fc9c4337a514ed21357ea9e8c2e4f41 | [
"Apache-2.0"
] | 2,160 | 2015-01-06T17:57:39.000Z | 2022-03-30T19:59:01.000Z | pex/tools/command.py | sthagen/pex | 9bd4c178c93556faad3c8a1e75989c9288d09416 | [
"Apache-2.0"
] | 1,242 | 2015-01-22T14:56:46.000Z | 2022-03-31T18:02:38.000Z | pex/tools/command.py | Satertek/pex | 64de1c4cf031118ef446ac98a8c164c91c23bb9b | [
"Apache-2.0"
] | 248 | 2015-01-15T13:34:50.000Z | 2022-03-26T01:24:18.000Z | # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import absolute_import
from abc import abstractmethod
from pex.commands.command import Command, Result
from pex.pex import PEX
class PEXCommand(Command):
    @abstractmethod
    def run(self, pex):
        # type: (PEX) -> Result
        raise NotImplementedError()
| 24.588235 | 66 | 0.741627 | 52 | 418 | 5.865385 | 0.634615 | 0.045902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017595 | 0.184211 | 418 | 16 | 67 | 26.125 | 0.876833 | 0.354067 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.5 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
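The pex record above marks `run` with `@abstractmethod` so every concrete command must implement it. A self-contained sketch of that enforcement using `abc.ABC` (pex's own `Command` base lives elsewhere; these names are illustrative):

```python
from abc import ABC, abstractmethod


class Command(ABC):
    @abstractmethod
    def run(self, target):
        """Subclasses must implement run()."""


class EchoCommand(Command):
    def run(self, target):
        return f"ran on {target}"
```

Instantiating `Command` directly raises `TypeError`, which surfaces missing implementations at construction time rather than at call time.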
8ada7c9398b46556df1e4189571c752be500f628 | 3,342 | py | Python | blog/models.py | pythoncali/portal | 8a5b75e229683da14d6b1cbe701800aa1806e091 | [
"BSD-3-Clause"
] | 1 | 2015-08-23T21:30:38.000Z | 2015-08-23T21:30:38.000Z | blog/models.py | djangocali/portal | 8a5b75e229683da14d6b1cbe701800aa1806e091 | [
"BSD-3-Clause"
] | 24 | 2015-08-30T06:07:19.000Z | 2016-12-11T15:52:55.000Z | blog/models.py | pythoncali/portal | 8a5b75e229683da14d6b1cbe701800aa1806e091 | [
"BSD-3-Clause"
] | 6 | 2015-08-22T16:25:19.000Z | 2016-03-12T21:28:47.000Z | # -*- coding: utf-8 -*-
from django.db import models
from taggit.managers import TaggableManager
from autoslug import AutoSlugField
from django.conf import settings
"""
## TODO ##
1. Como el direccionamiento estara ligado al uso de slugs, tanto en el caso de
las categorias como de las entradas,es necesario encontrar una manera de poder
conservar la validez de los enlaces, aun cuando el campo slug sea modificado
porque se cambio el titulo de la entrada o el nombre de la categoria.
2. Es importante incluir una manera mucho mas efectiva de incluir una imagen en
las entradas, hasta ahora es muy precaria la manera en que esta siendo usada.
3. ¿Como añadir la funcionalidad de los comentarios a las entradas? Ademas
considerar la posibilidad que esos comentarios puedan ser validados por el
autor o por un administrador del sistema, para evitar comentarios innecesarios
o poco validos para el uso de la comunidad.
"""
class Categoria(models.Model):
"""Clase que define el modelo para la creacion de categorias que permitan
clasificar con mayor facilidad las diferentes publicaciones escritas en el
blog de la comunidad
"""
creado_en = models.DateTimeField(auto_now_add=True)
modificado_en = models.DateTimeField(auto_now=True)
nombre = models.CharField(max_length=20, unique=True, null=False)
slug = AutoSlugField(populate_from='nombre', unique=True, editable=False)
class Meta:
verbose_name = "Categoria"
verbose_name_plural = "Categorias"
ordering = ('nombre',)
def __str__(self):
return self.nombre
class ArticuloManager(models.Manager):
"""Manager para la clase Articulo, contiene algunas de las funcionalidades
frecuentes de los modelos, siguiendo el lineamiento de usar una estructura
mas robusta en los modelos."""
def get_published(self):
"""Metodo para garantizar que siempre se muestren unicamente los
articulos que ya han sido publicados por los autores en el blog.
"""
articulo = Articulo.objects.filter(estado='p')
return articulo
def get_drafts(self):
"""Metodo con el fin de invocar los articulos en estado de borrador
"""
articulo = Articulo.objects.filter(estado='b')
return articulo
class Articulo(models.Model):
"""Clase para definir el modelo y estructura de las publicaciones del
portal
"""
ESTADO = (
('b', 'borrador'),
('p', 'publicado'),
)
creado_en = models.DateTimeField(auto_now_add=True, editable=False)
modificado_en = models.DateTimeField(auto_now=True)
titulo = models.CharField(max_length=50, null=False, unique=True)
imagen_destacada = models.ImageField(upload_to='articulos/', null=True,
blank=True)
contenido = models.TextField(null=False)
autor = models.ForeignKey(settings.AUTH_USER_MODEL)
categoria = models.ForeignKey(Categoria)
tags = TaggableManager()
slug = AutoSlugField(populate_from='titulo', unique=True, editable=False)
estado = models.CharField(max_length=1, choices=ESTADO, default='b')
objects = ArticuloManager()
class Meta:
verbose_name = "Articulo"
verbose_name_plural = "Articulos"
ordering = ('-creado_en',)
def __str__(self):
return self.titulo
| 37.550562 | 79 | 0.710652 | 434 | 3,342 | 5.391705 | 0.435484 | 0.008547 | 0.035897 | 0.042735 | 0.117949 | 0.07094 | 0.07094 | 0.035043 | 0 | 0 | 0 | 0.003416 | 0.21155 | 3,342 | 88 | 80 | 37.977273 | 0.88425 | 0.191203 | 0 | 0.177778 | 0 | 0 | 0.051173 | 0 | 0 | 0 | 0 | 0.034091 | 0 | 1 | 0.088889 | false | 0 | 0.088889 | 0.044444 | 0.733333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
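The `ArticuloManager` in the record above filters articles by their `estado` flag (`'p'` published, `'b'` draft). The same filtering logic can be sketched without Django as a plain in-memory store (an illustrative stand-in, not the Django manager API):

```python
class ArticleStore:
    """In-memory stand-in for the ArticuloManager filtering logic."""

    def __init__(self, articles):
        # Each article is a dict with at least an 'estado' key: 'p' or 'b'.
        self.articles = list(articles)

    def get_published(self):
        return [a for a in self.articles if a["estado"] == "p"]

    def get_drafts(self):
        return [a for a in self.articles if a["estado"] == "b"]
```

In Django the equivalent queries run in the database via `Articulo.objects.filter(estado=...)`; the list comprehensions here just make the selection rule explicit.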
8add1b7a78dfb35db498ebfb2fd001c73838ad92 | 527 | py | Python | giturlparse/platforms/__init__.py | JulianVolodia/giturlparse | 0ae754bb289a0f097893c65c27de295e85ae43c6 | [
"Apache-2.0"
] | 20 | 2018-03-19T21:40:43.000Z | 2022-02-07T09:26:17.000Z | giturlparse/platforms/__init__.py | JulianVolodia/giturlparse | 0ae754bb289a0f097893c65c27de295e85ae43c6 | [
"Apache-2.0"
] | 23 | 2018-03-09T10:35:48.000Z | 2022-01-30T17:35:46.000Z | giturlparse/platforms/__init__.py | JulianVolodia/giturlparse | 0ae754bb289a0f097893c65c27de295e85ae43c6 | [
"Apache-2.0"
] | 15 | 2018-03-20T13:23:34.000Z | 2022-03-29T03:43:33.000Z | from .assembla import AssemblaPlatform
from .base import BasePlatform
from .bitbucket import BitbucketPlatform
from .friendcode import FriendCodePlatform
from .github import GitHubPlatform
from .gitlab import GitLabPlatform
# Supported platforms
PLATFORMS = [
    # name -> Platform object
    ("github", GitHubPlatform()),
    ("bitbucket", BitbucketPlatform()),
    ("friendcode", FriendCodePlatform()),
    ("assembla", AssemblaPlatform()),
    ("gitlab", GitLabPlatform()),
    # Match url
    ("base", BasePlatform()),
]
| 27.736842 | 42 | 0.72296 | 44 | 527 | 8.659091 | 0.477273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159393 | 527 | 18 | 43 | 29.277778 | 0.860045 | 0.100569 | 0 | 0 | 0 | 0 | 0.091489 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
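The `PLATFORMS` list above is an ordered name-to-object registry, with the catch-all `"base"` entry deliberately last. A minimal sketch of that ordered-registry lookup (dummy dict values stand in for the platform objects):

```python
# Ordered registry: specific platforms first, catch-all last.
PLATFORMS = [
    ("github", {"domain": "github.com"}),
    ("gitlab", {"domain": "gitlab.com"}),
    ("base", {"domain": None}),  # fallback, matched last
]


def find_platform(name):
    """Return the first registered platform matching *name*."""
    for key, platform in PLATFORMS:
        if key == name:
            return platform
    raise KeyError(name)
```

A list of tuples (rather than a dict) preserves match order, which matters when several platforms could parse the same URL.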
8ae6cb6bdb03906a9fea321b254d1c10db5d3c9d | 1,503 | py | Python | began/train.py | norveclibalikci/MyDeepLearning | d72e3d70aa11c8d8b05f0f72744a9cafa30e808a | [
"MIT"
] | 2 | 2018-05-15T10:28:48.000Z | 2018-09-28T13:38:11.000Z | began/train.py | norveclibalikci/MyDeepLearning | d72e3d70aa11c8d8b05f0f72744a9cafa30e808a | [
"MIT"
] | null | null | null | began/train.py | norveclibalikci/MyDeepLearning | d72e3d70aa11c8d8b05f0f72744a9cafa30e808a | [
"MIT"
] | 1 | 2019-10-25T13:24:05.000Z | 2019-10-25T13:24:05.000Z | from __future__ import print_function
from math import log10
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from dataset import DataSetFromFolder
import torch.cuda
from torch.autograd import Variable
import torch.backends.cudnn as cudnn
import torchvision.utils as vutils
from models import generator_model,dicriminator_model
from util import initialize_weights
from torchvision import transforms
import numpy as np
from os.path import join
h = 64
n = 128
batch_size = 16
cudnn.benchmark = True
lr = 1e-3
dtype = torch.cuda.FloatTensor
torch.cuda.manual_seed(619)
path = "dataset/"
train_transform = transforms.Compose([transforms.CenterCrop(160),
                                      transforms.Scale(size=64),
                                      transforms.ToTensor(),
                                      transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# train_set = DataSetFromFolder(join(join(path,"CelebA"), "train"), train_transform)
# train_loader = DataLoader(dataset=train_set, num_workers=1, batch_size=16, shuffle=True)
G = generator_model(h, n).type(dtype)
G.apply(initialize_weights)
D = dicriminator_model(h, n).type(dtype)
D.apply(initialize_weights)
real_data = Variable(torch.FloatTensor(batch_size, 3, 64, 64)).type(dtype)
logits_real = Variable(torch.FloatTensor(batch_size, h)).type(dtype)
logits_fake = Variable(torch.FloatTensor(batch_size, h)).type(dtype)
optimG = optim.Adam(G.parameters(), lr=lr)
optimD = optim.Adam(D.parameters(), lr=lr)
| 25.474576 | 90 | 0.745842 | 218 | 1,503 | 5.018349 | 0.37156 | 0.010969 | 0.013711 | 0.018282 | 0.148995 | 0.08958 | 0.08958 | 0.08958 | 0.010969 | 0.010969 | 0 | 0.030398 | 0.146374 | 1,503 | 58 | 91 | 25.913793 | 0.822292 | 0.113772 | 0 | 0 | 0 | 0 | 0.006029 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.432432 | 0 | 0.432432 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
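The `Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` transform in the record above applies `(x - mean) / std` per channel, mapping pixel values from `[0, 1]` into `[-1, 1]` — the range a tanh-output GAN generator produces. The arithmetic, in plain Python:

```python
def normalize(pixel, mean=0.5, std=0.5):
    """(x - mean) / std: maps a [0, 1] pixel value into [-1, 1]."""
    return (pixel - mean) / std


def denormalize(value, mean=0.5, std=0.5):
    """Inverse transform, e.g. for saving generated images for display."""
    return value * std + mean
```

Matching the data range to the generator's output range is why this particular mean/std pair shows up in most GAN training scripts.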
8ae7af477b7bde5b42b35528c7ad77a67aa36201 | 708 | py | Python | Tracker_Development/Tracker_Troubleshooting/Tracker_Tree_Reconstruction.py | The-Kristina/CellComp | 29ec7690e0d9adb1a6214937ca41fd1dadce18c6 | [
"CNRI-Python",
"RSA-MD",
"Xnet",
"Net-SNMP",
"X11"
] | 7 | 2019-05-13T10:07:44.000Z | 2022-03-01T16:20:48.000Z | Tracker_Development/Tracker_Troubleshooting/Tracker_Tree_Reconstruction.py | The-Kristina/CellComp | 29ec7690e0d9adb1a6214937ca41fd1dadce18c6 | [
"CNRI-Python",
"RSA-MD",
"Xnet",
"Net-SNMP",
"X11"
] | null | null | null | Tracker_Development/Tracker_Troubleshooting/Tracker_Tree_Reconstruction.py | The-Kristina/CellComp | 29ec7690e0d9adb1a6214937ca41fd1dadce18c6 | [
"CNRI-Python",
"RSA-MD",
"Xnet",
"Net-SNMP",
"X11"
] | 3 | 2020-04-23T18:13:20.000Z | 2020-11-11T18:46:48.000Z | # TODO: Find out if you can reconstruct chopped trees from the tracker:
import sys
sys.path.append("../")
from Cell_IDs_Analysis.Plotter_Lineage_Trees import PlotLineageTree
raw_file = "/Volumes/lowegrp/Data/Kristina/MDCK_90WT_10Sc_NoComp/17_07_24/pos13/analysis/channel_RFP/cellIDdetails_raw.txt"
xml_file = "/Volumes/lowegrp/Data/Kristina/MDCK_90WT_10Sc_NoComp/17_07_24/pos13/tracks/tracks_type2.xml"
for line in open(raw_file, "r"):
    line = line.rstrip().split("\t")
    if line[0] == "Cell_ID" or len(line) < 8:
        continue
    if int(line[5]) == 0 and line[7] == "False":
        print(line)
        PlotLineageTree(root_ID=int(line[0]), cell_ID=int(line[0]), xml_file=xml_file, show=True)
| 39.333333 | 123 | 0.724576 | 114 | 708 | 4.280702 | 0.578947 | 0.043033 | 0.07377 | 0.090164 | 0.241803 | 0.241803 | 0.241803 | 0.241803 | 0.241803 | 0.241803 | 0 | 0.052373 | 0.137006 | 708 | 17 | 124 | 41.647059 | 0.746318 | 0.097458 | 0 | 0 | 0 | 0.083333 | 0.343799 | 0.315542 | 0 | 0 | 0 | 0.058824 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
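The tracker script above filters a tab-separated cell table by hand-splitting lines. The same filter can be sketched with the `csv` module, which handles the delimiter and empty lines more robustly (the meaning of columns 5 and 7 is assumed from the record's conditions, not documented there):

```python
import csv
import io


def live_cell_rows(tsv_text):
    """Yield data rows where column 5 is 0 and column 7 is 'False'.

    Skips the 'Cell_ID' header and any short/empty rows, mirroring
    the guards in the original loop.
    """
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        if not row or row[0] == "Cell_ID" or len(row) < 8:
            continue
        if int(row[5]) == 0 and row[7] == "False":
            yield row
```

For a real file you would pass `open(raw_file, newline="")` to `csv.reader` instead of an in-memory buffer.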
8aef4b35b75cbb908a78f5d4b58f0250c481298b | 4,567 | py | Python | scripts/compare_neigh_overlap.py | cabrittin/elegansbrainmap | 049a26a094e085bacc70f5b05ea04a007d00eb2c | [
"MIT"
] | 2 | 2021-04-07T04:27:32.000Z | 2021-08-15T19:26:21.000Z | scripts/compare_neigh_overlap.py | cabrittin/elegansbrainmap | 049a26a094e085bacc70f5b05ea04a007d00eb2c | [
"MIT"
] | null | null | null | scripts/compare_neigh_overlap.py | cabrittin/elegansbrainmap | 049a26a094e085bacc70f5b05ea04a007d00eb2c | [
"MIT"
] | 2 | 2021-03-08T05:53:08.000Z | 2021-04-07T04:27:37.000Z | """
compare_neigh_overlap.py
Plots distributions of Jaccard distances for overlapping ipsilateral
neighborhoods (blue) and homologous contralateral neighborhoods (red)
in the adult and L4.
created: Christopher Brittin
date: 01 November 2018
"""
import os
from configparser import ConfigParser,ExtendedInterpolation
import argparse
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Brittin modules
from connectome.load import from_db
from connectome.format_graphs import low_pass_edge_filter,mid_pass_edge_filter,high_pass_edge_filter
from networks.stats import get_neighborhood_similarity,get_neighborhood_overlap_similarity
import ioaux
CONFIG = os.environ['CONFIG']
SOURCE = "data/neighborhood_similarity.csv"
def add_neighborhood_similarity(data,label,A,reflected,left,right):
    for (a,b) in get_neighborhood_similarity(A,reflected,left):
        if b == -1 or b > 1: continue
        data.append(label + ['homologous',a,b])

def add_overlap_similarity(data,label,A,vertices):
    for (a,b) in get_neighborhood_overlap_similarity(A,vertices):
        if b == -1 or b > 1: continue
        data.append(label + ['proximal',a,b])

def ipsilateral_pass_filter(G,args):
    [left,right] = args
    edges = []
    for e in G.es:
        u = G.vs[e.source]['name']
        v = G.vs[e.target]['name']
        c1 = (u in left) and (v in right)
        c2 = (u in right) and (v in left)
        if c1 or c2: edges.append(e)
    G.delete_edges(edges)

def contralateral_pass_filter(G,args):
    [left,right] = args
    edges = []
    for e in G.es:
        u = G.vs[e.source]['name']
        v = G.vs[e.target]['name']
        c1 = (u in left) and (v in right)
        c2 = (u in right) and (v in left)
        if not (c1 or c2): edges.append(e)
    G.delete_edges(edges)

def add_data(cfg,data,_label,edge_filter=None,args=None):
    N2U = 'N2U'
    JSH = 'JSH'
    left = ioaux.read.into_list(cfg['mat']['left_nodes'])
    right = ioaux.read.into_list(cfg['mat']['right_nodes'])
    lrmap = ioaux.read.into_lr_dict(cfg['mat']['lrmap'])
    #_remove = ['VC01','VD01','VB01','VB02','HSNL','HSNR','PVNL','PVNR']
    _remove = ['VC01','VD01','VB01','VB02','HSNL','HSNR','PVNL','PVNR','PLNL','PLNR','PVR','PVR.']

    label = ['Adult L/R'] + _label
    n2u = from_db(N2U,adjacency=True,remove=_remove)
    if edge_filter: edge_filter(n2u.A,args)
    reflected = n2u.A.map_vertex_names(lrmap)
    add_neighborhood_similarity(data,label,n2u.A,reflected,left,right)
    add_overlap_similarity(data,label,n2u.A,left + right)

    label = ['L4 L/R'] + _label
    jsh = from_db(JSH,adjacency=True,remove=_remove)
    if edge_filter: edge_filter(jsh.A,args)
    reflected = jsh.A.map_vertex_names(lrmap)
    add_neighborhood_similarity(data,label,jsh.A,reflected,left,right)
    add_overlap_similarity(data,label,jsh.A,left + right)

    label = ['Adult/L4'] + _label
    vertices = sorted((set(n2u.neurons)&set(jsh.neurons))-set(_remove))
    add_neighborhood_similarity(data,label,n2u.A,jsh.A,left,right)
    add_overlap_similarity(data,label,n2u.A,vertices)
    add_overlap_similarity(data,label,jsh.A,vertices)

def run(_cfg,source_data=None):
    cfg = ConfigParser(interpolation=ExtendedInterpolation())
    cfg.read(_cfg)
    left = ioaux.read.into_list(cfg['mat']['left_nodes'])
    right = ioaux.read.into_list(cfg['mat']['right_nodes'])
    data = []
    add_data(cfg,data,['all','all'])
    add_data(cfg,data,['all','low'],edge_filter=low_pass_edge_filter,args=35)
    add_data(cfg,data,['all','mid'],edge_filter=mid_pass_edge_filter,args=(35,66))
    add_data(cfg,data,['all','high'],edge_filter=high_pass_edge_filter,args=66)
    add_data(cfg,data,['ipsilateral','all'],edge_filter=ipsilateral_pass_filter,args=[left,right])
    add_data(cfg,data,['contralateral','all'],edge_filter=contralateral_pass_filter,args=[left,right])
    df = pd.DataFrame(data,columns=["Comparison","Network","Edge threshold","Measure","Cell","Jaccard Distance"])
    if source_data: df.to_csv(source_data,index=False)

if __name__=="__main__":
    parser = argparse.ArgumentParser(description=__doc__,
                                     formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('-c','--config',
                        dest = 'config',
                        action = 'store',
                        default = CONFIG,
                        required = False,
                        help = 'Config file')

    params = parser.parse_args()
    run(params.config,source_data=SOURCE)
| 37.434426 | 113 | 0.671338 | 642 | 4,567 | 4.580997 | 0.255452 | 0.054403 | 0.058143 | 0.033322 | 0.469228 | 0.393064 | 0.344101 | 0.317579 | 0.317579 | 0.211493 | 0 | 0.01507 | 0.186337 | 4,567 | 121 | 114 | 37.743802 | 0.776372 | 0.070506 | 0 | 0.247191 | 0 | 0 | 0.085714 | 0.007556 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067416 | false | 0.089888 | 0.11236 | 0 | 0.179775 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
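The script above plots distributions of Jaccard distances between neighborhoods. The distance itself is `1 - |A ∩ B| / |A ∪ B|`; a standalone sketch of that computation over two neighbor sets:

```python
def jaccard_distance(a, b):
    """Jaccard distance between two neighborhoods given as iterables.

    0.0 means identical neighbor sets; 1.0 means fully disjoint.
    """
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0  # convention: two empty neighborhoods are identical
    return 1.0 - len(a & b) / len(a | b)
```

In the script, values of -1 (undefined) or above 1 are filtered out before plotting; this helper never produces either.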
8af0217ebf291f4d520d2da316ab7baed961b7af | 1,051 | py | Python | nidaqmx/examples/contrib/alternate_on_off_slow.py | jkfindeisen/pylibnidaqmx | 959dfe0db49574c4fc528a9f80632702167852c1 | [
"BSD-3-Clause"
] | 1 | 2015-06-10T11:10:26.000Z | 2015-06-10T11:10:26.000Z | nidaqmx/examples/contrib/alternate_on_off_slow.py | jkfindeisen/pylibnidaqmx | 959dfe0db49574c4fc528a9f80632702167852c1 | [
"BSD-3-Clause"
] | null | null | null | nidaqmx/examples/contrib/alternate_on_off_slow.py | jkfindeisen/pylibnidaqmx | 959dfe0db49574c4fc528a9f80632702167852c1 | [
"BSD-3-Clause"
] | null | null | null | # with relatively loose timing constraints
# turn on and off a digital output
from __future__ import division
from numpy import *
import labdaq.daqmx as daqmx
import labdaq.daq as daq
import threading,time
def min2sec(minutes):
    return minutes*60.0

def sec2min(seconds):
    # inverse of min2sec; cycle() below reports the period in minutes
    return seconds/60.0

###### setup script parameters ########
long_duration = min2sec(1.0)  # one minute
duration = 25.0  # sec
onvoltage = 5.0
offvoltage = 0.0
onstate = False
gCurrentTimer = None
state_verbal = {True: 'On', False: 'Off'}

def change_voltage(onstate):
    print(onstate)
    if onstate:
        daq.set_voltage_ch0(onvoltage)
    else:
        daq.set_voltage_ch0(offvoltage)

def cycle(onstate=onstate):
    print("hi! Starting up loop of alternating on voltage %f with off voltage of %f every %f seconds or %f minutes" % (onvoltage, offvoltage, duration, sec2min(duration)))
    while True:
        onstate = not onstate
        change_voltage(onstate)
        time.sleep(duration)
        # gCurrentTimer = threading.Timer(duration, cycle, (onstate,))

if __name__=='__main__':
    cycle()
| 25.02381 | 170 | 0.69648 | 139 | 1,051 | 5.122302 | 0.52518 | 0.033708 | 0.05618 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021557 | 0.205519 | 1,051 | 41 | 171 | 25.634146 | 0.831138 | 0.169363 | 0 | 0 | 0 | 0.035714 | 0.135991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.178571 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
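The on/off loop above alternates a digital output between two voltages. The toggle sequence itself can be factored into a generator, which makes the alternation testable without hardware or sleeps (`square_wave` is an illustrative helper, not part of the labdaq API):

```python
def square_wave(onvoltage=5.0, offvoltage=0.0, start_on=False):
    """Yield an infinite on/off voltage sequence, one value per period."""
    state = start_on
    while True:
        state = not state  # toggle first, as the original loop does
        yield onvoltage if state else offvoltage
```

The real script would consume this with `daq.set_voltage_ch0(next(wave))` followed by `time.sleep(duration)` each period.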
c100032b5b5a7dde6ff12319f76414fc7eba6bf6 | 772 | py | Python | core/migrations/0008_auto_20200409_0247.py | scottstanie/fashbowl | d723bd175ca37ec3d001ab01b1e552f8682c6cb6 | [
"MIT"
] | 1 | 2020-04-02T16:20:31.000Z | 2020-04-02T16:20:31.000Z | core/migrations/0008_auto_20200409_0247.py | scottstanie/fashbowl | d723bd175ca37ec3d001ab01b1e552f8682c6cb6 | [
"MIT"
] | 9 | 2020-04-02T00:14:48.000Z | 2021-06-10T20:02:14.000Z | core/migrations/0008_auto_20200409_0247.py | scottstanie/fashbowl | d723bd175ca37ec3d001ab01b1e552f8682c6cb6 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.10 on 2020-04-09 02:47
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('core', '0007_auto_20200409_0223'),
    ]

    operations = [
        migrations.RenameField(
            model_name='game',
            old_name='active_guessing',
            new_name='is_live_round',
        ),
        migrations.RemoveField(
            model_name='game',
            name='round_done',
        ),
        migrations.RemoveField(
            model_name='game',
            name='round_start_time',
        ),
        migrations.AddField(
            model_name='game',
            name='round_start_timeint',
            field=models.IntegerField(blank=True, null=True),
        ),
    ]
| 24.125 | 61 | 0.555699 | 76 | 772 | 5.421053 | 0.618421 | 0.087379 | 0.126214 | 0.123786 | 0.286408 | 0.286408 | 0.208738 | 0 | 0 | 0 | 0 | 0.062136 | 0.332902 | 772 | 31 | 62 | 24.903226 | 0.737864 | 0.059585 | 0 | 0.4 | 1 | 0 | 0.160221 | 0.031768 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c102ec5e1e8bbc86e993c52fbaaef1711f018ec5 | 1,111 | py | Python | tests/unit/time_manager/schemas/test_user.py | krjakbrjak/time_management | 54ed55ddf0da9500722da4d0aafbc71ebfccc205 | [
"MIT"
] | null | null | null | tests/unit/time_manager/schemas/test_user.py | krjakbrjak/time_management | 54ed55ddf0da9500722da4d0aafbc71ebfccc205 | [
"MIT"
] | 8 | 2021-11-10T11:05:34.000Z | 2021-11-14T16:07:25.000Z | tests/unit/time_manager/schemas/test_user.py | krjakbrjak/time_management | 54ed55ddf0da9500722da4d0aafbc71ebfccc205 | [
"MIT"
] | null | null | null | import pytest
from time_manager.schemas.user import (
    UserBase,
    UserCredentials,
    UserDB,
    UserDBBase,
    validate_username,
)


@pytest.mark.parametrize(
    "username,should_raise",
    [
        ("", True),
        (" ", True),
        (" -", True),
        ("- ", True),
        ("-a", True),
        ("a ", True),
        ("a", False),
        ("a!@#$%^&*()_12qw", False),
        ("a !@#$%^&*()_12qw", True),
    ],
)
def test_username(username, should_raise):
    if should_raise:
        with pytest.raises(ValueError):
            validate_username(username)
    else:
        assert validate_username(username) == username


def test_models():
    UserBase()
    UserDBBase()
    UserDB(id=1, hashed_password="aa", username="user_1", email="a@a.a")
    UserCredentials(username="a", password="")
    # Each invalid construction needs its own pytest.raises block;
    # statements after the first raise would otherwise never execute.
    with pytest.raises(ValueError):
        UserBase(username="")
    with pytest.raises(ValueError):
        UserDBBase(username="")
    with pytest.raises(ValueError):
        UserDB(id=1, hashed_password="aa", username="user 1")
    with pytest.raises(ValueError):
        UserCredentials(username="a-b", password="")
    with pytest.raises(ValueError):
        UserDB(id=1, hashed_password="aa", username="user_1", email="a@a.")
| 24.688889 | 75 | 0.567057 | 114 | 1,111 | 5.385965 | 0.324561 | 0.104235 | 0.043974 | 0.07329 | 0.208469 | 0.208469 | 0.208469 | 0.208469 | 0.208469 | 0.14658 | 0 | 0.012063 | 0.253825 | 1,111 | 44 | 76 | 25.25 | 0.728589 | 0 | 0 | 0.153846 | 0 | 0 | 0.090909 | 0.018902 | 0 | 0 | 0 | 0 | 0.025641 | 1 | 0.051282 | false | 0.128205 | 0.051282 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
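The parametrized cases above imply a rule for `validate_username`: non-empty, no whitespace, and (given that `"a-b"` is also rejected) no hyphens, while other punctuation is allowed. A hypothetical re-implementation inferred from those expectations (the real schema module may differ):

```python
import re

# Inferred rule: at least one character, no whitespace, no hyphens.
_USERNAME_RE = re.compile(r"[^\s\-]+\Z")


def validate_username(username):
    """Raise ValueError for usernames that violate the inferred rule."""
    if not _USERNAME_RE.match(username or ""):
        raise ValueError(f"invalid username: {username!r}")
    return username
```

Deriving the regex from the test table like this is a common way to pin down an undocumented validator before refactoring it.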
# Source: Chapter10/10_03_print_statements.py from PacktPublishing/Mastering-ArcGIS-Enterprise-Administration (MIT)
import arcpy
result = arcpy.GetCount_management(
    r"C:\Projects\GDBs\Sandbox.gdb\StudyAreas"
)
record_count = int(result.getOutput(0))
if record_count > 0:
    arcpy.FeatureClassToFeatureClass_conversion(
        r"C:\Projects\GDBs\Sandbox.gdb\StudyAreas",
        r"C:\Projects\GDBs\Sandbox.gdb",
        "ActiveStudyAreas"
    )
# Source: modules/action/exploit_jexboss.py from mrpnkt/apt2 (MIT)
import os
import re

from core.actionModule import actionModule
from core.utils import Utils
from core.keystore import KeyStore as kb


class exploit_jexboss(actionModule):
    def __init__(self, config, display, lock):
        super(exploit_jexboss, self).__init__(config, display, lock)
        self.title = "Run JexBoss and look for vulnerabilities"
        self.shortName = "jexboss"
        self.description = "execute [jexboss.py -mode file-scan] on each web service"
        self.requirements = ["jexboss"]
        self.types = ["web"]
        self.safeLevel = 4
        self.triggers = ["newService_https", "newService_http"]

    def getTargets(self):
        self.targets = kb.get('service/http', 'service/https')

    def process(self):
        # load any targets we are interested in
        self.getTargets()

        # loop over each target, scanning both plain and TLS web services
        # (the http and https branches were identical except for the scheme)
        for t in self.targets:
            for scheme in ('http', 'https'):
                ports = kb.get('service/' + scheme + '/' + t + '/tcp')
                for port in ports:
                    # verify we have not tested this host/port before
                    if self.seentarget(t + str(port)):
                        continue
                    self.addseentarget(t + str(port))

                    outfile = (self.config["proofsDir"] + self.shortName + "_" + t +
                               "_" + str(port) + "_" + Utils.getRandStr(10) + ".txt")
                    command = ("python " + self.config["jexboss"] +
                               " -mode file-scan -out " + outfile +
                               " -file <(echo \"" + scheme + "://" + t + ":" + str(port) + "\")")
                    result = Utils.execWait(command)
                    kb.add("host/" + t + "/files/" + self.shortName + "/" +
                           outfile.replace("/", "%2F"))

                    with open(outfile, "r") as myfile:
                        contents = myfile.readlines()
                    for line in contents:
                        m = re.match(r'^.*VULNERABLE TO (.*)\].*', line)
                        if m:
                            vuln = m.group(1).strip()
                            self.addVuln(t, self.shortName + "-" + vuln,
                                         {"port": port, "output": outfile.replace("/", "%2F")})
        return
# Source: core/element.py from david-wm-sanders/oeo (MIT)
from enum import Enum, unique
@unique
class Element(Enum):
    Normal = 1
    Fight = 2
    Flying = 3
    Poison = 4
    Ground = 5
    Rock = 6
    Bug = 7
    Ghost = 8
    Steel = 9
    Fire = 10
    Water = 11
    Grass = 12
    Electric = 13
    Psychic = 14
    Ice = 15
    Dragon = 16
    Dark = 17

    def __repr__(self):
        return "Element.%s" % self.name
# Source: episodes/templatetags/episode_extras.py from jinpark/something-sunshine-django (MIT)
from django import template
from django.template.defaultfilters import stringfilter
from django.conf import settings
from urlparse import urlparse

register = template.Library()


@register.filter(is_safe=True)
@stringfilter
def xml_escape(string):
    """Replaces all unescaped xml characters"""
    # Note: the replacement entities had been decoded back to the raw
    # characters in the extracted copy, which made the function a no-op;
    # restored here to the XML entities the docstring describes.
    return string.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;').replace('"', '&quot;').replace("'", '&#39;')


@register.simple_tag
def static_url(url):
    """ returns the static url-ed version of the path, and not the s3 version """
    full_s3_path = urlparse(url).path
    relative_path = "/".join(full_s3_path.split('/')[2:])
    return u"{}{}".format(settings.STATIC_BUCKET_URL, relative_path)
# Source: baiocas/transports/base.py from ViViDboarder/baiocas (BSD-3-Clause)
import logging
import urllib.parse

from tornado.ioloop import IOLoop

from baiocas import errors
from baiocas.channel_id import ChannelId
from baiocas.message import Message


class Transport(object):

    DEFAULT_MAXIMUM_NETWORK_DELAY = 10000

    OPTION_MAXIMUM_NETWORK_DELAY = 'maximum_network_delay'

    def __init__(self, io_loop=None, **options):
        self.log = logging.getLogger('%s.%s' % (self.__module__, self.name))
        self.io_loop = io_loop or IOLoop.instance()
        self._client = None
        self._options = {}
        self.url = None
        self.configure(**options)

    def __repr__(self):
        return self.name

    @property
    def name(self):
        raise NotImplementedError('Must be implemented by child classes')

    @property
    def parsed_url(self):
        return self._parsed_url

    @property
    def options(self):
        return self._options.copy()

    def _get_url(self):
        return self._url

    def _set_url(self, url):
        parsed_url = urllib.parse.urlparse(url or '')
        if url is not None:
            if not parsed_url.hostname:
                raise errors.ConnectionStringError(url, self)
        self._url = url
        self._parsed_url = parsed_url

    url = property(_get_url, _set_url)

    def abort(self):
        self.log.debug('Transport aborted')

    def accept(self, bayeux_version):
        raise NotImplementedError('Must be implemented by child classes')

    def configure(self, **options):
        if not options:
            return
        self._options.update(options)
        self.log.debug('Options changed to: %s' % self._options)

    def get_timeout(self, messages):
        timeout = self._options.get(
            self.OPTION_MAXIMUM_NETWORK_DELAY,
            self.DEFAULT_MAXIMUM_NETWORK_DELAY
        )
        if len(messages) == 1 and messages[0].channel == ChannelId.META_CONNECT:
            advice = messages[0].advice
            if not advice or Message.FIELD_TIMEOUT not in advice:
                advice = self._client.advice
            timeout += advice[Message.FIELD_TIMEOUT]
        return timeout

    def register(self, client, url=None):
        self.log.debug('Executing registration callback')
        self._client = client
        self.url = url

    def reset(self):
        self.log.debug('Transport reset')

    def send(self, messages, sync=False):
        raise NotImplementedError('Must be implemented by child classes')

    def unregister(self):
        self.log.debug('Executing unregistration callback')
        self._client = None
        self.url = None
# Source: setup.py from ifigueroap/hdptest (MIT)
import setuptools
setuptools.setup(
    name='hdptest',
    version='0.3',
    scripts=[],
    author="Ismael Figueroa",
    author_email="ifigueroap@gmail.com",
    description="Mecanismo simple para tests en Jupyter Notebooks",
    url="https://github.com/ifigueroap/hdptest",
    packages=setuptools.find_packages()
)
# Source: server/server/events/migrations/0002_auto_20210624_1022.py from omert-visiblerisk/connective (MIT)
# Generated by Django 3.1.11 on 2021-06-24 07:22
import django.core.validators
from django.db import migrations, models
import server.utils.model_fields


class Migration(migrations.Migration):

    dependencies = [
        ('users', '0004_auto_20210624_1022'),
        ('events', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='event',
            name='has_summary',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='event',
            name='slug',
            field=models.CharField(default=server.utils.model_fields.random_slug, max_length=40, null=True),
        ),
        migrations.AddField(
            model_name='event',
            name='summary_children_behavior',
            field=models.IntegerField(blank=True, null=True, validators=[django.core.validators.MinValueValidator(0), django.core.validators.MaxValueValidator(10)]),
        ),
        migrations.AddField(
            model_name='event',
            name='summary_general_notes',
            field=models.CharField(blank=True, max_length=400, null=True),
        ),
        migrations.AddField(
            model_name='event',
            name='summary_general_rating',
            field=models.IntegerField(blank=True, null=True, validators=[django.core.validators.MinValueValidator(0), django.core.validators.MaxValueValidator(10)]),
        ),
        migrations.AlterField(
            model_name='event',
            name='consumers',
            field=models.ManyToManyField(blank=True, to='users.Consumer'),
        ),
    ]
# Source: asynctb/_tests/conftest.py from shamrin/asynctb (Apache-2.0, MIT)
import pytest
from asynctb._registry import HANDLING_FOR_CODE
from asynctb._glue import ensure_installed


@pytest.fixture
def local_registry():
    ensure_installed()
    prev_contents = list(HANDLING_FOR_CODE.items())
    yield
    HANDLING_FOR_CODE.clear()
    HANDLING_FOR_CODE.update(prev_contents)


@pytest.fixture
def isolated_registry(local_registry):
    HANDLING_FOR_CODE.clear()
# Source: scripts/create_catalog_for_gdal2tiles_tiles.py from granularag/pyspatial (BSD-3-Clause)
import os
import sys
import json
import math
import pprint
import argparse
import logging

# log = logging.getLogger('geos.py')
# log.propagate = False
log = logging.getLogger(__file__)
# logging.basicConfig(stream=sys.stderr, level=logging.DEBUG, format='%(filename)s:%(lineno)s %(levelname)s:%(message)s')
logging.basicConfig(stream=sys.stderr, level=logging.DEBUG, format='%(lineno)s %(levelname)s:%(message)s')
log.setLevel(logging.INFO)

from osgeo import gdal
import pygeoj

from pyspatial.vector import read_layer, read_geojson
from pyspatial.utils import projection_from_string
from pyspatial import globalmaptiles


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Create json catalog file and geojson file.')
    parser.add_argument('tiles_path', help='The path to a zoom level of some gdal2tiles tiles')
    parser.add_argument('--dest_catalog', dest="catalog_filename",
                        help='The output path for the json catalog file',
                        default=None)
    parser.add_argument('--dest_geojson', dest="geojson_filename",
                        help='The output path for the geojson file',
                        default=None)
    parser.add_argument('--tiles_size', dest='tiles_size', type=int, default=256,
                        help=('Specify the tiles size in pixels '
                              '(assumes both x and y are the same)'))
    args = parser.parse_args()

    # ###################### Creating the geojson file
    geojson = pygeoj.new()
    geojson.define_crs(type="name", name="urn:ogc:def:crs:OGC:1.3:CRS84")
    g = globalmaptiles.GlobalMercator()

    for dirname, subdirList, filename_list in os.walk(args.tiles_path):
        for filename in filename_list:
            if filename.endswith('.png'):
                rel_path = '.' + os.sep + os.path.join(dirname, filename)
                tms_z, tms_x, tms_y = rel_path.rstrip('.png').split(os.sep)[-3:]
                tms_z, tms_x, tms_y = map(int, (tms_z, tms_x, tms_y))
                minLat, minLon, maxLat, maxLon = g.TileLatLonBounds(tms_x, tms_y, tms_z)
                # minLat, minLon = lower left corner
                # maxLat, maxLon = upper right corner
                tile_upper_left_corner = minLon, maxLat
                tile_upper_right_corner = maxLon, maxLat
                tile_lower_right_corner = maxLon, minLat
                tile_lower_left_corner = minLon, minLat
                log.debug("tile_upper_left_corner " + str(tile_upper_left_corner))
                log.debug("tile_upper_right_corner " + str(tile_upper_right_corner))
                log.debug("tile_lower_right_corner " + str(tile_lower_right_corner))
                log.debug("tile_lower_left_corner " + str(tile_lower_left_corner))
                feature = pygeoj.Feature(geometry={"type": "Polygon",
                                                   "coordinates": [[tile_upper_left_corner,
                                                                    tile_upper_right_corner,
                                                                    tile_lower_right_corner,
                                                                    tile_lower_left_corner,
                                                                    tile_upper_left_corner]]},
                                         properties={"location": rel_path})
                geojson.add_feature(feature)

    # geojson.add_all_bboxes()
    # geojson.add_unique_id()
    # geojson.save(args.geojson_filename, indent=4)  # geojson.update_bbox() inside
    geojson.update_bbox()
    if args.geojson_filename is not None:
        with open(args.geojson_filename, "w+b") as outf:
            outf.write(json.dumps(geojson._data, indent=4, sort_keys=True))
    else:
        print json.dumps(geojson._data, indent=4, sort_keys=True)

    # ###################### Creating the catalog file
    minLon_raster, minLat_raster, maxLon_raster, maxLat_raster = geojson.bbox
    log.debug("geojson.bbox " + str(geojson.bbox))
    raster_upper_left_corner = minLon_raster, maxLat_raster
    raster_upper_right_corner = maxLon_raster, maxLat_raster
    raster_lower_right_corner = maxLon_raster, minLat_raster
    raster_lower_left_corner = minLon_raster, minLat_raster
    log.debug("raster_upper_left_corner " + str(raster_upper_left_corner))
    log.debug("raster_lower_right_corner " + str(raster_lower_right_corner))

    # for i in range(len(geojson)):
    #     feature = geojson.get_feature(i)
    #     print feature.geometry.bbox
    minLon_tile, minLat_tile, maxLon_tile, maxLat_tile = geojson.get_feature(0).geometry.bbox
    nb_tiles_x = (minLon_raster - maxLon_raster) / (minLon_tile - maxLon_tile)
    nb_tiles_y = (minLat_raster - maxLat_raster) / (minLat_tile - maxLat_tile)
    log.debug('nb_tiles_x %s => %s' % (nb_tiles_x, round(nb_tiles_x)))
    log.debug('nb_tiles_y %s => %s' % (nb_tiles_y, round(nb_tiles_y)))
    nb_tiles_x = int(round(nb_tiles_x))
    nb_tiles_y = int(round(nb_tiles_y))

    raster_size_x = nb_tiles_x * args.tiles_size
    raster_size_y = nb_tiles_y * args.tiles_size

    geotrans_left_value = minLon_raster
    geotrans_delta_x = (maxLon_raster - minLon_raster) / raster_size_x
    geotrans_rotation_x = 0.0
    geotrans_top_value = maxLat_raster
    geotrans_rotation_y = 0.0
    geotrans_delta_y = (minLat_raster - maxLat_raster) / raster_size_y
    geoTransform = (geotrans_left_value, geotrans_delta_x, geotrans_rotation_x,
                    geotrans_top_value, geotrans_rotation_y, geotrans_delta_y)
    log.debug("geoTransform\n " + pprint.pformat(geoTransform))
    assert raster_lower_right_corner[0] == geotrans_left_value + raster_size_x * geotrans_delta_x
    assert raster_lower_right_corner[1] == geotrans_top_value + raster_size_y * geotrans_delta_y

    coordinate_system_EPSG4326 = """
    GEOGCS["WGS 84",
        DATUM["WGS_1984",
            SPHEROID["WGS 84",6378137,298.257223563,
                AUTHORITY["EPSG","7030"]],
            AUTHORITY["EPSG","6326"]],
        PRIMEM["Greenwich",0],
        UNIT["degree",0.0174532925199433],
        AUTHORITY["EPSG","4326"]]
    """

    # https://en.wikipedia.org/wiki/Web_Mercator
    coordinate_system_EPSG3857 = """
    PROJCS["WGS 84 / Pseudo-Mercator",
        GEOGCS["WGS 84",
            DATUM["WGS_1984",
                SPHEROID["WGS 84",6378137,298.257223563,
                    AUTHORITY["EPSG","7030"]],
                AUTHORITY["EPSG","6326"]],
            PRIMEM["Greenwich",0,
                AUTHORITY["EPSG","8901"]],
            UNIT["degree",0.0174532925199433,
                AUTHORITY["EPSG","9122"]],
            AUTHORITY["EPSG","4326"]],
        PROJECTION["Mercator_1SP"],
        PARAMETER["central_meridian",0],
        PARAMETER["scale_factor",1],
        PARAMETER["false_easting",0],
        PARAMETER["false_northing",0],
        UNIT["metre",1,
            AUTHORITY["EPSG","9001"]],
        AXIS["X",EAST],
        AXIS["Y",NORTH],
        EXTENSION["PROJ4","+proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +wktext +no_defs"],
        AUTHORITY["EPSG","3857"]]
    """

    catalog = {"CoordinateSystem": coordinate_system_EPSG4326.replace(' ', '').replace('\n', ''),
               "GeoTransform": geoTransform,
               "GridSize": args.tiles_size,
               "Path": args.tiles_path,
               "Size": [raster_size_x, raster_size_y],
               "Tile_structure": "%d/%d.png",
               "TMS_z": tms_z}

    if args.catalog_filename is not None:
        with open(args.catalog_filename, "w+b") as outf:
            outf.write(json.dumps(catalog, indent=4, sort_keys=True))
    else:
        print json.dumps(catalog, indent=4, sort_keys=True)
# Source: src/onegov/town6/views/files.py from politbuero-kampagnen/onegov-cloud (MIT)
from onegov.core.security import Private, Public
from onegov.file import File
from onegov.org.views.files import view_file_details, \
    view_get_image_collection, view_upload_general_file, \
    view_upload_image_file, view_file_digest, handle_sign, \
    view_get_file_collection
from onegov.town6 import TownApp
from onegov.town6.layout import DefaultLayout, GeneralFileCollectionLayout, \
    ImageFileCollectionLayout
from onegov.org.models import (
    GeneralFile,
    GeneralFileCollection,
    ImageFileCollection,
)


@TownApp.html(model=GeneralFileCollection, template='files.pt',
              permission=Private)
def town_view_file_collection(self, request):
    return view_get_file_collection(
        self, request, GeneralFileCollectionLayout(self, request))


@TownApp.html(model=GeneralFile, permission=Private, name='details')
def view_town_file_details(self, request):
    return view_file_details(self, request, DefaultLayout(self, request))


@TownApp.html(model=ImageFileCollection, template='images.pt',
              permission=Private)
def view_town_image_collection(self, request):
    return view_get_image_collection(
        self, request, ImageFileCollectionLayout(self, request))


@TownApp.html(model=GeneralFileCollection, name='upload',
              request_method='POST', permission=Private)
def view_town_upload_general_file(self, request):
    return view_upload_general_file(
        self, request, DefaultLayout(self, request))


@TownApp.html(model=ImageFileCollection, name='upload',
              request_method='POST', permission=Private)
def view_town_upload_image_file(self, request):
    return view_upload_image_file(self, request, DefaultLayout(self, request))


@TownApp.html(model=GeneralFileCollection, name='digest', permission=Public)
def view_town_file_digest(self, request):
    return view_file_digest(self, request, DefaultLayout(self, request))


@TownApp.html(model=File, name='sign', request_method='POST',
              permission=Private)
def town_handle_sign(self, request):
    return handle_sign(self, request, DefaultLayout(self, request))
# Source: tests/test_ilqr.py from lassepe/pyILQR (MIT)
import numpy as np
from pyilqr.costs import QuadraticCost
from pyilqr.example_costs import SetpointTrackingCost
from pyilqr.example_dynamics import UnicycleDynamics, BicycleDynamics
from pyilqr.ocp import OptimalControlProblem
from pyilqr.ilqr import ILQRSolver
from pyilqr.strategies import FunctionStrategy


def test_ilqr_unicycle():
    dynamics = UnicycleDynamics(0.1)
    horizon = 100
    x0 = np.array([0, 0, -0.3, 0.1])
    state_cost = SetpointTrackingCost(np.eye(4), x_target=np.array([1, 1, 0, 0]))
    input_cost = QuadraticCost(np.eye(2), np.zeros(2))
    ocp = OptimalControlProblem(dynamics, state_cost, input_cost, horizon)
    solver = ILQRSolver(ocp)

    initial_strategy = FunctionStrategy(lambda x, t: np.array([0, 0]))
    initial_xs, initial_us, _ = dynamics.rollout(x0, initial_strategy, horizon)
    initial_cost = state_cost.trajectory_cost(initial_xs) + input_cost.trajectory_cost(
        initial_us
    )

    xs, us, has_converged = solver.solve(x0, initial_strategy)
    assert has_converged
    assert (
        state_cost.trajectory_cost(xs) + input_cost.trajectory_cost(us) < initial_cost
    )


def test_ilqr_bicycle():
    dynamics = BicycleDynamics(0.1)
    horizon = 100
    x0 = np.array([0, 0, -0.3, 0.1, 0])
    state_cost = SetpointTrackingCost(np.eye(5), x_target=np.array([1, 1, 0, 0, 0]))
    input_cost = QuadraticCost(np.eye(2), np.zeros(2))
    ocp = OptimalControlProblem(dynamics, state_cost, input_cost, horizon)
    solver = ILQRSolver(ocp)

    initial_strategy = FunctionStrategy(lambda x, t: np.array([0, 0]))
    initial_xs, initial_us, _ = dynamics.rollout(x0, initial_strategy, horizon)
    initial_cost = state_cost.trajectory_cost(initial_xs) + input_cost.trajectory_cost(
        initial_us
    )

    xs, us, has_converged = solver.solve(x0, initial_strategy)
    assert has_converged
    assert (
        state_cost.trajectory_cost(xs) + input_cost.trajectory_cost(us) < initial_cost
    )
# Source: src/web/modules/entrance/migrations/0036_auto_20170426_1205.py from fossabot/SIStema (MIT)
# -*- coding: utf-8 -*-
# Generated by Django 1.10.5 on 2017-04-26 07:05
from __future__ import unicode_literals

from django.db import migrations, models
import django.db.models.deletion
import modules.entrance.models.checking


class Migration(migrations.Migration):

    dependencies = [
        ('entrance', '0035_auto_20170425_1244'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='userincheckinggroup',
            options={},
        ),
        migrations.AlterField(
            model_name='checkingcomment',
            name='created_at',
            field=models.DateTimeField(auto_now_add=True, db_index=True),
        ),
        migrations.AlterField(
            model_name='checkinglock',
            name='locked_until',
            field=models.DateTimeField(db_index=True, default=modules.entrance.models.checking.get_locked_timeout),
        ),
        migrations.AlterField(
            model_name='entranceexamtasksolution',
            name='created_at',
            field=models.DateTimeField(auto_now_add=True, db_index=True),
        ),
        migrations.AlterField(
            model_name='userincheckinggroup',
            name='created_at',
            field=models.DateTimeField(auto_now_add=True, db_index=True),
        ),
        migrations.AlterField(
            model_name='userincheckinggroup',
            name='group',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='all_time_users', to='entrance.CheckingGroup'),
        ),
        migrations.AlterField(
            model_name='userincheckinggroup',
            name='is_actual',
            field=models.BooleanField(default=True, help_text='True для актуальных записей, False для исторических'),
        ),
    ]
# Source: wx/tools/Editra/src/extern/dexml/__init__.py (vendored dexml) from ekkipermana/robotframework-test (bzip2-1.0.6)
"""
dexml: a dead-simple Object-XML mapper for Python
Let's face it: xml is a fact of modern life. I'd even go so far as to say
that it's *good* at what it does. But that doesn't mean it's easy to work
with and it doesn't mean that we have to like it. Most of the time, XML
just needs to get the hell out of the way and let you do some actual work
instead of writing code to traverse and manipulate yet another DOM.
The dexml module takes the obvious mapping between XML tags and Python objects
and lets you capture that as cleanly as possible. Loosely inspired by Django's
ORM, you write simple class definitions to define the expected structure of
your XML document. Like so:
>>> import dexml
>>> from dexml import fields
>>> class Person(dexml.Model):
... name = fields.String()
... age = fields.Integer(tagname='age')
Then you can parse an XML document into an object like this:
>>> p = Person.parse("<Person name='Foo McBar'><age>42</age></Person>")
>>> p.name
u'Foo McBar'
>>> p.age
42
And you can render an object into an XML document like this:
>>> p = Person(name="Handsome B. Wonderful",age=36)
>>> p.render()
'<?xml version="1.0" ?><Person name="Handsome B. Wonderful"><age>36</age></Person>'
Malformed documents will raise a ParseError:
>>> p = Person.parse("<Person><age>92</age></Person>")
Traceback (most recent call last):
...
ParseError: required field not found: 'name'
Of course, it gets more interesting when you nest Model definitions, like this:
>>> class Group(dexml.Model):
... name = fields.String(attrname="name")
... members = fields.List(Person)
...
>>> g = Group(name="Monty Python")
>>> g.members.append(Person(name="John Cleese",age=69))
>>> g.members.append(Person(name="Terry Jones",age=67))
>>> g.render(fragment=True)
'<Group name="Monty Python"><Person name="John Cleese"><age>69</age></Person><Person name="Terry Jones"><age>67</age></Person></Group>'
There's support for XML namespaces, default field values, case-insensitive
parsing, and more fun stuff. Check out the documentation on the following
classes for more details:
:Model: the base class for objects that map into XML
:Field: the base class for individual model fields
:Meta: meta-information about how to parse/render a model
"""
__ver_major__ = 0
__ver_minor__ = 3
__ver_patch__ = 7
__ver_sub__ = ""
__version__ = "%d.%d.%d%s" % (__ver_major__,__ver_minor__,__ver_patch__,__ver_sub__)
import copy
from xml.dom import minidom
## Local Imports
import fields
from _util import *
class Model(object):
"""Base class for dexml Model objects.
    Subclasses of Model represent a concrete type of object that can be parsed
from or rendered to an XML document. The mapping to/from XML is controlled
by two things:
* attributes declared on an inner class named 'meta'
* fields declared using instances of fields.Field
Here's a quick example:
class Person(dexml.Model):
# This overrides the default tagname of 'Person'
            class meta:
tagname = "person"
            # This maps to a 'name' attribute on the <person> tag
name = fields.String()
# This maps to an <age> tag within the <person> tag
age = fields.Integer(tagname='age')
See the 'Meta' class in this module for available meta options, and the
'fields' submodule for available field types.
"""
__metaclass__ = ModelMetaclass
_fields = []
def __init__(self,**kwds):
"""Default Model constructor.
Keyword arguments that correspond to declared fields are processed
and assigned to that field.
"""
for f in self._fields:
val = kwds.get(f.field_name)
setattr(self,f.field_name,val)
@classmethod
def parse(cls,xml):
"""Produce an instance of this model from some xml.
The given xml can be a string, a readable file-like object, or
a DOM node; we might add support for more types in the future.
"""
self = cls()
node = self._make_xml_node(xml)
self.validate_xml_node(node)
# Keep track of fields that have successfully parsed something
fields_found = []
# Try to consume all the node's attributes
attrs = node.attributes.values()
for field in self._fields:
unused_attrs = field.parse_attributes(self,attrs)
if len(unused_attrs) < len(attrs):
fields_found.append(field)
attrs = unused_attrs
for attr in attrs:
self._handle_unparsed_node(attr)
# Try to consume all child nodes
if self.meta.order_sensitive:
self._parse_children_ordered(node,self._fields,fields_found)
else:
self._parse_children_unordered(node,self._fields,fields_found)
# Check that all required fields have been found
for field in self._fields:
if field.required and field not in fields_found:
err = "required field not found: '%s'" % (field.field_name,)
raise ParseError(err)
field.parse_done(self)
# All done, return the instance so created
return self
def _parse_children_ordered(self,node,fields,fields_found):
"""Parse the children of the given node using strict field ordering."""
cur_field_idx = 0
for child in node.childNodes:
idx = cur_field_idx
# If we successfully break out of this loop, one of our
# fields has consumed the node.
while idx < len(fields):
field = fields[idx]
res = field.parse_child_node(self,child)
if res is PARSE_DONE:
if field not in fields_found:
fields_found.append(field)
cur_field_idx = idx + 1
break
if res is PARSE_MORE:
if field not in fields_found:
fields_found.append(field)
cur_field_idx = idx
break
if res is PARSE_CHILDREN:
self._parse_children_ordered(child,[field],fields_found)
cur_field_idx = idx
break
idx += 1
else:
self._handle_unparsed_node(child)
def _parse_children_unordered(self,node,fields,fields_found):
"""Parse the children of the given node using loose field ordering."""
done_fields = {}
for child in node.childNodes:
idx = 0
# If we successfully break out of this loop, one of our
# fields has consumed the node.
while idx < len(fields):
if idx in done_fields:
idx += 1
continue
field = fields[idx]
res = field.parse_child_node(self,child)
if res is PARSE_DONE:
done_fields[idx] = True
if field not in fields_found:
fields_found.append(field)
break
if res is PARSE_MORE:
if field not in fields_found:
fields_found.append(field)
break
if res is PARSE_CHILDREN:
self._parse_children_unordered(child,[field],fields_found)
break
idx += 1
else:
self._handle_unparsed_node(child)
def _handle_unparsed_node(self,node):
if not self.meta.ignore_unknown_elements:
if node.nodeType == node.ELEMENT_NODE:
err = "unknown element: %s" % (node.nodeName,)
raise ParseError(err)
elif node.nodeType in (node.TEXT_NODE,node.CDATA_SECTION_NODE):
if node.nodeValue.strip():
err = "unparsed text node: %s" % (node.nodeValue,)
raise ParseError(err)
elif node.nodeType == node.ATTRIBUTE_NODE:
if not node.nodeName.startswith("xml"):
err = "unknown attribute: %s" % (node.name,)
raise ParseError(err)
def render(self,encoding=None,fragment=False,nsmap=None):
"""Produce XML from this model's instance data.
A unicode string will be returned if any of the objects contain
unicode values; specifying the 'encoding' argument forces generation
of an ASCII string.
By default a complete XML document is produced, including the
leading "<?xml>" declaration. To generate an XML fragment set
the 'fragment' argument to True.
The 'nsmap' argument maintains the current stack of namespace
prefixes used during rendering; it maps each prefix to a list of
namespaces, with the first item in the list being the current
namespace for that prefix. This argument should never be given
directly; it is for internal use by the rendering routines.
"""
if nsmap is None:
nsmap = {}
data = []
if not fragment:
if encoding:
s = '<?xml version="1.0" encoding="%s" ?>' % (encoding,)
data.append(s)
else:
data.append('<?xml version="1.0" ?>')
data.extend(self._render(nsmap))
xml = "".join(data)
if encoding:
xml = xml.encode(encoding)
return xml
def _render(self,nsmap):
"""Render this model as an XML fragment."""
# Determine opening and closing tags
pushed_ns = False
if self.meta.namespace:
namespace = self.meta.namespace
prefix = self.meta.namespace_prefix
try:
cur_ns = nsmap[prefix]
except KeyError:
cur_ns = []
nsmap[prefix] = cur_ns
if prefix:
tagname = "%s:%s" % (prefix,self.meta.tagname)
open_tag_contents = [tagname]
if not cur_ns or cur_ns[0] != namespace:
cur_ns.insert(0,namespace)
pushed_ns = True
open_tag_contents.append('xmlns:%s="%s"'%(prefix,namespace))
close_tag_contents = tagname
else:
open_tag_contents = [self.meta.tagname]
if not cur_ns or cur_ns[0] != namespace:
cur_ns.insert(0,namespace)
pushed_ns = True
open_tag_contents.append('xmlns="%s"'%(namespace,))
close_tag_contents = self.meta.tagname
else:
open_tag_contents = [self.meta.tagname]
close_tag_contents = self.meta.tagname
# Find the attributes and child nodes
attrs = []
children = []
num = 0
for f in self._fields:
val = getattr(self,f.field_name)
attrs.extend(f.render_attributes(self,val,nsmap))
children.extend(f.render_children(self,val,nsmap))
if len(attrs) + len(children) == num and f.required:
raise RenderError("Field '%s' is missing" % (f.field_name,))
            num = len(attrs) + len(children)
# Actually construct the XML
if pushed_ns:
nsmap[prefix].pop(0)
open_tag_contents.extend(attrs)
if children:
yield "<%s>" % (" ".join(open_tag_contents),)
for chld in children:
yield chld
yield "</%s>" % (close_tag_contents,)
else:
yield "<%s />" % (" ".join(open_tag_contents),)
@staticmethod
def _make_xml_node(xml):
"""Transform a variety of input formats to an XML DOM node."""
try:
ntype = xml.nodeType
except AttributeError:
if isinstance(xml,basestring):
try:
xml = minidom.parseString(xml)
except Exception, e:
raise XmlError(e)
elif hasattr(xml,"read"):
try:
xml = minidom.parse(xml)
except Exception, e:
raise XmlError(e)
else:
raise ValueError("Can't convert that to an XML DOM node")
node = xml.documentElement
else:
if ntype == xml.DOCUMENT_NODE:
node = xml.documentElement
else:
node = xml
return node
@classmethod
def validate_xml_node(cls,node):
"""Check that the given xml node is valid for this object.
Here 'valid' means that it is the right tag, in the right
namespace. We might add more eventually...
"""
if node.nodeType != node.ELEMENT_NODE:
err = "Class '%s' got a non-element node"
err = err % (cls.__name__,)
raise ParseError(err)
equals = (lambda a, b: a == b) if cls.meta.case_sensitive else (lambda a, b: a.lower() == b.lower())
if not equals(node.localName, cls.meta.tagname):
err = "Class '%s' got tag '%s' (expected '%s')"
err = err % (cls.__name__,node.localName,
cls.meta.tagname)
raise ParseError(err)
if cls.meta.namespace:
if node.namespaceURI != cls.meta.namespace:
err = "Class '%s' got namespace '%s' (expected '%s')"
err = err % (cls.__name__,node.namespaceURI,
cls.meta.namespace)
raise ParseError(err)
else:
if node.namespaceURI:
err = "Class '%s' got namespace '%s' (expected no namespace)"
err = err % (cls.__name__,node.namespaceURI,)
raise ParseError(err)
# File: Backend/src/energytransfers/migrations/0002_auto_20200514_1708.py
# Generated by Django 3.0.3 on 2020-05-14 17:08
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('energytransfers', '0001_initial'),
]
operations = [
migrations.RenameField(
model_name='counter',
old_name='counter',
new_name='value',
),
]
# File: backend/unpp_api/apps/partner/migrations/0017_auto_20170920_0639.py
# -*- coding: utf-8 -*-
# Generated by Django 1.11.3 on 2017-09-20 06:39
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('partner', '0016_auto_20170920_0340'),
]
operations = [
migrations.AlterField(
model_name='partnerprofile',
name='year_establishment',
field=models.PositiveSmallIntegerField(blank=True, help_text='Enter valid year.', null=True, verbose_name='Year of establishment'),
),
]
# File: backend/proxy/api.py
'''
Authors: Lukas Pietzschmann, Johannes Schenker, Vincent Ugrai
'''
# FIXME Fix a possible error with Flask:
# https://github.com/flask-restful/flask-restful/pull/913
# import flask.scaffold
# flask.helpers._endpoint_from_view_func = flask.scaffold._endpoint_from_view_func
from flask import Flask, request, make_response
from flask_cors import CORS
from dotenv import load_dotenv
from os import environ as env
import requests
import json
app = Flask(__name__)
CORS(app, supports_credentials=True)
# The Proxy receives all API-Requests and forwards them to the corresponding service
@app.route('/<path:path>', methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
base = None
port = None
body = None
method = {"GET": requests.get, "POST": requests.post, "PUT": requests.put, "DELETE": requests.delete}
if path.startswith("user") or path.startswith("group") or path.startswith("login") or path.startswith("logout"):
port = env.get("USER_GROUP_PORT")
base = env.get("USER_GROUP_BASE")
elif path.startswith("workoutPlan") or path.startswith("category"):
port = env.get("WORKOUT_PORT")
base = env.get("WORKOUT_BASE")
elif path.startswith("wikiHow"):
port = env.get("WIKI_HOW_PORT")
base = env.get("WIKI_HOW_BASE")
elif path.startswith("shoppingsearch"):
port = env.get("SHOPPING_PORT")
base = env.get("SHOPPING_BASE")
else:
return "The Proxy is not aware of this URL", 404
if request.method in ["POST", "PUT"]:
try:
body = request.get_json() if request.content_type == "application/json" else json.loads(request.get_data().decode("utf-8"))
except json.decoder.JSONDecodeError:
body = None
res = method[request.method](f"{base}:{port}/{path}", json=body, headers=request.headers)
response = make_response((res.text, res.status_code, res.headers.items()))
if cookies := res.cookies:
for (name, val) in cookies.get_dict().items():
response.set_cookie(key=name, value=val, secure=False, max_age=360 * 60 * 60 * 24)
return response
if __name__ == '__main__':
load_dotenv()
    app.run(host="0.0.0.0", debug=True)
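The `startswith` chain in `proxy()` maps path prefixes to the env-var names of a backend service; the same dispatch can be written as a data-driven table. A minimal standalone sketch (the `resolve_service` helper is hypothetical, not part of this codebase):

```python
def resolve_service(path, routes):
    """Return the (base, port) env-var pair for the first matching prefix."""
    for prefixes, target in routes:
        if any(path.startswith(p) for p in prefixes):
            return target
    return None

ROUTES = [
    (("user", "group", "login", "logout"), ("USER_GROUP_BASE", "USER_GROUP_PORT")),
    (("workoutPlan", "category"), ("WORKOUT_BASE", "WORKOUT_PORT")),
    (("wikiHow",), ("WIKI_HOW_BASE", "WIKI_HOW_PORT")),
    (("shoppingsearch",), ("SHOPPING_BASE", "SHOPPING_PORT")),
]

print(resolve_service("workoutPlan/7", ROUTES))  # ('WORKOUT_BASE', 'WORKOUT_PORT')
print(resolve_service("unknown/path", ROUTES))   # None
```

A table like this keeps `proxy()` free of branching and makes adding a service a one-line change.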
# File: ambientlight/__main__.py
from PyQt5.QtWidgets import QApplication
from PyQt5.QtGui import QIcon, QFont
from .controllers.simulation_controller import SimulationController
from .controllers.main_controller import MainController
from .utils import utils
def main():
app = QApplication([])
app.setWindowIcon(QIcon(utils.get_path('ambientlight/resources/icons/app_icon.svg')))
utils.load_style_sheet(utils.get_path('ambientlight/resources/style/style.css'), app)
font = QFont('Arial', 11, QFont.Bold)
app.setFont(font)
main_ctrl = MainController()
app.exec_()
if __name__ == "__main__":
main()
# File: tests/conftest.py
"""Define test configuration."""
import glob
import re
import pytest
from pysmartthings.api import (
API_APP,
API_APP_OAUTH,
API_APP_OAUTH_GENERATE,
API_APP_SETTINGS,
API_APPS,
API_BASE,
API_DEVICE,
API_DEVICE_COMMAND,
API_DEVICE_HEALTH,
API_DEVICE_STATUS,
API_DEVICES,
API_INSTALLEDAPP,
API_INSTALLEDAPPS,
API_LOCATION,
API_LOCATIONS,
API_OAUTH_TOKEN,
API_ROOM,
API_ROOMS,
API_SCENE_EXECUTE,
API_SCENES,
API_SUBSCRIPTION,
API_SUBSCRIPTIONS,
Api,
)
from pysmartthings.smartthings import SmartThings
from .utilities import ClientMocker
APP_ID = "c6cde2b0-203e-44cf-a510-3b3ed4706996"
AUTH_TOKEN = "9b3fd445-42d6-441b-b386-99ea51e13cb0"
CLIENT_ID = "7cd4d474-7b36-4e03-bbdb-4cd4ae45a2be"
CLIENT_SECRET = "9b3fd445-42d6-441b-b386-99ea51e13cb0"
DEVICE_ID = "743de49f-036f-4e9c-839a-2f89d57607db"
INSTALLED_APP_ID = "4514eb36-f5fd-4ab2-9520-0597acd1d212"
LOCATION_ID = "397678e5-9995-4a39-9d9f-ae6ba310236b"
ROOM_ID = "7715151d-0314-457a-a82c-5ce48900e065"
REFRESH_TOKEN = "a86a5c8e-0014-44a6-8980-5846633972dd"
SUBSCRIPTION_ID = "7bdf5909-57c4-41f3-9089-e520513bd92a"
SCENE_ID = "9b58411f-5d26-418d-b193-3434a77c484a"
DEVICE_COMMAND_PATTERN = re.compile(r"(device_command_post_[a-z_]+)")
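`DEVICE_COMMAND_PATTERN` pulls the fixture name (later used as the mocked request body) out of a JSON file path. A quick standalone check (the file name below is made up):

```python
import re

pattern = re.compile(r"(device_command_post_[a-z_]+)")
match = pattern.search("tests/json/device_command_post_switch_on.json")
print(match.group())  # device_command_post_switch_on
```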
def register_device_commands(mocker):
"""Register all device commands."""
files = glob.glob("tests/json/device_command_post_*.json")
for file in files:
match = DEVICE_COMMAND_PATTERN.search(file)
if match:
mocker.post(
API_DEVICE_COMMAND.format(device_id=DEVICE_ID),
request=match.group(),
response={"results": [{"status": "ACCEPTED"}]},
)
def register_url_mocks(mocker):
"""Register the URLs we need to mock."""
mocker.default_headers = {"Authorization": "Bearer " + AUTH_TOKEN}
mocker.base_url = API_BASE
# Locations
mocker.get(API_LOCATIONS, response="locations")
mocker.get(API_LOCATION.format(location_id=LOCATION_ID), response="location")
# Rooms
mocker.get(API_ROOMS.format(location_id=LOCATION_ID), response="rooms")
mocker.get(
API_ROOM.format(location_id=LOCATION_ID, room_id=ROOM_ID), response="room"
)
mocker.post(
API_ROOMS.format(location_id=LOCATION_ID), request="room_post", response="room"
)
mocker.put(
API_ROOM.format(location_id=LOCATION_ID, room_id=ROOM_ID),
request="room_put",
response="room",
)
mocker.delete(
API_ROOM.format(location_id=LOCATION_ID, room_id=ROOM_ID), response={}
)
# Devices
mocker.get(API_DEVICES, response="devices")
mocker.get(
API_DEVICES,
response="devices_filtered",
params=[
("locationId", LOCATION_ID),
("capability", "switch"),
("deviceId", "edd26ac6-d156-4505-9647-3b20118ae4d1"),
("deviceId", "be1a61ce-c2a4-4b32-bf8c-31de6d3fa7dd"),
],
)
mocker.get(API_DEVICE.format(device_id=DEVICE_ID), response="device")
mocker.get(API_DEVICE_STATUS.format(device_id=DEVICE_ID), response="device_status")
mocker.get(API_DEVICE_HEALTH.format(device_id=DEVICE_ID), response="device_health")
# Device Commands
register_device_commands(mocker)
# Apps
mocker.get(API_APPS, response="apps")
mocker.get(API_APP.format(app_id=APP_ID), response="app_get")
mocker.post(API_APPS, request="app_post_request", response="app_post_response")
mocker.put(
API_APP.format(app_id=APP_ID),
request="app_put_request",
response="app_put_response",
)
mocker.delete(API_APP.format(app_id=APP_ID), response={})
mocker.get(API_APP_SETTINGS.format(app_id=APP_ID), response="app_settings")
mocker.put(
API_APP_SETTINGS.format(app_id=APP_ID),
request="app_settings",
response="app_settings",
)
mocker.get(API_APP_OAUTH.format(app_id=APP_ID), response="app_oauth_get_response")
mocker.put(
API_APP_OAUTH.format(app_id=APP_ID),
request="app_oauth_put_request",
response="app_oauth_put_response",
)
mocker.post(
API_APP_OAUTH_GENERATE.format(app_id=APP_ID),
request="app_oauth_generate_request",
response="app_oauth_generate_response",
)
# InstalledApps
mocker.get(API_INSTALLEDAPPS, response="installedapps_get_response")
mocker.request(
"get",
"https://api.smartthings.com/installedapps?"
"currentLocationId=NWMwM2U1MTgtMTE4YS00NGNiLTg1YWQtNzg3N2Qw"
        "YjMwMmU0&currentOffset=MA",
headers=mocker.default_headers,
response="installedapps_get_response_2",
)
mocker.get(
API_INSTALLEDAPP.format(installed_app_id=INSTALLED_APP_ID),
response="installedapp_get_response",
)
mocker.delete(
API_INSTALLEDAPP.format(installed_app_id=INSTALLED_APP_ID),
response={"count": 1},
)
# InstallApp Subscriptions
mocker.get(
API_SUBSCRIPTIONS.format(installed_app_id=INSTALLED_APP_ID),
response="subscriptions_get_response",
)
mocker.delete(
API_SUBSCRIPTIONS.format(installed_app_id=INSTALLED_APP_ID),
response={"count": 3},
)
mocker.post(
API_SUBSCRIPTIONS.format(installed_app_id=INSTALLED_APP_ID),
request="subscription_post_request",
response="subscription_post_response",
)
mocker.get(
API_SUBSCRIPTION.format(
installed_app_id=INSTALLED_APP_ID, subscription_id=SUBSCRIPTION_ID
),
response="subscription_capability_get_response",
)
mocker.get(
API_SUBSCRIPTION.format(
installed_app_id=INSTALLED_APP_ID,
subscription_id="498752fd-db87-4a5e-95f5-25a0e412838d",
),
response="subscription_device_get_response",
)
mocker.delete(
API_SUBSCRIPTION.format(
installed_app_id=INSTALLED_APP_ID, subscription_id=SUBSCRIPTION_ID
),
response={"count": 1},
)
# OAuth Token
mocker.request("post", API_OAUTH_TOKEN, response="token_response")
# Scenes
mocker.get(API_SCENES, response="scenes")
mocker.get(
API_SCENES,
response="scenes_location_filter",
params=[("locationId", LOCATION_ID)],
)
mocker.post(
API_SCENE_EXECUTE.format(scene_id=SCENE_ID), response={"status": "success"}
)
@pytest.fixture
def smartthings(event_loop):
"""Fixture for testing against the SmartThings class."""
# Python 3.5 doesn't support yield in an async method so we have to
# run the creation and clean-up of the session in the loop manually.
mocker = ClientMocker()
register_url_mocks(mocker)
session = event_loop.run_until_complete(__create_session(event_loop, mocker))
yield SmartThings(session, AUTH_TOKEN)
event_loop.run_until_complete(session.close())
@pytest.fixture
def api(event_loop):
"""Fixture for testing against the API."""
# Python 3.5 doesn't support yield in an async method so we have to
# run the creation and clean-up of the session in the loop manually.
mocker = ClientMocker()
register_url_mocks(mocker)
session = event_loop.run_until_complete(__create_session(event_loop, mocker))
yield Api(session, AUTH_TOKEN)
event_loop.run_until_complete(session.close())
async def __create_session(event_loop, mocker):
return mocker.create_session(event_loop)
# File: dicomtrolley/core.py
"""Provides common base classes that allow different modules to talk to each other."""
from itertools import chain
from typing import Sequence
from pydantic.main import BaseModel
from pydicom.dataset import Dataset
from dicomtrolley.exceptions import DICOMTrolleyException
class DICOMObject(BaseModel):
"""An object in the DICOM world. Base for Study, Series, Instance.
dicomtrolley search methods always return instances based on DICOMObject
dicomtrolley download methods take instances based on DICOMObject as input
"""
uid: str
data: Dataset
def __str__(self):
return type(self).__name__ + " " + self.uid
def all_instances(self):
"""
Returns
-------
List[Instance]
All instances contained in this object
"""
raise NotImplementedError()
class Instance(DICOMObject):
parent: "Series"
def all_instances(self):
"""A list containing this instance itself. To match other signatures"""
return [self]
class Series(DICOMObject):
instances: Sequence[Instance]
parent: "Study"
def all_instances(self):
"""Each instance contained in this series"""
return self.instances
class Study(DICOMObject):
series: Sequence[Series]
def all_instances(self):
"""Return each instance contained in this study"""
return list(chain(*(x.instances for x in self.series)))
class Searcher:
"""Something that can search for DICOM studies. Base class."""
def find_studies(self, query) -> Sequence[Study]:
raise NotImplementedError()
def find_study(self, query) -> Study:
"""Like find_studies, but guarantees exactly one result. Exception if not.
This method is meant for searches that contain unique identifiers like
StudyInstanceUID, AccessionNumber, etc.
Raises
------
DICOMTrolleyException
If no results or more than one result is returned by query
"""
results = self.find_studies(query)
        if len(results) != 1:
raise DICOMTrolleyException(
f"Expected exactly one study for query '{query}', but"
f" found {len(results)}"
)
return results[0]
Instance.update_forward_refs() # enables pydantic validation
Series.update_forward_refs()
| 27.170455 | 86 | 0.66123 | 273 | 2,391 | 5.717949 | 0.417582 | 0.038437 | 0.038437 | 0.048687 | 0.066624 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001688 | 0.256796 | 2,391 | 87 | 87 | 27.482759 | 0.876759 | 0.370974 | 0 | 0.157895 | 0 | 0 | 0.062222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184211 | false | 0 | 0.131579 | 0.026316 | 0.736842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
# File: src/awsCluster/chipSeq/ChipPipelineManager.py
__author__ = 'Guorong Xu<g1xu@ucsd.edu>'
import os
import YamlFileMaker
from util import DesignFileLoader
from cfnCluster import ConnectionManager
workspace = "/shared/workspace/ChiPSeqPipeline"
data_dir = "/shared/workspace/data_archive/ChiPSeq"
## run all analysis from download, alignment, counting and differential calculation.
def run_analysis(ssh_client, workflow, project_name, analysis_steps,
s3_input_files_address, sample_list, group_list, style, genome, s3_output_files_address):
yaml_file = project_name + ".yaml"
    print("making the yaml file...")
YamlFileMaker.make_yaml_file(yaml_file, workflow, project_name, analysis_steps, s3_input_files_address,
sample_list, group_list, s3_output_files_address, genome, style)
    print("copying yaml file to remote master node...")
ConnectionManager.copy_file(ssh_client, yaml_file, workspace + "/" + workflow + "/yaml_examples")
## Remove the local yaml file
os.remove(yaml_file)
    print("executing pipeline...")
ConnectionManager.execute_command(ssh_client, "sh " + workspace + "/" + workflow + "/run.sh "
+ workspace + "/" + workflow + "/yaml_examples/" + yaml_file)
## checking your jobs status
def check_processing_status(ssh_client):
    print("checking processing status")
ConnectionManager.execute_command(ssh_client, "cat " + workspace + "/nohup.out")
## checking your jobs status
def check_jobs_status(ssh_client):
    print("checking jobs status")
ConnectionManager.execute_command(ssh_client, "qstat")
## checking your host status
def check_host_status(ssh_client):
    print("checking qhost status")
ConnectionManager.execute_command(ssh_client, "qhost")
if __name__ == '__main__':
workflow = "homer_workflow"
analysis_steps = ["fastqc"]
style = "histone"
genome = "hg19"
s3_input_files_address = "s3://ccbb-analysis/Guorong/jupyter-genomics/data_archive/test_data/ChiPSeq/RA2284"
s3_output_files_address = "s3://ccbb-analysis/Guorong/jupyter-genomics/data_archive/analysis_data/ChiPSeq"
project_name = "Sample_cDNA"
ssh_client = ""
design_file = "/Users/guorongxu/Desktop/workspace/projects/jupyter-genomics_bitbucket/data/awsCluster/chipSeq_design_example.txt"
sample_list, group_list = DesignFileLoader.load_chipseq_design_file(design_file)
run_analysis(ssh_client, workflow, project_name, analysis_steps,
                 s3_input_files_address, sample_list, group_list, style, genome, s3_output_files_address)
c19ed6615f84f7912d55685bdf60f361e59dc517 | 318 | py | Python | data-science/exercicios/livro-introducao-a-programacao-com-python/capitulo-2/exercicio2-6.py | joaovictor-loureiro/data-science | 21ad240e1db94d614e54fcb3fbf6ef74a78af9d8 | [
"MIT"
] | null | null | null | data-science/exercicios/livro-introducao-a-programacao-com-python/capitulo-2/exercicio2-6.py | joaovictor-loureiro/data-science | 21ad240e1db94d614e54fcb3fbf6ef74a78af9d8 | [
"MIT"
] | null | null | null | data-science/exercicios/livro-introducao-a-programacao-com-python/capitulo-2/exercicio2-6.py | joaovictor-loureiro/data-science | 21ad240e1db94d614e54fcb3fbf6ef74a78af9d8 | [
"MIT"
] | null | null | null | # Exercise 2.6 - Modify the program from listing 2.11 so that it computes
# a 15% raise on a salary of R$ 750
salario = 750.00
taxa_aumento = 0.15
aumento = salario * taxa_aumento
novo_salario = salario + aumento
print('\nSalary of R$750.00 + a 15%% raise = R$%.2f\n' % novo_salario) | 35.333333 | 85 | 0.694969 | 54 | 318 | 4.018519 | 0.555556 | 0.082949 | 0.101382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.18239 | 318 | 9 | 85 | 35.333333 | 0.734615 | 0.386792 | 0 | 0 | 0 | 0 | 0.253886 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c1a090099134b68fb76ca41e7d96a09d8f2db91e | 1,117 | py | Python | licensed/Docker-x86_64/build_scripts/set_sparknlp_env.py | DevinTDHa/spark-nlp-starter-scripts | b04c5a185d3e56ecc36aa420b4a8b78e62ee43ac | [
"Apache-2.0"
] | 1 | 2021-06-21T04:35:46.000Z | 2021-06-21T04:35:46.000Z | licensed/Docker-x86_64/build_scripts/set_sparknlp_env.py | DevinTDHa/spark-nlp-starter-scripts | b04c5a185d3e56ecc36aa420b4a8b78e62ee43ac | [
"Apache-2.0"
] | 3 | 2021-06-11T15:05:51.000Z | 2021-06-24T17:25:06.000Z | licensed/Docker-x86_64/build_scripts/set_sparknlp_env.py | hatrungduc/spark-nlp-starter-scripts | b04c5a185d3e56ecc36aa420b4a8b78e62ee43ac | [
"Apache-2.0"
] | null | null | null | import json
import os
import re
import os.path

sparknlp_versions = {"PUBLIC_VERSION": "3.1.0"}
WORKDIR = "/build_scripts"

with open(WORKDIR + "/spark_nlp_for_healthcare.json", "r") as f:
    sparknlp_versions.update(json.load(f).items())

with open(WORKDIR + "/spark_ocr.json", "r") as f:
    ocr_version_pattern = re.compile(r"(\d+\.\d+\.\d+).*?(spark\d+)")
    for k, v in json.load(f).items():
        sparknlp_versions[k] = v
        if k == "SPARK_OCR_sparknlp_versions":
            k = "JSL_OCR_sparknlp_versions"
            sparknlp_versions[k] = v
        if k == "JSL_OCR_SECRET":
            k = "SPARK_OCR_SECRET"
            sparknlp_versions[k] = v
        if k == "OCR_VERSION":
            match = ocr_version_pattern.findall(v)
            if match:
                ocr_base_version, ocr_spark_version = match[0]
                sparknlp_versions["OCR_BASE_VERSION"] = ocr_base_version
                sparknlp_versions["OCR_SPARK_VERSION"] = ocr_spark_version

with open(WORKDIR + "/SPARKNLP_VERSIONS", "w+") as f:
    for k, v in sparknlp_versions.items():
        f.write(f"export {k}={v}\n")
| 33.848485 | 74 | 0.61504 | 155 | 1,117 | 4.16129 | 0.283871 | 0.272868 | 0.105426 | 0.083721 | 0.097674 | 0.097674 | 0 | 0 | 0 | 0 | 0 | 0.004751 | 0.246195 | 1,117 | 32 | 75 | 34.90625 | 0.761283 | 0 | 0 | 0.111111 | 0 | 0 | 0.241719 | 0.098478 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.148148 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c1a8a733880fd472a1bd5dcd7ca9c42d18cdec3f | 2,432 | py | Python | books/models.py | TareqMonwer/reading-track | dfa1e922703a1fc6d2fa65bc3449dab54c49074a | [
"bzip2-1.0.6"
] | 1 | 2020-11-01T22:02:43.000Z | 2020-11-01T22:02:43.000Z | books/models.py | TareqMonwer/reading-track | dfa1e922703a1fc6d2fa65bc3449dab54c49074a | [
"bzip2-1.0.6"
] | 7 | 2020-06-05T20:34:48.000Z | 2022-03-12T00:16:02.000Z | books/models.py | TareqMonwer/reading-track | dfa1e922703a1fc6d2fa65bc3449dab54c49074a | [
"bzip2-1.0.6"
] | null | null | null | import datetime
from django.db import models
from django.utils import timezone


class Book(models.Model):
    STATUS_CHOICES = (
        ('C', 'Completed'),
        ('R', 'Reading'),
        ('W', 'WishList'),
    )
    PRIORITY_CHOICES = (
        ('h', 'high'),
        ('m', 'mid'),
        ('l', 'low'),
    )
    name = models.CharField(max_length=300)
    read_online = models.URLField(max_length=300, null=True, blank=True)
    author = models.CharField(max_length=100)
    details = models.TextField(blank=True, null=True)
    due_date = models.DateField(blank=True, null=True)
    status = models.CharField(max_length=1, choices=STATUS_CHOICES)
    priority = models.CharField(max_length=1, choices=PRIORITY_CHOICES)
    rating = models.IntegerField(null=True, blank=True)

    class Meta:
        ordering = ['due_date', 'name']
        unique_together = ['name', 'author']

    def __str__(self):
        return f'{self.name} by {self.author}'

    def get_due_date(self):
        if self.due_date:
            return self.due_date
        else:
            return 'Not set yet'

    def get_alert_message(self):
        d = self.due_date
        if not d:  # due_date is nullable; avoid AttributeError on None
            return None
        due_date = datetime.datetime(d.year, d.month, d.day, 1, 1)
        if due_date <= timezone.now() + datetime.timedelta(days=7):
            return "read now"


class Blog(models.Model):
    STATUS_CHOICES = (
        ('C', 'Completed'),
        ('R', 'Reading'),
        ('W', 'WishList'),
    )
    PRIORITY_CHOICES = (
        ('h', 'high'),
        ('m', 'mid'),
        ('l', 'low'),
    )
    title = models.CharField(max_length=150)
    link = models.URLField(max_length=300)
    status = models.CharField(max_length=1, choices=STATUS_CHOICES)
    due_date = models.DateField(blank=True, null=True)
    priority = models.CharField(max_length=1, choices=PRIORITY_CHOICES)

    class Meta:
        verbose_name = "Blog/Article"
        verbose_name_plural = "Blog/Articles"
        ordering = ['due_date', 'title']

    def __str__(self):
        return f'{self.title}'

    def get_due_date(self):
        if self.due_date:
            return self.due_date
        else:
            return 'Not set yet'

    def get_alert_message(self):
        d = self.due_date
        if not d:  # due_date is nullable; avoid AttributeError on None
            return None
        due_date = datetime.datetime(d.year, d.month, d.day, 1, 1)
        if due_date <= timezone.now() + datetime.timedelta(days=7):
            return "read now" | 28.952381 | 72 | 0.588405 | 297 | 2,432 | 4.643098 | 0.272727 | 0.081218 | 0.091371 | 0.121827 | 0.677302 | 0.639594 | 0.609137 | 0.609137 | 0.552574 | 0.39884 | 0 | 0.01418 | 0.275082 | 2,432 | 84 | 73 | 28.952381 | 0.768009 | 0 | 0 | 0.666667 | 0 | 0 | 0.090423 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.043478 | 0.028986 | 0.550725 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
c1b3de72204768f876d20194d771ef1cc3b6869e | 2,698 | py | Python | dashboard/views/containers.py | alefeans/docker_dashboard | f6f73a066fd2b97a1ad7682f3d8d2cd2436cd62c | [
"MIT"
] | 3 | 2019-07-23T13:53:45.000Z | 2019-08-08T12:43:45.000Z | dashboard/views/containers.py | alefeans/docker_dashboard | f6f73a066fd2b97a1ad7682f3d8d2cd2436cd62c | [
"MIT"
] | 4 | 2020-02-12T01:03:09.000Z | 2022-02-10T12:39:16.000Z | dashboard/views/containers.py | alefeans/docker_dashboard | f6f73a066fd2b97a1ad7682f3d8d2cd2436cd62c | [
"MIT"
] | 3 | 2020-12-22T09:38:46.000Z | 2022-03-29T08:11:38.000Z | from django.shortcuts import render
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
from dashboard.helpers import (list_container,
start_container,
stop_container,
create_container,
delete_container)
def containers_list(request):
resp = list_container()
if not resp:
return render(request, 'dashboard/containers.html', {})
containers = []
try:
for r in resp:
container = {
'id': r['Id'][:12],
'name': r['Names'][0].replace('/', ''),
'image': r['Image'],
'status': r['Status'],
'state': r['State']
}
containers.append(container)
return render(request, 'dashboard/containers.html', {'containers': containers})
except (KeyError, TypeError):
return render(request, 'dashboard/containers.html', {})
class StartContainer(APIView):
def post(self, request):
try:
container = request.data['name']
except KeyError:
return Response(status=status.HTTP_400_BAD_REQUEST)
resp = start_container(container)
if resp:
return Response(status=status.HTTP_200_OK)
return Response(status=status.HTTP_400_BAD_REQUEST)
class StopContainer(APIView):
def post(self, request):
try:
container = request.data['name']
except KeyError:
return Response(status=status.HTTP_400_BAD_REQUEST)
resp = stop_container(container)
if resp:
return Response(status=status.HTTP_200_OK)
return Response(status=status.HTTP_400_BAD_REQUEST)
class CreateContainer(APIView):
def post(self, request):
try:
image = {"Image": request.data['name']}
except KeyError:
return Response(status=status.HTTP_400_BAD_REQUEST)
created = create_container(image)
if created:
started = start_container(created['Id'])
if started:
return Response(status=status.HTTP_200_OK)
return Response(status=status.HTTP_400_BAD_REQUEST)
class DeleteContainer(APIView):
def post(self, request):
try:
container = request.data['name']
except KeyError:
return Response(status=status.HTTP_400_BAD_REQUEST)
resp = delete_container(container)
if resp:
return Response(status=status.HTTP_200_OK)
return Response(status=status.HTTP_400_BAD_REQUEST)
| 29.326087 | 89 | 0.59785 | 277 | 2,698 | 5.65343 | 0.209386 | 0.10728 | 0.153257 | 0.199234 | 0.628352 | 0.628352 | 0.583653 | 0.517241 | 0.517241 | 0.517241 | 0 | 0.020878 | 0.307635 | 2,698 | 91 | 90 | 29.648352 | 0.817452 | 0 | 0 | 0.478261 | 0 | 0 | 0.057079 | 0.027798 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072464 | false | 0 | 0.072464 | 0 | 0.42029 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c1caa2434f33830a78ae22257b52e16fcdd9ebf0 | 352 | py | Python | xerosdk/apis/contacts.py | akshay-codemonk/xero-sdk-py | 7a8ce6dee9914becd8737800e05f33102d780e1c | [
"MIT"
] | null | null | null | xerosdk/apis/contacts.py | akshay-codemonk/xero-sdk-py | 7a8ce6dee9914becd8737800e05f33102d780e1c | [
"MIT"
] | null | null | null | xerosdk/apis/contacts.py | akshay-codemonk/xero-sdk-py | 7a8ce6dee9914becd8737800e05f33102d780e1c | [
"MIT"
] | 1 | 2020-01-06T04:57:44.000Z | 2020-01-06T04:57:44.000Z | """
Xero Contacts API
"""
from .api_base import ApiBase


class Contacts(ApiBase):
    """
    Class for Contacts API
    """

    GET_CONTACTS = "/api.xro/2.0/contacts"

    def get_all(self):
        """
        Get all contacts

        Returns:
            List of all contacts
        """
        return self._get_request(Contacts.GET_CONTACTS)
| 14.666667 | 55 | 0.571023 | 41 | 352 | 4.756098 | 0.512195 | 0.169231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008368 | 0.321023 | 352 | 23 | 56 | 15.304348 | 0.807531 | 0.261364 | 0 | 0 | 0 | 0 | 0.106061 | 0.106061 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |