hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
63cb9cfcd1d1bed86874e36912a9244d3c5563c5 | 443 | py | Python | hold-tests/test_role_ans_dev.py | pahoughton/ansible-pahoughton | dba7c014e43fa232bea05b84f96d5b9115800089 | [
"CC-BY-3.0"
] | null | null | null | hold-tests/test_role_ans_dev.py | pahoughton/ansible-pahoughton | dba7c014e43fa232bea05b84f96d5b9115800089 | [
"CC-BY-3.0"
] | null | null | null | hold-tests/test_role_ans_dev.py | pahoughton/ansible-pahoughton | dba7c014e43fa232bea05b84f96d5b9115800089 | [
"CC-BY-3.0"
] | null | null | null | #!/usr/bin/env python3
# 2018-10-13 (cc) <paul4hough@gmail.com>
'''
pips:
- testinfra
- molecule
- tox
'''
class test_role_ans_dev (object):
    ''' test_role_ans_dev useless class
    '''
    assert packages.installed(
        yaml.array( pkgs[common],
                    pkgs[os][common],
                    pkgs[os][major], )
    )
    assert pips.installed()
    assert validate.python("https://github.com/python/stuff")
| 17.72 | 61 | 0.573363 | 51 | 443 | 4.862745 | 0.705882 | 0.064516 | 0.08871 | 0.112903 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.277652 | 443 | 24 | 62 | 18.458333 | 0.74375 | 0.311512 | 0 | 0 | 0 | 0 | 0.106897 | 0 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0 | true | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63cd37f1d19aa602021ac5d7d2664ac5f4cbcbd3 | 1,296 | py | Python | images/migrations/0001_initial.py | Hoofeycheng/Bookmarks | f2721633cd39393f0c92993579071679bb975ab0 | [
"MIT"
] | null | null | null | images/migrations/0001_initial.py | Hoofeycheng/Bookmarks | f2721633cd39393f0c92993579071679bb975ab0 | [
"MIT"
] | null | null | null | images/migrations/0001_initial.py | Hoofeycheng/Bookmarks | f2721633cd39393f0c92993579071679bb975ab0 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.4 on 2019-01-21 03:56
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='Image',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=200)),
                ('slug', models.CharField(blank=True, max_length=200)),
                ('image', models.ImageField(upload_to='image/%Y')),
                ('url', models.URLField()),
                ('description', models.TextField(blank=True)),
                ('created', models.DateField(auto_now_add=True, db_index=True)),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='images_created', to=settings.AUTH_USER_MODEL)),
                ('user_like', models.ManyToManyField(blank=True, related_name='image_liked', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'db_table': 'images',
            },
        ),
    ]
| 37.028571 | 149 | 0.604938 | 139 | 1,296 | 5.47482 | 0.52518 | 0.031537 | 0.063075 | 0.082786 | 0.060447 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021944 | 0.261574 | 1,296 | 34 | 150 | 38.117647 | 0.77325 | 0.034722 | 0 | 0 | 1 | 0 | 0.083267 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63cd6a4844d309d64fcae3b172caa3687fb25c61 | 951 | py | Python | src/python/detectors/os_command_injection/os_command_injection.py | martinschaef/amazon-codeguru-reviewer-python-detectors | 7452471b7ac5e1f2e1a0bfbd0f615d98f160e0e4 | [
"Apache-2.0"
] | 18 | 2022-01-27T22:50:22.000Z | 2022-02-15T17:41:24.000Z | src/python/detectors/os_command_injection/os_command_injection.py | martinschaef/amazon-codeguru-reviewer-python-detectors | 7452471b7ac5e1f2e1a0bfbd0f615d98f160e0e4 | [
"Apache-2.0"
] | 1 | 2022-01-31T21:36:18.000Z | 2022-02-22T17:09:54.000Z | src/python/detectors/os_command_injection/os_command_injection.py | martinschaef/amazon-codeguru-reviewer-python-detectors | 7452471b7ac5e1f2e1a0bfbd0f615d98f160e0e4 | [
"Apache-2.0"
] | 7 | 2022-02-10T21:50:50.000Z | 2022-03-28T14:21:10.000Z | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
# {fact rule=os-command-injection@v1.0 defects=1}
def exec_command_noncompliant():
    from paramiko import client
    from flask import request
    address = request.args.get("address")
    cmd = "ping -c 1 %s" % address
    client = client.SSHClient()
    client.connect("ssh.samplehost.com")
    # Noncompliant: address argument is not sanitized.
    client.exec_command(cmd)
# {/fact}

# {fact rule=os-command-injection@v1.0 defects=0}
def exec_command_compliant():
    import shlex
    from paramiko import client
    from flask import request
    address = request.args.get("address")
    # Compliant: address argument is sanitized (shell-escaped).
    address = shlex.quote(address)
    cmd = "ping -c 1 %s" % address
    client = client.SSHClient()
    client.connect("ssh.samplehost.com")
    client.exec_command(cmd)
# {/fact}
| 32.793103 | 69 | 0.701367 | 126 | 951 | 5.246032 | 0.420635 | 0.066566 | 0.06354 | 0.09531 | 0.639939 | 0.567322 | 0.567322 | 0.567322 | 0.458396 | 0.458396 | 0 | 0.012755 | 0.175605 | 951 | 28 | 70 | 33.964286 | 0.830357 | 0.338591 | 0 | 0.823529 | 0 | 0 | 0.130856 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.235294 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
63cdddd1832970e9972b2c05b624ca631cca30ea | 922 | py | Python | src/Dao/Parts.py | asarfara/PartCost | 99b7485ad811d41ca86a61b922faf51facdabcd3 | [
"MIT"
] | null | null | null | src/Dao/Parts.py | asarfara/PartCost | 99b7485ad811d41ca86a61b922faf51facdabcd3 | [
"MIT"
] | null | null | null | src/Dao/Parts.py | asarfara/PartCost | 99b7485ad811d41ca86a61b922faf51facdabcd3 | [
"MIT"
] | null | null | null | import logging
import json
from typing import List
from src.Entity.Part import Part
from datetime import date
from sqlite3 import Connection
class Parts:
    def __init__(self, file_name: str, logger: logging.Logger, connection: Connection):
        self.file_name = file_name
        self.logger = logger
        self.connection = connection

    def insert_parts(self, parts: List[Part]):
        """Insert collection of parts into the database.

        Args:
            parts (List[Part]): Collection of parts.
        """
        cursor = self.connection.cursor()
        for part in parts:
            self.logger.debug("Inserting into price_parts {0}".format(json.dumps(part.__dict__)))
            cursor.execute('INSERT INTO parts_price (name, price, supplier, type, date) VALUES (?,?,?,?,?)', [part.name, part.price, part.supplier, part.type, date.today()])
        self.connection.commit()
        return None
| 28.8125 | 173 | 0.654013 | 114 | 922 | 5.166667 | 0.412281 | 0.040747 | 0.040747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002845 | 0.237527 | 922 | 31 | 174 | 29.741935 | 0.834993 | 0.105206 | 0 | 0 | 0 | 0 | 0.13602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
63d54e098e0531437161c2fb674cb796cbee3e48 | 4,783 | py | Python | check_ec2_events.py | chartbeat/check_ec2_events | 3c72752dcb2908ec8c7c4eaf7e5788c6f88ec5aa | [
"Apache-2.0"
] | 2 | 2015-11-05T11:38:09.000Z | 2019-04-21T12:22:30.000Z | check_ec2_events.py | chartbeat/check_ec2_events | 3c72752dcb2908ec8c7c4eaf7e5788c6f88ec5aa | [
"Apache-2.0"
] | null | null | null | check_ec2_events.py | chartbeat/check_ec2_events | 3c72752dcb2908ec8c7c4eaf7e5788c6f88ec5aa | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
#
# Author: Justin Lintz
# Copyright 2013 Chartbeat
# http://www.chartbeat.com
#
# Nagios check to alert on any retiring instances or
# instances that need rebooting
#
import getopt
import sys
import re
from datetime import datetime
from datetime import timedelta
from boto.ec2 import connect_to_region
from boto.exception import EC2ResponseError
# Setup IAM User with read-only EC2 access
KEY_ID = ""
ACCESS_KEY = ""
REGION = "us-east-1"
OK = 0
WARNING = 1
CRITICAL = 2
UNKNOWN = 3
def get_instances(instance_ids):
    """
    Return an Instance objects for the given instance ids

    @param instance_ids: Instance ids (list)
    @return: Instance objects (dict)
    """
    instances = dict()
    conn = connect_to_region(REGION, aws_access_key_id=KEY_ID, aws_secret_access_key=ACCESS_KEY)

    try:
        reservations = conn.get_all_instances(instance_ids)
    except EC2ResponseError as ex:
        print('Got exception when calling EC2 for instances (%s): %s' %
              (", ".join(instance_ids), ex.error_message))
        return instances

    for r in reservations:
        if len(r.instances) and r.instances[0].id in instance_ids:
            instances[r.instances[0].id] = r.instances[0].tags["Name"]

    return instances


class AmazonEventCheck(object):
    """
    Nagios check for the Amazon events.
    Will warn/error if any pending events based on time till event occurs
    """
    def __init__(self):
        pass

    def _get_instances_pending_events(self):
        """
        Get list of instances that have pending events.

        @return: List(Instance, String, Datetime), List of (Instance, instance
                 Event, Scheduled Date) for hosts with pending events
        """
        conn = connect_to_region(REGION, aws_access_key_id=KEY_ID, aws_secret_access_key=ACCESS_KEY)
        stats = conn.get_all_instance_status()
        next_token = stats.next_token

        while next_token is not None:
            next_stats = conn.get_all_instance_status(next_token=next_token)
            stats.extend(next_stats)
            next_token = next_stats.next_token

        ret = []
        for stat in stats:
            if stat.events:
                for event in stat.events:
                    if re.match(r'^\[Completed\]', event.description):
                        continue
                    ret.append([stat.id, event.code, event.not_before])

        if len(ret) > 0:
            instances = get_instances([stat[0] for stat in ret])
            for stat in ret:
                stat.insert(1, instances[stat[0]])

        return ret

    def check(self, critical_threshold):
        """
        Check pending instance events, alert if
        event time is less than critical_threshold
        Warn otherwise

        @param critical_threshold: int, number of days before an event that nagios should alert
        """
        events = self._get_instances_pending_events()

        if not events:
            print('OK: no pending events')
            return OK

        critical_events = []
        warning_events = []

        for event in events:
            event_time = datetime.strptime(event[3], '%Y-%m-%dT%H:%M:%S.000Z')

            # Are we close enough to the instance event that we should alert?
            if datetime.utcnow() > (event_time - timedelta(days=critical_threshold)):
                critical_events.append(event)
            else:
                warning_events.append(event)

        if critical_events:
            print('CRITICAL: instances with events in %d days - %s' % (critical_threshold, ", ".join(["%s(%s)" % (event[0], event[1]) for event in critical_events])))
            return CRITICAL

        print('WARNING: instances with scheduled events %s' % (", ".join(["%s(%s)" % (event[0], event[1]) for event in warning_events])))
        return WARNING


def usage():
    print('Usage: %s [-h|--help] [-A <aws_access_key_id>] [-S <aws_secret_access_key>] [-R <region>] [-c <day>]' % sys.argv[0], file=sys.stderr)


def main():
    try:
        opts, args = getopt.getopt(sys.argv[1:], "hA:S:R:c:", ["help"])
    except getopt.GetoptError:
        usage()
        return UNKNOWN

    global KEY_ID, ACCESS_KEY, REGION
    critical_threshold = 2

    for o, a in opts:
        if o in ("-h", "--help"):
            usage()
            return UNKNOWN
        if o == "-A":
            KEY_ID = a
        if o == "-S":
            ACCESS_KEY = a
        if o == "-R":
            REGION = a
        if o == "-c":
            critical_threshold = int(a)

    if KEY_ID == "" or ACCESS_KEY == "":
        usage()
        return UNKNOWN

    eventcheck = AmazonEventCheck()
    return eventcheck.check(critical_threshold)


if __name__ == '__main__':
    sys.exit(main())
| 30.081761 | 165 | 0.605896 | 610 | 4,783 | 4.586885 | 0.281967 | 0.041816 | 0.008935 | 0.015011 | 0.112223 | 0.097927 | 0.097927 | 0.097927 | 0.070765 | 0.070765 | 0 | 0.009409 | 0.28894 | 4,783 | 158 | 166 | 30.272152 | 0.81329 | 0.057913 | 0 | 0.123711 | 0 | 0.010309 | 0.09782 | 0.012228 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.010309 | 0.072165 | null | null | 0.051546 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
891d4ed726e77fd986edee372d297804eaad447d | 385 | py | Python | protera_stability/engine/__init__.py | stepp1/protera-stability | 62f70af00b9475a0b0aeba39fa6ae57f0bb25b34 | [
"MIT"
] | 1 | 2021-11-05T02:14:31.000Z | 2021-11-05T02:14:31.000Z | protera_stability/engine/__init__.py | stepp1/protera-stability | 62f70af00b9475a0b0aeba39fa6ae57f0bb25b34 | [
"MIT"
] | null | null | null | protera_stability/engine/__init__.py | stepp1/protera-stability | 62f70af00b9475a0b0aeba39fa6ae57f0bb25b34 | [
"MIT"
] | null | null | null | from protera_stability.engine.default import get_cfg, setup_train, DefaultTrainer
from protera_stability.engine.lightning_train import (
default_cbs,
DataModule,
LitProteins,
TrainingPl,
)
__all__ = [
"DataModule",
"DefaultTrainer",
"LitProteins",
"TrainingPl",
"default_cbs",
"get_cfg",
"setup_train",
]
assert __all__ == sorted(__all__)
| 19.25 | 81 | 0.698701 | 38 | 385 | 6.526316 | 0.5 | 0.08871 | 0.16129 | 0.209677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197403 | 385 | 19 | 82 | 20.263158 | 0.802589 | 0 | 0 | 0 | 0 | 0 | 0.192208 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89287107bf8b41baf6a2e18560e187efeb94ad6b | 574 | py | Python | activities/migrations/0003_alter_currentactivitie_importance_level.py | CiganOliviu/MyWorkflow | 85951c2e8ebdb3e970fcc0b3e24bd319360b852a | [
"Apache-2.0"
] | null | null | null | activities/migrations/0003_alter_currentactivitie_importance_level.py | CiganOliviu/MyWorkflow | 85951c2e8ebdb3e970fcc0b3e24bd319360b852a | [
"Apache-2.0"
] | null | null | null | activities/migrations/0003_alter_currentactivitie_importance_level.py | CiganOliviu/MyWorkflow | 85951c2e8ebdb3e970fcc0b3e24bd319360b852a | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2.5 on 2021-07-04 19:36
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('activities', '0002_rename_currentactivity_currentactivitie'),
]
operations = [
migrations.AlterField(
model_name='currentactivitie',
name='importance_level',
field=models.CharField(choices=[('Optional', 'Optional'), ('Important', 'Important'), ('Very important', 'Very important'), ('Critical', 'Critical')], default='None', max_length=25),
),
]
| 30.210526 | 194 | 0.642857 | 56 | 574 | 6.482143 | 0.767857 | 0.071625 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04646 | 0.212544 | 574 | 18 | 195 | 31.888889 | 0.756637 | 0.078397 | 0 | 0 | 1 | 0 | 0.318786 | 0.083491 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
892ea50cc3da67caa05d44561bab96944c809088 | 2,465 | py | Python | mfctracker/management/commands/syncusers.py | kevans91/mfctracker | c86f1538229df126081331edef3bfd19c9ef1345 | [
"BSD-2-Clause"
] | 3 | 2016-10-19T05:01:31.000Z | 2019-06-06T18:20:11.000Z | mfctracker/management/commands/syncusers.py | kevans91/mfctracker | c86f1538229df126081331edef3bfd19c9ef1345 | [
"BSD-2-Clause"
] | 3 | 2017-11-28T17:31:58.000Z | 2021-04-12T02:37:27.000Z | mfctracker/management/commands/syncusers.py | kevans91/mfctracker | c86f1538229df126081331edef3bfd19c9ef1345 | [
"BSD-2-Clause"
] | 1 | 2020-06-26T14:05:53.000Z | 2020-06-26T14:05:53.000Z | # Copyright (c) 2016-2019 Oleksandr Tymoshenko <gonzo@bluezbox.com>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
from django.conf import settings
from django.contrib.auth.models import User
from django.core.management.base import BaseCommand, CommandError
from django.utils.crypto import get_random_string
from mfctracker.models import Commit, UserProfile
class Command(BaseCommand):
help = 'Create users for every known committer'
def handle(self, *args, **options):
committers = set(Commit.objects.values_list('author', flat=True).distinct())
for committer in committers:
try:
user = User.objects.get(username=committer)
if user.profile is None:
profile = UserProfile.objects.create(share_token=get_random_string(length=8), user=user)
profile.save()
except User.DoesNotExist:
email = '{}@{}'.format(committer, settings.SVN_EMAIL_DOMAIN)
password = get_random_string(length=32)
self.stdout.write('User does not exist, adding: {}'.format(committer, password))
User.objects.create_user(committer, email, password)
| 52.446809 | 108 | 0.723732 | 320 | 2,465 | 5.540625 | 0.55 | 0.022561 | 0.025381 | 0.025945 | 0.103779 | 0.076706 | 0.076706 | 0.076706 | 0.076706 | 0.076706 | 0 | 0.006646 | 0.206491 | 2,465 | 46 | 109 | 53.586957 | 0.899796 | 0.539554 | 0 | 0 | 0 | 0 | 0.072072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0.15 | 0.25 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
892ff412cb0abfd15e5fd8375b604f4b8bd19d90 | 399 | py | Python | Exercises/015.py | GuilhermeRds1921/Python3-Guanabara | 24cd85b076e1074a5602e54c420bcc8e70cc1854 | [
"MIT"
] | null | null | null | Exercises/015.py | GuilhermeRds1921/Python3-Guanabara | 24cd85b076e1074a5602e54c420bcc8e70cc1854 | [
"MIT"
] | null | null | null | Exercises/015.py | GuilhermeRds1921/Python3-Guanabara | 24cd85b076e1074a5602e54c420bcc8e70cc1854 | [
"MIT"
] | null | null | null | # Write a program that asks how many km
# a rented car was driven and for how many days
# it was rented. Compute the price to pay, given that the car
# costs R$60 per day plus R$0.15 per km driven.
km = float(input("How many km were driven?: "))
dia = int(input("How many days was it rented?: "))
print("The amount to pay is: R${:.2f}".format(km * 0.15 + dia * 60)) | 49.875 | 69 | 0.704261 | 74 | 399 | 3.797297 | 0.581081 | 0.078292 | 0.092527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033742 | 0.182957 | 399 | 8 | 70 | 49.875 | 0.828221 | 0.561404 | 0 | 0 | 0 | 0 | 0.491228 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
893019fb8b63370b0487a73ea074042709b31d91 | 3,241 | py | Python | utils/make_package.py | j123123/llilc | 9d6a522deb6b0769e449f27a087a9d3a7ab2a255 | [
"MIT"
] | 1,712 | 2015-04-13T22:16:59.000Z | 2022-03-17T18:44:42.000Z | utils/make_package.py | j123123/llilc | 9d6a522deb6b0769e449f27a087a9d3a7ab2a255 | [
"MIT"
] | 770 | 2015-04-13T22:10:20.000Z | 2022-03-15T01:30:06.000Z | utils/make_package.py | j123123/llilc | 9d6a522deb6b0769e449f27a087a9d3a7ab2a255 | [
"MIT"
] | 206 | 2015-04-13T22:17:19.000Z | 2022-02-19T12:30:19.000Z | #!/usr/bin/env python
import sys
import argparse
import os
import subprocess
import platform
import io
import string
try:
    # For Python >= 3.0
    from urllib.request import urlopen
except ImportError:
    # For Python < 3.0
    from urllib2 import urlopen
import shutil
import stat
def run(args):
    nugetFolder = os.path.join(args.target, ".nuget")
    print("\nEnsuring folder: %s" % nugetFolder)
    if not os.path.exists(nugetFolder):
        os.makedirs(nugetFolder)

    nugetExe = os.path.join(nugetFolder, "nuget.exe")
    if not os.path.exists(nugetExe):
        nugetOrg = "http://nuget.org/nuget.exe"
        print("Downloading... %s" % nugetOrg)
        response = urlopen(nugetOrg)
        output = open(nugetExe, 'wb')
        output.write(response.read())
        output.close()

    # Ensure it's executable
    st = os.stat(nugetExe)
    os.chmod(nugetExe, st.st_mode | stat.S_IEXEC)

    if sys.platform != "win32":
        # shutil.which can be used for python 3.3 or later, instead.
        monopath = None
        for mono in ["/usr/bin/mono", "/usr/local/bin/mono"]:
            if os.path.exists(mono):
                monopath = mono
        if not monopath:
            raise RuntimeError("mono is required to run nuget.exe")
        nugetExe = monopath + " " + nugetExe

    nugetSpec = os.path.join(nugetFolder, os.path.basename(args.nuspec))
    if args.nuspec != nugetSpec:
        print("\nCopying " + args.nuspec + " to " + nugetSpec)
        shutil.copyfile(args.nuspec, nugetSpec)

    if args.json is not None:
        nugetJson = os.path.join(nugetFolder, os.path.basename(args.json))
        if args.json != nugetJson:
            print("\nCopying " + args.json + " to " + nugetJson)
            shutil.copyfile(args.json, nugetJson)

    nugetCommand = nugetExe + " pack " + nugetSpec \
        + " -NoPackageAnalysis -NoDefaultExcludes" \
        " -OutputDirectory %s" % nugetFolder
    ret = os.system(nugetCommand)
    return ret


def main(argv):
    parser = argparse.ArgumentParser(
        description="Download nuget and run it to create a package using the given nuspec. "
                    "Example: make_package.py "
                    "--target f:\\llilc-rel\\bin\\Release "
                    "--nuspec f:\\llilc\\lib\\ObjWriter\\.nuget\\Microsoft.Dotnet.ObjectWriter.nuspec",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument("--target", metavar="PATH",
                        default=None,
                        help="path to a target directory that contains files that will "
                             "packaged")
    parser.add_argument("--nuspec", metavar="PATH",
                        default=None,
                        help="path to a nuspec file. This file is assumed to be under "
                             "a child directory (.nuget) of the target by convetion")
    parser.add_argument("--json", metavar="PATH",
                        default=None,
                        help="path to a json file. This file is used to create "
                             "a redirection package")
    args, unknown = parser.parse_known_args(argv)

    if unknown:
        print("Unknown argument(s): ", ", ".join(unknown))
        return -3

    returncode = 0
    if args.target == None:
        print("--target is not specified.")
        return -3

    if args.nuspec == None:
        print("--nuspec is not specified")
        return -3

    returncode = run(args)
    return returncode


if __name__ == "__main__":
    returncode = main(sys.argv[1:])
    sys.exit(returncode)
| 30.575472 | 81 | 0.650108 | 410 | 3,241 | 5.1 | 0.365854 | 0.025825 | 0.01913 | 0.030129 | 0.135342 | 0.084648 | 0.084648 | 0.084648 | 0 | 0 | 0 | 0.00556 | 0.223079 | 3,241 | 105 | 82 | 30.866667 | 0.824464 | 0.042271 | 0 | 0.071429 | 0 | 0 | 0.261136 | 0.029374 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.142857 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8933583a90eda099c254b38c7fb555a1d3d870bb | 4,622 | py | Python | tests/unit_tests/test_tethys_services/test_admin.py | msouff/tethys | 45795d1e6561d5db8fddd838f4d1ae1d91dbb837 | [
"BSD-2-Clause"
] | 79 | 2015-10-05T13:13:28.000Z | 2022-02-01T12:30:33.000Z | tests/unit_tests/test_tethys_services/test_admin.py | msouff/tethys | 45795d1e6561d5db8fddd838f4d1ae1d91dbb837 | [
"BSD-2-Clause"
] | 542 | 2015-08-12T22:11:32.000Z | 2022-03-29T22:18:08.000Z | tests/unit_tests/test_tethys_services/test_admin.py | msouff/tethys | 45795d1e6561d5db8fddd838f4d1ae1d91dbb837 | [
"BSD-2-Clause"
] | 71 | 2016-01-16T01:03:41.000Z | 2022-03-31T17:55:54.000Z | import unittest
from unittest import mock
from django.utils.translation import ugettext_lazy as _
from tethys_services.models import DatasetService, SpatialDatasetService, WebProcessingService, PersistentStoreService
from tethys_services.admin import DatasetServiceForm, SpatialDatasetServiceForm, WebProcessingServiceForm,\
PersistentStoreServiceForm, DatasetServiceAdmin, SpatialDatasetServiceAdmin, WebProcessingServiceAdmin,\
PersistentStoreServiceAdmin
class TestTethysServicesAdmin(unittest.TestCase):
    def setUp(self):
        self.expected_labels = {
            'public_endpoint': _('Public Endpoint')
        }

    def tearDown(self):
        pass

    def test_DatasetServiceForm(self):
        mock_args = mock.MagicMock()
        expected_fields = ('name', 'engine', 'endpoint', 'public_endpoint', 'apikey', 'username', 'password')
        ret = DatasetServiceForm(mock_args)
        self.assertEqual(DatasetService, ret.Meta.model)
        self.assertEqual(expected_fields, ret.Meta.fields)
        self.assertTrue('password' in ret.Meta.widgets)
        self.assertEqual(self.expected_labels, ret.Meta.labels)

    def test_SpatialDatasetServiceForm(self):
        mock_args = mock.MagicMock()
        expected_fields = ('name', 'engine', 'endpoint', 'public_endpoint', 'apikey', 'username', 'password')
        ret = SpatialDatasetServiceForm(mock_args)
        self.assertEqual(SpatialDatasetService, ret.Meta.model)
        self.assertEqual(expected_fields, ret.Meta.fields)
        self.assertTrue('password' in ret.Meta.widgets)
        self.assertEqual(self.expected_labels, ret.Meta.labels)

    def test_WebProcessingServiceForm(self):
        mock_args = mock.MagicMock()
        expected_fields = ('name', 'endpoint', 'public_endpoint', 'username', 'password')
        ret = WebProcessingServiceForm(mock_args)
        self.assertEqual(WebProcessingService, ret.Meta.model)
        self.assertEqual(expected_fields, ret.Meta.fields)
        self.assertTrue('password' in ret.Meta.widgets)
        self.assertEqual(self.expected_labels, ret.Meta.labels)

    def test_PersistentStoreServiceForm(self):
        mock_args = mock.MagicMock()
        expected_fields = ('name', 'engine', 'host', 'port', 'username', 'password')
        ret = PersistentStoreServiceForm(mock_args)
        self.assertEqual(PersistentStoreService, ret.Meta.model)
        self.assertEqual(expected_fields, ret.Meta.fields)
        self.assertTrue('password' in ret.Meta.widgets)

    def test_DatasetServiceAdmin(self):
        mock_args = mock.MagicMock()
        expected_fields = ('name', 'engine', 'endpoint', 'public_endpoint', 'apikey', 'username', 'password')
        ret = DatasetServiceAdmin(mock_args, mock_args)
        self.assertEqual(DatasetServiceForm, ret.form)
        self.assertEqual(expected_fields, ret.fields)

    def test_SpatialDatasetServiceAdmin(self):
        mock_args = mock.MagicMock()
        expected_fields = ('name', 'engine', 'endpoint', 'public_endpoint', 'apikey', 'username', 'password')
        ret = SpatialDatasetServiceAdmin(mock_args, mock_args)
        self.assertEqual(SpatialDatasetServiceForm, ret.form)
        self.assertEqual(expected_fields, ret.fields)

    def test_WebProcessingServiceAdmin(self):
        mock_args = mock.MagicMock()
        expected_fields = ('name', 'endpoint', 'public_endpoint', 'username', 'password')
        ret = WebProcessingServiceAdmin(mock_args, mock_args)
        self.assertEqual(WebProcessingServiceForm, ret.form)
        self.assertEqual(expected_fields, ret.fields)

    def test_PersistentStoreServiceAdmin(self):
        mock_args = mock.MagicMock()
        expected_fields = ('name', 'engine', 'host', 'port', 'username', 'password')
        ret = PersistentStoreServiceAdmin(mock_args, mock_args)
        self.assertEqual(PersistentStoreServiceForm, ret.form)
        self.assertEqual(expected_fields, ret.fields)

    def test_admin_site_register(self):
        from django.contrib import admin
        registry = admin.site._registry
        self.assertIn(DatasetService, registry)
        self.assertIsInstance(registry[DatasetService], DatasetServiceAdmin)
        self.assertIn(SpatialDatasetService, registry)
        self.assertIsInstance(registry[SpatialDatasetService], SpatialDatasetServiceAdmin)
        self.assertIn(WebProcessingService, registry)
        self.assertIsInstance(registry[WebProcessingService], WebProcessingServiceAdmin)
        self.assertIn(PersistentStoreService, registry)
        self.assertIsInstance(registry[PersistentStoreService], PersistentStoreServiceAdmin)
| 43.603774 | 118 | 0.720251 | 429 | 4,622 | 7.610723 | 0.146853 | 0.049005 | 0.044104 | 0.039204 | 0.478714 | 0.478714 | 0.440735 | 0.440735 | 0.440735 | 0.440735 | 0 | 0 | 0.177845 | 4,622 | 105 | 119 | 44.019048 | 0.859211 | 0 | 0 | 0.3875 | 0 | 0 | 0.094331 | 0 | 0 | 0 | 0 | 0 | 0.3875 | 1 | 0.1375 | false | 0.1625 | 0.075 | 0 | 0.225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8933611dccd35f84e21b95c6ea6cb9c0aef59e5b | 633 | py | Python | Codefights/arcade/intro/level-6/25.Array-Replace/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | 7 | 2017-09-20T16:40:39.000Z | 2021-08-31T18:15:08.000Z | Codefights/arcade/intro/level-6/25.Array-Replace/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | Codefights/arcade/intro/level-6/25.Array-Replace/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | # Python3
from solution1 import arrayReplace as f
qa = [
([1, 2, 1], 1, 3,
[3, 2, 3]),
([1, 2, 3, 4, 5], 3, 0,
[1, 2, 0, 4, 5]),
([1, 1, 1], 1, 10,
[10, 10, 10]),
([1, 2, 1, 2, 1], 2, 1,
[1, 1, 1, 1, 1]),
([1, 2, 1, 2, 1], 2, 2,
[1, 2, 1, 2, 1]),
([3, 1], 3, 9,
[9, 1])
]
for *q, a in qa:
for i, e in enumerate(q):
print('input{0}: {1}'.format(i + 1, e))
ans = f(*q)
if ans != a:
print(' [failed]')
print(' output:', ans)
print(' expected:', a)
else:
print(' [ok]')
print(' output:', ans)
print()
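
The harness imports `arrayReplace` from `solution1`, which is not included here; a minimal implementation consistent with the test cases above would be:

```python
def arrayReplace(inputArray, elemToReplace, substitutionElem):
    # Replace every occurrence of elemToReplace with substitutionElem.
    return [substitutionElem if x == elemToReplace else x for x in inputArray]

print(arrayReplace([1, 2, 1], 1, 3))  # [3, 2, 3]
```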
| 19.78125 | 47 | 0.366509 | 106 | 633 | 2.188679 | 0.311321 | 0.094828 | 0.103448 | 0.086207 | 0.103448 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0.171795 | 0.383886 | 633 | 31 | 48 | 20.419355 | 0.423077 | 0.011058 | 0 | 0.074074 | 0 | 0 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.037037 | 0.259259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89386b743e51694854f66d61583124c986f918df | 349 | py | Python | 001-099/38/38.py | lunixbochs/project-euler | aa974c5ae68547309f33adbb4e633fe040964855 | [
"MIT"
] | 6 | 2015-07-21T20:45:08.000Z | 2021-03-13T14:07:48.000Z | 001-099/38/38.py | lunixbochs/project-euler | aa974c5ae68547309f33adbb4e633fe040964855 | [
"MIT"
] | null | null | null | 001-099/38/38.py | lunixbochs/project-euler | aa974c5ae68547309f33adbb4e633fe040964855 | [
"MIT"
] | 2 | 2017-10-28T09:52:08.000Z | 2019-04-11T00:55:36.000Z | ref = set('123456789')
def pans(n):
r = str(n)
i = 1
while True:
i += 1
r += str(i * n)
if len(r) > 9:
break
if len(r) == 9 and set(r) == ref:
yield i, r
m = '0'  # best result so far; 9-digit strings compare lexicographically like numbers
for i in xrange(2, 1000000000):
for z, n in pans(i):
if n > m:
m = n
print z, i, n
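
For reference (in Python 3, unlike the Python 2 source above), the pandigital concatenated-product check at the heart of `pans` can be exercised in isolation:

```python
ref = set('123456789')

def concatenated_product(n, upto):
    # Concatenate n*1, n*2, ..., n*upto, as in Project Euler problem 38.
    return ''.join(str(i * n) for i in range(1, upto + 1))

# 192 * (1, 2, 3) -> '192384576', a 1-9 pandigital number
s = concatenated_product(192, 3)
print(len(s) == 9 and set(s) == ref)  # True
```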
| 17.45 | 41 | 0.383954 | 58 | 349 | 2.310345 | 0.465517 | 0.059701 | 0.089552 | 0.104478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137363 | 0.47851 | 349 | 19 | 42 | 18.368421 | 0.598901 | 0 | 0 | 0 | 0 | 0 | 0.025788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89414998e5a7ff84331d610b6b9cc51dadd75709 | 6,364 | py | Python | TransfertFichier.py | Fabien034/TimeLaps | a9fcbdfd6551cf8c12961adeef15dfdfaad8fb3c | [
"CC0-1.0"
] | null | null | null | TransfertFichier.py | Fabien034/TimeLaps | a9fcbdfd6551cf8c12961adeef15dfdfaad8fb3c | [
"CC0-1.0"
] | null | null | null | TransfertFichier.py | Fabien034/TimeLaps | a9fcbdfd6551cf8c12961adeef15dfdfaad8fb3c | [
"CC0-1.0"
] | null | null | null | # !/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright: Fabien Rosso
# Version 0.1.1 - 19 April 2016
# Version 0.1.2 - 29 April 2016
## Debug VisualStudio
# import ptvsd
# ptvsd.enable_attach(None)
from __future__ import unicode_literals
from __future__ import print_function
import os
import sys
import subprocess
import pickle
import socket
import threading
from datetime import datetime
from time import sleep
from wrappers import *
from fonctions import *
import logging
from logging.handlers import RotatingFileHandler
os.system("clear")
class ThreadReception(threading.Thread):
    """Thread object that handles receiving messages."""
def __init__(self, conn):
threading.Thread.__init__(self)
        self.connexion = conn  # reference to the connection socket
def run(self):
while True:
try:
                # wait for incoming data
message_recu = self.connexion.recv(4096)
message_recu = message_recu.encode(encoding='UTF-8')
print(message_recu)
except:
                # end the thread
break
        print("ThreadReception stopped. Connection interrupted.")
self.connexion.close()
def main():
    # create the logger object used to write to the logs
logger = logging.getLogger()
    # set the logger level to DEBUG so that it records everything
logger.setLevel(logging.DEBUG)
    # create a formatter that adds the time and the level
    # to every message written to the log
formatter = logging.Formatter('%(asctime)s :: %(levelname)s :: %(message)s')
    # create a handler that redirects log writes to a file
    # opened in append mode, with 1 backup and a 1 MB size limit
file_handler = RotatingFileHandler('activity_transfert_fichier.log', 'a', 1000000, 1)
    # set its level to DEBUG, tell it to use the formatter
    # created above, and add this handler to the logger
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
    # create a second handler that mirrors every log write
    # to the console
steam_handler = logging.StreamHandler()
steam_handler.setLevel(logging.DEBUG)
logger.addHandler(steam_handler)
    raw_input("Press any key to start sending the photos")
os.system("clear")
    # read the configuration variables back from the configServer file
with open("configServer", "rb") as fichierConfig:
configRead = pickle.Unpickler(fichierConfig)
HOST = configRead.load()
PORT = int(configRead.load())
USER = configRead.load()
PORTSSH = int(configRead.load())
HOMESERVER = os.path.join("/home",USER,"Images")
    # create the transfer-queue ("Attente") folder
WAITPATH = os.path.join(os.path.expanduser("~"),"Pictures","Attente")
if not os.path.exists(WAITPATH):
os.makedirs(WAITPATH, mode=0o777)
    # establish the connection with the server
    # IPv4 and TCP protocols
mySocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mySocket.setsockopt(socket.SOL_SOCKET,socket.SO_REUSEADDR,1)
    logger.info(" >> Connecting to " + HOST + "...")
try:
mySocket.connect((HOST, PORT))
except socket.error:
        logger.warning(" >> server '" + HOST + "' could not be found.")
sys.exit()
    # dialogue with the server: a thread handles incoming messages
th_Reception = ThreadReception(mySocket)
th_Reception.start()
while True:
listFile = ListDirectory(WAITPATH)
if len(listFile) > 0:
            file = Photo(listFile[0])  # instantiate the Photo class
            octets = os.path.getsize(file.pathFile) / 1024  # size of the photo in KB
            logger.info(" >> Transferring: '" + file.nameFile + "' [" + str(octets) + " KB]")
            # build the destination path on the server from the creation date
datePath = time.strftime("%Y/%y.%m.%d/",file.createDate)
pathDestination = os.path.join(HOMESERVER,datePath)
fileDestination = os.path.join(pathDestination,file.nameFile)
            # ask the server to create the path if it does not exist
message_emis = b"makedirs {0}".format(pathDestination)
try:
                # send
mySocket.send(message_emis.decode(encoding='UTF-8'))
time.sleep(0.1)
except:
break
            # tell the server that we are about to send it a file of x bytes
message_emis = b"EnvoiPhoto {0} {1}".format(file.nameFile,octets)
try:
                # send
mySocket.send(message_emis.decode(encoding='UTF-8'))
time.sleep(0.1)
except:
break
            logger.info(" >> Sending photo '{0}'".format(file.nameFile))
            # SCP subprocess to upload the photo
subprocess.call(["scp", "-P", bytes(PORTSSH), "-p", file.pathFile ,"{0}@{1}:{2}".format(USER, bytes(HOST), fileDestination)])
time.sleep(0.01)
            logger.info(" >> Upload OK")
            # build the local archive path on the RPi from the creation date
            # create the folder if it does not exist
sourcePath = os.path.join(os.path.expanduser("~"),"Pictures",datePath)
if not os.path.exists(sourcePath):
os.makedirs(sourcePath, mode=0o777)
            # build the new name
newPathFile = os.path.join(sourcePath,file.nameFile)
            # move/rename the photo
file = file.move(newPathFile)
            logger.info(" >> File '{0}' moved to '{1}'".format(file.nameFile, file.parenPathFile))
print("")
mySocket.close()
    logger.info(" >> photo upload finished")
if __name__ == "__main__":
try:
main()
    except KeyboardInterrupt:
        # `logger` only exists inside main(); use the module-level logging API here
        logging.warning("Program interrupted by the user")
        sys.exit(0)
| 36.574713 | 138 | 0.61675 | 765 | 6,364 | 5.065359 | 0.379085 | 0.017032 | 0.015484 | 0.011355 | 0.091355 | 0.07071 | 0.07071 | 0.053161 | 0.038194 | 0.038194 | 0 | 0.015175 | 0.285512 | 6,364 | 173 | 139 | 36.786127 | 0.836815 | 0.247014 | 0 | 0.186275 | 0 | 0 | 0.121078 | 0.006628 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.137255 | null | null | 0.039216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89418edd3c2fb8634084937401da418c6baab778 | 388 | py | Python | posts/urls.py | xXcoronaXx/Dblog | 56ee1b8f6f74fabbe0e081921e6cff092661e8df | [
"MIT"
] | null | null | null | posts/urls.py | xXcoronaXx/Dblog | 56ee1b8f6f74fabbe0e081921e6cff092661e8df | [
"MIT"
] | null | null | null | posts/urls.py | xXcoronaXx/Dblog | 56ee1b8f6f74fabbe0e081921e6cff092661e8df | [
"MIT"
] | null | null | null | from django.conf.urls import include, url
urlpatterns = [
url(r'^admin/(?P<slug>[\w\-]+)/', 'posts.views.post_preview', name='post_preview'),
url(r'^tag/(?P<slug>[\w\-]+)/', 'posts.views.posts_view_tag', name='posts_tag'),
url(r'^popular/', 'posts.views.posts_view_popular', name='posts_popular'),
url(r'^(?P<slug>[\w\-]+)/', 'posts.views.post_view', name='post_view'),
] | 48.5 | 87 | 0.634021 | 58 | 388 | 4.068966 | 0.362069 | 0.067797 | 0.076271 | 0.139831 | 0.237288 | 0.169492 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095361 | 388 | 8 | 88 | 48.5 | 0.672365 | 0 | 0 | 0 | 0 | 0 | 0.565553 | 0.383033 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
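
The `(?P<slug>[\w\-]+)` capture used by these routes can be checked with plain `re`; the sample path is illustrative:

```python
import re

# Same slug pattern as the URL routes above.
pattern = re.compile(r'^tag/(?P<slug>[\w\-]+)/')

m = pattern.match('tag/django-tips/extra')
print(m.group('slug'))  # django-tips
```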
894a5fed9ddf654ea364f9de215cbb93bd3168ae | 582 | py | Python | britecoreproject/orm.py | alis0nc/britecore-project | aa31eb1f87c1c250f8bca9d9aa109eeae1232b9b | [
"MIT"
] | null | null | null | britecoreproject/orm.py | alis0nc/britecore-project | aa31eb1f87c1c250f8bca9d9aa109eeae1232b9b | [
"MIT"
] | 1 | 2018-02-13T00:34:22.000Z | 2018-02-13T00:34:22.000Z | britecoreproject/orm.py | alis0nc/britecore-project | aa31eb1f87c1c250f8bca9d9aa109eeae1232b9b | [
"MIT"
] | null | null | null | """
Define ORM classes.
"""
from sqlobject import *
class RiskType(SQLObject):
"""Define RiskType ORM class."""
riskTypeName = StringCol()
fields = RelatedJoin('Field')
class Field(SQLObject):
"""Define Field ORM class."""
fieldName = StringCol()
fieldType = ForeignKey('FieldType')
risks = RelatedJoin('RiskType')
class FieldType(SQLObject):
"""Define FieldType ORM class."""
fieldTypeName = StringCol()
def create_tables():
"""Create tables in database."""
RiskType.createTable()
Field.createTable()
FieldType.createTable()
| 22.384615 | 39 | 0.671821 | 55 | 582 | 7.090909 | 0.436364 | 0.115385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189003 | 582 | 25 | 40 | 23.28 | 0.826271 | 0.214777 | 0 | 0 | 0 | 0 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0 | 0.785714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
89506cba8d48107a96bebd4fb53108e48e413e52 | 179 | py | Python | Curso_Coursera/Ex1_ParImpar.py | shirleyguimaraes/Linguagem_Python | cb8b871ee4fe85f803c35bfe55ce533cab8150fb | [
"MIT"
] | null | null | null | Curso_Coursera/Ex1_ParImpar.py | shirleyguimaraes/Linguagem_Python | cb8b871ee4fe85f803c35bfe55ce533cab8150fb | [
"MIT"
] | null | null | null | Curso_Coursera/Ex1_ParImpar.py | shirleyguimaraes/Linguagem_Python | cb8b871ee4fe85f803c35bfe55ce533cab8150fb | [
"MIT"
] | null | null | null |
def main():
    n = int(input("Enter an integer: "))
    resto = n % 2
    if resto == 0:
        print("Even")
    else:
        print("Odd")
main()
| 11.933333 | 49 | 0.435754 | 21 | 179 | 3.714286 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019048 | 0.413408 | 179 | 14 | 50 | 12.785714 | 0.72381 | 0 | 0 | 0 | 0 | 0 | 0.207317 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89528b5ef79fc793c901837f37deed12889d7c14 | 536 | py | Python | pyro/contrib/oed/__init__.py | cweniger/pyro | ba104f07ca17865d2600e8765d920d549fcb3fbc | [
"MIT"
] | 10 | 2020-03-18T14:41:25.000Z | 2021-07-04T08:49:57.000Z | pyro/contrib/oed/__init__.py | cweniger/pyro | ba104f07ca17865d2600e8765d920d549fcb3fbc | [
"MIT"
] | 19 | 2018-10-30T13:45:31.000Z | 2019-09-27T14:16:57.000Z | pyro/contrib/oed/__init__.py | cweniger/pyro | ba104f07ca17865d2600e8765d920d549fcb3fbc | [
"MIT"
] | 5 | 2020-06-21T23:40:35.000Z | 2021-11-09T16:18:42.000Z | """
The :mod:`pyro.contrib.oed` module provides tools to create optimal experiment
designs for pyro models. In particular, it provides estimators for the
expected information gain (EIG) criterion.
To estimate the EIG for a particular design, use::
def model(design):
...
# Select an appropriate EIG estimator, such as
eig = vnmc_eig(model, design, ...)
EIG can then be maximised using existing optimisers in :mod:`pyro.optim`.
"""
from pyro.contrib.oed import search, eig
__all__ = [
"search",
"eig"
]
| 23.304348 | 78 | 0.701493 | 74 | 536 | 5.013514 | 0.662162 | 0.037736 | 0.075472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.199627 | 536 | 22 | 79 | 24.363636 | 0.864802 | 0.83209 | 0 | 0 | 0 | 0 | 0.109756 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
895d842bfc897627a79eb5fa98161a381e6ef77f | 3,925 | py | Python | venv/Lib/site-packages/caffe2/python/serialized_test/coverage.py | Westlanderz/AI-Plat1 | 1187c22819e5135e8e8189c99b86a93a0d66b8d8 | [
"MIT"
] | 1 | 2022-01-08T12:30:44.000Z | 2022-01-08T12:30:44.000Z | venv/Lib/site-packages/caffe2/python/serialized_test/coverage.py | Westlanderz/AI-Plat1 | 1187c22819e5135e8e8189c99b86a93a0d66b8d8 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/caffe2/python/serialized_test/coverage.py | Westlanderz/AI-Plat1 | 1187c22819e5135e8e8189c99b86a93a0d66b8d8 | [
"MIT"
] | null | null | null |
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace
import os
import tempfile
from zipfile import ZipFile
'''
Generates a document in markdown format summarizing the coverage of serialized
testing. The document lives in
`caffe2/python/serialized_test/SerializedTestCoverage.md`
'''
OpSchema = workspace.C.OpSchema
def gen_serialized_test_coverage(source_dir, output_dir):
(covered, not_covered, schemaless) = gen_coverage_sets(source_dir)
num_covered = len(covered)
num_not_covered = len(not_covered)
num_schemaless = len(schemaless)
total_ops = num_covered + num_not_covered
with open(os.path.join(output_dir, 'SerializedTestCoverage.md'), 'w+') as f:
f.write('# Serialized Test Coverage Report\n')
f.write("This is an automatically generated file. Please see "
"`caffe2/python/serialized_test/README.md` for details. "
"In the case of merge conflicts, please rebase and regenerate.\n")
f.write('## Summary\n')
f.write(
'Serialized tests have covered {}/{} ({}%) operators\n\n'.format(
num_covered, total_ops,
(int)(num_covered / total_ops * 1000) / 10))
f.write('## Not covered operators\n')
f.write('<details>\n')
f.write(
'<summary>There are {} not covered operators</summary>\n\n'.format(
num_not_covered))
for n in sorted(not_covered):
f.write('* ' + n + '\n')
f.write('</details>\n\n')
f.write('## Covered operators\n')
f.write('<details>\n')
f.write(
'<summary>There are {} covered operators</summary>\n\n'.format(
num_covered))
for n in sorted(covered):
f.write('* ' + n + '\n')
f.write('</details>\n\n')
f.write('## Excluded from coverage statistics\n')
f.write('### Schemaless operators\n')
f.write('<details>\n')
f.write(
'<summary>There are {} schemaless operators</summary>\n\n'.format(
num_schemaless))
for n in sorted(schemaless):
f.write('* ' + n + '\n')
f.write('</details>\n\n')
def gen_coverage_sets(source_dir):
covered_ops = gen_covered_ops(source_dir)
not_covered_ops = set()
schemaless_ops = []
for op_name in core._GetRegisteredOperators():
s = OpSchema.get(op_name)
if s is not None and s.private:
continue
if s:
if op_name not in covered_ops:
not_covered_ops.add(op_name)
else:
if op_name.find("_ENGINE_") == -1:
schemaless_ops.append(op_name)
return (covered_ops, not_covered_ops, schemaless_ops)
def gen_covered_ops(source_dir):
def parse_proto(x):
proto = caffe2_pb2.OperatorDef()
proto.ParseFromString(x)
return proto
covered = set()
for f in os.listdir(source_dir):
zipfile = os.path.join(source_dir, f)
if not os.path.isfile(zipfile):
continue
temp_dir = tempfile.mkdtemp()
with ZipFile(zipfile) as z:
z.extractall(temp_dir)
op_path = os.path.join(temp_dir, 'op.pb')
with open(op_path, 'rb') as f:
loaded_op = f.read()
op_proto = parse_proto(loaded_op)
covered.add(op_proto.type)
index = 0
grad_path = os.path.join(temp_dir, 'grad_{}.pb'.format(index))
while os.path.isfile(grad_path):
with open(grad_path, 'rb') as f:
loaded_grad = f.read()
grad_proto = parse_proto(loaded_grad)
covered.add(grad_proto.type)
index += 1
grad_path = os.path.join(temp_dir, 'grad_{}.pb'.format(index))
return covered
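
The per-archive extract-and-read step in `gen_covered_ops` can be exercised on its own with a placeholder payload standing in for a serialized `OperatorDef` (`read_member` is an illustrative helper name):

```python
import os
import tempfile
import zipfile

def read_member(zip_path, member, dest_dir):
    # Extract the archive, then read one member back -- the same
    # pattern gen_covered_ops uses for 'op.pb' and 'grad_{}.pb'.
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(dest_dir)
    with open(os.path.join(dest_dir, member), 'rb') as f:
        return f.read()

tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, 'ops.zip')
with zipfile.ZipFile(zip_path, 'w') as z:
    z.writestr('op.pb', b'serialized-op-bytes')  # stand-in payload

print(read_member(zip_path, 'op.pb', tmp))  # b'serialized-op-bytes'
```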
| 33.547009 | 81 | 0.581401 | 493 | 3,925 | 4.450304 | 0.237323 | 0.054695 | 0.047858 | 0.038286 | 0.304467 | 0.20237 | 0.180492 | 0.149499 | 0.149499 | 0.139016 | 0 | 0.006191 | 0.300382 | 3,925 | 116 | 82 | 33.836207 | 0.79279 | 0 | 0 | 0.188889 | 1 | 0 | 0.193113 | 0.03719 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.055556 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89730fea2cfd2a597b98eac7b0559f46c95c3edb | 381 | py | Python | Opencart/tests.py | BahrmaLe/otus_python_homework | 510a4f1971b35048d760fcc45098e511b81bea31 | [
"MIT"
] | 1 | 2021-02-25T15:37:21.000Z | 2021-02-25T15:37:21.000Z | Opencart/tests.py | BahrmaLe/otus_python_homework | 510a4f1971b35048d760fcc45098e511b81bea31 | [
"MIT"
] | null | null | null | Opencart/tests.py | BahrmaLe/otus_python_homework | 510a4f1971b35048d760fcc45098e511b81bea31 | [
"MIT"
] | null | null | null | def test_open_home_page(driver, request):
    url = 'opencart/'
    # load the page first; an early `return` here made the assertions below unreachable
    driver.get("".join([request.config.getoption("--address"), url]))
    element = driver.find_element_by_xpath("//base[contains(@href, 'http://192.168.56.103/opencart/')]")
    assert element.get_attribute('href') == 'http://192.168.56.103/opencart/'
    assert driver.find_element_by_class_name('col-sm-4').text == 'Your Store'
| 42.333333 | 103 | 0.692913 | 54 | 381 | 4.703704 | 0.685185 | 0.07874 | 0.07874 | 0.094488 | 0.228346 | 0.228346 | 0.228346 | 0 | 0 | 0 | 0 | 0.068249 | 0.115486 | 381 | 8 | 104 | 47.625 | 0.68546 | 0 | 0 | 0 | 0 | 0 | 0.324538 | 0.058047 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8976ac3ed7f5bc537c2e56c41551cfde83e492fd | 1,996 | py | Python | two-phase-v2/OPE-master/run_ope.py | zizuzi/DeepEmbedding | d8762a94863fadced35b1dcc59996dcd02b1acaf | [
"MIT"
] | 1 | 2019-07-10T05:59:26.000Z | 2019-07-10T05:59:26.000Z | two-phase-v2/OPE-master/run_ope.py | zizuzi/DeepEmbedding | d8762a94863fadced35b1dcc59996dcd02b1acaf | [
"MIT"
] | 7 | 2019-03-20T00:28:15.000Z | 2021-09-07T23:48:46.000Z | two-phase-v2/OPE-master/run_ope.py | thaihungle/DeepTopicEmbedding | d8762a94863fadced35b1dcc59996dcd02b1acaf | [
"MIT"
] | null | null | null | import sys
sys.path.insert(0, './')
sys.path.insert(0, './common')
sys.path.insert(0, './ML-FW')
sys.path.insert(0, './ML-OPE')
sys.path.insert(0, './Online-FW')
sys.path.insert(0, './Online-OPE')
sys.path.insert(0, './Streaming-FW')
sys.path.insert(0, './Streaming-OPE')
import time
import shutil
import utilities as ut
import run_ML_FW as mfw
import run_ML_OPE as mope
import run_Online_FW as ofw
import run_Online_OPE as oope
import run_Streaming_FW as sfw
import run_Streaming_OPE as sope
import os.path
'''
example command: python run_ope.py ./data2/news20.dat ./settings.txt ML-FW
'''
if __name__ == '__main__':
# Get environment variables
train_file = sys.argv[1]
setting_file = sys.argv[2]
algo_name = sys.argv[3]
test_data_folder = None
if len(sys.argv)==5:
test_data_folder = sys.argv[4]
dataname=train_file.split('/')[-1].split('.')[0]
modelname='./models/%s/%s' % (algo_name, dataname)
train_file2=train_file+'.unsup'
if os.path.isfile(train_file2):
ut.convert_multilabel_unsupervised(train_file,train_file2)
train_file=train_file2
numdoc, numterm=ut.count_data(train_file)
print numdoc, numterm
    rf=open(setting_file,'r')
    rf.readline()
    rf.readline()
    rest=rf.read()  # keep the remainder before reopening the file for writing,
    rf.close()      # since mode 'w' truncates it while rf is still reading
    newstr='num_docs: %d\nnum_terms: %d\n'%(numdoc, numterm)
    wf=open(setting_file,'w')
    wf.write(newstr)
    wf.write(rest)
    wf.close()
start_time=time.time()
if algo_name=='ML-FW':
mfw.run(train_file,setting_file,modelname,test_data_folder)
if algo_name=='ML-OPE':
mope.run(train_file,setting_file,modelname,test_data_folder)
if algo_name=='Online-FW':
ofw.run(train_file,setting_file,modelname,test_data_folder)
if algo_name=='Online-OPE':
oope.run(train_file,setting_file,modelname,test_data_folder)
if algo_name=='Streaming-FW':
sfw.run(train_file,setting_file,modelname,test_data_folder)
if algo_name=='Streaming-OPE':
sope.run(train_file,setting_file,modelname,test_data_folder)
print '-----%s seconds------' %(time.time()-start_time)
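
For reference, the header rewrite that `run_ope.py` performs on the settings file (replace the first two lines, keep the rest) can be sketched as a standalone Python 3 helper — `rewrite_header` and the sample settings content are illustrative:

```python
import os
import tempfile

def rewrite_header(path, num_docs, num_terms):
    # Read everything first, then rewrite: truncating a file with open(path, 'w')
    # while another handle is still reading it loses the un-read remainder.
    with open(path) as f:
        lines = f.readlines()
    lines[:2] = ['num_docs: %d\n' % num_docs, 'num_terms: %d\n' % num_terms]
    with open(path, 'w') as f:
        f.writelines(lines)

path = os.path.join(tempfile.mkdtemp(), 'settings.txt')
with open(path, 'w') as f:
    f.write('num_docs: 0\nnum_terms: 0\nalpha: 0.01\n')
rewrite_header(path, 100, 5000)
```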
| 30.707692 | 70 | 0.726954 | 323 | 1,996 | 4.256966 | 0.26935 | 0.078545 | 0.075636 | 0.081455 | 0.353455 | 0.258909 | 0.258909 | 0.258909 | 0.258909 | 0.225455 | 0 | 0.012465 | 0.115731 | 1,996 | 64 | 71 | 31.1875 | 0.766572 | 0.012525 | 0 | 0.035714 | 0 | 0 | 0.113168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.196429 | null | null | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8977bd7b64038385d9857845a7707789f51dd14d | 216 | py | Python | setup.py | mechyai/RL-EmsPy | 01984a9bc37f35991073e17cf715687896d6ea8d | [
"Apache-2.0"
] | 8 | 2022-03-15T13:22:22.000Z | 2022-03-30T09:58:04.000Z | setup.py | mechyai/RL-EmsPy | 01984a9bc37f35991073e17cf715687896d6ea8d | [
"Apache-2.0"
] | null | null | null | setup.py | mechyai/RL-EmsPy | 01984a9bc37f35991073e17cf715687896d6ea8d | [
"Apache-2.0"
] | null | null | null | from distutils.core import setup
setup(
name='RL-EmsPy',
version='0.0.1',
packages=['emspy'],
url='https://github.com/mechyai/RL-EmsPy',
license='Apache License 2.0',
author='Chris Eubel',
)
| 19.636364 | 46 | 0.62963 | 30 | 216 | 4.533333 | 0.766667 | 0.102941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028409 | 0.185185 | 216 | 10 | 47 | 21.6 | 0.744318 | 0 | 0 | 0 | 0 | 0 | 0.37963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
897b4e6d0cd4fcb2d42922fef3503586389e4297 | 170 | py | Python | static/images/uploads/convert-to-web.py | KrishSkywalker/krishgoel.com-v2 | a27f6bd9541f354a6ded37e6dcd9ca930c2264ff | [
"MIT"
] | null | null | null | static/images/uploads/convert-to-web.py | KrishSkywalker/krishgoel.com-v2 | a27f6bd9541f354a6ded37e6dcd9ca930c2264ff | [
"MIT"
] | 2 | 2021-10-09T06:50:53.000Z | 2021-10-09T06:56:21.000Z | static/images/uploads/convert-to-web.py | KrishSkywalker/krishgoel.com-v2 | a27f6bd9541f354a6ded37e6dcd9ca930c2264ff | [
"MIT"
] | null | null | null | import os
import cv2
for d in os.listdir():
    if not d.endswith(".png"):
        continue
    img = cv2.imread(d)
    if img is None:
        continue  # skip files OpenCV cannot read
    cv2.imwrite(d.replace(".png", ".webp"), img)
    os.remove(d)
897b64057889d6f4e8ff4affe28b5a7f47a4113d | 16,328 | bzl | Python | test/internal/opts/process_compiler_opts_tests.bzl | buildbuddy-io/rules_xcodeproj | 79a54779d530d3907a69a7910c71c06fec4aefeb | [
"MIT"
] | 30 | 2022-02-11T16:36:24.000Z | 2022-03-31T18:18:06.000Z | test/internal/opts/process_compiler_opts_tests.bzl | buildbuddy-io/rules_xcodeproj | 79a54779d530d3907a69a7910c71c06fec4aefeb | [
"MIT"
] | 93 | 2022-02-10T23:46:22.000Z | 2022-03-30T14:57:40.000Z | test/internal/opts/process_compiler_opts_tests.bzl | buildbuddy-io/rules_xcodeproj | 79a54779d530d3907a69a7910c71c06fec4aefeb | [
"MIT"
] | 3 | 2022-03-09T00:14:47.000Z | 2022-03-31T11:16:16.000Z | """Tests for compiler options processing functions."""
load("@bazel_skylib//lib:unittest.bzl", "asserts", "unittest")
load("//test:utils.bzl", "stringify_dict")
# buildifier: disable=bzl-visibility
load("//xcodeproj/internal:opts.bzl", "testable")
process_compiler_opts = testable.process_compiler_opts
def _process_compiler_opts_test_impl(ctx):
env = unittest.begin(ctx)
build_settings = {}
search_paths = process_compiler_opts(
conlyopts = ctx.attr.conlyopts,
cxxopts = ctx.attr.cxxopts,
full_swiftcopts = ctx.attr.full_swiftcopts,
user_swiftcopts = ctx.attr.user_swiftcopts,
package_bin_dir = ctx.attr.package_bin_dir,
build_settings = build_settings,
)
string_build_settings = stringify_dict(build_settings)
json_search_paths = json.encode(search_paths)
expected_build_settings = {
"SWIFT_OBJC_INTERFACE_HEADER_NAME": "",
"SWIFT_OPTIMIZATION_LEVEL": "-Onone",
"SWIFT_VERSION": "5",
}
expected_build_settings.update(ctx.attr.expected_build_settings)
asserts.equals(
env,
expected_build_settings,
string_build_settings,
"build_settings",
)
asserts.equals(
env,
ctx.attr.expected_search_paths,
json_search_paths,
"search_paths",
)
return unittest.end(env)
process_compiler_opts_test = unittest.make(
impl = _process_compiler_opts_test_impl,
attrs = {
"conlyopts": attr.string_list(mandatory = True),
"cxxopts": attr.string_list(mandatory = True),
"expected_build_settings": attr.string_dict(mandatory = True),
"expected_search_paths": attr.string(mandatory = True),
"package_bin_dir": attr.string(mandatory = True),
"full_swiftcopts": attr.string_list(mandatory = True),
"user_swiftcopts": attr.string_list(mandatory = True),
},
)
def process_compiler_opts_test_suite(name):
"""Test suite for `process_compiler_opts`.
Args:
name: The base name to be used in things created by this macro. Also the
name of the test suite.
"""
test_names = []
def _add_test(
*,
name,
expected_build_settings,
expected_search_paths = {"quote_includes": [], "includes": [], "system_includes": []},
conlyopts = [],
cxxopts = [],
full_swiftcopts = [],
user_swiftcopts = [],
package_bin_dir = ""):
test_names.append(name)
process_compiler_opts_test(
name = name,
conlyopts = conlyopts,
cxxopts = cxxopts,
full_swiftcopts = full_swiftcopts,
user_swiftcopts = user_swiftcopts,
package_bin_dir = package_bin_dir,
        expected_build_settings = stringify_dict(expected_build_settings),
        expected_search_paths = json.encode(expected_search_paths),
        timeout = "short",
    )

    # Base
    _add_test(
        name = "{}_swift_integration".format(name),
        full_swiftcopts = [
            "-target",
            "arm64-apple-ios15.0-simulator",
            "-sdk",
            "__BAZEL_XCODE_SDKROOT__",
            "-F__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/iPhoneSimulator.platform/Developer/Library/Frameworks",
            "-F__BAZEL_XCODE_SDKROOT__/Developer/Library/Frameworks",
            "-I__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/iPhoneSimulator.platform/Developer/usr/lib",
            "-emit-object",
            "-output-file-map",
            "bazel-out/ios-sim_arm64-min15.0-applebin_ios-ios_sim_arm64-fastbuild-ST-4e6c2a19403f/bin/examples/ExampleUITests/ExampleUITests.library.output_file_map.json",
            "-Xfrontend",
            "-no-clang-module-breadcrumbs",
            "-emit-module-path",
            "bazel-out/ios-sim_arm64-min15.0-applebin_ios-ios_sim_arm64-fastbuild-ST-4e6c2a19403f/bin/examples/ExampleUITests/ExampleUITests.swiftmodule",
            "-DDEBUG",
            "-Onone",
            "-Xfrontend",
            "-serialize-debugging-options",
            "-enable-testing",
            "-application-extension",
            "weird",
            "-gline-tables-only",
            "-Xwrapped-swift=-debug-prefix-pwd-is-dot",
            "-Xwrapped-swift=-ephemeral-module-cache",
            "-Xcc",
            "-iquote.",
            "-Xcc",
            "-iquotebazel-out/ios-sim_arm64-min15.0-applebin_ios-ios_sim_arm64-fastbuild-ST-4e6c2a19403f/bin",
            "-Xfrontend",
            "-color-diagnostics",
            "-enable-batch-mode",
            "-unhandled",
            "-module-name",
            "ExampleUITests",
            "-parse-as-library",
            "-Xcc",
            "-O0",
            "-Xcc",
            "-DDEBUG=1",
            "examples/xcode_like/ExampleUITests/ExampleUITests.swift",
            "examples/xcode_like/ExampleUITests/ExampleUITestsLaunchTests.swift",
        ],
        user_swiftcopts = [],
        expected_build_settings = {
            "ENABLE_TESTABILITY": "True",
            "APPLICATION_EXTENSION_API_ONLY": "True",
            "OTHER_SWIFT_FLAGS": "weird -unhandled",
            "SWIFT_ACTIVE_COMPILATION_CONDITIONS": "DEBUG",
        },
    )
    _add_test(
        name = "{}_empty".format(name),
        conlyopts = [],
        cxxopts = [],
        full_swiftcopts = [],
        user_swiftcopts = [],
        expected_build_settings = {},
    )

    # Skips
    ## C and C++
    # Anything with __BAZEL_XCODE_
    # -isysroot
    # -mios-simulator-version-min
    # -miphoneos-version-min
    # -mmacosx-version-min
    # -mtvos-simulator-version-min
    # -mtvos-version-min
    # -mwatchos-simulator-version-min
    # -mwatchos-version-min
    # -target
    ## Swift:
    # Anything with __BAZEL_XCODE_
    # -Ipath
    # -emit-module-path
    # -emit-object
    # -enable-batch-mode
    # -gline-tables-only
    # -module-name
    # -num-threads
    # -output-file-map
    # -parse-as-library
    # -sdk
    # -target
    # -Xcc
    # -Xfrontend
    # -Xwrapped-swift
    # Other things that end with ".swift", but don't start with "-"
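To make the skip list above concrete, here is a rough, self-contained Python sketch of the C/C++ skip behaviour it describes. This is illustrative only: the real logic lives in Starlark, and `filter_cc_opts`, `CC_SKIP_WITH_VALUE`, and `CC_SKIP_PREFIXES` are invented names.

```python
# Hypothetical model of the C/C++ skip rules listed above
# (not the real Starlark implementation).
CC_SKIP_WITH_VALUE = {"-isysroot", "-target"}  # these also consume the next arg
CC_SKIP_PREFIXES = (
    "-mios-simulator-version-min",
    "-miphoneos-version-min",
    "-mmacosx-version-min",
    "-mtvos-simulator-version-min",
    "-mtvos-version-min",
    "-mwatchos-simulator-version-min",
    "-mwatchos-version-min",
)

def filter_cc_opts(opts):
    kept = []
    skip_next = False
    for opt in opts:
        if skip_next:
            # The previous flag's value is dropped along with the flag.
            skip_next = False
            continue
        if opt in CC_SKIP_WITH_VALUE:
            skip_next = True
            continue
        if opt.startswith(CC_SKIP_PREFIXES) or "__BAZEL_XCODE_" in opt:
            continue
        kept.append(opt)
    return kept

print(filter_cc_opts(["-isysroot", "other", "-passthrough", "-target", "ios"]))
# ['-passthrough']
```

Run against the `conlyopts` of the `_skips` test case, this sketch keeps exactly the four `-passthrough` flags, matching the expected `OTHER_CFLAGS`.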
    _add_test(
        name = "{}_skips".format(name),
        conlyopts = [
            "-mtvos-simulator-version-min=8.0",
            "-passthrough",
            "-isysroot",
            "other",
            "-mios-simulator-version-min=11.2",
            "-miphoneos-version-min=9.0",
            "-passthrough",
            "-mtvos-version-min=12.1",
            "-I__BAZEL_XCODE_SOMETHING_/path",
            "-mwatchos-simulator-version-min=10.1",
            "-passthrough",
            "-mwatchos-version-min=9.2",
            "-target",
            "ios",
            "-mmacosx-version-min=12.0",
            "-passthrough",
        ],
        cxxopts = [
            "-isysroot",
            "something",
            "-miphoneos-version-min=9.4",
            "-mmacosx-version-min=10.9",
            "-passthrough",
            "-mtvos-version-min=12.2",
            "-mwatchos-simulator-version-min=9.3",
            "-passthrough",
            "-mtvos-simulator-version-min=12.1",
            "-mwatchos-version-min=10.2",
            "-I__BAZEL_XCODE_BOSS_",
            "-target",
            "macos",
            "-passthrough",
            "-mios-simulator-version-min=14.0",
        ],
        full_swiftcopts = [
            "-output-file-map",
            "path",
            "-passthrough",
            "-emit-module-path",
            "path",
            "-passthrough",
            "-Xfrontend",
            "-hidden",
            "-emit-object",
            "-enable-batch-mode",
            "-passthrough",
            "-gline-tables-only",
            "-sdk",
            "something",
            "-module-name",
            "name",
            "-passthrough",
            "-I__BAZEL_XCODE_SOMETHING_/path",
            "-num-threads",
            "6",
            "-passthrough",
            "-Ibazel-out/...",
            "-parse-as-library",
            "-passthrough",
            "-parse-as-library",
            "-keep-me=something.swift",
            "reject-me.swift",
            "-target",
            "ios",
            "-Xcc",
            "-weird",
            "-Xwrapped-swift",
            "-passthrough",
        ],
        expected_build_settings = {
            "OTHER_CFLAGS": [
                "-passthrough",
                "-passthrough",
                "-passthrough",
                "-passthrough",
            ],
            "OTHER_CPLUSPLUSFLAGS": [
                "-passthrough",
                "-passthrough",
                "-passthrough",
            ],
            "OTHER_SWIFT_FLAGS": """\
-passthrough \
-passthrough \
-passthrough \
-passthrough \
-passthrough \
-passthrough \
-keep-me=something.swift \
-passthrough\
""",
        },
    )

    # Specific Xcode build settings
    # CLANG_CXX_LANGUAGE_STANDARD
    _add_test(
        name = "{}_options-std".format(name),
        conlyopts = ["-std=c++42"],
        cxxopts = ["-std=c++42"],
        expected_build_settings = {
            "CLANG_CXX_LANGUAGE_STANDARD": "c++42",
            "OTHER_CFLAGS": ["-std=c++42"],
        },
    )
    _add_test(
        name = "{}_options-std=c++0x".format(name),
        cxxopts = ["-std=c++11"],
        expected_build_settings = {
            "CLANG_CXX_LANGUAGE_STANDARD": "c++0x",
        },
    )

    # CLANG_CXX_LIBRARY
    _add_test(
        name = "{}_options-stdlib".format(name),
        conlyopts = ["-stdlib=random"],
        cxxopts = ["-stdlib=random"],
        expected_build_settings = {
            "CLANG_CXX_LIBRARY": "random",
            "OTHER_CFLAGS": ["-stdlib=random"],
        },
    )

    ## ENABLE_TESTABILITY
    _add_test(
        name = "{}_swift_option-enable-testing".format(name),
        full_swiftcopts = ["-enable-testing"],
        expected_build_settings = {
            "ENABLE_TESTABILITY": "True",
        },
    )

    ## APPLICATION_EXTENSION_API_ONLY
    _add_test(
        name = "{}_swift_option-application-extension".format(name),
        full_swiftcopts = ["-application-extension"],
        expected_build_settings = {
            "APPLICATION_EXTENSION_API_ONLY": "True",
        },
    )

    ## GCC_OPTIMIZATION_LEVEL
    _add_test(
        name = "{}_differing_gcc_optimization_level".format(name),
        conlyopts = ["-O0"],
        cxxopts = ["-O1"],
        expected_build_settings = {
            "OTHER_CFLAGS": ["-O0"],
            "OTHER_CPLUSPLUSFLAGS": ["-O1"],
        },
    )
    _add_test(
        name = "{}_differing_gcc_optimization_level_common_first".format(name),
        conlyopts = ["-O1", "-O0"],
        cxxopts = ["-O1", "-O2"],
        expected_build_settings = {
            "GCC_OPTIMIZATION_LEVEL": "1",
            "OTHER_CFLAGS": ["-O0"],
            "OTHER_CPLUSPLUSFLAGS": ["-O2"],
        },
    )
    _add_test(
        name = "{}_multiple_gcc_optimization_levels".format(name),
        conlyopts = ["-O1", "-O0"],
        cxxopts = ["-O0", "-O1"],
        expected_build_settings = {
            "OTHER_CFLAGS": ["-O1", "-O0"],
            "OTHER_CPLUSPLUSFLAGS": ["-O0", "-O1"],
        },
    )
    _add_test(
        name = "{}_common_gcc_optimization_level".format(name),
        conlyopts = ["-O1"],
        cxxopts = ["-O1"],
        expected_build_settings = {
            "GCC_OPTIMIZATION_LEVEL": "1",
        },
    )

    ## GCC_PREPROCESSOR_DEFINITIONS
    _add_test(
        name = "{}_gcc_optimization_preprocessor_definitions".format(name),
        conlyopts = ["-DDEBUG", "-DDEBUG", "-DA=1", "-DZ=1", "-DB", "-DE"],
        cxxopts = ["-DDEBUG", "-DDEBUG", "-DA=1", "-DZ=2", "-DC", "-DE"],
        expected_build_settings = {
            "GCC_PREPROCESSOR_DEFINITIONS": ["DEBUG", "A=1"],
            "OTHER_CFLAGS": ["-DZ=1", "-DB", "-DE"],
            "OTHER_CPLUSPLUSFLAGS": ["-DZ=2", "-DC", "-DE"],
        },
    )

    ## SWIFT_ACTIVE_COMPILATION_CONDITIONS
    _add_test(
        name = "{}_defines".format(name),
        full_swiftcopts = [
            "-DDEBUG",
            "-DBAZEL",
            "-DDEBUG",
            "-DBAZEL",
        ],
        expected_build_settings = {
            "SWIFT_ACTIVE_COMPILATION_CONDITIONS": "DEBUG BAZEL",
        },
    )

    ## SWIFT_COMPILATION_MODE
    _add_test(
        name = "{}_multiple_swift_compilation_modes".format(name),
        full_swiftcopts = [
            "-wmo",
            "-no-whole-module-optimization",
        ],
        expected_build_settings = {
            "SWIFT_COMPILATION_MODE": "singlefile",
        },
    )
    _add_test(
        name = "{}_swift_option-incremental".format(name),
        full_swiftcopts = ["-incremental"],
        expected_build_settings = {
            "SWIFT_COMPILATION_MODE": "singlefile",
        },
    )
    _add_test(
        name = "{}_swift_option-whole-module-optimization".format(name),
        full_swiftcopts = ["-whole-module-optimization"],
        expected_build_settings = {
            "SWIFT_COMPILATION_MODE": "wholemodule",
        },
    )
    _add_test(
        name = "{}_swift_option-wmo".format(name),
        full_swiftcopts = ["-wmo"],
        expected_build_settings = {
            "SWIFT_COMPILATION_MODE": "wholemodule",
        },
    )
    _add_test(
        name = "{}_swift_option-no-whole-module-optimization".format(name),
        full_swiftcopts = ["-no-whole-module-optimization"],
        expected_build_settings = {
            "SWIFT_COMPILATION_MODE": "singlefile",
        },
    )

    ## SWIFT_OBJC_INTERFACE_HEADER_NAME
    _add_test(
        name = "{}_generated_header".format(name),
        full_swiftcopts = [
            "-emit-objc-header-path",
            "a/b/c/TestingUtils-Custom.h",
        ],
        package_bin_dir = "a/b",
        expected_build_settings = {
            "SWIFT_OBJC_INTERFACE_HEADER_NAME": "c/TestingUtils-Custom.h",
        },
    )

    ## SWIFT_OPTIMIZATION_LEVEL
    _add_test(
        name = "{}_multiple_swift_optimization_levels".format(name),
        full_swiftcopts = [
            "-Osize",
            "-Onone",
            "-O",
        ],
        expected_build_settings = {
            "SWIFT_OPTIMIZATION_LEVEL": "-O",
        },
    )
    _add_test(
        name = "{}_swift_option-Onone".format(name),
        full_swiftcopts = ["-Onone"],
        expected_build_settings = {
            "SWIFT_OPTIMIZATION_LEVEL": "-Onone",
        },
    )
    _add_test(
        name = "{}_swift_option-O".format(name),
        full_swiftcopts = ["-O"],
        expected_build_settings = {
            "SWIFT_OPTIMIZATION_LEVEL": "-O",
        },
    )
    _add_test(
        name = "{}_swift_option-Osize".format(name),
        full_swiftcopts = ["-Osize"],
        expected_build_settings = {
            "SWIFT_OPTIMIZATION_LEVEL": "-Osize",
        },
    )

    ## SWIFT_VERSION
    _add_test(
        name = "{}_swift_option-swift-version".format(name),
        full_swiftcopts = ["-swift-version=42"],
        expected_build_settings = {
            "SWIFT_VERSION": "42",
        },
    )

    # Search Paths
    _add_test(
        name = "{}_search_paths".format(name),
        conlyopts = [
            "-iquote",
            "a/b/c",
            "-Ix/y/z",
            "-I",
            "1/2/3",
            "-iquote",
            "0/9",
            "-isystem",
            "s1/s2",
        ],
        cxxopts = [
            "-iquote",
            "y/z",
            "-Ix/y/z",
            "-I",
            "aa/bb",
            "-isystem",
            "s3/s4",
        ],
        user_swiftcopts = ["-Xcc", "-Ic/d/e", "-Xcc", "-iquote4/5", "-Xcc", "-isystems5/s6"],
        expected_build_settings = {},
        expected_search_paths = {
            "quote_includes": [
                "a/b/c",
                "0/9",
                "y/z",
                "4/5",
            ],
            "includes": [
                "x/y/z",
                "1/2/3",
                "aa/bb",
                "c/d/e",
            ],
            "system_includes": [
                "s1/s2",
                "s3/s4",
                "s5/s6",
            ],
        },
    )

    # Test suite
    native.test_suite(
        name = name,
        tests = test_names,
    )

# ===== File: convert.py (repo: FrilledShark/pi_graph, license: MIT) =====

import os
import re

filenames = sorted([frame for frame in os.listdir("frames")], key=lambda x: int(re.search(r'\d+', os.path.splitext(x)[0]).group()))

print("creating gif")
import imageio

with imageio.get_writer('storage/pi.gif', mode='I') as writer:
    for filename in filenames:
        image = imageio.imread(os.path.join("frames", filename))
        writer.append_data(image)

print("creating mp4")
import moviepy.editor as mp

clip = mp.VideoFileClip("storage/pi.gif")
clip.write_videofile("storage/pi.mp4")


# ===== File: tests/regressiontests/custom_managers_regress/models.py (repo: huicheese/Django-test3, license: BSD-3-Clause) =====

"""
Regression tests for custom manager classes.
"""

from django.db import models


class RestrictedManager(models.Manager):
    """
    A manager that filters out non-public instances.
    """
    def get_query_set(self):
        return super(RestrictedManager, self).get_query_set().filter(is_public=True)


class RelatedModel(models.Model):
    name = models.CharField(max_length=50)

    def __unicode__(self):
        return self.name


class RestrictedModel(models.Model):
    name = models.CharField(max_length=50)
    is_public = models.BooleanField(default=False)
    related = models.ForeignKey(RelatedModel)

    objects = RestrictedManager()
    plain_manager = models.Manager()

    def __unicode__(self):
        return self.name


class OneToOneRestrictedModel(models.Model):
    name = models.CharField(max_length=50)
    is_public = models.BooleanField(default=False)
    related = models.OneToOneField(RelatedModel)

    objects = RestrictedManager()
    plain_manager = models.Manager()

    def __unicode__(self):
        return self.name


__test__ = {"tests": """
Even though the default manager filters out some records, we must still be able
to save (particularly, save by updating existing records) those filtered
instances. This is a regression test for #8990, #9527

>>> related = RelatedModel.objects.create(name="xyzzy")
>>> obj = RestrictedModel.objects.create(name="hidden", related=related)
>>> obj.name = "still hidden"
>>> obj.save()

# If the hidden object wasn't seen during the save process, there would now be
# two objects in the database.
>>> RestrictedModel.plain_manager.count()
1

Deleting related objects should also not be distracted by a restricted manager
on the related object. This is a regression test for #2698.
>>> RestrictedModel.plain_manager.all().delete()
>>> for name, public in (('one', True), ('two', False), ('three', False)):
...     _ = RestrictedModel.objects.create(name=name, is_public=public, related=related)

# Reload the RelatedModel instance, just to avoid any instance artifacts.
>>> obj = RelatedModel.objects.get(name="xyzzy")
>>> obj.delete()

# All of the RestrictedModel instances should have been deleted, since they
# *all* pointed to the RelatedModel. If the default manager is used, only the
# public one will be deleted.
>>> RestrictedModel.plain_manager.all()
[]

# The same test case as the last one, but for one-to-one models, which are
# implemented slightly different internally, so it's a different code path.
>>> obj = RelatedModel.objects.create(name="xyzzy")
>>> _ = OneToOneRestrictedModel.objects.create(name="foo", is_public=False, related=obj)
>>> obj = RelatedModel.objects.get(name="xyzzy")
>>> obj.delete()
>>> OneToOneRestrictedModel.plain_manager.all()
[]
"""
}

# ===== File: test_code/python-plotter-serial-beta/plotter_freq_checker.py (repo: ithallojunior/data_transmission, license: MIT) =====

import serial
import matplotlib.pyplot as plt
import numpy as np
import time

pd = "/dev/tty.wchusbserial1410"
#pd = "/dev/tty.usbmodem1421"
p = serial.Serial(port=pd, baudrate=115200,
                  bytesize=serial.EIGHTBITS, parity=serial.PARITY_NONE)
p.flush()
time.sleep(0.5)

#plt.axis([0, 10, 0, 1])
plt.ion()

samples = 4096  # samples
fs = 2040.  # samples/sec
freq = 0.  # to start with
t_freq_checker = 2.  # checks frequency every N seconds
time_window = samples/fs  # seconds
to_show = 100
t = np.arange(0., time_window, 1./fs)
y = np.zeros(samples)
t1 = time.time()

while(1):
    try:
        plt.clf()
        plt.xlabel("Time(s)")
        plt.xlim(0, t[to_show])
        plt.ylim(-0.1, 1.2)  # (-0.1, 1.3)
        i = 0
        while i < samples:
            value1 = p.read()
            value2 = p.read()
            #print "read"
            try:
                v = ord(value1[0])*256 + ord(value2[0])
                y[i] = float(v)*(1.1/1023.0)
                i = i + 1
                #print "Values: ", v, ord(value1), ord(value2)
            except IndexError:
                print "Error"
                pass
        #print("Elapsed time %f"%(time.time() - t1))
        plt.plot(t, y)
        plt.title("Fs: %.3f | Freq.: %.3f "%(fs, freq))
        plt.grid()
        plt.pause(1.0/30.0)
        #print("time %f"%(time.time() - t1))

        # updates freq
        if ((time.time() - t1) >= t_freq_checker):
            ymean = y - y.mean()  # removes DC
            fd = (np.abs(np.fft.fft(ymean)))[:samples/2].argmax()
            freq = fs * fd/samples
            t1 = time.time()
    except KeyboardInterrupt:
        plt.close()
        break

p.close()

# ===== File: investment_report/migrations/0026_remove_howwecanhelp_sector.py (repo: uktrade/pir-api, license: MIT) =====

# -*- coding: utf-8 -*-
# Generated by Django 1.11.29 on 2020-09-01 14:28
from __future__ import unicode_literals

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('investment_report', '0025_howwecanhelp_smartworkforcesector'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='howwecanhelp',
            name='sector',
        ),
    ]

# ===== File: downloader.py (repo: mathieuhendey/lastfm_downloader, license: MIT) =====

import sys
import pandas as pd
import requests
import pprint
from requests_toolbelt.threaded import pool

# Generate your own at https://www.last.fm/api/account/create
LASTFM_API_KEY = None
LASTFM_USER_NAME = None

TEXT = "#text"
ESTIMATED_TIME_FOR_PROCESSING_PAGE = 352
ESTIMATED_TIME_FOR_PROCESSING_DATAFRAME_PER_PAGE_OF_RESULTS = 275

if LASTFM_USER_NAME is None or LASTFM_API_KEY is None:
    print(
        """
        You need to generate some credentials, see the source code
        """
    )
    sys.exit(1)


def get_scrobbles(
    endpoint="recenttracks",
    username=LASTFM_USER_NAME,
    key=LASTFM_API_KEY,
    limit=200,
    extended=0,
    page=1,
    pages=0,
):
    """
    endpoint: API endpoint.
    username: Last.fm username to fetch scrobbles for.
    key: API key.
    limit: The number of records per page. Maximum is 200.
    extended: Extended results from API, such as whether user has "liked" the track.
    page: First page to retrieve.
    pages: How many pages of results after "page" argument to retrieve. If 0, get all pages.
    """
    # initialize URL and lists to contain response fields
    url = (
        "https://ws.audioscrobbler.com/2.0/?method=user.get{}"
        "&user={}"
        "&api_key={}"
        "&limit={}"
        "&extended={}"
        "&page={}"
        "&format=json"
    )

    # get total number of pages
    request_url = url.format(endpoint, username, key, limit, extended, page)
    response = requests.get(request_url).json()
    total_pages = int(response[endpoint]["@attr"]["totalPages"])
    if pages > 0:
        total_pages = min([total_pages, pages])

    print(
        "Total pages to retrieve: {}. Estimated time: {}".format(
            total_pages, get_time_remaining(total_pages)
        )
    )

    artist_names = []
    album_names = []
    track_names = []
    timestamps = []
    urls = []

    # add formatted URLs to list to be requested in thread pool
    for page in range(0, int(total_pages) + 1, 1):
        urls.append(url.format(endpoint, username, key, limit, extended, page))

    p = pool.Pool.from_urls(urls)
    p.join_all()

    for response in p.responses():
        if endpoint in response.json():
            response_json = response.json()[endpoint]["track"]
            for track in response_json:
                if "@attr" not in track:
                    artist_names.append(track["artist"][TEXT])
                    album_names.append(track["album"][TEXT])
                    track_names.append(track["name"])
                    timestamps.append(track["date"]["uts"])

    # create and populate a dataframe to contain the data
    df = pd.DataFrame()
    df["artist"] = artist_names
    df["album"] = album_names
    df["track"] = track_names
    df["timestamps"] = timestamps
    # In UTC. Last.fm returns datetimes in the user's locale when they listened
    df["datetime"] = pd.to_datetime(timestamps, unit="s")
    df.sort_values("timestamps", ascending=False, inplace=True)

    return df


def get_time_remaining(pages_remaining):
    """Calculate the estimated time remaining."""
    millis_remaining = int(
        (pages_remaining * ESTIMATED_TIME_FOR_PROCESSING_PAGE)
        + (
            pages_remaining
            * ESTIMATED_TIME_FOR_PROCESSING_DATAFRAME_PER_PAGE_OF_RESULTS
        )
    )
    seconds_remaining = (millis_remaining / 1000) % 60
    seconds_remaining = int(seconds_remaining)
    minutes_remaining = (millis_remaining / (1000 * 60)) % 60
    minutes_remaining = int(minutes_remaining)
    return "{}m{:2}s".format(minutes_remaining, seconds_remaining)

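As a quick sanity check of `get_time_remaining`, the same arithmetic can be reproduced in isolation. The constants are copied from this script; `estimate`, `PER_PAGE_MS`, and `PER_FRAME_MS` are throwaway names for this sketch only.

```python
# Reproduces get_time_remaining()'s arithmetic with the script's constants.
PER_PAGE_MS = 352   # ESTIMATED_TIME_FOR_PROCESSING_PAGE
PER_FRAME_MS = 275  # ESTIMATED_TIME_FOR_PROCESSING_DATAFRAME_PER_PAGE_OF_RESULTS

def estimate(pages_remaining):
    millis = pages_remaining * (PER_PAGE_MS + PER_FRAME_MS)
    seconds = (millis // 1000) % 60
    minutes = (millis // (1000 * 60)) % 60
    return "{}m{:2}s".format(minutes, seconds)

print(estimate(100))  # 100 pages at 627 ms each is 62.7 s, shown as "1m 2s"
```

Note that, like the original, the minutes figure wraps at 60; very large page counts would need an hours term as well.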
scrobbles = get_scrobbles(page=1, pages=0) # Default to all Scrobbles
scrobbles.to_csv("./data/lastfm_scrobbles.csv", index=False, encoding="utf-8")
print("{:,} total rows".format(len(scrobbles)))
scrobbles.head()

# ===== File: src/trainer/fasttexttrainer.py (repo: liuweiping2020/pyml, license: Apache-2.0) =====

import numpy as np
from modeler.fasttextmodel import FastTextModel
from trainer.tftrainer import TFTrainer


class FastTextTrainer(TFTrainer):
    def __init__(self):
        self.num_classes = 19
        self.learning_rate = 0.01
        self.batch_size = 8
        self.decay_steps = 1000
        self.decay_rate = 0.9
        self.sequence_length = 5
        self.vocab_size = 10000
        self.embed_size = 100
        self.is_training = True
        self.dropout_keep_prob = 1

    def get_model(self):
        self.fastText = FastTextModel(self.num_classes, self.learning_rate,
                                      self.batch_size, self.decay_steps, self.decay_rate, 5,
                                      self.sequence_length, self.vocab_size, self.embed_size, self.is_training)

    def get_data(self):
        self.input_x = np.zeros((self.batch_size, self.sequence_length), dtype=np.int32)
        self.input_y = np.array([1, 0, 1, 1, 1, 2, 1, 1], dtype=np.int32)

    def get_fetch_list(self):
        self.fetch_list = [self.fastText.loss_val, self.fastText.accuracy,
                           self.fastText.predictions, self.fastText.train_op]

    def get_feed_dict(self):
        self.feed_dict = {self.fastText.sentence: self.input_x, self.fastText.labels: self.input_y}

    def train(self):
        for i in range(100):
            loss, acc, predict, _ = self.session.run(self.fetch_list, feed_dict=self.feed_dict)
            print("loss:", loss, "acc:", acc, "label:", self.input_y, "prediction:", predict)


if __name__ == '__main__':
    trainer = FastTextTrainer()  # keep the instance in its own name instead of shadowing the class
    trainer.run()

# ===== File: nonconformist/base.py (repo: smazzanti/nonconformist, license: MIT) =====

#!/usr/bin/env python
"""
docstring
"""
# Authors: Henrik Linusson
import abc
import numpy as np
from sklearn.base import BaseEstimator
class RegressorMixin(object):
def __init__(self):
super(RegressorMixin, self).__init__()
@classmethod
def get_problem_type(cls):
return 'regression'
class ClassifierMixin(object):
def __init__(self):
super(ClassifierMixin, self).__init__()
@classmethod
def get_problem_type(cls):
return 'classification'
class BaseModelAdapter(BaseEstimator):
__metaclass__ = abc.ABCMeta
def __init__(self, model, fit_params=None):
super(BaseModelAdapter, self).__init__()
self.model = model
self.last_x, self.last_y = None, None
self.clean = False
self.fit_params = {} if fit_params is None else fit_params
def fit(self, x, y):
"""Fits the model.
Parameters
----------
x : numpy array of shape [n_samples, n_features]
Inputs of examples for fitting the model.
y : numpy array of shape [n_samples]
Outputs of examples for fitting the model.
Returns
-------
None
"""
self.model.fit(x, y, **self.fit_params)
self.clean = False
def predict(self, x):
"""Returns the prediction made by the underlying model.
Parameters
----------
x : numpy array of shape [n_samples, n_features]
Inputs of test examples.
Returns
-------
y : numpy array of shape [n_samples]
Predicted outputs of test examples.
"""
if (
not self.clean or
self.last_x is None or
self.last_y is None or
not np.array_equal(self.last_x, x)
):
self.last_x = x
self.last_y = self._underlying_predict(x)
self.clean = True
return self.last_y.copy()
@abc.abstractmethod
def _underlying_predict(self, x):
"""Produces a prediction using the encapsulated model.
Parameters
----------
x : numpy array of shape [n_samples, n_features]
Inputs of test examples.
Returns
-------
y : numpy array of shape [n_samples]
Predicted outputs of test examples.
"""
pass
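The caching contract that `predict` implements above (recompute only when the model was refit or the inputs changed) can be demonstrated with a small self-contained stand-in. `CachingAdapter` and `CountingModel` are invented names for illustration and are not part of the nonconformist API.

```python
# Self-contained stand-in mirroring BaseModelAdapter's caching behaviour
# (list-based instead of numpy, to keep the sketch dependency-free).
class CachingAdapter:
    def __init__(self, model):
        self.model = model
        self.last_x, self.last_y = None, None
        self.clean = False

    def predict(self, x):
        # Only call the underlying model when the cache is stale
        # or the inputs differ from the previous call.
        if not self.clean or self.last_x != x:
            self.last_x = x
            self.last_y = self.model.predict(x)
            self.clean = True
        return list(self.last_y)


class CountingModel:
    def __init__(self):
        self.calls = 0

    def predict(self, x):
        self.calls += 1
        return [0.0] * len(x)


model = CountingModel()
adapter = CachingAdapter(model)
adapter.predict([1, 2, 3])
adapter.predict([1, 2, 3])  # identical input: served from the cache
print(model.calls)  # 1
```

In the real class, `fit` additionally sets `clean = False`, so a refit model is always re-queried even for identical inputs.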


class ClassifierAdapter(BaseModelAdapter):
    def __init__(self, model, fit_params=None):
        super(ClassifierAdapter, self).__init__(model, fit_params)

    def _underlying_predict(self, x):
        return self.model.predict_proba(x)


class RegressorAdapter(BaseModelAdapter):
    def __init__(self, model, fit_params=None):
        super(RegressorAdapter, self).__init__(model, fit_params)

    def _underlying_predict(self, x):
        return self.model.predict(x)


class OobMixin(object):
    def __init__(self, model, fit_params=None):
        super(OobMixin, self).__init__(model, fit_params)
        self.train_x = None

    def fit(self, x, y):
        super(OobMixin, self).fit(x, y)
        self.train_x = x

    def _underlying_predict(self, x):
        # TODO: sub-sampling of ensemble for test patterns
        oob = x == self.train_x

        if hasattr(oob, 'all'):
            oob = oob.all()

        if oob:
            return self._oob_prediction()
        else:
            return super(OobMixin, self)._underlying_predict(x)


class OobClassifierAdapter(OobMixin, ClassifierAdapter):
    def __init__(self, model, fit_params=None):
        super(OobClassifierAdapter, self).__init__(model, fit_params)

    def _oob_prediction(self):
        return self.model.oob_decision_function_


class OobRegressorAdapter(OobMixin, RegressorAdapter):
    def __init__(self, model, fit_params=None):
        super(OobRegressorAdapter, self).__init__(model, fit_params)

    def _oob_prediction(self):
        return self.model.oob_prediction_

# ===== File: ShareSpace/views.py (repo: BNordstrom16/ShareSpaceWeb, license: MIT) =====

from django.shortcuts import render, redirect
from django.views.generic import UpdateView, DeleteView
from .models import Storage
from django_tables2 import RequestConfig
from .tables import StorageTable
from django.contrib.auth.decorators import login_required
from .forms import StorageForm, QuestionForm


def index(request):
    table = Storage.objects.reverse()[0:3]
    return render(request, '../templates/index.html', {'storages': table})


def about(request):
    if request.method == "POST":
        form = QuestionForm(request.POST)
        if form.is_valid():
            instance = form.save(commit=False)
            instance.save()
            return redirect('/about')
    else:
        form = QuestionForm()
    return render(request, '../templates/about.html', {'form': form})


def current_storages(request):
    table = StorageTable(Storage.objects.all())
    RequestConfig(request).configure(table)
    return render(request, '../templates/current_storages.html', {'table': table})


@login_required(login_url='/accounts/login')
def create_storage(request):
    if request.method == "POST":
        form = StorageForm(request.POST, request.FILES)
        if form.is_valid():
            model_instance = form.save(commit=False)
            model_instance.user_id = request.user
            model_instance.save()
            return redirect('/current_storages')
    else:
        form = StorageForm()
    return render(request, '../templates/create_storage.html', {'form': form})


def storage(request, storage_id):
    storage_request = Storage.objects.get(pk=storage_id)
    return render(request, '../templates/storage.html', {'storage': storage_request})


class EditStorage(UpdateView):
    model = Storage
    form_class = StorageForm
    template_name = 'storage_update_form.html'
    success_url = '/current_storages'


class DeleteStorage(DeleteView):
    model = Storage
    template_name = 'storage_confirm_delete.html'
    success_url = '/current_storages'


def show_image(request, storage_id):
    photo_request = Storage.objects.get(pk=storage_id)
    return render(request, '../templates/show_image.html', {'storage': photo_request})


def logout(request):
    return render(request, '../templates/registration/logout.html', {})

# ===== File: rest_framework_security/deny_repeat_password/signals.py (repo: RubenEu/django-rest-framework-security, license: MIT) =====

from django.apps import apps
from django.contrib.auth import get_user_model
from django.db.models.signals import post_save
from django.dispatch import receiver, Signal

from rest_framework_security.deny_repeat_password import config
from rest_framework_security.deny_repeat_password.emails import ChangedPasswordEmail
from rest_framework_security.deny_repeat_password.models import UserPassword

password_changed = Signal(providing_args=["user"])


@receiver(post_save, sender=get_user_model())
def set_user_password_receiver(sender, instance, **kwargs):
    if instance._password is None:
        return
    user_password, created = UserPassword.objects.get_or_create(
        user=instance, password=instance.password
    )
    UserPassword.objects.remove_old_passwords(instance)
    if (
        created
        and config.DENY_REPEAT_PASSWORD_CLOSE_SESSIONS
        and apps.is_installed("rest_framework_security.authentication")
    ):
        from rest_framework_security.authentication.models import UserSession

        UserSession.objects.filter(user=instance).clean_and_delete()
    if created:
        password_changed.send(None, user=instance)


@receiver(password_changed)
def send_password_changed_email(sender, user, **kwargs):
    ChangedPasswordEmail(user).send()
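The `password_changed` signal above uses Django's publish/subscribe dispatch: `send()` fans the event out to every connected receiver. A framework-free sketch of that pattern (this `Signal` class is an illustration, not the `django.dispatch` implementation):

```python
class Signal(object):
    """Minimal stand-in for django.dispatch.Signal (illustration only)."""

    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Deliver the event to every connected receiver, collecting
        # (receiver, response) pairs, as Django's Signal.send does.
        return [(r, r(sender, **kwargs)) for r in self._receivers]


password_changed = Signal()
notified = []
password_changed.connect(lambda sender, user=None: notified.append(user))
password_changed.send(None, user="alice")
print(notified)  # ['alice']
```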

# Source: tests/test_post.py (DerrickOdhiambo/Flask-IP3, MIT)

import unittest
from app.models import User, Post, Comments
from app import db


class PostTest(unittest.TestCase):

    def setUp(self):
        """
        Set up method that will run before every test.
        """
        self.post = Post(category='Product', content='Yes we can!')
        self.user_Derrick = User(username='Derrick', password='password', email='derrick@mail.com')
        self.new_comment = Comments(text='This is good', user=self.user_Derrick)

    def tearDown(self):
        Comments.query.delete()
        Post.query.delete()
        User.query.delete()

    def test_instance(self):
        self.assertTrue(isinstance(self.post, Post))

    def test_check_instance_variables(self):
        self.assertEqual(self.post.category, 'Product')
        self.assertEqual(self.post.content, 'Yes we can!')
        self.assertEqual(self.new_comment.text, 'This is good')
        self.assertEqual(self.new_comment.user, self.user_Derrick)

    def test_save_comment(self):
        self.new_comment.save_comment()
        self.assertTrue(len(Comments.query.all()) > 0)

# Source: biblio/my_secondary_verifications.py (lokal-profil/isfdb_site, BSD-3-Clause)

#!_PYTHONLOC
#
# (C) COPYRIGHT 2020-2021 Ahasuerus
#     ALL RIGHTS RESERVED
#
# The copyright notice above does not evidence any actual or
# intended publication of such source code.
#
# Version: $Revision: 571 $
# Date: $Date: 2020-11-19 15:53:08 -0500 (Thu, 19 Nov 2020) $

from isfdb import *
from common import *
from login import *
from SQLparsing import *

if __name__ == '__main__':
    start = SESSION.Parameter(0, 'int', 0)

    PrintHeader('My Secondary Verifications')
    PrintNavbar('my_secondary_verifications', 0, 0, 'my_secondary_verifications.cgi', 0)

    user = User()
    user.load()
    if not user.id:
        print '<h3>You must be logged in to view your secondary verifications</h3>'
        PrintTrailer('my_secondary_verifications', 0, 0)
        sys.exit(0)

    per_page = 200
    # First select 200 verification IDs -- this needs to be done as a separate query since
    # the SQL optimizer in MySQL 5.0 is not always smart enough to use all available
    # indices for multi-table queries
    query = """select verification.* from verification
               where ver_status = 1
               and user_id = %d
               order by ver_time desc
               limit %d, %d""" % (int(user.id), start, per_page)
    db.query(query)
    result0 = db.store_result()
    if result0.num_rows() == 0:
        print '<h3>No verifications present</h3>'
        PrintTrailer('recentver', 0, 0)
        sys.exit(0)

    ver = result0.fetch_row()
    ver_set = []
    while ver:
        ver_set.append(ver[0])
        ver = result0.fetch_row()

    print '<table cellpadding=3 class="generic_table">'
    print '<tr class="generic_table_header">'
    print '<th>#</th>'
    print '<th>Publication Title</th>'
    print '<th>Reference</th>'
    print '<th>Time</th>'
    print '</tr>'

    color = 0
    count = start
    for ver in ver_set:
        pub_id = ver[VERIF_PUB_ID]
        verifier_id = ver[VERIF_USER_ID]
        verification_id = ver[VERIF_REF_ID]
        verification_time = ver[VERIF_TIME]
        query = """select r.reference_label, p.pub_title
                   from reference r, pubs p
                   where r.reference_id = %d
                   and p.pub_id = %d""" % (verification_id, pub_id)
        db.query(query)
        result = db.store_result()
        record = result.fetch_row()
        color = color ^ 1
        while record:
            count += 1
            reference_name = record[0][0]
            pub_title = record[0][1]
            if color:
                print '<tr align=left class="table1">'
            else:
                print '<tr align=left class="table2">'
            print '<td>%d</td>' % count
            print '<td>%s</td>' % ISFDBLink('pl.cgi', pub_id, pub_title)
            print '<td>%s</td>' % reference_name
            print '<td>%s</td>' % verification_time
            print '</tr>'
            record = result.fetch_row()
    print '</table>'

    if result0.num_rows() > (per_page - 1):
        print '<p> [%s]' % ISFDBLink('my_secondary_verifications.cgi', start + per_page, 'MORE')
    PrintTrailer('my_secondary_verifications', 0, 0)
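The MORE link at the bottom implements offset pagination: each page runs the query with `limit start, per_page` and links to the next page at `start + per_page`. The arithmetic, isolated from the CGI and MySQL plumbing (the helper name is invented for illustration):

```python
def page_window(start, per_page=200):
    """Return the SQL (offset, row_count) pair and the next page's offset."""
    return start, per_page, start + per_page

offset, row_count, next_start = page_window(400)
# Equivalent SQL: ... ORDER BY ver_time DESC LIMIT 400, 200
# and the MORE link would point at start=600.
```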

# Source: extended_uva_judge/utilities.py (fritogotlayed/Extended-UVA-Judge, MIT)

import os
import yaml

from extended_uva_judge import errors


def get_problem_directory(app_config):
    """Gets the directory containing the problem configs.

    :return: The path to the problem configs.
    :rtype: str
    """
    problem_directory = app_config['problem_directory']

    if not problem_directory:
        raise errors.MissingConfigEntryError('problem_directory')

    # Check for a full Windows or *nix directory path
    if not (problem_directory.startswith('/') or ':' in problem_directory):
        # Assume it is relative to the current working directory
        problem_directory = os.path.join(os.getcwd(), problem_directory)

    return problem_directory


def get_problem_config(app_config, problem_id):
    """Gets the configuration for this object's corresponding problem.

    :return: The configuration for the user's selected problem
    :rtype: dict
    """
    problem_directory = get_problem_directory(app_config)
    problem_config_path = os.path.join(
        problem_directory, '%s.yaml' % problem_id)
    with open(problem_config_path) as f:
        problem_config = yaml.safe_load(f)
    return problem_config


def does_problem_config_exist(app_config, problem_id):
    """Checks whether the problem configuration exists in the system.

    :return: True if it exists, False otherwise
    :rtype: bool
    """
    problem_directory = get_problem_directory(app_config)
    problem_config_path = os.path.join(
        problem_directory, '%s.yaml' % problem_id)
    return os.path.exists(problem_config_path)
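The relative-path check in `get_problem_directory` treats any value that neither starts with `/` nor contains `:` (a Windows drive letter) as relative to the working directory. That rule in isolation (the function name and sample paths are invented):

```python
import os

def resolve_dir(problem_directory, cwd):
    # Absolute *nix ('/...') and Windows ('C:...') paths pass through;
    # everything else is anchored at the working directory.
    if not (problem_directory.startswith('/') or ':' in problem_directory):
        problem_directory = os.path.join(cwd, problem_directory)
    return problem_directory

print(resolve_dir('problems', '/srv/judge'))       # joined onto cwd
print(resolve_dir('/etc/problems', '/srv/judge'))  # kept as-is
```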

# Source: scripts/ur5_open_loop.py (lar-deeufba/real_time_grasp, BSD-3-Clause)

#!/usr/bin/python
import rospy
import actionlib
import numpy as np
import argparse
import copy
from copy import deepcopy
import rosservice
import sys
import re

from std_msgs.msg import Float64MultiArray, Float32MultiArray
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal, JointTolerance
from sensor_msgs.msg import JointState
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint
from geometry_msgs.msg import Pose
from controller_manager_msgs.srv import SwitchController

# Gazebo
from gazebo_msgs.msg import ModelStates, ContactsState, ContactState, LinkState
from gazebo_msgs.srv import GetLinkState, DeleteModel

from tf import TransformListener, TransformBroadcaster
from tf.transformations import euler_from_quaternion, quaternion_from_euler

# Inverse kinematics
from trac_ik_python.trac_ik import IK

from robotiq_2f_gripper_control.msg import _Robotiq2FGripper_robot_output as outputMsg

CLOSE_GRIPPER_VEL = 0.05
MAX_GRIPPER_CLOSE_INIT = 0.25  # Maximum angle that the gripper should be started using velocity command
PICKING = False  # Tells the node that the object must follow the gripper


def parse_args():
    parser = argparse.ArgumentParser(description='AAPF_Orientation')
    parser.add_argument('--gazebo', action='store_true', help='Set the parameters related to the simulated environment in Gazebo')
    args = parser.parse_args()
    return args


class vel_control(object):
    def __init__(self, args, joint_values=None):
        rospy.init_node('command_GGCNN_ur5')
        self.args = args
        self.joint_values_home = joint_values
        self.tf = TransformListener()

        # Used to change the controller
        self.controller_switch = rospy.ServiceProxy('/controller_manager/switch_controller', SwitchController)

        # Action client used to send joint positions
        self.client = actionlib.SimpleActionClient('pos_based_pos_traj_controller/follow_joint_trajectory', FollowJointTrajectoryAction)
        print "Waiting for server (pos_based_pos_traj_controller)..."
        self.client.wait_for_server()
        print "Connected to server (pos_based_pos_traj_controller)"

        # Gazebo topics
        if self.args.gazebo:
            # For picking
            self.pub_model_position = rospy.Publisher('/gazebo/set_link_state', LinkState, queue_size=1)
            self.get_model_coordinates = rospy.ServiceProxy('/gazebo/get_link_state', GetLinkState)
            self.delete_model_service = rospy.ServiceProxy('/gazebo/delete_model', DeleteModel)
            rospy.Subscriber('gazebo/model_states', ModelStates, self.get_model_state_callback, queue_size=1)

        # Subscriber used to read joint values
        rospy.Subscriber('/joint_states', JointState, self.ur5_actual_position_callback, queue_size=1)
        rospy.sleep(1.0)

        # USED FOR COLLISION DETECTION
        self.finger_links = ['robotiq_85_right_finger_tip_link', 'robotiq_85_left_finger_tip_link']

        # LEFT GRIPPER
        self.string = ""
        rospy.Subscriber('/left_finger_bumper_vals', ContactsState, self.monitor_contacts_left_finger_callback)  # ContactState
        self.left_collision = False
        self.contactState_left = ContactState()

        # RIGHT GRIPPER
        rospy.Subscriber('/right_finger_bumper_vals', ContactsState, self.monitor_contacts_right_finger_callback)  # ContactState
        self.right_collision = False
        self.contactState_right = ContactState()

        self.client_gripper = actionlib.SimpleActionClient('gripper_controller_pos/follow_joint_trajectory', FollowJointTrajectoryAction)
        print "Waiting for server (gripper_controller_pos)..."
        self.client_gripper.wait_for_server()
        print "Connected to server (gripper_controller_pos)"

        # GGCNN
        self.posCB = []
        self.ori = []
        self.grasp_cartesian_pose = []
        self.gripper_angle_grasp = 0.0
        self.final_orientation = 0.0
        if self.args.gazebo:
            self.offset_x = 0.0
            self.offset_y = 0.0
            self.offset_z = 0.020  # 0.019
        else:
            self.offset_x = -0.03  # 0.002
            self.offset_y = 0.02   # -0.05
            self.offset_z = 0.058  # 0.013

        self.ur5_joint_names = rospy.get_param("/ur5_joint_names")
        self.robotiq_joint_name = rospy.get_param("/robotiq_joint_name")

        # Topic published from GG-CNN Node
        rospy.Subscriber('ggcnn/out/command', Float32MultiArray, self.ggcnn_command_callback, queue_size=1)

        # Robotiq control
        self.pub_gripper_command = rospy.Publisher('Robotiq2FGripperRobotOutput', outputMsg.Robotiq2FGripper_robot_output, queue_size=1)

        self.d = None  # msg received from GGCNN
        self.gripper_max_width = 0.14

    def turn_velocity_controller_on(self):
        self.controller_switch(['joint_group_vel_controller'], ['pos_based_pos_traj_controller'], 1)

    def turn_position_controller_on(self):
        self.controller_switch(['pos_based_pos_traj_controller'], ['joint_group_vel_controller'], 1)

    def turn_gripper_velocity_controller_on(self):
        self.controller_switch(['gripper_controller_vel'], ['gripper_controller_pos'], 1)

    def turn_gripper_position_controller_on(self):
        self.controller_switch(['gripper_controller_pos'], ['gripper_controller_vel'], 1)

    def monitor_contacts_left_finger_callback(self, msg):
        if msg.states:
            self.left_collision = True
            string = msg.states[0].collision1_name
            string_collision = re.findall(r'::(.+?)::', string)[0]
            # print("Left String_collision: ", string_collision)
            if string_collision in self.finger_links:
                string = msg.states[0].collision2_name
                # print("Left Real string (object): ", string)
                self.string = re.findall(r'::(.+?)::', string)[0]
                # print("Left before: ", self.string)
            else:
                self.string = string_collision
                # print("Left in else: ", string_collision)
        else:
            self.left_collision = False

    def monitor_contacts_right_finger_callback(self, msg):
        if msg.states:
            self.right_collision = True
            string = msg.states[0].collision1_name
            string_collision = re.findall(r'::(.+?)::', string)[0]
            # print("Right String_collision: ", string_collision)
            if string_collision in self.finger_links:
                string = msg.states[0].collision2_name
                # print("Right Real string (object): ", string)
                self.string = re.findall(r'::(.+?)::', string)[0]
                # print("Right before: ", self.string)
            else:
                self.string = string_collision
                # print("Right in else: ", self.string)
        else:
            self.right_collision = False

    def delete_model_service_method(self):
        """
        Delete a model in Gazebo
        """
        string = self.string
        model = string.replace("_link", "")
        self.delete_model_service(model)

    def ur5_actual_position_callback(self, joint_values_from_ur5):
        """Get UR5 joint angles

        The joint states published on /joint_states by the UR5 robot are in the wrong order.
        /joint_states normally publishes the joints in the following order:
        [elbow_joint, shoulder_lift_joint, shoulder_pan_joint, wrist_1_joint, wrist_2_joint, wrist_3_joint]
        But the correct order of the joints that must be sent to the robot is:
        ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint', 'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']

        Arguments:
            joint_values_from_ur5 {list} -- Actual angles of the UR5 robot
        """
        if self.args.gazebo:
            self.th3, self.robotic, self.th2, self.th1, self.th4, self.th5, self.th6 = joint_values_from_ur5.position
            # print("Robotic angle: ", self.robotic)
        else:
            self.th3, self.th2, self.th1, self.th4, self.th5, self.th6 = joint_values_from_ur5.position
        self.actual_position = [self.th1, self.th2, self.th3, self.th4, self.th5, self.th6]

    def get_model_state_callback(self, msg):
        self.object_picking()

    def ggcnn_command_callback(self, msg):
        """
        GGCNN Command Subscriber Callback
        """
        self.tf.waitForTransform("base_link", "object_detected", rospy.Time.now(), rospy.Duration(4.0))
        object_pose, object_ori = self.tf.lookupTransform("base_link", "object_detected", rospy.Time(0))
        self.d = list(msg.data)
        object_pose[0] += self.offset_x
        object_pose[1] += self.offset_y
        object_pose[2] += self.offset_z
        self.posCB = object_pose
        self.ori = self.d[3]
        br = TransformBroadcaster()
        br.sendTransform((object_pose[0],
                          object_pose[1],
                          object_pose[2]),
                         quaternion_from_euler(0.0, 0.0, self.ori),
                         rospy.Time.now(),
                         "object_link",
                         "base_link")

    def get_link_position_picking(self):
        link_name = self.string
        model_coordinates = self.get_model_coordinates(self.string, 'wrist_3_link')
        self.model_pose_picking = model_coordinates.link_state.pose

    def reset_link_position_picking(self):
        self.string = ""

    def object_picking(self):
        global PICKING
        if PICKING:
            angle = quaternion_from_euler(1.57, 0.0, 0.0)
            object_picking = LinkState()
            object_picking.link_name = self.string
            object_picking.pose = Pose(self.model_pose_picking.position, self.model_pose_picking.orientation)
            object_picking.reference_frame = "wrist_3_link"
            self.pub_model_position.publish(object_picking)

    def get_ik(self, pose):
        """Get the inverse kinematics

        Get the inverse kinematics of the UR5 robot using the trac_ik package, given a desired pose.

        Arguments:
            pose {list} -- A pose representing x, y and z

        Returns:
            sol {list} -- Joint angles, or None if trac_ik is not able to find a valid solution
        """
        camera_support_angle_offset = 0.0
        q = quaternion_from_euler(0.0, -3.14 + camera_support_angle_offset, 0.0)

        # Joint order:
        # ('shoulder_link', 'upper_arm_link', 'forearm_link', 'wrist_1_link', 'wrist_2_link', 'wrist_3_link', 'grasping_link')
        ik_solver = IK("base_link", "grasping_link", solve_type="Distance")
        sol = ik_solver.get_ik([0.2201039360819781, -1.573845095552878, -1.521853400505349, -1.6151347051274518, 1.5704492904506875, 0.0],
                               pose[0], pose[1], pose[2], q[0], q[1], q[2], q[3])
        if sol is not None:
            sol = list(sol)
            sol[-1] = 0.0
        return sol

    def __build_goal_message_ur5(self):
        goal = FollowJointTrajectoryGoal()
        goal.trajectory = JointTrajectory()
        goal.trajectory.joint_names = self.ur5_joint_names
        goal.goal_tolerance.append(JointTolerance('joint_tolerance', 0.1, 0.1, 0))
        goal.goal_time_tolerance = rospy.Duration(5, 0)
        return goal

    def traj_planner(self, cart_pos, grasp_step='move', way_points_number=10, movement='slow'):
        """Quintic Trajectory Planner

        Publish a trajectory to UR5 using quintic splines.

        Arguments:
            cart_pos {[float]} -- Grasp position [x, y, z]

        Keyword Arguments:
            grasp_step {str} -- Set UR5 movement type (default: {'move'})
            way_points_number {number} -- Number of points considered in trajectory (default: {10})
            movement {str} -- Movement speed (default: {'slow'})
        """
        if grasp_step == 'pregrasp':
            self.grasp_cartesian_pose = deepcopy(self.posCB)
            self.grasp_cartesian_pose[-1] += 0.1
            joint_pos = self.get_ik(self.grasp_cartesian_pose)
            joint_pos[-1] = self.ori
            self.final_orientation = deepcopy(self.ori)
            self.gripper_angle_grasp = deepcopy(self.d[-2])
        elif grasp_step == 'grasp':
            self.grasp_cartesian_pose[-1] -= 0.1
            joint_pos = self.get_ik(self.grasp_cartesian_pose)
            joint_pos[-1] = self.final_orientation
        elif grasp_step == 'move':
            joint_pos = self.get_ik(cart_pos)
            joint_pos[-1] = 0.0

        if movement == 'slow':
            final_traj_duration = 500.0  # total iterations
        elif movement == 'fast':
            final_traj_duration = 350.0

        v0 = a0 = vf = af = 0
        t0 = 5.0
        tf = (t0 + final_traj_duration) / way_points_number  # tf by way point
        t = tf / 10   # for each movement
        ta = tf / 10  # to complete each movement
        a = [0.0] * 6
        pos_points, vel_points, acc_points = [0.0] * 6, [0.0] * 6, [0.0] * 6

        goal = self.__build_goal_message_ur5()

        for i in range(6):
            q0 = self.actual_position[i]
            qf = joint_pos[i]
            b = np.array([q0, v0, a0, qf, vf, af]).transpose()
            m = np.array([[1, t0, t0**2, t0**3, t0**4, t0**5],
                          [0, 1, 2*t0, 3*t0**2, 4*t0**3, 5*t0**4],
                          [0, 0, 2, 6*t0, 12*t0**2, 20*t0**3],
                          [1, tf, tf**2, tf**3, tf**4, tf**5],
                          [0, 1, 2*tf, 3*tf**2, 4*tf**3, 5*tf**4],
                          [0, 0, 2, 6*tf, 12*tf**2, 20*tf**3]])
            a[i] = np.linalg.inv(m).dot(b)

        for i in range(way_points_number):
            for j in range(6):
                pos_points[j] = a[j][0] + a[j][1]*t + a[j][2]*t**2 + a[j][3]*t**3 + a[j][4]*t**4 + a[j][5]*t**5
                vel_points[j] = a[j][1] + 2*a[j][2]*t + 3*a[j][3]*t**2 + 4*a[j][4]*t**3 + 5*a[j][5]*t**4
                acc_points[j] = 2*a[j][2] + 6*a[j][3]*t + 12*a[j][4]*t**2 + 20*a[j][5]*t**3
            goal.trajectory.points.append(JointTrajectoryPoint(positions=pos_points,
                                                               velocities=vel_points,
                                                               accelerations=acc_points,
                                                               time_from_start=rospy.Duration(t)))  # default 0.1*i + 5
            t += ta

        self.client.send_goal(goal)
        self.all_close(joint_pos)

    def all_close(self, goal, tolerance=0.00005):
        """Wait until goal is reached in configuration space

        This method checks if the robot reached the goal position, since wait_for_result seems to be broken.

        Arguments:
            goal {[list]} -- Goal in configuration space (joint values)

        Keyword Arguments:
            tolerance {number} -- Minimum error allowed to consider the trajectory completed (default: {0.00005})
        """
        error = np.sum([(self.actual_position[i] - goal[i])**2 for i in range(6)])
        rospy.loginfo("Waiting for trajectory.")
        while not rospy.is_shutdown() and error > tolerance:
            error = np.sum([(self.actual_position[i] - goal[i])**2 for i in range(6)])
        if error < tolerance:
            rospy.loginfo("Trajectory succeeded.")  # within the tolerance specified
        else:
            rospy.logerr("Trajectory aborted.")

    def genCommand(self, char, command, pos=None):
        """
        Update the command according to the character entered by the user.
        """
        if char == 'a':
            command.rACT = 1   # Gripper activation
            command.rGTO = 1   # Go to position request
            command.rSP = 255  # Speed
            command.rFR = 150  # Force
        if char == 'r':
            command.rACT = 0
        if char == 'c':
            command.rACT = 1
            command.rGTO = 1
            command.rATR = 0
            command.rPR = 255
            command.rSP = 40
            command.rFR = 150
        # @param pos Gripper width in meters. [0, 0.087]
        if char == 'p':
            command.rACT = 1
            command.rGTO = 1
            command.rATR = 0
            command.rPR = int(np.clip((13. - 230.) / self.gripper_max_width * self.ori + 230., 0, 255))
            command.rSP = 40
            command.rFR = 150
        if char == 'o':
            command.rACT = 1
            command.rGTO = 1
            command.rATR = 0
            command.rPR = 0
            command.rSP = 40
            command.rFR = 150
        return command

    def command_gripper(self, action):
        command = outputMsg.Robotiq2FGripper_robot_output()
        command = self.genCommand(action, command)
        self.pub_gripper_command.publish(command)

    def gripper_send_position_goal(self, position=0.3, velocity=0.4, action='move'):
        """Send a position goal to the gripper

        Keyword Arguments:
            position {float} -- Gripper angle (default: {0.3})
            velocity {float} -- Gripper velocity profile (default: {0.4})
            action {str} -- Gripper movement (default: {'move'})
        """
        self.turn_gripper_position_controller_on()
        duration = 0.2
        if action == 'pre_grasp_angle':
            max_distance = 0.085
            angular_coeff = 0.11
            K = 1.3
            angle = (max_distance - self.gripper_angle_grasp) / angular_coeff * K
            position = angle
            velocity = 0.4
        elif action == 'pick':
            position = 0.7
            velocity = 0.05
            duration = 8.0

        goal = FollowJointTrajectoryGoal()
        goal.trajectory = JointTrajectory()
        goal.trajectory.joint_names = self.robotiq_joint_name
        goal.trajectory.points.append(JointTrajectoryPoint(positions=[position],
                                                           velocities=[velocity],
                                                           accelerations=[0.0],
                                                           time_from_start=rospy.Duration(duration)))
        self.client_gripper.send_goal(goal)
        if action == 'pick':
            while not rospy.is_shutdown() and not self.left_collision and not self.right_collision:
                pass
            self.client_gripper.cancel_goal()

    def move_home_on_shutdown(self):
        self.client.cancel_goal()
        # self.client_gripper.cancel_goal()
        rospy.loginfo("Shutting down node...")


def main():
    global PICKING

    arg = parse_args()
    ur5_vel = vel_control(arg)

    point_init_home = [-0.37, 0.11, 0.15]
    joint_values_home = ur5_vel.get_ik(point_init_home)
    ur5_vel.joint_values_home = joint_values_home

    # Send the robot to the custom HOME position
    raw_input("==== Press enter to 'home' the robot!")
    rospy.on_shutdown(ur5_vel.move_home_on_shutdown)
    ur5_vel.traj_planner(point_init_home, movement='fast')

    # Remove all objects from the scene and press enter
    raw_input("==== Press enter to move the robot to the 'depth cam shot' position!")
    point_init = [-0.37, 0.11, 0.05]
    ur5_vel.traj_planner(point_init, movement='fast')

    if arg.gazebo:
        rospy.loginfo("Starting the gripper in Gazebo! Please wait...")
        ur5_vel.gripper_send_position_goal(0.4)
    else:
        rospy.loginfo("Starting the real gripper! Please wait...")
        ur5_vel.command_gripper('r')
        rospy.sleep(0.5)
        ur5_vel.command_gripper('a')
        ur5_vel.command_gripper('o')

    while not rospy.is_shutdown():
        raw_input("==== Press enter to move to the pre grasp position!")
        ur5_vel.traj_planner(point_init, 'pregrasp', movement='fast')

        # Close the gripper before approaching the object.
        # This prevents the gripper from colliding with other objects when grasping.
        raw_input("==== Press enter to start the grasping process!")
        if arg.gazebo:
            ur5_vel.gripper_send_position_goal(action='pre_grasp_angle')
        else:
            ur5_vel.command_gripper('p')

        # Generate the trajectory to the grasp position
        # BE CAREFUL!
        ur5_vel.traj_planner([], 'grasp', movement='slow')

        rospy.loginfo("Picking object")
        if arg.gazebo:
            ur5_vel.gripper_send_position_goal(action='pick')
            ur5_vel.get_link_position_picking()
        else:
            raw_input("==== Press enter to close the gripper!")
            ur5_vel.command_gripper('c')

        rospy.loginfo("Moving object to the bin")
        # After a collision is detected, the arm will start the picking action
        PICKING = True  # Attach object
        ur5_vel.traj_planner([-0.45, 0.0, 0.15], movement='fast')
        ur5_vel.traj_planner([-0.45, -0.16, 0.15], movement='fast')
        ur5_vel.traj_planner([-0.45, -0.16, 0.08], movement='slow')  # Be careful when approaching the bin

        rospy.loginfo("Placing object")
        # After the bin location is reached, the robot will place the object and move back
        # to the initial position
        PICKING = False  # Detach object
        if arg.gazebo:
            ur5_vel.gripper_send_position_goal(0.3)
            ur5_vel.delete_model_service_method()
            ur5_vel.reset_link_position_picking()
        else:
            ur5_vel.command_gripper('o')

        rospy.loginfo("Moving back to home position")
        ur5_vel.traj_planner([-0.45, -0.16, 0.15], movement='fast')
        ur5_vel.traj_planner(point_init_home, movement='fast')
        ur5_vel.traj_planner(point_init, movement='slow')


if __name__ == '__main__':
    try:
        main()
    except rospy.ROSInterruptException:
        print "Program interrupted before completion"
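`traj_planner` above fits, per joint, a quintic polynomial whose position, velocity and acceleration match the boundary values at `t0` and `tf`, by inverting the 6×6 constraint matrix. The coefficient solve, stripped of the ROS plumbing (the boundary values below are arbitrary test numbers, not taken from the robot):

```python
import numpy as np

def quintic_coeffs(q0, v0, a0, qf, vf, af, t0, tf):
    """Coefficients of q(t) = c0 + c1*t + ... + c5*t**5 meeting the 6 boundary conditions."""
    b = np.array([q0, v0, a0, qf, vf, af], dtype=float)
    m = np.array([[1, t0, t0**2,   t0**3,    t0**4,    t0**5],
                  [0,  1, 2*t0, 3*t0**2,  4*t0**3,  5*t0**4],
                  [0,  0,    2,    6*t0, 12*t0**2, 20*t0**3],
                  [1, tf, tf**2,   tf**3,    tf**4,    tf**5],
                  [0,  1, 2*tf, 3*tf**2,  4*tf**3,  5*tf**4],
                  [0,  0,    2,    6*tf, 12*tf**2, 20*tf**3]], dtype=float)
    # np.linalg.solve is better conditioned than inv(m).dot(b)
    return np.linalg.solve(m, b)

# Rest-to-rest move from 0 to 1 over [0, 2]
c = quintic_coeffs(q0=0.0, v0=0.0, a0=0.0, qf=1.0, vf=0.0, af=0.0, t0=0.0, tf=2.0)
q = lambda t: sum(c[i] * t**i for i in range(6))
```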

# Source: certbot_dns_corenetworks/test/unit/test_dns_corenetwork.py
# (thegeeklab/certbot-dns-corenetworks, MIT)

"""Tests for certbot_dns_corenetworks.dns_corenetworks."""
import unittest

import mock

from certbot import errors
from certbot.compat import os
from certbot.plugins import dns_test_common
from certbot.plugins.dns_test_common import DOMAIN
from certbot.tests import util as test_util

API_USER = "my_user"
API_PASSWORD = "secure"


class AuthenticatorTest(test_util.TempDirTestCase, dns_test_common.BaseAuthenticatorTest):
    """Tests for the Core Networks DNS Authenticator."""

    def setUp(self):
        from certbot_dns_corenetworks.dns_corenetworks import Authenticator

        super(AuthenticatorTest, self).setUp()

        path = os.path.join(self.tempdir, "file.ini")
        dns_test_common.write({
            "corenetworks_username": API_USER,
            "corenetworks_password": API_PASSWORD
        }, path)

        self.config = mock.MagicMock(
            corenetworks_credentials=path, corenetworks_propagation_seconds=0
        )  # don't wait during tests

        self.auth = Authenticator(self.config, "corenetworks")

        self.mock_client = mock.MagicMock()
        self.auth._get_corenetworks_client = mock.MagicMock(return_value=self.mock_client)

    def test_perform(self):
        self.auth.perform([self.achall])

        expected = [
            mock.call.add_txt_record(DOMAIN, "_acme-challenge." + DOMAIN, mock.ANY, mock.ANY)
        ]
        self.assertEqual(expected, self.mock_client.mock_calls)

    def test_cleanup(self):
        self.auth.nameCache["_acme-challenge." + DOMAIN] = "_acme-challenge." + DOMAIN
        self.auth._attempt_cleanup = True
        self.auth.cleanup([self.achall])

        expected = [mock.call.del_txt_record(DOMAIN, "_acme-challenge." + DOMAIN, mock.ANY)]
        self.assertEqual(expected, self.mock_client.mock_calls)

    def test_creds(self):
        dns_test_common.write({
            "corenetworks_username": API_USER,
            "corenetworks_password": API_PASSWORD
        }, self.config.corenetworks_credentials)
        self.auth.perform([self.achall])

        expected = [
            mock.call.add_txt_record(DOMAIN, "_acme-challenge." + DOMAIN, mock.ANY, mock.ANY)
        ]
        self.assertEqual(expected, self.mock_client.mock_calls)

    def test_no_creds(self):
        dns_test_common.write({}, self.config.corenetworks_credentials)
        self.assertRaises(errors.PluginError, self.auth.perform, [self.achall])

    def test_missing_user_or_password(self):
        dns_test_common.write({"corenetworks_username": API_USER},
                              self.config.corenetworks_credentials)
        self.assertRaises(errors.PluginError, self.auth.perform, [self.achall])

        dns_test_common.write({"corenetworks_password": API_PASSWORD},
                              self.config.corenetworks_credentials)
        self.assertRaises(errors.PluginError, self.auth.perform, [self.achall])


if __name__ == "__main__":
    unittest.main()
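The tests above assert against `mock_calls`, the ordered record a `MagicMock` keeps of every call made on it, with `mock.ANY` as a wildcard for arguments the test does not care about. The pattern, reduced to the standard library (`unittest.mock` here, where the test module uses the back-ported `mock` package; the sample domain and token are invented):

```python
from unittest import mock

client = mock.MagicMock()
# Code under test would make this call on the injected client:
client.add_txt_record("example.com", "_acme-challenge.example.com", "token", 120)

# The test then compares against the recorded call list, wildcarding
# the validation token and TTL with mock.ANY.
expected = [
    mock.call.add_txt_record("example.com", "_acme-challenge.example.com", mock.ANY, mock.ANY)
]
assert expected == client.mock_calls
```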

# Source: src/calrissian/regularization/regularize_orthogonal.py
# (awlange/brainsparks, MIT)

import numpy as np
class RegularizeOrthogonal(object):
"""
Orthogonal
"""
def __init__(self, coeff_lambda=0.0):
self.coeff_lambda = coeff_lambda
def cost(self, layers):
c = 0.0
for layer in layers:
wt = layer.w.transpose()
for j in range(layer.output_size):
wtj = wt[j] / np.sqrt(wt[j].dot(wt[j]))
for k in range(layer.output_size):
if j == k:
continue
wtk = wt[k] / np.sqrt(wt[k].dot(wt[k]))
c += np.abs(wtj.dot(wtk))
return self.coeff_lambda * c
def cost_gradient(self, layers, dc_db, dc_dw):
for l, layer in enumerate(layers):
wt = layer.w.transpose()
tmp = np.zeros_like(wt)
for j in range(layer.output_size):
dj = np.sqrt(wt[j].dot(wt[j]))
wtj = wt[j] / dj
# TODO: simplify this
s = 2 * (np.eye(len(wtj)) - np.outer(wtj, wtj)) / dj
for k in range(layer.output_size):
if j == k:
continue
dk = np.sqrt(wt[k].dot(wt[k]))
wtk = wt[k] / dk
tmp[j] += wtk.dot(s) * np.sign(wtj.dot(wtk))
dc_dw[l] += self.coeff_lambda * tmp.transpose()
return dc_db, dc_dw
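As a quick sanity check (not part of the original module), the penalty computed by `cost()` can be reproduced standalone with NumPy: it is zero when the output weight vectors are orthogonal and maximal when they are all parallel. The `ortho_cost` helper below is a hypothetical re-derivation for a single weight matrix, mirroring the loop structure above.

```python
import numpy as np

def ortho_cost(w, coeff_lambda=1.0):
    # Sum of |cosine similarity| over every ordered pair of distinct
    # rows of w^T (i.e. output weight vectors), as in cost() above.
    wt = w.transpose()
    c = 0.0
    n = wt.shape[0]
    for j in range(n):
        wtj = wt[j] / np.sqrt(wt[j].dot(wt[j]))
        for k in range(n):
            if j == k:
                continue
            wtk = wt[k] / np.sqrt(wt[k].dot(wt[k]))
            c += np.abs(wtj.dot(wtk))
    return coeff_lambda * c

orthogonal = np.eye(3)        # rows are orthonormal
parallel = np.ones((3, 3))    # rows are identical
print(ortho_cost(orthogonal)) # ~0.0
print(ortho_cost(parallel))   # ~6.0 (|cos| = 1 for all 6 ordered pairs)
```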
| 31.222222 | 68 | 0.458363 | 187 | 1,405 | 3.336898 | 0.299465 | 0.028846 | 0.096154 | 0.115385 | 0.375 | 0.301282 | 0.301282 | 0.121795 | 0.121795 | 0.121795 | 0 | 0.006075 | 0.414235 | 1,405 | 44 | 69 | 31.931818 | 0.752126 | 0.022064 | 0 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 1 | 0.09375 | false | 0 | 0.03125 | 0 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89a6fed9b1397a9b14c141ea4b85328623a63382 | 7,839 | py | Python | data/external/repositories/113677/KaggleBillionWordImputation-master/scripts/util.py | Keesiu/meta-kaggle | 87de739aba2399fd31072ee81b391f9b7a63f540 | [
"MIT"
] | null | null | null | data/external/repositories/113677/KaggleBillionWordImputation-master/scripts/util.py | Keesiu/meta-kaggle | 87de739aba2399fd31072ee81b391f9b7a63f540 | [
"MIT"
] | null | null | null | data/external/repositories/113677/KaggleBillionWordImputation-master/scripts/util.py | Keesiu/meta-kaggle | 87de739aba2399fd31072ee81b391f9b7a63f540 | [
"MIT"
] | 1 | 2019-12-04T08:23:33.000Z | 2019-12-04T08:23:33.000Z | import sys, bisect
from collections import defaultdict
from itertools import islice, izip
import numpy as np
from scipy.misc import logsumexp
from scipy.spatial import distance
import Levenshtein
PUNCTUATION = set(("'", '"', ',', '.', '!', '?', ';', ':', '-', '--', '(', ')',
'/', '_', '\\', '+', '<', '>', '|', '@', '#', '$', '%', '^',
'&', '*', '[', ']', '{', '}'))
POS_TAGS = set(('UH','WP$','PDT','RBS','LS','EX','WP','$','SYM','RP','CC','RBR','VBG','NNS','CD','PRP$','MD','DT','NNPS','VBD','IN','JJS','WRB','VBN','JJR','WDT','POS','TO','NNP','JJ','RB','VB','FW','PRP','VBZ','NN','VBP'))
UNKNOWN = '<unknown>'
def is_punctuation(word):
return (word in PUNCTUATION)
def is_number(word):
try:
x = float(word)
return True
except:
return False
def is_pos_tag(word):
return (word in POS_TAGS)
def window(seq, n=2):
"Returns a sliding window (of width n) over data from the iterable"
" s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... "
it = iter(seq)
result = tuple(islice(it, n))
if len(result) == n:
yield result
for elem in it:
result = result[1:] + (elem,)
yield result
def tokenize_words(line, delim=' '):
return line.rstrip().split(delim)
def pos_tag(word):
return word.rsplit('_', 1)[-1]
def ngram_frequencies(istream, n=1):
counts = defaultdict(int)
for i, line in enumerate(istream):
if i % 100000 == 0:
print >>sys.stderr, i
words = tokenize_words(line)
for ngram in window(words, n):
counts[ngram] += 1
return counts
def words2ids(words, idmap):
ids = []
for word in words:
if word not in idmap:
idmap[word] = len(idmap)
ids.append(idmap[word])
return ids
def ngram_frequencies2(istream, n=1):
unigrams = dict()
counts = defaultdict(int)
for i, line in enumerate(istream):
if i % 100000 == 0:
print >>sys.stderr, "Line %d (%d 1-grams, %d %d-grams)" \
% (i, len(unigrams), len(counts), n)
words = tokenize_words(line)
ids = words2ids(words, unigrams)
for ngram in window(ids, n):
counts[ngram] += 1
id2word = {v: k for k, v in unigrams.iteritems()}
del unigrams
return counts, id2word
def load_vocab(vocab_file):
vocab = {}
for line in vocab_file:
word, freq = line.strip().split('\t')
freq = int(freq)
vocab[word] = freq
return vocab
def prune_vocab(vocab, n):
nwords = sum(v for v in vocab.itervalues())
nvocab = len(vocab)
print >>sys.stderr, "Input has nwords = %s, vocab size = %d" \
% (nwords, nvocab)
vocab = [(v,k) for k,v in vocab.iteritems()]
vocab = list(reversed(sorted(vocab)))
vocab = vocab[:n]
vocab = {k: v for v, k in vocab}
nremaining = sum(v for v in vocab.itervalues())
percent_kept = float(len(vocab)) / nvocab
percent_mass = float(nremaining) / nwords
print >>sys.stderr, "Keeping %d words (%.2f%% of vocab, %.2f%% of mass)" \
% (len(vocab), 100*percent_kept, 100*percent_mass)
return vocab
def score(golden, predicted):
total_d = 0.0
n = 0
for ref, pred in izip(golden, predicted):
total_d += Levenshtein.distance(ref, pred)
n += 1
return total_d / n
def estimate_probabilities(ngrams):
# no smoothing; if we didn't see it in train, best not insert
ntotal = float(sum(ngrams.itervalues()))
print "%d total syntactic ngrams" % ntotal
p = {k: np.log10(v/ntotal) for k, v in ngrams.iteritems()}
print "Total probability = %f" % sum(10.**v for v in p.itervalues())
return p
normalize_ngrams = estimate_probabilities
class Word2Vec(object):
def __init__(self, words, V):
self.words = words
self.word_to_id = {w: i for i, w in enumerate(self.words)}
self.V = V
@classmethod
def load(cls, istream):
# first line indicates # words and dimension of vectors
header = istream.readline().rstrip().split()
nwords = int(header[0])
d = int(header[1])
print >>sys.stderr, "Allocating %dx%d word vector matrix" \
% (nwords, d)
words = []
V = np.zeros((nwords,d), dtype=np.float32)
# subsequent lines have word and vector
print >>sys.stderr, "Loading word vectors"
for i, line in enumerate(istream):
entry = line.rstrip().split()
word = entry[0]
words.append(word)
V[i] = map(float, entry[1:])
if i % 500000 == 0: print >>sys.stderr, i
return cls(words, V)
def get(self, word):
'''get vector for word'''
if word not in self.word_to_id:
raise ValueError("Word2Vec does not contain '%s'" % word)
id = self.word_to_id[word]
return self.V[id]
def nearest(self, word, indices=None):
'''yield words in ascending order of distance to @word'''
# compute distance from word to all other words
# too much memory to precompute all of these ahead of time
# and vector dimension is too large for a KD-tree to be much help
word_vec = np.array(self.get(word), ndmin=2)
V = self.V if indices is None else self.V[indices]
d = distance.cdist(word_vec, V)[0]
for i in np.argsort(d):
w = self.words[i]
# element 0 is this word (d=0) if this word is in indices
# but not this word if this word is not in indices
if w == word: continue
yield w
class Prediction(object):
keep_top_n = 5
def __init__(self, word, locations, Z, Z_location, *args):
self.word = word
self.locations = locations
self.Z = Z
self.Z_location = Z_location
self.p_anywhere = args[:self.keep_top_n]
self.p_at_location = args[self.keep_top_n:2*self.keep_top_n]
self.p_at_other_location = args[2*self.keep_top_n:3*self.keep_top_n]
self.p_surrounding = args[3*self.keep_top_n:]
#assert self.p_anywhere[0] == self.p_at_location[0]
#assert self.p_at_location[0] != self.p_at_other_location[0]
@property
def location(self):
return self.locations[0]
@property
def order(self):
return len(self.p_surrounding)
@property
def location_posterior(self):
return 10.**(self.Z_location - self.Z)
@property
def word_posterior(self):
return 10.**(self.p_at_location[0] - self.Z)
@property
def location_ratio(self):
return self.p_at_location[0] - self.p_at_other_location[0]
@property
def word_ratio(self):
return self.p_at_location[0] - self.p_at_location[1]
@classmethod
def parse(cls, line):
entry = line.rstrip().split('\t')
word = entry[0]
# locations
loc = map(int, entry[1:cls.keep_top_n])
# probabilities
for i in xrange(cls.keep_top_n+1, len(entry)):
entry[i] = float(entry[i])
return cls(word, loc, *entry[cls.keep_top_n+1:])
class TopK(object):
'''Keep track the top-k objects'''
def __init__(self, n):
self.things = [None] * n
self.values = [float('inf')] * n
def add(self, thing, value):
i = bisect.bisect(self.values, -value)
if i < len(self.values):
self.values[i] = -value
self.things[i] = thing
def update(self, other):
for thing, value in other:
self.add(thing, -value)
def __iter__(self):
return izip(self.things, self.values) | 33.075949 | 223 | 0.56474 | 1,069 | 7,839 | 4.04116 | 0.254443 | 0.016204 | 0.018519 | 0.024306 | 0.155324 | 0.106019 | 0.087269 | 0.066898 | 0.066898 | 0.066898 | 0 | 0.016411 | 0.292639 | 7,839 | 237 | 224 | 33.075949 | 0.762669 | 0.0708 | 0 | 0.132979 | 0 | 0.005319 | 0.073795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.037234 | null | null | 0.047872 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
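The helpers above are Python 2 (`print >>`, `izip`, `xrange`, `iteritems`); as an illustration, the `window()` sliding-window generator that the n-gram counters depend on ports to Python 3 essentially unchanged:

```python
from itertools import islice

def window(seq, n=2):
    """Yield a sliding window (tuples of width n) over seq, as in util.py."""
    it = iter(seq)
    result = tuple(islice(it, n))
    if len(result) == n:
        yield result
    for elem in it:
        result = result[1:] + (elem,)
        yield result

# Bigram windows over a tokenized sentence:
print(list(window("the cat sat".split(), 2)))
# [('the', 'cat'), ('cat', 'sat')]
```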
89a8b28d63bb94947dfc6a9846e9c93c4d60b60d | 845 | py | Python | creatDjangoApp1/accounts/migrations/0006_auto_20201130_2119.py | imxiaow/django-xw-project | 06d97b7d3701422b08720e00e87f6c471461c19a | [
"MIT"
] | 1 | 2020-11-25T17:33:31.000Z | 2020-11-25T17:33:31.000Z | creatDjangoApp1/accounts/migrations/0006_auto_20201130_2119.py | imxiaow/django-xw-project | 06d97b7d3701422b08720e00e87f6c471461c19a | [
"MIT"
] | null | null | null | creatDjangoApp1/accounts/migrations/0006_auto_20201130_2119.py | imxiaow/django-xw-project | 06d97b7d3701422b08720e00e87f6c471461c19a | [
"MIT"
] | null | null | null | # Generated by Django 3.1.3 on 2020-11-30 21:19
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('accounts', '0005_auto_20201130_1450'),
]
operations = [
migrations.AddField(
model_name='order',
name='note',
field=models.CharField(max_length=200, null=True),
),
migrations.AlterField(
model_name='order',
name='status',
field=models.CharField(choices=[('Pending', 'Pending'), ('Out for delivery', 'Out for delivery'), ('Delivered', 'Delivered')], max_length=200, null=True),
),
migrations.AlterField(
model_name='product',
name='description',
field=models.CharField(blank=True, max_length=200, null=True),
),
]
| 29.137931 | 166 | 0.581065 | 87 | 845 | 5.54023 | 0.551724 | 0.056017 | 0.124481 | 0.099585 | 0.244813 | 0.20332 | 0.20332 | 0.20332 | 0.20332 | 0 | 0 | 0.066007 | 0.28284 | 845 | 28 | 167 | 30.178571 | 0.729373 | 0.053254 | 0 | 0.318182 | 1 | 0 | 0.166667 | 0.028822 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89a8d67890dba5fcad6eb3055086bfd2d086adc8 | 647 | py | Python | HelloWorld/guessinggame.py | zahraaliaghazadeh/python | 2f2d0141a916c99e8724f803bd4e5c7246a7a02e | [
"MIT"
] | null | null | null | HelloWorld/guessinggame.py | zahraaliaghazadeh/python | 2f2d0141a916c99e8724f803bd4e5c7246a7a02e | [
"MIT"
] | null | null | null | HelloWorld/guessinggame.py | zahraaliaghazadeh/python | 2f2d0141a916c99e8724f803bd4e5c7246a7a02e | [
"MIT"
] | null | null | null | # answer = 5
# print("please guess number between 1 and 10: ")
# guess = int(input())
#
# if guess < answer:
# print("Please guess higher")
# elif guess > answer:
# print ( "Please guess lower")
# else:
# print("You got it first time")
answer = 5
print ("Please guess number between 1 and 10: ")
guess = int(input())
# when there is : we indent the code
if guess != answer:
if guess < answer:
print("Please guess higher")
guess = int(input())
if guess == answer:
print("Well done, you guessed it")
else:
print("Sorry, you have not guessed correctly")
else:
print("You got it first time")
| 23.107143 | 54 | 0.615147 | 91 | 647 | 4.373626 | 0.395604 | 0.138191 | 0.201005 | 0.135678 | 0.728643 | 0.660804 | 0.660804 | 0.276382 | 0.276382 | 0.276382 | 0 | 0.016598 | 0.255023 | 647 | 27 | 55 | 23.962963 | 0.809129 | 0.404946 | 0 | 0.307692 | 0 | 0 | 0.375335 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.384615 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
89ad3ad12dbbb7451941460b3c2d3ac8663e01da | 1,370 | py | Python | deprecated_examples_robust/multimedia/avmnist_MFM_robust.py | TianhaoFu/MultiBench | b174a3187124d6f92be1ff3b487eef292f7883bb | [
"MIT"
] | null | null | null | deprecated_examples_robust/multimedia/avmnist_MFM_robust.py | TianhaoFu/MultiBench | b174a3187124d6f92be1ff3b487eef292f7883bb | [
"MIT"
] | null | null | null | deprecated_examples_robust/multimedia/avmnist_MFM_robust.py | TianhaoFu/MultiBench | b174a3187124d6f92be1ff3b487eef292f7883bb | [
"MIT"
] | null | null | null | import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.getcwd())))
from training_structures.MFM import train_MFM,test_MFM
from fusions.common_fusions import Concat
from unimodals.MVAE import LeNetEncoder,DeLeNet
from unimodals.common_models import MLP
from torch import nn
import torch
from objective_functions.recon import recon_weighted_sum,sigmloss1dcentercrop
from datasets.avmnist.get_data_robust import get_dataloader
filename='avmnist_MFM_robust_best.pt'
traindata, validdata, testdata, robustdata = get_dataloader('../../../../yiwei/avmnist/_MFAS/avmnist')
channels=6
classes=10
n_latent=200
fuse=Concat()
encoders=[LeNetEncoder(1,channels,3,n_latent,twooutput=False).cuda(),LeNetEncoder(1,channels,5,n_latent,twooutput=False).cuda()]
decoders=[DeLeNet(1,channels,3,n_latent).cuda(),DeLeNet(1,channels,5,n_latent).cuda()]
intermediates=[MLP(n_latent,n_latent//2,n_latent//2).cuda(),MLP(n_latent,n_latent//2,n_latent//2).cuda(),MLP(2*n_latent,n_latent,n_latent//2).cuda()]
head=MLP(n_latent//2,40,classes).cuda()
recon_loss=recon_weighted_sum([sigmloss1dcentercrop(28,34),sigmloss1dcentercrop(112,130)],[1.0,1.0])
train_MFM(encoders,decoders,head,intermediates,fuse,recon_loss,traindata,validdata,25,savedir=filename)
model=torch.load(filename)
print("Testing:")
test_MFM(model,testdata)
print("Robustness testing:")
test_MFM(model,robustdata)
| 37.027027 | 149 | 0.806569 | 209 | 1,370 | 5.110048 | 0.373206 | 0.098315 | 0.044944 | 0.052434 | 0.170412 | 0.058989 | 0.058989 | 0.058989 | 0.058989 | 0.058989 | 0 | 0.032209 | 0.048175 | 1,370 | 36 | 150 | 38.055556 | 0.78681 | 0 | 0 | 0 | 0 | 0 | 0.067153 | 0.047445 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.357143 | 0 | 0.357143 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
89b1a7985acccd182bc360d4370b40a665ed42c6 | 1,352 | py | Python | pygame_template.py | Pittsy24/pygame_template | b72397a37610fc267870151e0141de0d15793b7c | [
"MIT"
] | null | null | null | pygame_template.py | Pittsy24/pygame_template | b72397a37610fc267870151e0141de0d15793b7c | [
"MIT"
] | null | null | null | pygame_template.py | Pittsy24/pygame_template | b72397a37610fc267870151e0141de0d15793b7c | [
"MIT"
] | null | null | null | #!/usr/bin/python3
# By Joseph Pitts
import sys
import random
import pygame
from pygame.locals import *
class Colours:
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
AQUA = (0, 255, 255)
GREY = (128, 128, 128)
NAVY = (0, 0, 128)
SILVER = (192, 192 ,192)
GREEN = (0, 128, 0)
OLIVE = (128, 128, 0)
TEAL = (0, 128, 128)
BLUE = (0, 0, 255)
LIME = (0, 255, 0)
PURPLE = (128, 0, 128)
FUCHSIA = (255, 0, 255)
MAROON = (128, 0, 0)
RED = (255, 0, 0)
YELLOW = (255, 255, 0)
@staticmethod
def RANDOM():
return (random.randrange(0, 255), random.randrange(0, 255), random.randrange(0, 255))
class PyGame(object):
def __init__(self, width = 640, height = 480):
pygame.init()
self.fps = 60
self.fpsClock = pygame.time.Clock()
self.width, self.height = width, height
self.screen = pygame.display.set_mode((self.width, self.height))
def game_loop(self):
self.screen.fill(Colours.BLACK)
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
pygame.display.flip()
        self.fpsClock.tick(self.fps)
def run(self):
while True:
self.game_loop()
| 23.719298 | 94 | 0.513314 | 171 | 1,352 | 4.017544 | 0.409357 | 0.040757 | 0.069869 | 0.082969 | 0.082969 | 0.082969 | 0.082969 | 0.082969 | 0 | 0 | 0 | 0.140251 | 0.351331 | 1,352 | 56 | 95 | 24.142857 | 0.643101 | 0.024408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.095238 | 0.02381 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
89bb8aec70dc5ddd26138767ff03178f1d8f86d1 | 830 | py | Python | sdk/pyseele/clazz.py | rinkako/SeeleFlow | 4b8858f367ef02a568c03c8114e1c1b221208feb | [
"MIT"
] | 3 | 2019-12-20T13:32:34.000Z | 2021-04-27T06:02:28.000Z | sdk/pyseele/clazz.py | rinkako/SeeleFlow | 4b8858f367ef02a568c03c8114e1c1b221208feb | [
"MIT"
] | null | null | null | sdk/pyseele/clazz.py | rinkako/SeeleFlow | 4b8858f367ef02a568c03c8114e1c1b221208feb | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
"""
Project Seele
@author : Rinka
@date : 2019/12/17
"""
from pyseele.workitem import Workitem
class ResourcingContext:
"""
Resourcing Context maintains all org.rinka.seele.server.resource service principals that guide RS
to handle the workitem.
"""
def __init__(self):
# resourcing unique id
self.rsid: str = None
# process runtime unique id
self.rtid: str = None
# priority, bigger schedule faster
self.priority: int = 0
# execution cost in ms
self.execution_time_span: int = 0
# TODO: service type, indicate the action RS should do
self.rs_type = None
# TODO: resourcing context
self.rs_properties: dict = None
# binding workitem
self.workitem: Workitem = None
| 23.055556 | 101 | 0.624096 | 99 | 830 | 5.151515 | 0.646465 | 0.066667 | 0.047059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018771 | 0.293976 | 830 | 35 | 102 | 23.714286 | 0.851536 | 0.471084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
98241ad4baa33294d1ccb68f952b1b1217195b7e | 10,405 | py | Python | src/GG_ESB.py | CB1204/LapSimulation | 7d7f7c43a6bc3db3dbf02050d939da3f17647c2c | [
"MIT"
] | 7 | 2018-02-22T16:58:26.000Z | 2022-02-05T18:17:56.000Z | src/GG_ESB.py | CB1204/LapSimulation | 7d7f7c43a6bc3db3dbf02050d939da3f17647c2c | [
"MIT"
] | null | null | null | src/GG_ESB.py | CB1204/LapSimulation | 7d7f7c43a6bc3db3dbf02050d939da3f17647c2c | [
"MIT"
] | 2 | 2019-04-15T21:07:03.000Z | 2021-05-11T07:41:49.000Z | from TwoDimLookup_motor import TwoDimLookup_motor
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from scipy import interpolate
import numpy as np
### Input Parameters
# C_F = 27000 # Cornering stiffness front [N/rad] - already includes both wheels!
# C_R = 35000 # Cornering stiffness rear [N/rad] - already includes both wheels!
# m = 350 # Mass vehicle + driver [kg]
# CoG_X = 0.65 # Actual position of CG (0 at front axis, 1 at rear axis)
#
# mu = 1.4 # Friction coefficient
# alpha = 7.5 # Slip angle [deg]
# DriveType = 2WD or 4WD
# res = resolution = number of data points
class GG_ESB_OneDim:
def __init__(self, C_F, C_R, m, CoG_X, mu, alpha, DriveType):
self.C_F = C_F
self.C_R = C_R
self.m = m
self.CoG_X = CoG_X
self.mu = mu
self.alpha = alpha
self.DriveType = DriveType
# General constants
self.deg2rad = np.pi/180
self.g = 9.81
def GG_ESB_ay_Max(self):
# calculate ay max
FY_F = self.alpha * self.deg2rad * self.C_F
FY_R = self.alpha * self.deg2rad * self.C_R
FY_ovr = FY_F + FY_R
ay_max = FY_ovr / self.m
#ay_max = 14
return ay_max
def GG_ESB_ax_Max(self):
# calculate ax max
FZ_F = (1-self.CoG_X) * self.m * self.g
FZ_R = self.CoG_X * self.m * self.g
FX_F = self.mu * FZ_F
FX_R = self.mu * FZ_R
if self.DriveType == '2WD':
FX_ovr = FX_R
elif self.DriveType == '4WD':
FX_ovr = FX_F + FX_R
ax_max = FX_ovr / self.m
#ax_max = 7
return ax_max
def Plot_gg(self, ay_max, ax_max):
# Calculate values
resolution = 1000
ay = np.linspace((-1)*ay_max, ay_max, resolution)
ax_upper =((1-(ay**2 / ay_max**2)) * ax_max**2)**0.5
if self.DriveType == '2WD':
ax_lower = (-2) * ax_upper
if self.DriveType == '4WD':
ax_lower = (-1) * ax_upper
ay = ay /self.g
ax_U_inG = np.asarray(ax_upper)/self.g
ax_L_inG = np.asarray(ax_lower)/self.g
# # Plot g-g diagram
# plt.plot(ay, ax_U_inG , 'b')
# plt.plot(ay, ax_L_inG, 'b')
# plt.xlabel('ay [g]' + '\n' + ('ay max: ' + str(np.round(ay_max/self.g,2)) + ' g'), fontsize = 16)
# plt.ylabel('ax [g]' + '\n' + ('ax max: ' + str(np.round(ax_max/self.g,2)) + ' g'), fontsize = 16)
# plt.title('g-g diagram', fontsize = 20, y=1.03)
# plt.grid(True)
# plt.show()
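With the illustrative parameters from the comment block at the top of this file (example values, not a real vehicle), the one-dimensional limits work out as below; the arithmetic mirrors `GG_ESB_ay_Max` and the 2WD branch of `GG_ESB_ax_Max`:

```python
import numpy as np

deg2rad = np.pi / 180
g = 9.81
C_F, C_R = 27000.0, 35000.0        # cornering stiffness front/rear [N/rad]
m = 350.0                          # vehicle + driver mass [kg]
CoG_X, mu, alpha = 0.65, 1.4, 7.5  # CG position, friction, slip angle [deg]

# Lateral limit: total lateral tire force over mass.
ay_max = alpha * deg2rad * (C_F + C_R) / m
# Longitudinal limit (2WD): only the rear-axle normal load is driven.
ax_max = mu * (CoG_X * m * g) / m

print(round(ay_max / g, 2), round(ax_max / g, 2))  # 2.36 0.91
```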
########################################################
class GG_ESB_TwoDim:
def __init__(self, C_F, C_R, m, CoG_X, mu, alpha, CoP_X, C_la, rho, DriveType, gearRatio, tireRadius, fr, Lift2Drag ):
self.C_F = C_F
self.C_R = C_R
self.m = m
self.CoG_X = CoG_X
self.mu = mu
self.alpha = alpha
self.CoP_X = CoP_X # = 0.61
self.C_la = C_la # = 3.52 m^2
self.rho = rho
self.DriveType = DriveType #2WD or 4WD
self.gearRatio = gearRatio
self.tireRadius = tireRadius
self.fr = fr
self.Lift2Drag = Lift2Drag
# General constants
self.deg2rad = np.pi/180
self.g = 9.81
def GGV_Map(self):
ax_upper_values = []
ax_lower_values = []
ay_values = []
speed_values =[]
Speed_resolution = 200
ay_resolution = 500
# initialize speed
VehicleSpeed = np.linspace(0.001, 50, Speed_resolution) #Speed in m/s
#loop through different velocities
for i in range(len(VehicleSpeed)):
#find maximum lateral acceleration
ay_max = self.GG_ESB_ay_Max()
ay = np.linspace((-1)*ay_max, ay_max, ay_resolution)
ax_max_upper = self.GG_ESB_ax_Max_upper(VehicleSpeed[i])
ax_up = []
for j in range(len(ay)):
ax_up.append(((1-(ay[j]**2 / ay_max**2)) * ax_max_upper**2)**0.5)
ax_max_lower = self.GG_ESB_ax_Max_lower(VehicleSpeed[i])
ax_low = []
for j in range(len(ay)):
ax_low.append(-1*((1-(ay[j]**2 / ay_max**2)) * ax_max_lower**2)**0.5)
speed = []
for j in range(len(ay)):
speed.append(VehicleSpeed[i])
ax_upper_values.append(ax_up)
ax_lower_values.append(ax_low)
ay_values.append(ay)
speed_values.append(speed)
# Load Motor map
motor = TwoDimLookup_motor(self.gearRatio, self.tireRadius, self.CoG_X, self.m, self.CoP_X, self.C_la, self.rho, self.fr, self.Lift2Drag, self.DriveType)
ax_motor = motor.ax_motor(speed_values)
# Replace ax values which are higher than ax_motor
for i in range(len(ax_upper_values)):
for j in range(len (ax_upper_values[i])):
if ax_upper_values[i][j] > ax_motor[i]:
ax_upper_values[i][j] = ax_motor[i]
return ax_upper_values, ax_lower_values, ay_values, speed_values
def GG_ESB_ay_Max(self):
# calculate ay max
FY_F = self.alpha * self.deg2rad * self.C_F
FY_R = self.alpha * self.deg2rad * self.C_R
FY_ovr = FY_F + FY_R
ay_max = FY_ovr / self.m
return ay_max
def GG_ESB_ax_Max_upper(self, VehicleSpeed):
# calculate ax max
# Mass
FZ_m_F = (1-self.CoG_X) * self.m * self.g
FZ_m_R = self.CoG_X * self.m * self.g
#Downforce
FZ_d_F = (1-self.CoP_X) * self.C_la * 0.5*self.rho * VehicleSpeed**2
FZ_d_R = (self.CoP_X) * self.C_la * 0.5*self.rho * VehicleSpeed**2
# FZ ovr
FZ_ovr_F = FZ_m_F + FZ_d_F
FZ_ovr_R = FZ_m_R + FZ_d_R
#FX
FX_F = self.mu * FZ_ovr_F
FX_R = self.mu * FZ_ovr_R
if self.DriveType == '2WD':
FX_ovr = FX_R
elif self.DriveType == '4WD':
FX_ovr = FX_F + FX_R
ax_max_upper = FX_ovr / self.m
return ax_max_upper
def GG_ESB_ax_Max_lower(self, VehicleSpeed):
# calculate ax max
# Mass
FZ_m_F = (1-self.CoG_X) * self.m * self.g
FZ_m_R = self.CoG_X * self.m * self.g
#Downforce
FZ_d_F = (1-self.CoP_X) * self.C_la * 0.5*self.rho * VehicleSpeed**2
FZ_d_R = (self.CoP_X) * self.C_la * 0.5*self.rho * VehicleSpeed**2
# FZ ovr
FZ_ovr_F = FZ_m_F + FZ_d_F
FZ_ovr_R = FZ_m_R + FZ_d_R
#FX
FX_F = self.mu * FZ_ovr_F
FX_R = self.mu * FZ_ovr_R
#FX ovr
FX_ovr = FX_F + FX_R
ax_max_lower = FX_ovr / self.m
return ax_max_lower
def Plot_ggV(self, ax_upper_values, ax_lower_values, ay_values, speed_values ):
fig = plt.figure()
ax = fig.gca(projection='3d')
surf_upper = ax.plot_surface(ay_values, speed_values, ax_upper_values, color = 'r')
surf_lower = ax.plot_surface(ay_values, speed_values, ax_lower_values)
ax.set_xlabel('ay [m/s^2]')
ax.set_ylabel('speed [m/s]')
ax.set_zlabel('ax [m/s^2]')
plt.show()
def Plot_Long_Speed_Distance(self):
# Calculate motor
VehicleSpeed = np.array([np.linspace(0.0001, 60, 200), np.zeros(200)])
VehicleSpeed = np.transpose(VehicleSpeed).tolist()
Motor = TwoDimLookup_motor(self.gearRatio, self.tireRadius, self.CoG_X, self.m, self.CoP_X, self.C_la, self.rho, self.fr, self.Lift2Drag, self.DriveType)
ax_Motor = Motor.ax_motor(VehicleSpeed)
speed = np.linspace(0.0001, 60, 200)
a = interpolate.interp1d(speed, ax_Motor, bounds_error=False, fill_value=0)
#Calculate speed and distance
v = [0.0001]
s = [0]
t = [0]
delta_t = 0.005 # sec
t_end = 6
while t[-1] <= t_end:
a_v = a(v[-1])
v_t = a_v * delta_t + v[-1]
s_t = 0.5 * a_v * delta_t**2 + v[-1] * delta_t + s[-1]
v.append(v_t)
s.append(s_t)
t.append(t[-1] + delta_t)
# Find important vehicle parameters
# From 1 to 100 kph
index1 = np.nonzero(np.asarray(v) <= 100/3.6)
index_0to100 = np.max(index1)
# Time for 75 m acceleration
index2 = np.nonzero(np.asarray(s) <= 75)
#print(index2)
index_75m = np.max(index2)
# Graphs
# f, axarr = plt.subplots(2, sharex=True)
# axarr[0].plot(t, v,'b', label = 'Vehicle speed', linewidth = 1.5)
# axarr[0].plot(t[index_0to100], v[index_0to100], marker = '+', markersize = 15, markeredgewidth = 2.5, markerfacecolor = 'r', markeredgecolor = 'r', linestyle = 'None', label = 'Time from 0 to 100 kph: ' + str(np.round(t[index_0to100],2)) + ' s')
# axarr[0].set_title('Vehicle Speed vs. Time')
# axarr[0].set_xlabel('Time [s]')
# axarr[0].set_ylabel('Vehicle speed [m/s]')
# axarr[0].grid(True)
# axarr[0].legend(numpoints=1, shadow=True, fancybox=True)
#
# axarr[1].plot(t, s, 'g', label = 'Driven Distance', linewidth = 1.5)
# axarr[1].plot(t[index_75m], s[index_75m], marker = '+', markersize = 15, markeredgewidth = 2.5, markerfacecolor = 'r', markeredgecolor = 'r', linestyle = 'None', label = 'Time for 75 meters acceleration: ' + str(np.round(t[index_75m],2)) + ' s')
# axarr[1].set_title('Driven Distance vs. Time')
# axarr[1].set_xlabel('Time [s]')
# axarr[1].set_ylabel('Driven Distance [m]')
# axarr[1].grid(True)
# axarr[1].legend(numpoints=1, shadow=True, fancybox=True)
#
# plt.show()
| 33.242812 | 257 | 0.528304 | 1,487 | 10,405 | 3.471419 | 0.161399 | 0.022278 | 0.015498 | 0.018597 | 0.487408 | 0.455637 | 0.431228 | 0.391709 | 0.335723 | 0.329523 | 0 | 0.038598 | 0.347621 | 10,405 | 312 | 258 | 33.349359 | 0.721715 | 0.259779 | 0 | 0.36875 | 0 | 0 | 0.006876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.03125 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9827c78b55c36213e56e3cab4efb3a255d146830 | 21,718 | py | Python | temboardui/plugins/monitoring/tools.py | pierrehilbert/temboard | d28bbe87329d3023774fe8c773b8b4ad50e09757 | [
"PostgreSQL"
] | null | null | null | temboardui/plugins/monitoring/tools.py | pierrehilbert/temboard | d28bbe87329d3023774fe8c773b8b4ad50e09757 | [
"PostgreSQL"
] | null | null | null | temboardui/plugins/monitoring/tools.py | pierrehilbert/temboard | d28bbe87329d3023774fe8c773b8b4ad50e09757 | [
"PostgreSQL"
] | null | null | null | import logging
from .model.orm import (
Check,
Host,
Instance,
)
from .alerting import (
bootstrap_checks,
check_specs,
)
logger = logging.getLogger(__name__)
def merge_agent_info(session, host_info, instances_info):
"""Update the host, instance and database information with the
data received from the agent."""
try:
# Try to get host_id, based on hostname
host_info['host_id'] = get_host_id(session, host_info['hostname'])
except Exception:
# host not found
pass
host = Host.from_dict(host_info)
# Insert or update host information
session.merge(host)
session.flush()
session.commit()
# Get host_id in any case
host_id = get_host_id(session, host_info['hostname'])
for instance_info in instances_info:
# Only process instances marked as available, since only those
# have complete information
if instance_info['available']:
try:
# Try to get instance_id
instance_info['instance_id'] = get_instance_id(
session, host_id, instance_info['port']
)
except Exception:
# instance not found
pass
instance_info['host_id'] = host_id
inst = Instance.from_dict(instance_info)
# Insert or update instance information
session.merge(inst)
session.flush()
session.commit()
return host
def get_host_id(session, hostname):
"""
Get host_id from the hostname.
"""
query = """
SELECT host_id FROM monitoring.hosts
WHERE hostname = :hostname
"""
result = session.execute(query, {"hostname": hostname})
try:
return result.fetchone()[0]
except Exception:
raise Exception("Can't find host_id for \"%s\""
" in monitoring.hosts table." % hostname)
def get_instance_id(session, host_id, port):
"""
Get instance from host_id and port.
"""
query = """
SELECT instance_id
FROM monitoring.instances
WHERE host_id = :host_id AND port = :port
"""
result = session.execute(query, {"host_id": host_id, "port": port})
try:
return result.fetchone()[0]
except Exception:
raise Exception("Can't find instance_id for \"%s/%s\" "
"in monitoring.instances table." % (host_id, port))
def check_agent_key(session, hostname, pg_data, pg_port, agent_key):
query = """
SELECT agent_key
FROM application.instances
WHERE hostname = :hostname AND pg_data=:pgdata AND pg_port = :pgport
LIMIT 1
"""
result = session.execute(
query,
{"hostname": hostname, "pgdata": pg_data, "pgport": pg_port})
try:
row = result.fetchone()
if row[0] == agent_key:
return
except Exception:
raise Exception("Can't find the instance \"%s\" "
"in application.instances table." % hostname)
raise Exception("Can't check agent's key.")
def check_host_key(session, hostname, agent_key):
query = """
SELECT agent_key
FROM application.instances
WHERE hostname = :hostname
"""
result = session.execute(query, {"hostname": hostname})
try:
for row in result.fetchall():
if row[0] == agent_key:
return
except Exception:
raise Exception("Can't find the instance \"%s\" "
"in application.instances table." % hostname)
raise Exception("Can't check agent's key.")
def insert_metrics(session, host, agent_data, logger, hostname, port):
try:
# Find host_id & instance_id
host_id = get_host_id(session, hostname)
instance_id = get_instance_id(session, host_id, port)
except Exception as e:
logger.info("Unable to find host & instance IDs")
logger.debug(agent_data)
logger.exception(str(e))
session.rollback()
return
cur = session.connection().connection.cursor()
for metric in agent_data.keys():
# Do not try to insert empty lines
if len(agent_data[metric]) == 0:
continue
try:
# Insert data
if metric == 'sessions':
for metric_data in agent_data['sessions']:
query = """
INSERT INTO monitoring.metric_sessions_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['dbname'],
(
None,
metric_data['active'],
metric_data['waiting'],
metric_data['idle'],
metric_data['idle_in_xact'],
metric_data['idle_in_xact_aborted'],
metric_data['fastpath'],
metric_data['disabled'],
metric_data['no_priv']
)
)
)
elif metric == 'xacts':
for metric_data in agent_data['xacts']:
query = """
INSERT INTO monitoring.metric_xacts_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['dbname'],
(
None,
str(metric_data['measure_interval']),
metric_data['n_commit'],
metric_data['n_rollback']
)
)
)
elif metric == 'locks':
for metric_data in agent_data['locks']:
query = """
INSERT INTO monitoring.metric_locks_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['dbname'],
(
None,
metric_data['access_share'],
metric_data['row_share'],
metric_data['row_exclusive'],
metric_data['share_update_exclusive'],
metric_data['share'],
metric_data['share_row_exclusive'],
metric_data['exclusive'],
metric_data['access_exclusive'],
metric_data['siread'],
metric_data['waiting_access_share'],
metric_data['waiting_row_share'],
metric_data['waiting_row_exclusive'],
metric_data['waiting_share_update_exclusive'],
metric_data['waiting_share'],
metric_data['waiting_share_row_exclusive'],
metric_data['waiting_exclusive'],
metric_data['waiting_access_exclusive']
)
)
)
elif metric == 'blocks':
for metric_data in agent_data['blocks']:
query = """
INSERT INTO monitoring.metric_blocks_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['dbname'],
(
None,
str(metric_data['measure_interval']),
metric_data['blks_read'],
metric_data['blks_hit'],
metric_data['hitmiss_ratio']
)
)
)
elif metric == 'bgwriter':
for metric_data in agent_data['bgwriter']:
query = """
INSERT INTO monitoring.metric_bgwriter_current
VALUES (%s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
(
None,
str(metric_data['measure_interval']),
metric_data['checkpoints_timed'],
metric_data['checkpoints_req'],
metric_data['checkpoint_write_time'],
metric_data['checkpoint_sync_time'],
metric_data['buffers_checkpoint'],
metric_data['buffers_clean'],
metric_data['maxwritten_clean'],
metric_data['buffers_backend'],
metric_data['buffers_backend_fsync'],
metric_data['buffers_alloc'],
metric_data['stats_reset']
)
)
)
elif metric == 'db_size':
for metric_data in agent_data['db_size']:
query = """
INSERT INTO monitoring.metric_db_size_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['dbname'],
(
None,
metric_data['size']
)
)
)
elif metric == 'tblspc_size':
for metric_data in agent_data['tblspc_size']:
query = """
INSERT INTO monitoring.metric_tblspc_size_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['spcname'],
(
None,
metric_data['size']
)
)
)
elif metric == 'filesystems_size':
for metric_data in agent_data['filesystems_size']:
query = """
INSERT INTO monitoring.metric_filesystems_size_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
host_id,
metric_data['mount_point'],
(
None,
metric_data['used'],
metric_data['total'],
metric_data['device']
)
)
)
elif metric == 'temp_files_size_tblspc':
for metric_data in agent_data['temp_files_size_tblspc']:
query = """
INSERT INTO
monitoring.metric_temp_files_size_tblspc_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['spcname'],
(
None,
metric_data['size']
)
)
)
elif metric == 'temp_files_size_db':
for metric_data in agent_data['temp_files_size_db']:
query = """
INSERT INTO
monitoring.metric_temp_files_size_db_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['dbname'],
(
None,
metric_data['size']
)
)
)
elif metric == 'wal_files':
for metric_data in agent_data['wal_files']:
query = """
INSERT INTO monitoring.metric_wal_files_current
VALUES (%s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
(
None,
str(metric_data['measure_interval']),
metric_data['written_size'],
metric_data['current_location'],
metric_data['total'],
metric_data['archive_ready'],
metric_data['total_size']
)
)
)
elif metric == 'cpu':
for metric_data in agent_data['cpu']:
query = """
INSERT INTO monitoring.metric_cpu_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
host_id,
metric_data['cpu'],
(
None,
str(metric_data['measure_interval']),
metric_data['time_user'],
metric_data['time_system'],
metric_data['time_idle'],
metric_data['time_iowait'],
metric_data['time_steal']
)
)
)
elif metric == 'process':
for metric_data in agent_data['process']:
query = """
INSERT INTO monitoring.metric_process_current
VALUES (%s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
host_id,
(
None,
str(metric_data['measure_interval']),
metric_data['context_switches'],
metric_data['forks'],
metric_data['procs_running'],
metric_data['procs_blocked'],
metric_data['procs_total']
)
)
)
elif metric == 'memory':
for metric_data in agent_data['memory']:
query = """
INSERT INTO monitoring.metric_memory_current
VALUES (%s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
host_id,
(
None,
metric_data['mem_total'],
metric_data['mem_used'],
metric_data['mem_free'],
metric_data['mem_buffers'],
metric_data['mem_cached'],
metric_data['swap_total'],
metric_data['swap_used']
)
)
)
elif metric == 'loadavg':
for metric_data in agent_data['loadavg']:
query = """
INSERT INTO monitoring.metric_loadavg_current
VALUES (%s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
host_id,
(
None,
metric_data['load1'],
metric_data['load5'],
metric_data['load15']
)
)
)
elif metric == 'vacuum_analyze':
for metric_data in agent_data['vacuum_analyze']:
query = """
INSERT INTO monitoring.metric_vacuum_analyze_current
VALUES (%s, %s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
metric_data['dbname'],
(
None,
str(metric_data['measure_interval']),
metric_data['n_vacuum'],
metric_data['n_analyze'],
metric_data['n_autovacuum'],
metric_data['n_autoanalyze']
)
)
)
elif metric == 'replication':
for metric_data in agent_data['replication']:
query = """
INSERT INTO monitoring.metric_replication_current
VALUES (%s, %s, %s)
"""
cur.execute(
query,
(
metric_data['datetime'],
instance_id,
(
None,
metric_data['receive_location'],
metric_data['replay_location']
)
)
)
session.connection().connection.commit()
except Exception as e:
logger.info("Metric data not inserted for '%s' type" % (metric))
logger.debug(agent_data[metric])
logger.exception(str(e))
session.connection().connection.rollback()
def get_host_checks(session, host_id):
# Returns enabled alerting checks as list of tuples:
# (name, warning threshold, critical threshold)
checks = session.query(Check).filter(Check.host_id == host_id)
return [(c.name, c.warning, c.critical)
for c in checks if c.enabled]
def populate_host_checks(session, host_id, instance_id, hostinfo):
# Populate checks table with bootstraped checks if needed
q = session.query(Check)
n = q.filter(Check.host_id == host_id).count()
if n != 0:
return
specs = check_specs
for bc in bootstrap_checks(hostinfo):
c = Check(host_id=host_id,
instance_id=instance_id,
name=bc[0],
enabled=True,
warning=bc[1],
critical=bc[2],
description=specs.get(bc[0]).get('description'))
session.add(c)
session.commit()
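The `insert_metrics` function above repeats the same cursor-execute pattern once per metric type. A table-driven sketch could collapse that long `elif` chain into data. The `METRIC_SPECS` map and the `build_insert` helper are my own illustrative names, only two metrics are shown, and details such as the `str()` wrapping of `measure_interval` are omitted:

```python
# Hedged sketch: drive the per-metric INSERTs from a declarative map instead of
# an if/elif chain. Table names and record fields mirror the branches above.

METRIC_SPECS = {
    # metric name -> (target table, key field or None, record fields)
    'xacts': ('monitoring.metric_xacts_current', 'dbname',
              ['measure_interval', 'n_commit', 'n_rollback']),
    'loadavg': ('monitoring.metric_loadavg_current', None,
                ['load1', 'load5', 'load15']),
}


def build_insert(metric, metric_data, entity_id):
    """Return (query, params) for one metric row, mirroring the elif branches."""
    table, key_field, fields = METRIC_SPECS[metric]
    # The composite record always starts with None, as in the original tuples.
    record = (None,) + tuple(metric_data[f] for f in fields)
    params = [metric_data['datetime'], entity_id]
    placeholders = ['%s', '%s']
    if key_field is not None:
        params.append(metric_data[key_field])
        placeholders.append('%s')
    params.append(record)
    placeholders.append('%s')
    query = "INSERT INTO %s VALUES (%s)" % (table, ", ".join(placeholders))
    return query, tuple(params)
```

With a map like this, the main loop reduces to one `cur.execute(*build_insert(...))` call per row, and adding a metric type becomes a one-line data change rather than a new branch.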
# File: main.py (repo: matr095/A-Day-Calculator, license: Apache-2.0)

from datetime import date
from tkinter import Tk, Label, Button, Text, END
def interval():
    now = date.today()
    yourDay = int(input("What day were you born? "))
    yourMonth = int(input("What month were you born? "))
    yourYear = int(input("What year were you born? "))
    birthday = date(yourYear, yourMonth, yourDay)
    daysPassed = now - birthday
    import os
    os.system('clear')  # clear the terminal screen (POSIX)
    print("You are " + str(daysPassed.days) + " days old!")
def calculate():
    global result
    global response
    try:
        result.destroy()  # remove the previous result label, if any
    except Exception:
        pass  # first run: no label exists yet
    now = date.today()
    yourDay = int(day.get('1.0', END))
    yourMonth = int(month.get('1.0', END))
    yourYear = int(year.get('1.0', END))
    birthday = date(yourYear, yourMonth, yourDay)
    daysPassed = now - birthday
    response = str(daysPassed.days)
    result = Label(root, text="Congratulations! You have lived exactly " + response + " days!")
    result.pack()
root = Tk()
root.minsize(width=640, height=480)
root.maxsize(width=640, height=480)
root.wm_title("A Day Calculator")
Label(root, text="A Day Calculator computes the number of days since your birth!").pack()
calc = Button(root, text="Calculate", command=calculate)
calc.pack()
Label(root, text="Day of birth").pack()
day = Text(root, width="4", height="2", background="gray")
day.pack()
Label(root, text="Month of birth").pack()
month = Text(root, width="4", height="2", background="gray")
month.pack()
Label(root, text="Year of birth").pack()
year = Text(root, width="4", height="2", background="gray")
year.pack()
Label(root, text="-----------------").pack()
root.mainloop()
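The day-count logic in this app can be separated from the Tkinter code entirely. A minimal sketch follows; the `days_since_birth` name and the `today` parameter (added to make the function testable) are my own:

```python
from datetime import date


def days_since_birth(year, month, day, today=None):
    """Return the number of whole days elapsed since the given birth date."""
    today = today or date.today()
    return (today - date(year, month, day)).days
```

This also sidesteps parsing `str(timedelta)` for the day count, since `timedelta.days` provides it directly.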
# File: nodes/sunled_action.py (repo: willdickson/virtual_desert, license: MIT)

import math
import rospy
import numpy as np
import random
from base_action import BaseAction
class SunledAction(BaseAction):
index_to_led_position = {}
def __init__(self,init_angle,device,param,trial_index):
print('sunled action __init__')
super(SunledAction,self).__init__(device,param)
self.init_angle = init_angle
self.position = 0
self.last_update_t = None
self.trial_index = trial_index
def update(self,t,angle):
rval_msg = super(SunledAction,self).update(t,angle)
if self.param['mode'] == 'inherit_from':
pass
if self.last_update_t is None:
self.last_update_t = t
dt = t - self.last_update_t
if dt > self.param['update_period']:
if self.param['mode'] == 'fixed_rate':
self.position = dt*self.param['rate'] + self.position
self.position = np.mod(self.position, self.param['number_of_leds'])
self.device.set_led(int(self.position),self.param['rgb_value'])
self.last_update_t = t
return rval_msg
def start(self):
if not self.is_started:
#rospy.logwarn(self.param['mode'])
if self.param['mode'] in ('fixed_position', 'fixed_rate'):
if self.param['position'] == 'inherit':
inherit_index = self.param['inherit_from']
self.position = self.index_to_led_position[inherit_index]
else:
self.position = self.param['position']
self.device.set_led(int(self.position),self.param['rgb_value'])
self.index_to_led_position[self.trial_index] = self.position
elif self.param['mode'] == 'set_by_angle':
self.position = self.get_position_from_table(self.init_angle)
self.index_to_led_position[self.trial_index] = self.position
if self.position is not None:
self.device.set_led(int(self.position),self.param['rgb_value'])
#fponce edit 23/01/2020
elif self.param['mode'] == 'all_ON':
if self.position is not None:
self.device.set_all(self.param['rgb_value'])
self.index_to_led_position[self.trial_index] = -1
#fponce edit
elif self.param['mode'] == 'inherit_n_set_by_table':
inherit_index = self.param['inherit_from']
self.prev_position = self.index_to_led_position[inherit_index]
self.position = self.get_position_from_ledtable(self.prev_position)
self.index_to_led_position[self.trial_index] = self.position
if self.position is not None:
self.device.set_led(int(self.position),self.param['rgb_value'])
elif self.param['mode'] == 'inherit_from_last':
inherit_index = self.trial_index-1
self.position = self.index_to_led_position[inherit_index]
self.index_to_led_position[self.trial_index] = self.position
elif self.param['mode'] == 'random_from_list':
self.position = self.get_random_from_list(self.param['sunled_position_list'])
self.index_to_led_position[self.trial_index] = self.position
if self.position is not None:
self.device.set_led(int(self.position),self.param['rgb_value'])
##########
else:
                raise ValueError('unknown mode')
#rospy.logwarn(self.position)
self.is_started = True
# def stop(self):
# if not self.is_stopped:
# self.device.set_led(-1,(0,0,0))
# self.is_stopped = True
def get_position_from_table(self,angle):
angle_pair_list = [angle_pair for angle_pair,led_index in self.param['sunled_table']]
led_index_list = [led_index for angle,led_index in self.param['sunled_table']]
position = None
for angle_pair, led_index in zip(angle_pair_list, led_index_list):
lower_angle, upper_angle = angle_pair
if lower_angle <= self.init_angle and self.init_angle <= upper_angle:
position = led_index
break
return position
######################
#fponce edit
def get_position_from_ledtable(self,prev_position):
prev_led_list = [prev_led for prev_led,led_index in self.param['led_to_led_table']]
led_index_list = [led_index for old_led,led_index in self.param['led_to_led_table']]
position = None
for prev_led, led_index in zip(prev_led_list, led_index_list):
if prev_led == prev_position:
position = led_index
return position
def get_random_from_list(self, position_list):
position = random.choice(position_list)
return position
# File: Base64_Cleanup.py (repo: Har6ard/HackTheBox, license: MIT)

#!/usr/bin/python3
"""
This script is just to clean up Base64 if is has � present in the output.
Example:
$�G�r�o�U�P�P�O�L�i�C�Y�S�E�t�t�I�N�G�s� �=� �[�r�E�F�]�.�A�S�s�e�M�B�L�Y�.�G�E�t�T�y�p�E�
$GroUPPOLiCYSEttINGs = [rEF].ASseMBLY.GEtTypE
"""
with open("./target.txt", "r") as f_obj:
data = f_obj.read()
cleaned = data.replace("�", "")
print(cleaned)
# File: setup.py (repo: jonbulica99/deeplator, license: MIT)

#!/usr/bin/env python3
from setuptools import setup
setup(
name="deeplator",
version="0.0.7",
description="Wrapper for DeepL translator.",
long_description="Deeplator is a library enabling translation via the DeepL translator.",
author="uinput",
author_email="uinput@users.noreply.github.com",
license="MIT",
url="https://github.com/uinput/deeplator",
keywords=["deepl", "translation", "translate", "language"],
python_requires=">=3",
packages=["deeplator"],
py_modules=["deeplator"],
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
]
)
# File: blogs/migrations/0012_auto_20200601_1247.py (repo: daaawx/bearblog, license: MIT)

# Generated by Django 3.0.6 on 2020-06-01 12:47
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('blogs', '0011_auto_20200531_0915'),
]
operations = [
migrations.RemoveField(
model_name='post',
name='tags',
),
migrations.AddField(
model_name='blog',
name='hashtags',
field=models.TextField(blank=True),
),
]
# File: app/setup.py (repo: cleve/varidb, license: Apache-2.0)

from setuptools import setup
# with open("../README.md", "r") as fh:
# long_description = fh.read()
setup(name='pulzar-pkg',
version='21.4.1',
author='Mauricio Cleveland',
author_email='mauricio.cleveland@gmail.com',
description='Distributed database and jobs',
# long_description=long_description,
# long_description_content_type="text/markdown",
# data_files=[('/var/lib/pulzar/data', [])],
url='http://github.com/cleve/pulzar',
packages=['pulzarcore', 'pulzarutils'],
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux"
],
python_requires='>=3.6',
)
# File: cnc/migrations/0001_initial.py (repo: andrewmallory/pan-cnc, license: Apache-2.0)

# Generated by Django 3.0.5 on 2020-05-26 13:58
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='RepositoryDetails',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
('url', models.CharField(max_length=512)),
('deploy_key_path', models.CharField(default='', max_length=128, null='')),
('deploy_key_priv', models.CharField(default='', max_length=2048, null='')),
('deploy_key_pub', models.CharField(default='', max_length=2048, null='')),
('details_json', models.TextField(max_length=2048)),
],
),
migrations.CreateModel(
name='Skillet',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200, unique=True)),
('skillet_json', models.TextField(default='', max_length=2048)),
('repository', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='cnc.RepositoryDetails')),
],
),
]
# File: core/migrations/0003_auto_20180730_1452.py (repo: CobwebOrg/cobweb-django, license: MIT)

# Generated by Django 2.0.7 on 2018-07-30 21:52
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core', '0002_auto_20180730_1451'),
]
operations = [
migrations.AddField(
model_name='resourcescan',
name='redirect_url',
field=models.URLField(blank=True, null=True),
),
migrations.AlterUniqueTogether(
name='resourcescan',
unique_together={('resource', 'is_active', 'redirect_url')},
),
]
# File: csr/csr.py (repo: AlexJanse/python_csr2transmart, license: MIT)

from datetime import date
from typing import Sequence, Optional, Union, Dict, List, Any
from pydantic import BaseModel, validator, Field
from csr.entity_validation import validate_entity_data
from csr.exceptions import DataException
class Individual(BaseModel):
"""
Individual entity
"""
individual_id: str = Field(..., min_length=1, identity=True)
taxonomy: Optional[str]
gender: Optional[str]
birth_date: Optional[date]
death_date: Optional[date]
ic_type: Optional[str]
ic_version: Optional[float]
ic_given_date: Optional[date]
ic_withdrawn_date: Optional[date]
report_her_susc: Optional[str]
report_inc_findings: Optional[str]
diagnosis_count: Optional[int]
age_first_diagnosis: Optional[int]
class Diagnosis(BaseModel):
"""
Diagnosis entity
"""
diagnosis_id: str = Field(..., min_length=1, identity=True)
individual_id: str = Field(..., min_length=1, references='Individual')
tumor_type: Optional[str]
topography: Optional[str]
treatment_protocol: Optional[str]
tumor_stage: Optional[str]
diagnosis_date: Optional[date]
diagnosis_center: Optional[str]
class Biosource(BaseModel):
"""
Biosource entity
"""
biosource_id: str = Field(..., min_length=1, identity=True)
biosource_dedicated: Optional[str]
individual_id: str = Field(..., min_length=1, references='Individual')
diagnosis_id: Optional[str] = Field(None, min_length=1, references='Diagnosis')
src_biosource_id: Optional[str] = Field(None, min_length=1, references='Biosource')
tissue: Optional[str]
biosource_date: Optional[date]
disease_status: Optional[str]
tumor_percentage: Optional[int]
@validator('src_biosource_id')
def check_self_reference(cls, src_biosource_id, values):
if src_biosource_id == values['biosource_id']:
            raise DataException('Biosource cannot be derived from itself')
return src_biosource_id
class Biomaterial(BaseModel):
"""
Biomaterial entity
"""
biomaterial_id: str = Field(..., min_length=1, identity=True)
src_biosource_id: str = Field(..., min_length=1, references='Biosource')
src_biomaterial_id: Optional[str] = Field(None, min_length=1, references='Biomaterial')
biomaterial_date: Optional[date]
type: Optional[str]
library_strategy: Optional[List[str]]
analysis_type: Optional[List[str]]
@validator('src_biomaterial_id')
def check_self_reference(cls, src_biomaterial_id, values):
if src_biomaterial_id == values['biomaterial_id']:
            raise DataException('Biomaterial cannot be derived from itself')
return src_biomaterial_id
@validator('library_strategy')
def validate_molecule_type_agrees_with_library_strategy(cls, library_strategy, values):
        if 'type' in values and library_strategy is not None:
            if values['type'] == 'DNA' and 'RNA-Seq' in library_strategy:
                raise DataException('Not allowed RNA-Seq library strategy for molecule type: DNA')
            if values['type'] == 'RNA' and 'WXS' in library_strategy:
                raise DataException('Not allowed WXS library strategy for molecule type: RNA')
            if values['type'] == 'RNA' and 'WGS' in library_strategy:
                raise DataException('Not allowed WGS library strategy for molecule type: RNA')
            if values['type'] == 'RNA' and 'DNA-meth_array' in library_strategy:
                raise DataException('Not allowed DNA-meth_array library strategy for molecule type: RNA')
        return library_strategy
class Study(BaseModel):
"""
Study
"""
study_id: str = Field(..., min_length=1, identity=True)
acronym: Optional[str]
title: Optional[str]
datadictionary: Optional[str]
class IndividualStudy(BaseModel):
"""
Study to individual mapping
"""
study_id_individual_study_id: str = Field(..., min_length=1, identity=True)
individual_study_id: str
individual_id: str = Field(..., min_length=1, references='Individual')
study_id: str = Field(..., min_length=1, references='Study')
SubjectEntity = Union[Individual, Diagnosis, Biosource, Biomaterial]
class CentralSubjectRegistry(BaseModel):
"""
Central subject registry
"""
entity_data: Dict[str, Sequence[Any]]
@staticmethod
def create(entity_data: Dict[str, Sequence[Any]]):
validate_entity_data(entity_data, list(SubjectEntity.__args__))
return CentralSubjectRegistry(entity_data=entity_data)
StudyEntity = Union[Study, IndividualStudy]
class StudyRegistry(BaseModel):
"""
Study registry
"""
entity_data: Dict[str, Sequence[Any]]
@staticmethod
def create(entity_data: Dict[str, Sequence[Any]]):
validate_entity_data(entity_data, list(StudyEntity.__args__))
return StudyRegistry(entity_data=entity_data)
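The molecule-type versus library-strategy rule enforced in the `Biomaterial` validator can also be expressed as a standalone check that does not depend on pydantic. The `INCOMPATIBLE_STRATEGIES` map and `check_library_strategy` function below are illustrative names of my own; the allowed/forbidden pairs are taken from the validator above:

```python
# Sketch of the Biomaterial library-strategy rule as a plain function.
INCOMPATIBLE_STRATEGIES = {
    'DNA': {'RNA-Seq'},
    'RNA': {'WXS', 'WGS', 'DNA-meth_array'},
}


def check_library_strategy(molecule_type, library_strategy):
    """Return the sorted list of strategies not allowed for the molecule type."""
    if molecule_type is None or library_strategy is None:
        return []
    banned = INCOMPATIBLE_STRATEGIES.get(molecule_type, set())
    return sorted(set(library_strategy) & banned)
```

A caller would raise `DataException` whenever the returned list is non-empty, which keeps the rule table easy to extend with new molecule types.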
# File: marketplaces/cron_report_daily_activity.py (repo: diassor/CollectorCity-Market-Place, license: Apache-2.0)

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import logging
import datetime
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
from django.core.management import setup_environ
from django.core.mail import send_mail
#from django.db import transaction
import settings
setup_environ(settings)
"""
Daily Activity (Sign Up / Cancel)

Total Customers
Total Sign Ups This Month
Total Sign Ups Today
Total Cancellations This Month
Total Cancellations Today
"""
def report_daily_activity():
from django.core.mail import EmailMultiAlternatives, EmailMessage
from django.template import Context, loader
from reports.views import get_daily_activity_data
day = datetime.datetime.now()
try:
t_txt = loader.get_template("admin/mail/daily_activity_report.txt")
t_html = loader.get_template("admin/mail/daily_activity_report.html")
c = get_daily_activity_data(day)
subject, from_email, to = 'Daily Activity Report', "no-reply@greatcoins.com", "admin@greatcoins.com"
text_content = t_txt.render(Context(c))
html_content = t_html.render(Context(c))
msg = EmailMultiAlternatives(subject, text_content, from_email, [to])
msg.attach_alternative(html_content, "text/html")
msg.send()
    except Exception as e:
logging.info(e)
mail = EmailMessage(subject='Error when trying to generate Daily Activity Report',
body=e,
from_email=settings.EMAIL_FROM,
to=[mail for (name, mail) in settings.STAFF],
headers={'X-SMTPAPI': '{\"category\": \"Error\"}'})
mail.send(fail_silently=True)
# send_mail('Error when trying to generate Daily Activity Report', e , settings.EMAIL_FROM, [mail for (name, mail) in settings.STAFF], fail_silently=True)
if __name__ == "__main__":
report_daily_activity() | 35.210526 | 161 | 0.655705 | 243 | 2,007 | 5.226337 | 0.366255 | 0.102362 | 0.074803 | 0.028346 | 0.261417 | 0.187402 | 0.187402 | 0.140157 | 0 | 0 | 0 | 0.000662 | 0.247633 | 2,007 | 57 | 162 | 35.210526 | 0.840397 | 0.11709 | 0 | 0 | 0 | 0 | 0.169823 | 0.074495 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.272727 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
98420fbde9020acf9d2baa635fdb24d604a951dc | 3,882 | py | Python | utils/usergrid-util-python/usergrid_tools/general/queue_monitor.py | snoopdave/incubator-usergrid | 104f64eb4f318221f0d11bf43baad0e7c630cafe | [
"Apache-2.0"
] | 788 | 2015-08-21T16:46:57.000Z | 2022-03-16T01:57:44.000Z | utils/usergrid-util-python/usergrid_tools/general/queue_monitor.py | snoopdave/incubator-usergrid | 104f64eb4f318221f0d11bf43baad0e7c630cafe | [
"Apache-2.0"
] | 101 | 2015-08-23T04:58:13.000Z | 2019-11-13T07:02:57.000Z | utils/usergrid-util-python/usergrid_tools/general/queue_monitor.py | snoopdave/incubator-usergrid | 104f64eb4f318221f0d11bf43baad0e7c630cafe | [
"Apache-2.0"
] | 342 | 2015-08-22T06:14:20.000Z | 2022-03-15T01:20:39.000Z | # */
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements. See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership. The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License. You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing,
# * software distributed under the License is distributed on an
# * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# * KIND, either express or implied. See the License for the
# * specific language governing permissions and limitations
# * under the License.
# */
import argparse
import json
import datetime
import os
import time
import sys
import boto
from boto import sqs
### This monitors an SQS queue and measures the delta message count between polling intervals to infer the amount of time
### remaining to fully drain the queue
__author__ = 'Jeff.West@yahoo.com'
def total_seconds(td):
    return (td.microseconds + (td.seconds + td.days * 24.0 * 3600) * 10.0 ** 6) / 10.0 ** 6


def total_milliseconds(td):
    return (td.microseconds + td.seconds * 1000000) / 1000


def get_time_remaining(count, rate):
    if rate == 0:
        return 'NaN'

    seconds = count * 1.0 / rate
    m, s = divmod(seconds, 60)
    h, m = divmod(m, 60)

    return "%d:%02d:%02d" % (h, m, s)
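For reference, the estimator above behaves like this. This is a Python 3 rendition for quick experimentation, kept separate from the Python 2 script; it is an illustration, not part of the archived file:

```python
def get_time_remaining(count, rate):
    # Estimated drain time as H:MM:SS given a message count and a
    # per-second processing rate; mirrors the helper in the script above.
    if rate == 0:
        return 'NaN'
    seconds = count * 1.0 / rate
    m, s = divmod(seconds, 60)
    h, m = divmod(m, 60)
    return "%d:%02d:%02d" % (h, m, s)


# 3600 queued messages draining at 10 messages/second -> six minutes
print(get_time_remaining(3600, 10))   # 0:06:00
print(get_time_remaining(5, 0))       # NaN (no progress, no estimate)
```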
def parse_args():
    parser = argparse.ArgumentParser(description='Usergrid Loader - Queue Monitor')

    parser.add_argument('-c', '--config',
                        help='The queue to load into',
                        type=str,
                        default='%s/.usergrid/queue_monitor.json' % os.getenv("HOME"))

    parser.add_argument('-q', '--queue_name',
                        help='The queue name to send messages to. If not specified the filename is used',
                        default='entities',
                        type=str)

    my_args = parser.parse_args(sys.argv[1:])
    print str(my_args)
    return vars(my_args)
def main():
    args = parse_args()
    queue_name = args.get('queue_name')
    print 'queue_name=%s' % queue_name
    start_time = datetime.datetime.utcnow()
    first_start_time = start_time
    print "first start: %s" % first_start_time

    with open(args.get('config'), 'r') as f:
        config = json.load(f)

    sqs_config = config.get('sqs')
    last_time = datetime.datetime.utcnow()
    sqs_conn = boto.sqs.connect_to_region(**sqs_config)
    queue = sqs_conn.get_queue(queue_name)
    last_size = queue.count()
    first_size = last_size
    print 'Starting Size: %s' % last_size
    sleep = 10
    time.sleep(sleep)
    rate_sum = 0
    rate_count = 0

    while True:
        size = queue.count()
        time_stop = datetime.datetime.utcnow()
        time_delta = total_seconds(time_stop - last_time)
        agg_time_delta = total_seconds(time_stop - first_start_time)
        agg_size_delta = first_size - size
        agg_messages_rate = 1.0 * agg_size_delta / agg_time_delta
        size_delta = last_size - size
        messages_rate = 1.0 * size_delta / time_delta
        rate_sum += messages_rate
        rate_count += 1
        print '%s | %s | Size: %s | Processed: %s | Last: %s | Avg: %s | Count: %s | agg rate: %s | Remaining: %s' % (
            datetime.datetime.utcnow(),
            queue_name,
            size, size_delta, round(messages_rate, 2),
            round(rate_sum / rate_count, 2), rate_count,
            round(agg_messages_rate, 2),
            get_time_remaining(size, agg_messages_rate))
        last_size = size
        last_time = time_stop
        time.sleep(sleep)


if __name__ == '__main__':
    main()
| 27.928058 | 121 | 0.638073 | 533 | 3,882 | 4.470919 | 0.339587 | 0.030214 | 0.036928 | 0.013428 | 0.050357 | 0.050357 | 0 | 0 | 0 | 0 | 0 | 0.018743 | 0.257857 | 3,882 | 138 | 122 | 28.130435 | 0.8084 | 0.243689 | 0 | 0.026316 | 0 | 0.013158 | 0.137066 | 0.010649 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.105263 | null | null | 0.065789 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9843d82030095d57d6ca2780e6e91c3791600765 | 433 | py | Python | src/hexdump2/__init__.py | HGrooms/hexdump2 | bd9c5de0ac7cd82d0a77c0b5561e5cb12eb7c2b3 | [
"MIT"
] | null | null | null | src/hexdump2/__init__.py | HGrooms/hexdump2 | bd9c5de0ac7cd82d0a77c0b5561e5cb12eb7c2b3 | [
"MIT"
] | null | null | null | src/hexdump2/__init__.py | HGrooms/hexdump2 | bd9c5de0ac7cd82d0a77c0b5561e5cb12eb7c2b3 | [
"MIT"
] | null | null | null | """
mirrors functionality of hexdump(1) and API interface of Python hexdump package.

Usage:
1. Within Python:
    from hexdump2 import hexdump, color_always

    # Enable or disable color all the time
    color_always()

    hexdump(bytes-like data)

2. From commandline, run the console scripts hexdump2 or hd2
    $ hd2 -h
"""
# Import for everyone to use
from .hexdump2 import hexdump, hd, color_always
__all__ = ["hexdump", "hd", "color_always"]
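The package's real formatter lives in `hexdump2.py` and is not shown in this record. Purely as an illustration of the output format the docstring refers to, a minimal hexdump(1)-style line builder (`hexdump_line` is a hypothetical name, not part of the package) might look like:

```python
def hexdump_line(offset, chunk):
    # Format one chunk of up to 16 bytes like `hexdump -C`:
    # an 8-digit hex offset, space-separated hex bytes padded to a
    # fixed column width, then a printable-ASCII column between pipes.
    hex_part = ' '.join('%02x' % b for b in chunk).ljust(16 * 3 - 1)
    ascii_part = ''.join(chr(b) if 32 <= b < 127 else '.' for b in chunk)
    return '%08x  %s  |%s|' % (offset, hex_part, ascii_part)


print(hexdump_line(0, b'hello world!'))
```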
| 20.619048 | 80 | 0.752887 | 64 | 433 | 4.96875 | 0.59375 | 0.138365 | 0.113208 | 0.157233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021978 | 0.159353 | 433 | 20 | 81 | 21.65 | 0.851648 | 0.759815 | 0 | 0 | 0 | 0 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
984c178778bd5a8ed38c3fb9100e9a5a25991cad | 3,050 | py | Python | ok_redirects/models.py | LowerDeez/ok-redirects | 5eb317a8aeae70a4899a06c4eefa9a088f269a03 | [
"MIT"
] | 1 | 2021-01-04T08:58:09.000Z | 2021-01-04T08:58:09.000Z | ok_redirects/models.py | LowerDeez/ok-redirects | 5eb317a8aeae70a4899a06c4eefa9a088f269a03 | [
"MIT"
] | null | null | null | ok_redirects/models.py | LowerDeez/ok-redirects | 5eb317a8aeae70a4899a06c4eefa9a088f269a03 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.contrib.sites.models import Site
from django.db import models
from django.utils.translation import pgettext_lazy
from .constants import REDIRECT_TYPE_CHOICES, REDIRECT_301
from .fields import MultipleChoiceArrayField
__all__ = (
'Redirect',
)
LANGUAGES = getattr(settings, 'LANGUAGES', [])
class Redirect(models.Model):
    site = models.ForeignKey(
        Site,
        models.CASCADE,
        verbose_name=pgettext_lazy("ok:redirects", 'site')
    )
    old_path = models.CharField(
        pgettext_lazy("ok:redirects", 'redirect from'),
        max_length=250,
        db_index=True,
        help_text=pgettext_lazy(
            "ok:redirects",
            "This should be an absolute path, "
            "excluding the domain name. Example: '/events/search/'."
        ),
    )
    languages = MultipleChoiceArrayField(
        models.CharField(
            max_length=2,
            choices=LANGUAGES,
            blank=True
        ),
        blank=True,
        default=[lang[0] for lang in LANGUAGES] if LANGUAGES else list,
        verbose_name=pgettext_lazy("ok:redirects", "Languages to check redirect")
    )
    is_ignore_get_params = models.BooleanField(
        pgettext_lazy("ok:redirects", 'Ignore GET parameters'),
        default=True
    )
    new_path = models.CharField(
        pgettext_lazy("ok:redirects", 'redirect to'),
        blank=True,
        max_length=250,
        help_text=pgettext_lazy(
            "ok:redirects",
            "This can be either an absolute path (as above) "
            "or a full URL starting with 'http://'."
        ),
    )
    to_language = models.CharField(
        pgettext_lazy("ok:redirects", 'to language'),
        blank=True,
        choices=LANGUAGES,
        max_length=5,
        help_text=pgettext_lazy(
            "ok:redirects",
            "Leave blank to redirect to the current language on the site"
        ),
    )
    status_code = models.PositiveSmallIntegerField(
        db_index=True,
        choices=REDIRECT_TYPE_CHOICES,
        default=REDIRECT_301,
        verbose_name=pgettext_lazy("ok:redirects", 'Status code'),
        help_text=pgettext_lazy(
            "ok:redirects",
            'The redirect http status code.'
        )
    )
    counter = models.PositiveIntegerField(
        blank=True,
        default=0,
        verbose_name=pgettext_lazy("ok:redirects", 'Counter'),
    )
    is_active = models.BooleanField(
        pgettext_lazy("ok:redirects", 'Is active'),
        default=True,
        db_index=True,
    )

    class Meta:
        db_table = 'ok_redirects'
        ordering = ('old_path',)
        unique_together = (('site', 'old_path'),)
        verbose_name = pgettext_lazy("ok:redirects", 'redirect')
        verbose_name_plural = pgettext_lazy("ok:redirects", 'redirects')

    def __str__(self):
        return (
            f"{pgettext_lazy('ok:redirects', 'Redirect')} "
            f"{self.status_code}: "
            f"`{self.old_path}` ---> `{self.new_path}`"
        )
| 30.5 | 81 | 0.605902 | 326 | 3,050 | 5.469325 | 0.322086 | 0.114414 | 0.125631 | 0.206394 | 0.314638 | 0.292765 | 0.095345 | 0.056085 | 0 | 0 | 0 | 0.007329 | 0.284262 | 3,050 | 99 | 82 | 30.808081 | 0.809437 | 0 | 0 | 0.25 | 0 | 0 | 0.237705 | 0.009836 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01087 | false | 0 | 0.065217 | 0.01087 | 0.206522 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
984deaa3e520bef0fbb915ca5c77f71dc4dc260d | 307 | py | Python | driver/urls.py | mzazakeith/uber-clone | 02f0be423681be5517182b278176feede1126b7a | [
"MIT"
] | 1 | 2022-01-13T19:31:56.000Z | 2022-01-13T19:31:56.000Z | driver/urls.py | mzazakeith/uber-clone | 02f0be423681be5517182b278176feede1126b7a | [
"MIT"
] | 5 | 2020-02-12T01:28:33.000Z | 2021-06-10T20:44:23.000Z | driver/urls.py | mzazakeith/uber-clone | 02f0be423681be5517182b278176feede1126b7a | [
"MIT"
] | null | null | null | from django.conf.urls import url, include
from driver import views
# from djgeojson.views import GeoJSONLayerView
# from driver.models import Points
urlpatterns = [
    url(r'^new/driver$', views.create_driver_profile, name='new-driver-profile'),
    url(r'^new/car$', views.submit_car, name='new-car'),
] | 30.7 | 81 | 0.742671 | 44 | 307 | 5.113636 | 0.477273 | 0.088889 | 0.062222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123779 | 307 | 10 | 82 | 30.7 | 0.836431 | 0.250814 | 0 | 0 | 0 | 0 | 0.201754 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
98501d1d0a0e678ce5a1a02156e3060dda0fe5b8 | 2,286 | py | Python | dodo.py | spandanb/textwalker | 0ebabf5a1cf9142b7ca031d7070ba7cf1d118e34 | [
"MIT"
] | 2 | 2021-05-07T23:41:32.000Z | 2021-05-08T15:52:08.000Z | dodo.py | spandanb/textwalker | 0ebabf5a1cf9142b7ca031d7070ba7cf1d118e34 | [
"MIT"
] | null | null | null | dodo.py | spandanb/textwalker | 0ebabf5a1cf9142b7ca031d7070ba7cf1d118e34 | [
"MIT"
] | null | null | null | """
doit docs: https://pydoit.org/cmd_run.html
"""
import pdoc
import os
import os.path
def generate_docs(docs_dir: str):
    """
    python callable that creates docs like docs/textwalker.html, docs/patternparser.py

    Args:
        docs_dir: location to output docs to
    """
    if not os.path.exists(docs_dir):
        print(f'{docs_dir} does not exist; creating dir')
        os.mkdir(docs_dir)

    mod_names = ["textwalker", "textwalker.textwalker", "textwalker.pattern_parser", "textwalker.utils"]
    context = pdoc.Context()
    modules = [pdoc.Module(mod, context=context)
               for mod in mod_names]
    pdoc.link_inheritance(context)

    for module in modules:
        if module.name == "textwalker":
            filepath = os.path.join(docs_dir, 'index.html')
        else:
            pkg, modname = module.name.split('.')
            filename = f'{modname}.html'
            filepath = os.path.join(docs_dir, filename)
        with open(filepath, 'w', encoding='utf-8') as fp:
            fp.write(module.html())
        print(f'wrote docs for module {module.name} to {filepath}')
def task_run_tests():
    """
    run tests
    """
    task = {
        'actions': ['pytest textwalker'],
        'verbosity': 2
    }
    return task


def task_run_tests_with_codecov():
    """
    run tests with codecov
    """
    task = {
        'actions': ['pytest --cov=textwalker'],
        'verbosity': 2
    }
    return task


def task_run_flake8():
    """
    calls flake8 linter
    """
    task = {
        'actions': ['flake8 textwalker'],
        'verbosity': 2
    }
    return task


def task_run_black():
    """
    calls black code formatter
    """
    task = {
        'actions': ['black textwalker'],
        'verbosity': 2
    }
    return task


def task_run_pdoc_cli():
    """
    calls pdoc via CLI

    pdoc3 : https://pdoc3.github.io/pdoc/doc/pdoc/#programmatic-usage
    """
    task = {
        'actions': ['pdoc3 --html --force textwalker -o docs'],
        'verbosity': 2
    }
    return task


def task_run_pdoc():
    """
    calls pdoc via python

    pdoc3 : https://pdoc3.github.io/pdoc/doc/pdoc/#programmatic-usage
    """
    task = {
        'actions': [(generate_docs, ('docs',))],
        'verbosity': 2
    }
    return task
| 22.194175 | 104 | 0.575241 | 267 | 2,286 | 4.816479 | 0.348315 | 0.038103 | 0.046656 | 0.093313 | 0.311042 | 0.289269 | 0.250389 | 0.250389 | 0.096423 | 0.096423 | 0 | 0.009186 | 0.285652 | 2,286 | 102 | 105 | 22.411765 | 0.778322 | 0.185914 | 0 | 0.310345 | 1 | 0 | 0.238453 | 0.026559 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12069 | false | 0 | 0.051724 | 0 | 0.275862 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
98594e8f28f9b9c4731a1d8dca33c92082e2d4cf | 2,451 | py | Python | examples/main_simulation_lemon_graph.py | KaterynaMelnyk/GraphKKE | 4651f1a5e75e23ad0b84403151f7000ab8f292eb | [
"MIT"
] | 1 | 2021-07-23T08:47:05.000Z | 2021-07-23T08:47:05.000Z | examples/main_simulation_lemon_graph.py | k-melnyk/graphKKE | 4651f1a5e75e23ad0b84403151f7000ab8f292eb | [
"MIT"
] | null | null | null | examples/main_simulation_lemon_graph.py | k-melnyk/graphKKE | 4651f1a5e75e23ad0b84403151f7000ab8f292eb | [
"MIT"
] | null | null | null | import os
import argparse
import numpy as np
import scipy
import imageio
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import graphkke.generate_graphs.graph_generation as graph_generation
import graphkke.generate_graphs.generate_SDE as generate_SDE
parser = argparse.ArgumentParser()
parser.add_argument('--input_dir', type=str,
                    default='/home/katerynam/work/data/artificial/test/')
parser.add_argument('--n_graphs', type=int,
                    default=500)
parser.add_argument('--n_nodes', type=int,
                    default=300)
parser.add_argument('--radius', type=float,
                    default=0.6)
parser.add_argument('--n_wells', type=int,
                    default=3)
parser.add_argument('--out_state', type=int,
                    default=0.1)
parser.add_argument('--if_plot', type=bool,
                    default=True)
parser.add_argument('--seed', type=int,
                    default=7)
args = parser.parse_args()
def randb(n, b):
    return b[0] + (b[1] - b[0]) * scipy.rand(1, n)


def rand(n, bounds, boxes):
    d = boxes.size
    x = np.zeros([d, n])
    for i in range(d):
        x[i, :] = randb(n, bounds[i, :])
    return x
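Note that `scipy.rand` has long been removed from SciPy. A NumPy-only sketch of the same box-uniform sampler (`rand_numpy` is an illustrative name, not part of graphkke) could read:

```python
import numpy as np


def rand_numpy(n, bounds):
    # Draw n points uniformly inside a (d, 2) array of [low, high]
    # bounds per dimension, mirroring randb()/rand() above without
    # relying on the removed scipy.rand function.
    bounds = np.asarray(bounds, dtype=float)
    low, high = bounds[:, 0:1], bounds[:, 1:2]
    return low + (high - low) * np.random.rand(bounds.shape[0], n)


x = rand_numpy(100, [[-0.5, 0.5], [-0.5, 0.5]])  # shape (2, 100)
```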
if __name__ == '__main__':
    lm = generate_SDE.LemonSlice2D([0.9, 0.9], args.n_graphs, 2, args.n_wells)
    x = rand(1, np.asarray([[-0.5, 0.5], [-0.5, 0.5]]), np.asarray([10, 10]))
    sde_traj = np.asarray(lm.sim_determ_system(x[:, 0]))

    k_means = KMeans(n_clusters=args.n_wells).fit(sde_traj)
    graph_states = k_means.labels_

    # sde_traj = np.load(args.input_dir + 'traj.npy')
    # graph_states = np.load(args.input_dir + 'graph_states.npy')

    plt.scatter(sde_traj[:, 0], sde_traj[:, 1], c=graph_states)
    plt.show()

    sim_graph = graph_generation.LemonGraph(args.radius, args.n_graphs, args.n_nodes,
                                            graph_states)
    graphs, images, node_points = sim_graph.create_adj_matrix(sde_traj, args.out_state, args.if_plot)

    for i, image in enumerate(images):
        imageio.imwrite(args.input_dir + f'/traj_{i}.png', image)
    imageio.mimsave(args.input_dir + '/anim.gif', images, fps=2)

    np.save(os.path.join(args.input_dir + 'traj.npy'), sde_traj)
    np.save(os.path.join(args.input_dir + 'graphs.npy'), graphs)
    np.save(os.path.join(args.input_dir + 'graph_states.npy'), graph_states)
    np.save(os.path.join(args.input_dir + 'node_points.npy'), node_points)
| 32.68 | 101 | 0.641779 | 365 | 2,451 | 4.106849 | 0.317808 | 0.048032 | 0.090727 | 0.032021 | 0.132088 | 0.106738 | 0.074716 | 0.074716 | 0 | 0 | 0 | 0.020103 | 0.208486 | 2,451 | 74 | 102 | 33.121622 | 0.752577 | 0.043656 | 0 | 0 | 1 | 0 | 0.082906 | 0.017949 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037736 | false | 0 | 0.169811 | 0.018868 | 0.245283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
98598ee67261e34d645c03e2285132218a35b61d | 741 | py | Python | tests/io/open_plus.py | peterson79/pycom-micropython-sigfox | 3f93fc2c02567c96f18cff4af9125db8fd7a6fb4 | [
"MIT"
] | 37 | 2017-12-07T15:49:29.000Z | 2022-03-16T16:01:38.000Z | tests/io/open_plus.py | peterson79/pycom-micropython-sigfox | 3f93fc2c02567c96f18cff4af9125db8fd7a6fb4 | [
"MIT"
] | 17 | 2016-06-20T09:06:14.000Z | 2016-08-21T10:09:39.000Z | tests/io/open_plus.py | peterson79/pycom-micropython-sigfox | 3f93fc2c02567c96f18cff4af9125db8fd7a6fb4 | [
"MIT"
] | 22 | 2016-08-01T01:35:30.000Z | 2022-03-22T18:12:23.000Z | import sys
try:
    import uos as os
except ImportError:
    import os

if not hasattr(os, "unlink"):
    print("SKIP")
    sys.exit()

# cleanup in case testfile exists
try:
    os.unlink("testfile")
except OSError:
    pass

try:
    f = open("testfile", "r+b")
    print("Unexpectedly opened non-existing file")
except OSError:
    print("Expected OSError")
    pass
f = open("testfile", "w+b")
f.write(b"1234567890")
f.seek(0)
print(f.read())
f.close()
# Open with truncation
f = open("testfile", "w+b")
f.write(b"abcdefg")
f.seek(0)
print(f.read())
f.close()
# Open without truncation
f = open("testfile", "r+b")
f.write(b"1234")
f.seek(0)
print(f.read())
f.close()
# cleanup
try:
    os.unlink("testfile")
except OSError:
    pass
| 15.122449 | 50 | 0.643725 | 115 | 741 | 4.147826 | 0.382609 | 0.041929 | 0.109015 | 0.050314 | 0.461216 | 0.398323 | 0.398323 | 0.247379 | 0.109015 | 0 | 0 | 0.028099 | 0.183536 | 741 | 48 | 51 | 15.4375 | 0.760331 | 0.11336 | 0 | 0.675676 | 0 | 0 | 0.220859 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.081081 | 0.108108 | 0 | 0.108108 | 0.162162 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
986bbe481dae578c33047776f9bf717531d791dd | 588 | py | Python | backend/api/migrations/0005_education.py | EmileSchneider/cityrepo | fc9d84016342f7cc83231aab9853b89c37d542f9 | [
"MIT"
] | null | null | null | backend/api/migrations/0005_education.py | EmileSchneider/cityrepo | fc9d84016342f7cc83231aab9853b89c37d542f9 | [
"MIT"
] | null | null | null | backend/api/migrations/0005_education.py | EmileSchneider/cityrepo | fc9d84016342f7cc83231aab9853b89c37d542f9 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.5 on 2020-12-11 14:48
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('api', '0004_auto_20201211_1411'),
    ]

    operations = [
        migrations.CreateModel(
            name='Education',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('institution', models.CharField(max_length=200)),
                ('description', models.CharField(max_length=2000)),
            ],
        ),
    ]
| 26.727273 | 114 | 0.585034 | 60 | 588 | 5.6 | 0.766667 | 0.089286 | 0.107143 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090476 | 0.285714 | 588 | 21 | 115 | 28 | 0.709524 | 0.076531 | 0 | 0 | 1 | 0 | 0.112754 | 0.042514 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
986cf0c2c336b0be083773fcf498596f4ecc8d23 | 216 | py | Python | Chicago Data Clean/categories_chicago.py | minxstm/Bootcamp_Project_1 | 1f5d15faab605f7e5b678b6eca802eca04351189 | [
"MIT"
] | null | null | null | Chicago Data Clean/categories_chicago.py | minxstm/Bootcamp_Project_1 | 1f5d15faab605f7e5b678b6eca802eca04351189 | [
"MIT"
] | null | null | null | Chicago Data Clean/categories_chicago.py | minxstm/Bootcamp_Project_1 | 1f5d15faab605f7e5b678b6eca802eca04351189 | [
"MIT"
] | null | null | null | import pandas as pd
chicago_df=pd.read_csv("Chicago_Crime_2015-2017.csv")
#print(chicago_df.head())
chicago_vc = chicago_df["Primary Type"].value_counts()
pd.DataFrame(chicago_vc).to_csv("crime_types_chicago_1.csv")
| 36 | 60 | 0.805556 | 37 | 216 | 4.351351 | 0.594595 | 0.167702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043902 | 0.050926 | 216 | 5 | 61 | 43.2 | 0.741463 | 0.111111 | 0 | 0 | 0 | 0 | 0.335079 | 0.272251 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
986f08850a9f3156a8739237ba8493743a09c545 | 1,904 | py | Python | vespa/simulation/auto_gui/experiment_list.py | vespa-mrs/vespa | 6d3e84a206ec427ac1304e70c7fadf817432956b | [
"BSD-3-Clause"
] | null | null | null | vespa/simulation/auto_gui/experiment_list.py | vespa-mrs/vespa | 6d3e84a206ec427ac1304e70c7fadf817432956b | [
"BSD-3-Clause"
] | 4 | 2021-04-17T13:58:31.000Z | 2022-01-20T14:19:57.000Z | vespa/simulation/auto_gui/experiment_list.py | vespa-mrs/vespa | 6d3e84a206ec427ac1304e70c7fadf817432956b | [
"BSD-3-Clause"
] | 3 | 2021-06-05T16:34:57.000Z | 2022-01-19T16:13:22.000Z | # -*- coding: UTF-8 -*-
#
# generated by wxGlade 0.9.3 on Wed Sep 11 13:50:00 2019
#
import wx
# begin wxGlade: dependencies
# end wxGlade
# begin wxGlade: extracode
# end wxGlade
class MyDialog(wx.Dialog):
    def __init__(self, *args, **kwds):
        # begin wxGlade: MyDialog.__init__
        kwds["style"] = kwds.get("style", 0) | wx.DEFAULT_DIALOG_STYLE
        wx.Dialog.__init__(self, *args, **kwds)
        self.SetSize((473, 300))
        self.ListExperiments = wx.ListBox(self, wx.ID_ANY, choices=[], style=0)
        self.ButtonCopy = wx.Button(self, wx.ID_ANY, "Copy List to Clipboard")
        self.ButtonClose = wx.Button(self, wx.ID_CLOSE, "")

        self.__set_properties()
        self.__do_layout()

        self.Bind(wx.EVT_BUTTON, self.on_copy, self.ButtonCopy)
        self.Bind(wx.EVT_BUTTON, self.on_close, self.ButtonClose)
        # end wxGlade

    def __set_properties(self):
        # begin wxGlade: MyDialog.__set_properties
        self.SetTitle("dialog_1")
        self.SetSize((473, 300))
        self.ButtonClose.SetDefault()
        # end wxGlade

    def __do_layout(self):
        # begin wxGlade: MyDialog.__do_layout
        sizer_1 = wx.BoxSizer(wx.VERTICAL)
        sizer_2 = wx.BoxSizer(wx.HORIZONTAL)
        sizer_1.Add(self.ListExperiments, 1, wx.ALL | wx.EXPAND, 10)
        sizer_2.Add(self.ButtonCopy, 0, 0, 0)
        sizer_2.Add((20, 20), 1, 0, 0)
        sizer_2.Add(self.ButtonClose, 0, 0, 0)
        sizer_1.Add(sizer_2, 0, wx.ALL | wx.EXPAND, 10)
        self.SetSizer(sizer_1)
        self.Layout()
        # end wxGlade

    def on_copy(self, event):  # wxGlade: MyDialog.<event_handler>
        print("Event handler 'on_copy' not implemented!")
        event.Skip()

    def on_close(self, event):  # wxGlade: MyDialog.<event_handler>
        print("Event handler 'on_close' not implemented!")
        event.Skip()

# end of class MyDialog
| 31.213115 | 79 | 0.629202 | 258 | 1,904 | 4.430233 | 0.29845 | 0.052493 | 0.052493 | 0.027997 | 0.250219 | 0.139983 | 0.139983 | 0.096238 | 0.096238 | 0.096238 | 0 | 0.04083 | 0.241071 | 1,904 | 60 | 80 | 31.733333 | 0.750173 | 0.204307 | 0 | 0.121212 | 1 | 0 | 0.080828 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.151515 | false | 0 | 0.030303 | 0 | 0.212121 | 0.060606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
98737ef22d1fe7939a250bbf74abbab55ff11614 | 368 | py | Python | for python/data/mramesh/pframe.py | aerolalit/Auto-Testing-Python-Programs | dd49ab266c9f0fd8e34278f68f8af017711942e3 | [
"MIT"
] | 4 | 2019-10-03T21:16:51.000Z | 2019-10-04T01:28:08.000Z | for python/data/mramesh/pframe.py | aerolalit/Auto-Testing | dd49ab266c9f0fd8e34278f68f8af017711942e3 | [
"MIT"
] | null | null | null | for python/data/mramesh/pframe.py | aerolalit/Auto-Testing | dd49ab266c9f0fd8e34278f68f8af017711942e3 | [
"MIT"
] | null | null | null | #35011
#a3_p10.py
#Miruthula Ramesh
#mramesh@jacobs-university.de
n = int(input("Enter the width"))
w = int(input("Enter the length"))
c = input("Enter a character")
space=" "
def print_frame(n, w):
    for i in range(n):
        if i == 0 or i == n-1:
            print(w*c)
        else:
            print(c + space*(w-2) + c)
print_frame(n,w)
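The same frame logic can be restated as a self-contained helper that returns the rows instead of printing them, which makes it easy to check (`frame_lines` and its parameter names are illustrative, not part of the assignment file):

```python
def frame_lines(width, height, ch):
    # Build a hollow rectangular frame: full rows of ch on the first
    # and last lines, ch at both edges with spaces in between otherwise.
    lines = []
    for i in range(height):
        if i == 0 or i == height - 1:
            lines.append(ch * width)
        else:
            lines.append(ch + ' ' * (width - 2) + ch)
    return lines


for row in frame_lines(5, 3, '#'):
    print(row)
```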
| 21.647059 | 42 | 0.540761 | 59 | 368 | 3.322034 | 0.59322 | 0.153061 | 0.132653 | 0.163265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042802 | 0.30163 | 368 | 16 | 43 | 23 | 0.719844 | 0.157609 | 0 | 0 | 0 | 0 | 0.168966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.090909 | 0.363636 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
98746ffc4cea4687e7a965dcec7581058c45625e | 3,430 | py | Python | detectors/hamlog.py | time-track-tool/time-track-tool | a1c280f32a7766e460c862633b748fa206256f24 | [
"MIT"
] | null | null | null | detectors/hamlog.py | time-track-tool/time-track-tool | a1c280f32a7766e460c862633b748fa206256f24 | [
"MIT"
] | 1 | 2019-07-03T13:32:38.000Z | 2019-07-03T13:32:38.000Z | detectors/hamlog.py | time-track-tool/time-track-tool | a1c280f32a7766e460c862633b748fa206256f24 | [
"MIT"
] | 1 | 2019-05-15T16:01:31.000Z | 2019-05-15T16:01:31.000Z | # Copyright (C) 2012 Dr. Ralf Schlatterbeck Open Source Consulting.
# Reichergasse 131, A-3411 Weidling.
# Web: http://www.runtux.com Email: office@runtux.com
# All rights reserved
# ****************************************************************************
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
# ****************************************************************************
#
#++
# Name
# doc
#
# Purpose
# Detectors for hamlog
#--
from roundup.exceptions import Reject
from roundup.cgi.TranslationService import get_translation
from hamlib import fix_qsl_status
import common
def check_qso_empty (db, cl, nodeid, old_values) :
    """ Retire qsl if qso Link is removed """
    if 'qso' in old_values and not cl.get (nodeid, 'qso') :
        cl.retire (nodeid)
# end def check_qso_empty

def check_dupe_qsl_type (db, cl, nodeid, new_values) :
    common.require_attributes (_, cl, nodeid, new_values, 'qsl_type', 'qso')
    type = new_values ['qsl_type']
    qso = new_values ['qso']
    qsl = db.qsl.filter (None, dict (qso = qso, qsl_type = type))
    qn = db.qsl_type.get (type, 'name')
    if qsl :
        raise Reject, _ ('Duplicate QSL type "%s" for QSO' % qn)
# end def check_dupe_qsl_type

def check_owner_has_qsos (db, cl, nodeid, new_values) :
    if 'call' not in new_values :
        return
    oldcalls = set (cl.get (nodeid, 'call'))
    newcalls = set (new_values ['call'])
    deleted = oldcalls - newcalls
    if not deleted :
        return
    for call in deleted :
        qsos = db.qso.filter (None, dict (owner = call))
        if qsos :
            name = db.ham_call.get (call, 'name')
            raise Reject, _ ('Cant\'t delete "%(name)s" Call has QSOs') \
                % locals ()
        else :
            db.ham_call.retire (call)
# end def check_owner_has_qsos

def fix_stati_qsl (db, cl, nodeid, old_values) :
    fix_qsl_status (db, cl.get (nodeid, 'qso'))
# end def fix_stati_qsl

def fix_stati_qso (db, cl, nodeid, old_values) :
    w = 'wont_qsl_via'
    if ( not old_values
       or (w in old_values and old_values [w] != cl.get (nodeid, w))
       ) :
        fix_qsl_status (db, nodeid)
# end def fix_stati_qso

def init (db) :
    if 'qso' not in db.classes :
        return
    global _
    _ = get_translation \
        (db.config.TRACKER_LANGUAGE, db.config.TRACKER_HOME).gettext
    db.qsl.react ('set', check_qso_empty)
    db.qsl.audit ('create', check_dupe_qsl_type)
    db.qsl.react ('create', fix_stati_qsl)
    db.qsl.react ('set', fix_stati_qsl)
    db.qso.react ('create', fix_stati_qso)
    db.qso.react ('set', fix_stati_qso)
    db.user.audit ('set', check_owner_has_qsos)
# end def init

### __END__
| 35.360825 | 78 | 0.62828 | 488 | 3,430 | 4.247951 | 0.354508 | 0.027014 | 0.02412 | 0.027496 | 0.158707 | 0.048239 | 0.032803 | 0 | 0 | 0 | 0 | 0.009084 | 0.229738 | 3,430 | 96 | 79 | 35.729167 | 0.775549 | 0.354811 | 0 | 0.057692 | 0 | 0 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
987ebf3bb40f294fbfe683c9dee6246ece40b947 | 937 | py | Python | globus_contents_manager/scripts/spawn_tokens.py | NickolausDS/globus-contents-manager | 40ad5e8ef97686feff4ae36ff0f71b0c600c3e83 | [
"Apache-2.0"
] | null | null | null | globus_contents_manager/scripts/spawn_tokens.py | NickolausDS/globus-contents-manager | 40ad5e8ef97686feff4ae36ff0f71b0c600c3e83 | [
"Apache-2.0"
] | null | null | null | globus_contents_manager/scripts/spawn_tokens.py | NickolausDS/globus-contents-manager | 40ad5e8ef97686feff4ae36ff0f71b0c600c3e83 | [
"Apache-2.0"
] | null | null | null | import os
import json
from fair_research_login import NativeClient
CLIENT_ID = 'e54de045-d346-42ef-9fbc-5d466f4a00c6'
APP_NAME = 'My App'
SCOPES = 'openid email profile urn:globus:auth:scope:transfer.api.globus.org:all urn:globus:auth:scope:search.api.globus.org:all'
CONFIG_FILE = 'tokens-data.json'
tokens = None
# try to load tokens from local file (native app config)
client = NativeClient(client_id=CLIENT_ID, app_name=APP_NAME)
try:
    tokens = client.load_tokens(requested_scopes=SCOPES)
except:
    pass

if not tokens:
    # if no tokens, need to start Native App authentication process to get tokens
    tokens = client.login(requested_scopes=SCOPES,
                          refresh_tokens=False)

try:
    # save the tokens
    client.save_tokens(tokens)
    # create environment variable
    os.environ['GLOBUS_DATA'] = json.dumps(tokens, indent=4, sort_keys=True)
except:
    pass
| 29.28125 | 129 | 0.709712 | 130 | 937 | 4.992308 | 0.507692 | 0.03698 | 0.061633 | 0.05547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02681 | 0.203842 | 937 | 31 | 130 | 30.225806 | 0.843164 | 0.185699 | 0 | 0.285714 | 0 | 0.047619 | 0.246702 | 0.174142 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.095238 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
988f5d1f8daa9ec72e11862df13ec07c4e300748 | 3,696 | py | Python | src/pyrin/audit/api.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | src/pyrin/audit/api.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | src/pyrin/audit/api.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
audit api module.
"""
import pyrin.audit.services as audit_services
from pyrin.api.router.decorators import api
audit_config = audit_services.get_audit_configurations()
audit_config.update(no_cache=True)
is_enabled = audit_config.pop('enabled', False)
if is_enabled is True:
    @api(**audit_config)
    def inspect(**options):
        """
        inspects all registered packages and gets inspection data.
        ---
        parameters:
          - name: application
            type: boolean
            description: specifies that application info must be included
          - name: packages
            type: boolean
            description: specifies that loaded packages info must be included
          - name: framework
            type: boolean
            description: specifies that framework info must be included
          - name: python
            type: boolean
            description: specifies that python info must be included
          - name: os
            type: boolean
            description: specifies that operating system info must be included
          - name: hardware
            type: boolean
            description: specifies that hardware info must be included
          - name: database
            type: boolean
            description: specifies that database info must be included
          - name: caching
            type: boolean
            description: specifies that caching info must be included
          - name: celery
            type: boolean
            description: specifies that celery info must be included
          - name: traceback
            type: boolean
            description: specifies that on failure, it must include the traceback of errors
        responses:
          200:
            description: all packages are working normally
            schema:
              properties:
                application:
                  type: object
                  description: application info
                packages:
                  type: object
                  description: loaded packages info
                framework:
                  type: object
                  description: framework info
                python:
                  type: object
                  description: python info
                platform:
                  type: object
                  description: platform info
                database:
                  type: object
                  description: database info
                caching:
                  type: object
                  description: caching info
                celery:
                  type: object
                  description: celery info
          500:
            description: some packages have errors
            schema:
              properties:
                application:
                  type: object
                  description: application info
                packages:
                  type: object
                  description: loaded packages info
                framework:
                  type: object
                  description: framework info
                python:
                  type: object
                  description: python info
                platform:
                  type: object
                  description: platform info
                database:
                  type: object
                  description: database info
                caching:
                  type: object
                  description: caching info
                celery:
                  type: object
                  description: celery info
        """
        return audit_services.inspect(**options)
| 33 | 91 | 0.513528 | 307 | 3,696 | 6.143322 | 0.257329 | 0.084836 | 0.178155 | 0.164369 | 0.659597 | 0.369035 | 0.369035 | 0.369035 | 0.369035 | 0.369035 | 0 | 0.0034 | 0.442911 | 3,696 | 111 | 92 | 33.297297 | 0.912579 | 0.698323 | 0 | 0 | 0 | 0 | 0.018817 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9891a82e9057c30b7703a78123e0f01f73c0248f | 6,932 | py | Python | seqs/IntegerHeap.py | vincentdavis/special-sequences | b7b7f8c2bd2f655baeb7b2139ddf007615bffd67 | [
"MIT"
] | 1 | 2020-04-15T10:46:57.000Z | 2020-04-15T10:46:57.000Z | seqs/IntegerHeap.py | vincentdavis/special-sequences | b7b7f8c2bd2f655baeb7b2139ddf007615bffd67 | [
"MIT"
] | 1 | 2016-09-14T03:57:25.000Z | 2016-09-14T03:57:25.000Z | seqs/IntegerHeap.py | vincentdavis/special-sequences | b7b7f8c2bd2f655baeb7b2139ddf007615bffd67 | [
"MIT"
] | null | null | null | """IntegerHeap.py
Priority queues of integer keys based on van Emde Boas trees.
Only the keys are stored; caller is responsible for keeping
track of any data associated with the keys in a separate dictionary.
We use a version of vEB trees in which all accesses to subtrees
are performed indirectly through a hash table and the data structures
for the subtrees are only created when they are nonempty. As a
consequence, the data structure takes only linear space
(linear in the number of keys stored in the heap) while still preserving
the O(log log U) time per operation of vEB trees. For better performance,
we switch to bitvectors for sufficiently small integer sizes.
Usage:
Q = BitVectorHeap() # Bit-vector based heap for integers
Q = FlatHeap(i) # Flat heap for 2^i-bit integers
Q = LinearHeap() # Set-based heap with linear-time min operation
Q = IntegerHeap(i) # Choose between BVH and FH depending on i
Q.add(x) # Include x among the values in the heap
Q.remove(x) # Remove x from the values in the heap
Q.min() # Return the minimum value in the heap
if Q # True if Q is nonempty, false if empty
Because the min operation in LinearHeap is a Python primitive rather than
a sequence of interpreted Python instructions, it is actually quite fast;
testing indicates that, for 32-bit keys, FlatHeap(5) beats LinearHeap only
for heaps of 250 or more items. This breakeven point would likely be
different for different numbers of bits per word or when runtime optimizers
such as psyco are in use.
D. Eppstein, January 2010
"""
def IntegerHeap(i):
    """Return an integer heap for 2^i-bit integers.

    We use a BitVectorHeap for small i and a FlatHeap for large i.
    Timing tests indicate that the cutoff i <= 3 is slightly
    faster than the also-plausible cutoff i <= 2, and that both
    are much faster than the way-too-large cutoff i <= 4.
    The resulting IntegerHeap objects will use 255-bit long integers,
    still small compared to the overhead of a FlatHeap."""
    if i <= 3:
        return BitVectorHeap()
    return FlatHeap(i)
Log2Table = {} # Table of powers of two, with their logs
def Log2(b):
    """Return log_2(b), where b must be a power of two."""
    while b not in Log2Table:
        i = len(Log2Table)
        Log2Table[1 << i] = i
    return Log2Table[b]
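As a sanity-check sketch (not part of the original module): for a power of two b, the table-based Log2(b) above equals Python's `b.bit_length() - 1`, since bit_length reports the position of the single set bit plus one.

```python
# For powers of two, bit_length() - 1 computes the same logarithm as the
# lazily grown Log2Table lookup, without any table.
for i in range(16):
    b = 1 << i
    assert b.bit_length() - 1 == i
print("bit_length matches the table-based Log2 for powers of two")
```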
# ======================================================================
# BitVectorHeap
# ======================================================================
class BitVectorHeap(object):
    """Maintain the minimum of a set of integers using bitvector operations."""

    def __init__(self):
        """Create a new BitVectorHeap."""
        self._S = 0

    def __nonzero__(self):
        """True if this heap is nonempty, false if empty."""
        return self._S != 0

    def __bool__(self):
        """True if this heap is nonempty, false if empty."""
        return self._S != 0

    def add(self, x):
        """Include x among the values in the heap."""
        self._S |= 1 << x

    def remove(self, x):
        """Remove x from the values in the heap."""
        self._S &= ~(1 << x)

    def min(self):
        """Return the minimum value in the heap."""
        if not self._S:
            raise ValueError("BitVectorHeap is empty")
        return Log2(self._S & ~(self._S - 1))
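The min operation above hinges on the classic lowest-set-bit trick: `s & ~(s - 1)` isolates the lowest set bit of s, whose position is the smallest stored key. A self-contained sketch, independent of the classes in this module:

```python
def lowest_set_bit_index(s):
    # s & ~(s - 1) keeps only the lowest set bit of s; its bit position
    # is the minimum key currently stored in the bitvector.
    if s == 0:
        raise ValueError("empty bitvector")
    return (s & ~(s - 1)).bit_length() - 1


s = 0
for key in (9, 4, 12):
    s |= 1 << key                # add(key)
print(lowest_set_bit_index(s))   # 4, the minimum of {9, 4, 12}
s &= ~(1 << 4)                   # remove(4)
print(lowest_set_bit_index(s))   # 9
```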
# ======================================================================
# FlatHeap
# ======================================================================
class FlatHeap(object):
    """Maintain the minimum of a set of 2^i-bit integer values."""

    def __init__(self, i):
        """Create a new FlatHeap for 2^i-bit integers."""
        self._min = None
        self._order = i
        self._shift = 1 << (i - 1)
        self._max = (1 << (1 << i)) - 1
        self._HQ = IntegerHeap(i - 1)   # Heap of high halfwords
        self._LQ = {}   # Map high half to heaps of low halfwords

    def _rangecheck(self, x):
        """Make sure x is a number we can include in this FlatHeap."""
        if x < 0 or x > self._max:
            raise ValueError("FlatHeap: {0!s} out of range".format(repr(x)))

    def __nonzero__(self):
        """True if this heap is nonempty, false if empty."""
        return self._min is not None

    def __bool__(self):
        """True if this heap is nonempty, false if empty."""
        return self._min is not None

    def min(self):
        """Return the minimum value in the heap."""
        if self._min is None:
            raise ValueError("FlatHeap is empty")
        return self._min

    def add(self, x):
        """Include x among the values in the heap."""
        self._rangecheck(x)
        if self._min is None or self._min == x:
            # adding to an empty heap is easy
            self._min = x
            return
        if x < self._min:
            # swap to make sure the value we're adding is non-minimal
            x, self._min = self._min, x
        H = x >> self._shift   # split into high and low halfwords
        L = x - (H << self._shift)
        if H not in self._LQ:
            self._HQ.add(H)
            self._LQ[H] = IntegerHeap(self._order - 1)
        self._LQ[H].add(L)

    def remove(self, x):
        """Remove x from the values in the heap."""
        self._rangecheck(x)
        if self._min == x:
            # Removing minimum, move next value into place
            # and prepare to remove that next value from secondary heaps
            if not self._HQ:
                self._min = None
                return
            H = self._HQ.min()
            L = self._LQ[H].min()
            x = self._min = (H << self._shift) + L
        else:
            H = x >> self._shift   # split into high and low halfwords
            L = x - (H << self._shift)
        if H not in self._LQ:
            return   # ignore removal when not in heap
        self._LQ[H].remove(L)
        if not self._LQ[H]:
            del self._LQ[H]
            self._HQ.remove(H)
# ======================================================================
# LinearHeap
# ======================================================================
class LinearHeap(object):
    """Maintain the minimum of a set of integers using a set object."""

    def __init__(self):
        """Create a new LinearHeap."""
        self._S = set()

    def __nonzero__(self):
        """True if this heap is nonempty, false if empty."""
        return len(self._S) > 0

    def __bool__(self):
        """True if this heap is nonempty, false if empty."""
        return len(self._S) > 0

    def add(self, x):
        """Include x among the values in the heap."""
        self._S.add(x)

    def remove(self, x):
        """Remove x from the values in the heap."""
        self._S.remove(x)

    def min(self):
        """Return the minimum value in the heap."""
        return min(self._S)
| 34.147783 | 79 | 0.568523 | 969 | 6,932 | 3.972136 | 0.247678 | 0.018187 | 0.030398 | 0.029098 | 0.354638 | 0.339829 | 0.328917 | 0.328917 | 0.312289 | 0.275916 | 0 | 0.009447 | 0.282314 | 6,932 | 202 | 80 | 34.316832 | 0.764221 | 0.562031 | 0 | 0.409091 | 0 | 0 | 0.023402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.238636 | false | 0 | 0 | 0 | 0.443182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9893aa6d0245de7a42d6065da71ab428fbb28e3b | 3,238 | py | Python | core/client/client.py | spiritotaku/fedlearn-algo | 842700d43e7f033a7b6a32d0845cb6d9db5b866a | [
"Apache-2.0"
] | 1 | 2021-07-20T23:44:28.000Z | 2021-07-20T23:44:28.000Z | core/client/client.py | kfliubo/fedlearn-algo | 3440bb10a8680319bcad1b9c9677874bab35550a | [
"Apache-2.0"
] | null | null | null | core/client/client.py | kfliubo/fedlearn-algo | 3440bb10a8680319bcad1b9c9677874bab35550a | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Fedlearn authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is the class template theABC JDT client
from core.entity.common.message import RequestMessage, ResponseMessage
from core.grpc_comm.grpc_converter import grpc_msg_to_common_msg, common_msg_to_grpc_msg
from core.proto.transmission_pb2 import ReqResMessage
from core.proto.transmission_pb2_grpc import TransmissionServicer
from abc import abstractmethod
from typing import Dict
import pickle
class ClientError(ValueError):
    pass
class Client(TransmissionServicer):
    """
    Basic client class
    """

    @property
    def dict_functions(self):
        """
        Dictionary of functions that store the training function mapping as
        <phase_id: training_function>.
        """
        return self._dict_functions

    @dict_functions.setter
    def dict_functions(self, value):
        if not isinstance(value, dict):
            raise ValueError("Function mapping must be a dictionary!")
        self._dict_functions = value

    @abstractmethod
    def train_init(self) -> None:
        """
        Training initialization function

        Returns
        -------
        None
        """

    @abstractmethod
    def inference_init(self) -> None:
        """
        Inference initialization function

        Returns
        -------
        None
        """

    def load_model(self, model_path: str) -> Dict:
        """
        Parameters
        ----------
        model_path: str

        Returns
        -------
        model: dict
        """
        f = open(model_path, 'rb')
        model = pickle.load(f)
        f.close()
        return model

    def save_model(self, model_path: str, model: Dict) -> None:
        """
        Parameters
        ----------
        model_path: str
        model: dict

        Returns
        -------
        None
        """
        f = open(model_path, 'wb')
        pickle.dump(model, f)
        f.close()

    def process_request(self, request: RequestMessage) -> ResponseMessage:
        """
        Parameters
        ----------
        request: RequestMessage

        Returns
        -------
        response: ResponseMessage
        """
        symbol = request.phase_id
        if symbol not in self.dict_functions.keys():
            raise ClientError("Function %s is not implemented." % symbol)
        response = self.dict_functions[symbol](request)
        return response

    def comm(self, grpc_request: ReqResMessage, context) -> ReqResMessage:
        common_req_msg = grpc_msg_to_common_msg(grpc_request)
        common_res_msg = self.process_request(common_req_msg)
        return common_msg_to_grpc_msg(common_res_msg)
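process_request above is a phase-id dispatch: a dict maps phase identifiers to handler callables, and an unknown phase raises instead of failing silently. A hypothetical, self-contained sketch of the same pattern (`Dispatcher`/`DispatchError` are illustrative names, not part of this codebase):

```python
class DispatchError(ValueError):
    pass


class Dispatcher:
    def __init__(self):
        self.handlers = {}   # phase_id -> handler callable

    def register(self, phase_id, fn):
        self.handlers[phase_id] = fn

    def process(self, phase_id, payload):
        # Unknown phases raise, mirroring process_request's ClientError.
        if phase_id not in self.handlers:
            raise DispatchError("Function %s is not implemented." % phase_id)
        return self.handlers[phase_id](payload)


d = Dispatcher()
d.register("train_init", lambda payload: {"status": "ok", "payload": payload})
print(d.process("train_init", {"epoch": 0}))
# {'status': 'ok', 'payload': {'epoch': 0}}
```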
| 26.540984 | 88 | 0.62446 | 363 | 3,238 | 5.418733 | 0.393939 | 0.046263 | 0.03457 | 0.016268 | 0.101678 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004322 | 0.285361 | 3,238 | 121 | 89 | 26.760331 | 0.845722 | 0.328598 | 0 | 0.097561 | 0 | 0 | 0.040089 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.195122 | false | 0.02439 | 0.170732 | 0 | 0.512195 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
9894e55664da246d5300969b91941e5dc7ab68d5 | 9,369 | py | Python | L1Trigger/GlobalTriggerAnalyzer/test/L1GtPackUnpackAnalyzer_cfg.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 6 | 2017-09-08T14:12:56.000Z | 2022-03-09T23:57:01.000Z | L1Trigger/GlobalTriggerAnalyzer/test/L1GtPackUnpackAnalyzer_cfg.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 545 | 2017-09-19T17:10:19.000Z | 2022-03-07T16:55:27.000Z | L1Trigger/GlobalTriggerAnalyzer/test/L1GtPackUnpackAnalyzer_cfg.py | SWuchterl/cmssw | 769b4a7ef81796579af7d626da6039dfa0347b8e | [
"Apache-2.0"
] | 14 | 2017-10-04T09:47:21.000Z | 2019-10-23T18:04:45.000Z | from __future__ import print_function
#
# cfg file to pack (DigiToRaw) a GT DAQ record, unpack (RawToDigi) it back
# and compare the two set of digis
#
# V M Ghete 2009-04-06
import FWCore.ParameterSet.Config as cms
# process
process = cms.Process('TestGtPackUnpackAnalyzer')
###################### user choices ######################
# choose the type of sample used (True for RelVal, False for data)
useRelValSample = True
#useRelValSample=False
# actual GlobalTag must be appropriate for the sample use
if useRelValSample == True :
    useGlobalTag = 'IDEAL_V12'
    #useGlobalTag='STARTUP_V9'
else :
    useGlobalTag = 'CRAFT_ALL_V12'
# change to True to use local files
# the type of file should match the choice of useRelValSample and useGlobalTag
useLocalFiles = False
###################### end user choices ###################
# number of events to be processed and source file
process.maxEvents = cms.untracked.PSet(
    input=cms.untracked.int32(10)
)
readFiles = cms.untracked.vstring()
secFiles = cms.untracked.vstring()
process.source = cms.Source ('PoolSource', fileNames=readFiles, secondaryFileNames=secFiles)
# type of sample used (True for RelVal, False for data)
if useRelValSample == True :
    if useGlobalTag.count('IDEAL') :
        #/RelValTTbar/CMSSW_2_2_4_IDEAL_V11_v1/GEN-SIM-DIGI-RAW-HLTDEBUG
        dataset = cms.untracked.vstring('RelValTTbar_CMSSW_2_2_4_IDEAL_V11_v1')
        readFiles.extend([
            '/store/relval/CMSSW_2_2_4/RelValTTbar/GEN-SIM-DIGI-RAW-HLTDEBUG/IDEAL_V11_v1/0000/02697009-5CF3-DD11-A862-001D09F2423B.root',
            '/store/relval/CMSSW_2_2_4/RelValTTbar/GEN-SIM-DIGI-RAW-HLTDEBUG/IDEAL_V11_v1/0000/064657A8-59F3-DD11-ACA5-000423D991F0.root',
            '/store/relval/CMSSW_2_2_4/RelValTTbar/GEN-SIM-DIGI-RAW-HLTDEBUG/IDEAL_V11_v1/0000/0817F6DE-5BF3-DD11-880D-0019DB29C5FC.root',
            '/store/relval/CMSSW_2_2_4/RelValTTbar/GEN-SIM-DIGI-RAW-HLTDEBUG/IDEAL_V11_v1/0000/0899697C-5AF3-DD11-9D21-001617DBD472.root'
        ])
        secFiles.extend([
        ])
    elif useGlobalTag.count('STARTUP') :
        #/RelValTTbar/CMSSW_2_2_4_STARTUP_V8_v1/GEN-SIM-DIGI-RAW-HLTDEBUG
        dataset = cms.untracked.vstring('RelValTTbar_CMSSW_2_2_4_STARTUP_V8_v1')
        readFiles.extend([
            '/store/relval/CMSSW_2_2_4/RelValTTbar/GEN-SIM-DIGI-RAW-HLTDEBUG/STARTUP_V8_v1/0000/069AA022-5BF3-DD11-9A56-001617E30D12.root',
            '/store/relval/CMSSW_2_2_4/RelValTTbar/GEN-SIM-DIGI-RAW-HLTDEBUG/STARTUP_V8_v1/0000/08DA99A6-5AF3-DD11-AAC1-001D09F24493.root',
            '/store/relval/CMSSW_2_2_4/RelValTTbar/GEN-SIM-DIGI-RAW-HLTDEBUG/STARTUP_V8_v1/0000/0A725E15-5BF3-DD11-8B4B-000423D99CEE.root',
            '/store/relval/CMSSW_2_2_4/RelValTTbar/GEN-SIM-DIGI-RAW-HLTDEBUG/STARTUP_V8_v1/0000/0AF5B676-5AF3-DD11-A22F-001617DBCF1E.root'
        ])
        secFiles.extend([
        ])
    else :
        print('Error: Global Tag ', useGlobalTag, ' not defined.')
else :
    # data
    dataset = '/Cosmics/Commissioning09-v1/RAW'
    print(' Running on set: ' + dataset)
    readFiles.extend([
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/00BD9A1F-B908-DE11-8B2C-000423D94A04.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/025E8B48-B608-DE11-A0EE-00161757BF42.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/027AA271-D208-DE11-9A7F-001617DBD5AC.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/04281D2F-D108-DE11-9A27-000423D944DC.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/065B0C1C-C008-DE11-A32B-001617E30F48.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/08B1054B-BD08-DE11-AF8B-001617C3B78C.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/0C055C33-D108-DE11-B678-001617C3B73A.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/0E480977-D208-DE11-BA78-001617C3B6E2.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/0E79251B-B908-DE11-83FF-000423D99CEE.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/101B8CA0-B508-DE11-B614-000423D99160.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/12C62C71-BF08-DE11-A48C-000423D99614.root',
        '/store/data/Commissioning09/Cosmics/RAW/v1/000/076/966/16A77E08-B008-DE11-9121-000423D8F63C.root'
    ])
    secFiles.extend([
    ])
if useLocalFiles :
    readFiles = 'file:/afs/cern.ch/user/g/ghete/scratch0/CmsswTestFiles/testGt_PackUnpackAnalyzer_source.root.root'
# load and configure modules via Global Tag
# https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideFrontierConditions
process.load('Configuration.StandardSequences.Geometry_cff')
process.load('Configuration.StandardSequences.FrontierConditions_GlobalTag_cff')
process.GlobalTag.globaltag = useGlobalTag+'::All'
# remove FakeConditions when GTag is OK
process.load('L1Trigger.Configuration.L1Trigger_FakeConditions_cff')
#
# pack.......
#
process.load("EventFilter.L1GlobalTriggerRawToDigi.l1GtPack_cfi")
# input tag for GT readout collection:
# input tag for GMT readout collection:
# source = hardware record
if useRelValSample == True :
    daqGtInputTagPack = 'simGtDigis'
    muGmtInputTagPack = 'simGmtDigis'
else :
    daqGtInputTagPack = 'l1GtUnpack'
    muGmtInputTagPack = 'l1GtUnpack'
process.l1GtPack.DaqGtInputTag = daqGtInputTagPack
process.l1GtPack.MuGmtInputTag = muGmtInputTagPack
# mask for active boards (actually 16 bits)
# if bit is zero, the corresponding board will not be packed
# default: no board masked: ActiveBoardsMask = 0xFFFF
# no board masked (default)
#process.l1GtPack.ActiveBoardsMask = 0xFFFF
# GTFE only in the record
#process.l1GtPack.ActiveBoardsMask = 0x0000
# GTFE + FDL
#process.l1GtPack.ActiveBoardsMask = 0x0001
# GTFE + GMT
#process.l1GtPack.ActiveBoardsMask = 0x0100
# GTFE + FDL + GMT
#process.l1GtPack.ActiveBoardsMask = 0x0101
# set it to verbose
process.l1GtPack.Verbosity = cms.untracked.int32(1)
#
# unpack.......
#
import EventFilter.L1GlobalTriggerRawToDigi.l1GtUnpack_cfi
process.gtPackedUnpack = EventFilter.L1GlobalTriggerRawToDigi.l1GtUnpack_cfi.l1GtUnpack.clone()
# input tag for GT and GMT readout collections in the packed data:
process.gtPackedUnpack.DaqGtInputTag = 'l1GtPack'
# Active Boards Mask
# no board masked (default)
#process.gtPackedUnpack.ActiveBoardsMask = 0xFFFF
# GTFE only in the record
#process.gtPackedUnpack.ActiveBoardsMask = 0x0000
# GTFE + FDL
#process.gtPackedUnpack.ActiveBoardsMask = 0x0001
# GTFE + GMT
#process.gtPackedUnpack.ActiveBoardsMask = 0x0100
# GTFE + FDL + GMT
#process.gtPackedUnpack.ActiveBoardsMask = 0x0101
# BxInEvent to be unpacked
# all available BxInEvent (default)
#process.gtPackedUnpack.UnpackBxInEvent = -1
# BxInEvent = 0 (L1A)
#process.gtPackedUnpack.UnpackBxInEvent = 1
# 3 BxInEvent (F, 0, 1)
#process.gtPackedUnpack.UnpackBxInEvent = 3
#
# compare the initial and final digis .......
#
process.load("L1Trigger.GlobalTriggerAnalyzer.l1GtPackUnpackAnalyzer_cfi")
# input tag for the initial GT DAQ record: must match the pack label
# input tag for the initial GMT readout collection: must match the pack label
process.l1GtPackUnpackAnalyzer.InitialDaqGtInputTag = daqGtInputTagPack
process.l1GtPackUnpackAnalyzer.InitialMuGmtInputTag = muGmtInputTagPack
# input tag for the final GT DAQ and GMT records: must match the unpack label
# GT unpacker: gtPackedUnpack (cloned unpacker from L1GtPackUnpackAnalyzer.cfg)
#process.l1GtPackUnpackAnalyzer.FinalGtGmtInputTag = 'gtPackedUnpack'
# path to be run
if useRelValSample == True :
    process.p = cms.Path(process.l1GtPack*process.gtPackedUnpack*process.l1GtPackUnpackAnalyzer)
else :
    process.p = cms.Path(process.l1GtPack*process.gtPackedUnpack*process.l1GtPackUnpackAnalyzer)
    # FIXME unpack first raw data
# FIXME unpack first raw data
# Message Logger
process.load('FWCore.MessageService.MessageLogger_cfi')
process.MessageLogger.debugModules = [ 'l1GtPack', 'l1GtUnpack', 'l1GtPackUnpackAnalyzer']
process.MessageLogger.destinations = ['L1GtPackUnpackAnalyzer']
process.MessageLogger.L1GtPackUnpackAnalyzer = cms.untracked.PSet(
    threshold=cms.untracked.string('DEBUG'),
    #threshold = cms.untracked.string('INFO'),
    #threshold = cms.untracked.string('ERROR'),
    DEBUG=cms.untracked.PSet(
        limit=cms.untracked.int32(-1)
    ),
    INFO=cms.untracked.PSet(
        limit=cms.untracked.int32(-1)
    ),
    WARNING=cms.untracked.PSet(
        limit=cms.untracked.int32(-1)
    ),
    ERROR=cms.untracked.PSet(
        limit=cms.untracked.int32(-1)
    ),
    default=cms.untracked.PSet(
        limit=cms.untracked.int32(-1)
    )
)
# summary
process.options = cms.untracked.PSet(
    wantSummary = cms.untracked.bool(True)
)
# output
process.outputL1GtPackUnpack = cms.OutputModule("PoolOutputModule",
    fileName = cms.untracked.string('L1GtPackUnpackAnalyzer.root'),
    # keep only emulated data, packed data, unpacked data in the ROOT file
    outputCommands = cms.untracked.vstring('drop *',
                                           'keep *_simGtDigis_*_*',
                                           'keep *_simGmtDigis_*_*',
                                           'keep *_l1GtPack_*_*',
                                           'keep *_l1GtPackedUnpack_*_*')
)
process.outpath = cms.EndPath(process.outputL1GtPackUnpack)
| 35.488636 | 139 | 0.732629 | 1,120 | 9,369 | 6.040179 | 0.286607 | 0.044346 | 0.012417 | 0.014191 | 0.335107 | 0.293422 | 0.281892 | 0.281892 | 0.228234 | 0.221582 | 0 | 0.101225 | 0.145907 | 9,369 | 263 | 140 | 35.623574 | 0.744189 | 0.293094 | 0 | 0.275862 | 0 | 0.181034 | 0.468184 | 0.42793 | 0 | 0 | 0 | 0.003802 | 0 | 1 | 0 | false | 0 | 0.025862 | 0 | 0.025862 | 0.025862 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
98985142e7c8b8249e12ffcb7bdd9280a12f0e9c | 428 | py | Python | cases/urls.py | testyourcodenow/core | 05865b02ff7e60ffd3b30652161b3523046b9696 | [
"MIT"
] | 1 | 2020-05-10T06:40:58.000Z | 2020-05-10T06:40:58.000Z | cases/urls.py | testyourcodenow/core | 05865b02ff7e60ffd3b30652161b3523046b9696 | [
"MIT"
] | 25 | 2020-05-03T08:10:38.000Z | 2021-09-22T18:59:29.000Z | cases/urls.py | testyourcodenow/core | 05865b02ff7e60ffd3b30652161b3523046b9696 | [
"MIT"
] | 10 | 2020-05-03T08:25:56.000Z | 2020-06-03T06:49:34.000Z | from django.urls import path
from cases.api.get_visuals_data import UpdateVisualsData
from cases.api.kenyan_cases import KenyanCaseList
from cases.api.visuals import VisualList
urlpatterns = [
    path('kenyan/all', KenyanCaseList.as_view(), name='Historical data'),
    path('history/', VisualList.as_view(), name='Historical data'),
    path('update/history', UpdateVisualsData.as_view(), name='Update Historical data'),
]
| 32.923077 | 87 | 0.766355 | 54 | 428 | 5.962963 | 0.407407 | 0.083851 | 0.111801 | 0.124224 | 0.173913 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11215 | 428 | 12 | 88 | 35.666667 | 0.847368 | 0 | 0 | 0 | 0 | 0 | 0.196262 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
98abe48e2e82e8030955b56dce5a86874efde1ce | 1,934 | py | Python | Examples/batch_data_reduction.py | keflavich/TurbuStat | a6fac4c0d10473a74c62cce4a9c6a30773a955b1 | [
"MIT"
] | null | null | null | Examples/batch_data_reduction.py | keflavich/TurbuStat | a6fac4c0d10473a74c62cce4a9c6a30773a955b1 | [
"MIT"
] | null | null | null | Examples/batch_data_reduction.py | keflavich/TurbuStat | a6fac4c0d10473a74c62cce4a9c6a30773a955b1 | [
"MIT"
] | null | null | null | # Licensed under an MIT open source license - see LICENSE
'''
Runs data_reduc on all data cubes in the file.
Creates a folder for each data cube and its products
Run from folder containing data cubes
'''
from turbustat.data_reduction import *
from astropy.io.fits import getdata
import os
import sys
import errno
import shutil
from datetime import datetime
folder = sys.argv[1]
noise = sys.argv[2]
if str(noise) == "None":
    noise = None
os.chdir(folder)
## Read files in the folder
data_cubes = [x for x in os.listdir(".") if os.path.isfile(x) and x[-4:]=="fits"]
# [os.path.join(folder,x) for x in os.listdir(folder) if os.path.isfile(os.path.join(folder,x)) and x[-4:]=="fits"]
print data_cubes
logfile = open("".join([folder[:-1], "_reductionlog", ".txt"]), "w+")

for fitscube in data_cubes:
    filestr = "Reducing %s \n" % (fitscube)
    print filestr
    print str(datetime.now())
    logfile.write(filestr)
    logfile.write("".join([str(datetime.now()), "\n"]))
    try:
        os.makedirs(fitscube[:-5])
    except OSError as exception:
        pass
        # if exception.errno != errno.EEXIST:
        #     logfile.write(OSError)
        #     logfile.close()
        #     raise
    shutil.move(fitscube, fitscube[:-5])
    os.chdir(fitscube[:-5])
    cube, header = getdata(fitscube, header=True)
    # if np.isnan(cube.sum(axis=0)[:,cube.shape[2]]).shape[1] == cube.shape[2]:
    cube[:,:,cube.shape[2]-1] = cube[:,:,0]
    # elif np.isnan(cube.sum(axis=0)[cube.shape[1],:]).shape[1] == cube.shape[1]:
    cube[:,cube.shape[1]-1,:] = cube[:,0,:]
    reduction = property_arrays((cube, header), rms_noise=0.001, kernel_size=10, save_name=fitscube[:-5])
    reduction.return_all()
    ## Clean up
    cube, header = None, None
    reduction = None
    os.chdir("..")

print "Done!\n "
print str(datetime.now())
logfile.write("Done!")
logfile.write("".join([str(datetime.now()), "\n"]))
logfile.close()
| 24.794872 | 115 | 0.639607 | 286 | 1,934 | 4.286713 | 0.36014 | 0.044046 | 0.045677 | 0.011419 | 0.215334 | 0.17292 | 0.096248 | 0.045677 | 0 | 0 | 0 | 0.018378 | 0.184074 | 1,934 | 77 | 116 | 25.116883 | 0.758555 | 0.238883 | 0 | 0.1 | 0 | 0 | 0.047256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.025 | 0.175 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f283665e8b6be70031379c582e3646c30d58f97 | 814 | py | Python | camunda/utils/log_utils.py | finexioinc/camunda-external-task-client-python3 | dd88da967e8cc1aaf91972e2667e01bfa02265d0 | [
"Apache-2.0"
] | null | null | null | camunda/utils/log_utils.py | finexioinc/camunda-external-task-client-python3 | dd88da967e8cc1aaf91972e2667e01bfa02265d0 | [
"Apache-2.0"
] | null | null | null | camunda/utils/log_utils.py | finexioinc/camunda-external-task-client-python3 | dd88da967e8cc1aaf91972e2667e01bfa02265d0 | [
"Apache-2.0"
] | 1 | 2020-08-05T22:20:06.000Z | 2020-08-05T22:20:06.000Z | import logging
from frozendict import frozendict
def log_with_context(message, context=frozendict({}), log_level='info', **kwargs):
    log_function = __get_log_function(log_level)
    log_context_prefix = __get_log_context_prefix(context)
    if log_context_prefix:
        log_function(f"{log_context_prefix} {message}", **kwargs)
    else:
        log_function(message, **kwargs)


def __get_log_context_prefix(context):
    log_context_prefix = ""
    if context:
        for k, v in context.items():
            if v:
                log_context_prefix += f"[{k}:{v}]"
    return log_context_prefix


def __get_log_function(log_level):
    switcher = {
        'info': logging.info,
        'warning': logging.warning,
        'error': logging.error
    }
    return switcher.get(log_level, logging.info)
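A hedged, self-contained restatement of how the context prefix is assembled: every truthy key/value pair in the context becomes a "[key:value]" tag prepended to the log message (`build_log_context_prefix` is an illustrative standalone copy, not this module's private helper):

```python
def build_log_context_prefix(context):
    # Mirrors the prefix logic above: falsy values are skipped entirely.
    prefix = ""
    for k, v in (context or {}).items():
        if v:
            prefix += f"[{k}:{v}]"
    return prefix


print(build_log_context_prefix({"task_id": "42", "worker": "w1", "skip": ""}))
# [task_id:42][worker:w1]
```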
| 25.4375 | 82 | 0.665848 | 101 | 814 | 4.980198 | 0.267327 | 0.159046 | 0.254473 | 0.067594 | 0.190855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.224816 | 814 | 31 | 83 | 26.258065 | 0.797147 | 0 | 0 | 0 | 0 | 0 | 0.072482 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.086957 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f28a3ba2a1d3d7c7bdf664d76babed98cc0144e | 742 | py | Python | wildlifecompliance/migrations/0055_auto_20180704_0848.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 1 | 2020-12-07T17:12:40.000Z | 2020-12-07T17:12:40.000Z | wildlifecompliance/migrations/0055_auto_20180704_0848.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 14 | 2020-01-08T08:08:26.000Z | 2021-03-19T22:59:46.000Z | wildlifecompliance/migrations/0055_auto_20180704_0848.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 15 | 2020-01-08T08:02:28.000Z | 2021-11-03T06:48:32.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-07-04 00:48
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('wildlifecompliance', '0054_assessment_licence_activity_type'),
    ]

    operations = [
        migrations.AlterField(
            model_name='applicationgrouptype',
            name='name',
            field=models.CharField(
                choices=[
                    ('officer', 'Officer'),
                    ('assessor', 'Assessor')],
                default='officer',
                max_length=40,
                verbose_name='Group Type'),
        ),
    ]
| 25.586207 | 72 | 0.521563 | 62 | 742 | 6.048387 | 0.758065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049251 | 0.37062 | 742 | 28 | 73 | 26.5 | 0.753747 | 0.091644 | 0 | 0 | 1 | 0 | 0.187779 | 0.055142 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f28fd8107aa3d01ef3f003e276b381cfb9af0d6 | 677 | py | Python | day2/day2.py | tlee911/aoc2021 | f3b46590be72ceccdf4915b67b050c0c3207f002 | [
"MIT"
] | null | null | null | day2/day2.py | tlee911/aoc2021 | f3b46590be72ceccdf4915b67b050c0c3207f002 | [
"MIT"
] | null | null | null | day2/day2.py | tlee911/aoc2021 | f3b46590be72ceccdf4915b67b050c0c3207f002 | [
"MIT"
] | null | null | null | with open('input.txt', 'r') as file:
input = file.readlines()
input = [ step.split() for step in input ]
input = [ {step[0]: int(step[1])} for step in input ]
def part1():
x = 0
y = 0
for step in input:
x += step.get('forward', 0)
y += step.get('down', 0)
y -= step.get('up', 0)
print(x,y)
return x * y
def part2():
x = 0
y = 0
a = 0
for step in input:
x += step.get('forward', 0)
y += step.get('forward', 0) * a
#y += step.get('down', 0)
a += step.get('down', 0)
#y -= step.get('up', 0)
a -= step.get('up', 0)
print(x,y,a)
return x * y
print(part2()) | 20.515152 | 53 | 0.468242 | 110 | 677 | 2.881818 | 0.245455 | 0.198738 | 0.126183 | 0.176656 | 0.485804 | 0.444795 | 0.444795 | 0.369085 | 0.369085 | 0.246057 | 0 | 0.042316 | 0.33678 | 677 | 33 | 54 | 20.515152 | 0.663697 | 0.067947 | 0 | 0.4 | 0 | 0 | 0.068254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0 | 0 | 0.16 | 0.12 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f297ce902d7f9247b5e656d705431d2cc8409f0 | 4,307 | py | Python | user_metrics/metrics/edit_count.py | wikimedia/user_metrics | 23ef17f7134729a0a4a8a47ac7100fb88db4d155 | [
"BSD-3-Clause"
] | 1 | 2017-09-03T21:33:35.000Z | 2017-09-03T21:33:35.000Z | user_metrics/metrics/edit_count.py | wikimedia/user_metrics | 23ef17f7134729a0a4a8a47ac7100fb88db4d155 | [
"BSD-3-Clause"
] | null | null | null | user_metrics/metrics/edit_count.py | wikimedia/user_metrics | 23ef17f7134729a0a4a8a47ac7100fb88db4d155 | [
"BSD-3-Clause"
] | null | null | null |
__author__ = "Ryan Faulkner"
__date__ = "July 27th, 2012"
__license__ = "GPL (version 2 or later)"
from os import getpid
from collections import namedtuple
import user_metric as um
from user_metrics.metrics import query_mod
from user_metrics.metrics.users import UMP_MAP
from user_metrics.utils import multiprocessing_wrapper as mpw
from user_metrics.config import logging
class EditCount(um.UserMetric):
"""
Produces a count of edits as well as the total number of bytes added
for a registered user.
`https://meta.wikimedia.org/wiki/Research:Metrics/edit_count(t)`
usage e.g.: ::
>>> import classes.Metrics as m
>>> m.EditCount(date_start='2012-12-12 00:00:00',date_end=
'2012-12-12 00:00:00',namespace=0).process(123456)
25, 10000
The output in this case is the number of edits (25) made by the
editor with ID 123456 and the total number of bytes added by those
edits (10000)
"""
# Structure that defines parameters for EditRate class
_param_types = {
'init': {},
'process': {
'k': [int, 'Number of worker processes.', 5]
}
}
# Define the metrics data model meta
_data_model_meta = {
'id_fields': [0],
'date_fields': [],
'float_fields': [],
'integer_fields': [1],
'boolean_fields': [],
}
_agg_indices = {
'list_sum_indices': _data_model_meta['integer_fields'] +
_data_model_meta['float_fields'],
}
@um.pre_metrics_init
def __init__(self, **kwargs):
super(EditCount, self).__init__(**kwargs)
@staticmethod
def header():
return ['user_id', 'edit_count']
@um.UserMetric.pre_process_metric_call
def process(self, users, **kwargs):
"""
Determine edit count. The parameter *user_handle* can be either
a string or an integer or a list of these types. When the
*user_handle* type is integer it is interpreted as a user id, and
as a user_name for string input. If a list of users is passed
to the *process* method then a dict object with edit counts keyed
by user handles is returned.
        - Parameters:
- **user_handle** - String or Integer (optionally lists):
Value or list of values representing user handle(s).
- **is_id** - Boolean. Flag indicating whether user_handle
stores user names or user ids
"""
# Pack args, call thread pool
args = self._pack_params()
results = mpw.build_thread_pool(users, _process_help,
self.k_, args)
# Get edit counts from query - all users not appearing have
# an edit count of 0
user_set = set([long(user_id) for user_id in users])
edit_count = list()
for row in results:
edit_count.append([row[0], int(row[1])])
user_set.discard(row[0])
for user in user_set:
edit_count.append([user, 0])
self._results = edit_count
return self
def _process_help(args):
"""
Worker thread method for edit count.
"""
# Unpack args
users = args[0]
state = args[1]
metric_params = um.UserMetric._unpack_params(state)
query_args_type = namedtuple('QueryArgs', 'date_start date_end')
logging.debug(__name__ + ':: Executing EditCount on '
'%s users (PID = %s)' % (len(users), getpid()))
# Call user period method
umpd_obj = UMP_MAP[metric_params.group](users, metric_params)
results = list()
for t in umpd_obj:
args = query_args_type(t.start, t.end)
# Build edit count results list
results += query_mod.edit_count_user_query(t.user,
metric_params.project,
args)
return results
# Rudimentary Testing
if __name__ == '__main__':
users = ['13234584', '13234503', '13234565', '13234585', '13234556']
e = EditCount(t=10000)
# Check edit counts against
for res in e.process(users):
print res
| 31.669118 | 78 | 0.58997 | 538 | 4,307 | 4.507435 | 0.375465 | 0.040825 | 0.024742 | 0.018144 | 0.03299 | 0.03299 | 0 | 0 | 0 | 0 | 0 | 0.039809 | 0.317622 | 4,307 | 135 | 79 | 31.903704 | 0.785301 | 0.070583 | 0 | 0 | 0 | 0 | 0.128644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.102941 | null | null | 0.014706 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f29a4a16372f3427a26db56be3bf0908d8eb335 | 7,210 | py | Python | sdk/python/pulumi_aws_native/panorama/get_application_instance.py | pulumi/pulumi-aws-native | 1ae4a4d9c2256b2a79ca536f8d8497b28d10e4c3 | [
"Apache-2.0"
] | 29 | 2021-09-30T19:32:07.000Z | 2022-03-22T21:06:08.000Z | sdk/python/pulumi_aws_native/panorama/get_application_instance.py | pulumi/pulumi-aws-native | 1ae4a4d9c2256b2a79ca536f8d8497b28d10e4c3 | [
"Apache-2.0"
] | 232 | 2021-09-30T19:26:26.000Z | 2022-03-31T23:22:06.000Z | sdk/python/pulumi_aws_native/panorama/get_application_instance.py | pulumi/pulumi-aws-native | 1ae4a4d9c2256b2a79ca536f8d8497b28d10e4c3 | [
"Apache-2.0"
] | 4 | 2021-11-10T19:42:01.000Z | 2022-02-05T10:15:49.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._enums import *
__all__ = [
'GetApplicationInstanceResult',
'AwaitableGetApplicationInstanceResult',
'get_application_instance',
'get_application_instance_output',
]
@pulumi.output_type
class GetApplicationInstanceResult:
def __init__(__self__, application_instance_id=None, arn=None, created_time=None, default_runtime_context_device_name=None, device_id=None, health_status=None, last_updated_time=None, status=None, status_description=None, status_filter=None, tags=None):
if application_instance_id and not isinstance(application_instance_id, str):
raise TypeError("Expected argument 'application_instance_id' to be a str")
pulumi.set(__self__, "application_instance_id", application_instance_id)
if arn and not isinstance(arn, str):
raise TypeError("Expected argument 'arn' to be a str")
pulumi.set(__self__, "arn", arn)
if created_time and not isinstance(created_time, int):
raise TypeError("Expected argument 'created_time' to be a int")
pulumi.set(__self__, "created_time", created_time)
if default_runtime_context_device_name and not isinstance(default_runtime_context_device_name, str):
raise TypeError("Expected argument 'default_runtime_context_device_name' to be a str")
pulumi.set(__self__, "default_runtime_context_device_name", default_runtime_context_device_name)
if device_id and not isinstance(device_id, str):
raise TypeError("Expected argument 'device_id' to be a str")
pulumi.set(__self__, "device_id", device_id)
if health_status and not isinstance(health_status, str):
raise TypeError("Expected argument 'health_status' to be a str")
pulumi.set(__self__, "health_status", health_status)
if last_updated_time and not isinstance(last_updated_time, int):
raise TypeError("Expected argument 'last_updated_time' to be a int")
pulumi.set(__self__, "last_updated_time", last_updated_time)
if status and not isinstance(status, str):
raise TypeError("Expected argument 'status' to be a str")
pulumi.set(__self__, "status", status)
if status_description and not isinstance(status_description, str):
raise TypeError("Expected argument 'status_description' to be a str")
pulumi.set(__self__, "status_description", status_description)
if status_filter and not isinstance(status_filter, str):
raise TypeError("Expected argument 'status_filter' to be a str")
pulumi.set(__self__, "status_filter", status_filter)
if tags and not isinstance(tags, list):
raise TypeError("Expected argument 'tags' to be a list")
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="applicationInstanceId")
def application_instance_id(self) -> Optional[str]:
return pulumi.get(self, "application_instance_id")
@property
@pulumi.getter
def arn(self) -> Optional[str]:
return pulumi.get(self, "arn")
@property
@pulumi.getter(name="createdTime")
def created_time(self) -> Optional[int]:
return pulumi.get(self, "created_time")
@property
@pulumi.getter(name="defaultRuntimeContextDeviceName")
def default_runtime_context_device_name(self) -> Optional[str]:
return pulumi.get(self, "default_runtime_context_device_name")
@property
@pulumi.getter(name="deviceId")
def device_id(self) -> Optional[str]:
return pulumi.get(self, "device_id")
@property
@pulumi.getter(name="healthStatus")
def health_status(self) -> Optional['ApplicationInstanceHealthStatus']:
return pulumi.get(self, "health_status")
@property
@pulumi.getter(name="lastUpdatedTime")
def last_updated_time(self) -> Optional[int]:
return pulumi.get(self, "last_updated_time")
@property
@pulumi.getter
def status(self) -> Optional['ApplicationInstanceStatus']:
return pulumi.get(self, "status")
@property
@pulumi.getter(name="statusDescription")
def status_description(self) -> Optional[str]:
return pulumi.get(self, "status_description")
@property
@pulumi.getter(name="statusFilter")
def status_filter(self) -> Optional['ApplicationInstanceStatusFilter']:
return pulumi.get(self, "status_filter")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence['outputs.ApplicationInstanceTag']]:
return pulumi.get(self, "tags")
class AwaitableGetApplicationInstanceResult(GetApplicationInstanceResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetApplicationInstanceResult(
application_instance_id=self.application_instance_id,
arn=self.arn,
created_time=self.created_time,
default_runtime_context_device_name=self.default_runtime_context_device_name,
device_id=self.device_id,
health_status=self.health_status,
last_updated_time=self.last_updated_time,
status=self.status,
status_description=self.status_description,
status_filter=self.status_filter,
tags=self.tags)
def get_application_instance(application_instance_id: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetApplicationInstanceResult:
"""
Schema for ApplicationInstance CloudFormation Resource
"""
__args__ = dict()
__args__['applicationInstanceId'] = application_instance_id
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('aws-native:panorama:getApplicationInstance', __args__, opts=opts, typ=GetApplicationInstanceResult).value
return AwaitableGetApplicationInstanceResult(
application_instance_id=__ret__.application_instance_id,
arn=__ret__.arn,
created_time=__ret__.created_time,
default_runtime_context_device_name=__ret__.default_runtime_context_device_name,
device_id=__ret__.device_id,
health_status=__ret__.health_status,
last_updated_time=__ret__.last_updated_time,
status=__ret__.status,
status_description=__ret__.status_description,
status_filter=__ret__.status_filter,
tags=__ret__.tags)
@_utilities.lift_output_func(get_application_instance)
def get_application_instance_output(application_instance_id: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetApplicationInstanceResult]:
"""
Schema for ApplicationInstance CloudFormation Resource
"""
...
| 43.433735 | 257 | 0.710402 | 818 | 7,210 | 5.892421 | 0.154034 | 0.078838 | 0.065353 | 0.06722 | 0.338174 | 0.237552 | 0.155187 | 0.074274 | 0 | 0 | 0 | 0.000173 | 0.198058 | 7,210 | 165 | 258 | 43.69697 | 0.833449 | 0.042718 | 0 | 0.104478 | 1 | 0 | 0.180402 | 0.077169 | 0 | 0 | 0 | 0 | 0 | 1 | 0.11194 | false | 0 | 0.052239 | 0.08209 | 0.276119 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f3657c2007cb8e2fe9ba1e91d9490a327ac7d62 | 3,624 | py | Python | libc/kernel/tools/update_all.py | Keneral/abionic | 0f441ff93f011ed0069339e16f59e572134a5f3b | [
"Unlicense"
] | null | null | null | libc/kernel/tools/update_all.py | Keneral/abionic | 0f441ff93f011ed0069339e16f59e572134a5f3b | [
"Unlicense"
] | null | null | null | libc/kernel/tools/update_all.py | Keneral/abionic | 0f441ff93f011ed0069339e16f59e572134a5f3b | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python
#
import sys, cpp, kernel, glob, os, re, getopt, clean_header, subprocess
from defaults import *
from utils import *
def usage():
print """\
usage: %(progname)s [kernel-original-path] [kernel-modified-path]
this program is used to update all the auto-generated clean headers
used by the Bionic C library. it assumes the following:
- a set of source kernel headers is located in
'external/kernel-headers/original', relative to the current
android tree
- a set of manually modified kernel header files located in
'external/kernel-headers/modified', relative to the current
android tree
- the clean headers will be placed in 'bionic/libc/kernel/arch-<arch>/asm',
'bionic/libc/kernel/common', etc..
""" % { "progname" : os.path.basename(sys.argv[0]) }
sys.exit(0)
try:
optlist, args = getopt.getopt(sys.argv[1:], '')
except:
# unrecognized option
sys.stderr.write("error: unrecognized option\n")
usage()
if len(optlist) > 0 or len(args) > 2:
usage()
modified_dir = get_kernel_headers_modified_dir()
if len(args) == 1 or len(args) == 2:
original_dir = args[0]
if not os.path.isdir(original_dir):
panic("Not a directory: %s\n" % original_dir)
if len(args) == 2:
modified_dir = args[1]
if not os.path.isdir(modified_dir):
panic("Not a directory: %s\n" % modified_dir)
else:
original_dir = get_kernel_headers_original_dir()
if not os.path.isdir(original_dir):
panic("Missing directory, please specify one through command-line: %s\n" % original_dir)
if not os.path.isdir(modified_dir):
modified_dir = None
# Find all source files in 'original'.
sources = dict()
original_dir = os.path.normpath(original_dir)
original_dir_len = len(original_dir) + 1
for root, _, files in os.walk(original_dir):
for file in files:
_, ext = os.path.splitext(file)
if ext == ".h":
rel_path = os.path.normpath(os.path.join(root, file))
rel_path = rel_path[original_dir_len:]
# Check to see if there is a modified header to use instead.
if modified_dir and os.path.exists(os.path.join(modified_dir, rel_path)):
sources[rel_path] = False
else:
sources[rel_path] = True
b = BatchFileUpdater()
kernel_dir = get_kernel_dir()
for arch in kernel_archs:
b.readDir(os.path.join(kernel_dir, "arch-%s" % arch))
b.readDir(os.path.join(kernel_dir, "common"))
oldlen = 120
android_root_len = len(get_android_root()) + 1
for rel_path in sorted(sources):
if sources[rel_path]:
src_dir = original_dir
src_str = "<original>/"
else:
src_dir = modified_dir
src_str = "<modified>/"
dst_path, newdata = clean_header.cleanupFile(kernel_dir, src_dir, rel_path)
if not dst_path:
continue
dst_path = os.path.join(kernel_dir, dst_path)
b.readFile(dst_path)
r = b.editFile(dst_path, newdata)
if r == 0:
state = "unchanged"
elif r == 1:
state = "edited"
else:
state = "added"
# dst_path is guaranteed to include android root.
rel_dst_path = dst_path[android_root_len:]
str = "cleaning: %-*s -> %-*s (%s)" % (35, src_str + rel_path, 35, rel_dst_path, state)
if sys.stdout.isatty():
print "%-*s" % (oldlen, str),
if (r == 0):
print "\r",
else:
print "\n",
oldlen = 0
else:
print str
oldlen = len(str)
print "%-*s" % (oldlen, "Done!")
b.updateGitFiles()
sys.exit(0)
| 29.463415 | 96 | 0.631347 | 517 | 3,624 | 4.274662 | 0.290135 | 0.038009 | 0.022624 | 0.01991 | 0.174208 | 0.131674 | 0.10362 | 0.028959 | 0 | 0 | 0 | 0.008788 | 0.246413 | 3,624 | 122 | 97 | 29.704918 | 0.800439 | 0.050773 | 0 | 0.170213 | 0 | 0.010638 | 0.25364 | 0.052126 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.031915 | null | null | 0.06383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f3776d641ecf1c2ca820e1d8531b10c65ba6e21 | 1,555 | py | Python | convert_pfm_json.py | biolib/deepclip | 06e0a3c431db76745b6674afabc4d171f19b3eb0 | [
"MIT"
] | 7 | 2019-07-23T10:20:11.000Z | 2022-03-14T14:46:13.000Z | convert_pfm_json.py | biolib/deepclip | 06e0a3c431db76745b6674afabc4d171f19b3eb0 | [
"MIT"
] | 10 | 2019-09-05T22:45:04.000Z | 2022-03-21T08:40:49.000Z | convert_pfm_json.py | biolib/deepclip | 06e0a3c431db76745b6674afabc4d171f19b3eb0 | [
"MIT"
] | 5 | 2019-07-23T10:20:46.000Z | 2021-11-14T07:18:05.000Z | #!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
import argparse
import json
parser = argparse.ArgumentParser()
parser.add_argument("-seqs",
required=True,
type=str,
default=None,
help="File containing sequences")
args = parser.parse_args()
dicts = []
with open(args.seqs,'r') as inf:
for line in inf:
dicts.append(eval(line))
try:
assert len(dicts) == 5
except AssertionError:
print "AssertionError: assert len(dicts) == 5"
print "len(dicts) =",str(len(dicts))
print str(dicts)
exit(1)
sizes = [4,5,6,7,8]
tot_info = []
for i in range(len(dicts)):
temp_pfm = np.array(dicts[i]['PFM_'+str(sizes[i])+'_0']).T
#print temp_pfm.shape
temp_pfm1 = temp_pfm + 0.000000001
tot_info.append([np.sum(np.log2(4) + np.sum(temp_pfm * np.log2(temp_pfm1), axis=1, keepdims = True))/len(temp_pfm), sizes[i]])
#print tot_info
tot_info.sort()
tot_info.reverse()
#print tot_info
new_json = {}
for mp in range(len(tot_info)):
for qq in range(len(dicts)):
if 'PFM_'+str(tot_info[mp][1])+'_0' in dicts[qq]:
print 'PFM_'+str(tot_info[mp][1])+'_0'
new_json['PFM_'+str(tot_info[mp][1])+'_0'] = dicts[qq]['PFM_'+str(tot_info[mp][1])+'_0']
with open('PFMs.json', 'w') as outfile:
json.dump(new_json, outfile)
with open('PFM_order.txt', 'w') as txt:
print >> txt, "name, info_per_bp"
for i in tot_info:
print >> txt, "PFM_" +str(i[1])+ "_0,", "{0:.3f}".format(i[0])
| 23.923077 | 130 | 0.607074 | 246 | 1,555 | 3.678862 | 0.353659 | 0.092818 | 0.039779 | 0.057459 | 0.075138 | 0.075138 | 0.075138 | 0 | 0 | 0 | 0 | 0.031173 | 0.216077 | 1,555 | 64 | 131 | 24.296875 | 0.711239 | 0.04373 | 0 | 0 | 0 | 0 | 0.112011 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0 | null | null | 0 | 0.095238 | null | null | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f3df9c6bc7653f174e6f2a144e946916c638b4e | 1,467 | py | Python | rewx/components.py | akrk1986/re-wx | 2f50d1c0afe77313548847b279327d7041623721 | [
"MIT"
] | null | null | null | rewx/components.py | akrk1986/re-wx | 2f50d1c0afe77313548847b279327d7041623721 | [
"MIT"
] | null | null | null | rewx/components.py | akrk1986/re-wx | 2f50d1c0afe77313548847b279327d7041623721 | [
"MIT"
] | null | null | null | """
All components and wrappers currently
supported by rewx.
"""
import wx
import wx.adv
import wx.lib.scrolledpanel
import wx.media
ActivityIndicator = wx.ActivityIndicator
Button = wx.Button
BitmapButton = wx.BitmapButton
CalendarCtrl = wx.adv.CalendarCtrl
CheckBox = wx.CheckBox
# CollapsiblePane = wx.CollapsiblePane
ComboBox = wx.ComboBox
Dropdown = ComboBox
Frame = wx.Frame
Gauge = wx.Gauge
ListBox = wx.ListBox
ListCtrl = wx.ListCtrl
Panel = wx.Panel
RadioBox = wx.RadioBox
RadioButton = wx.RadioButton
ScrolledPanel = wx.lib.scrolledpanel.ScrolledPanel
Slider = wx.Slider
SpinCtrl = wx.SpinCtrl
SpinCtrlDouble = wx.SpinCtrlDouble
StaticBitmap = wx.StaticBitmap
StaticBox = wx.StaticBox
StaticLine = wx.StaticLine
StaticText = wx.StaticText
TextCtrl = wx.TextCtrl
ToggleButton = wx.ToggleButton
MediaCtrl = wx.media.MediaCtrl
class Grid(wx.Panel):
"""
Wrapper type for creating a panel
with a Grid Sizer
"""
pass
class Block(wx.Panel):
"""
Wrapper type for creating a panel
with a BoxSizer
"""
class TextArea(wx.TextCtrl):
"""
    TextCtrl Wrapper with its style pre-baked
to be TE_MULTILINE
"""
pass
class SVG(wx.StaticBitmap):
"""
Wrapper for converting an SVG to a scaled StaticBitmap
"""
pass
class SVGButton(wx.BitmapButton):
"""
Wrapper for creating an SVG backed BitmapButton.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs) | 20.957143 | 58 | 0.720518 | 180 | 1,467 | 5.822222 | 0.405556 | 0.030534 | 0.034351 | 0.034351 | 0.076336 | 0.076336 | 0.076336 | 0.076336 | 0.076336 | 0.076336 | 0 | 0 | 0.186094 | 1,467 | 70 | 59 | 20.957143 | 0.877722 | 0.246762 | 0 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0.076923 | 0.102564 | 0 | 0.25641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7f4786e590d0a9418a3d92cb5ddc03d945909da4 | 6,840 | py | Python | script/proxycheck/check.py | kenshinx/rps | 3d141056f3512598525430ecc9ddb1ee79b887cf | [
"MIT"
] | 6 | 2017-06-20T09:44:03.000Z | 2018-07-28T06:44:10.000Z | script/proxycheck/check.py | kenshinx/rps | 3d141056f3512598525430ecc9ddb1ee79b887cf | [
"MIT"
] | null | null | null | script/proxycheck/check.py | kenshinx/rps | 3d141056f3512598525430ecc9ddb1ee79b887cf | [
"MIT"
] | 3 | 2017-09-23T16:48:18.000Z | 2018-05-26T19:12:12.000Z | #!/usr/bin/env python
import re
import sys
import time
import logging
import asyncore
import optparse
from datetime import datetime
import schedule
from pymongo import MongoReplicaSetClient, MongoClient
from async_s5 import AsyncSocks5Client
from async_http import AsyncHTTPClient
from async_http_tunnel import AsyncHTTPTunnelClient
from conf import (MONGO_USERNAME, MONGO_PASSWORD,
MONGO_HOST, MONGO_REPLICA_SET,
LOG_LEVEL, LOG_FORMAT)
class BaseCheck(object):
HTTP_PATTERN = re.compile("^HTTP\/1\.\d ([0-9]{3}) .*")
HTTP_HOST = "www.baidu.com"
HTTP_PORT = 80
HTTP_PAYLOAD = "domain=.baidu.com"
WHOIS_HOST = "133.130.126.119"
WHOIS_PORT = 43
WHOIS_PAYLOAD = "google.com"
def __init__(self, db, client, payload_proto, concurrency):
self.dbname = db
self.client =client
self.payload_proto = payload_proto
self.concurrency = concurrency
self.conn = self.createConn()
@property
def db(self):
if self.dbname == "rps":
return self.conn.rps
elif self.dbname == "rps_test":
return self.conn.rps_test
else:
raise Exception("Invalid DataBase <%s>" %self.dbname)
def createConn(self):
addr = "mongodb://%s:%s@%s" %(MONGO_USERNAME, MONGO_PASSWORD, MONGO_HOST)
conn = MongoClient(addr, replicaSet = MONGO_REPLICA_SET)
return conn
def getProxyList(self):
        raise NotImplementedError()
def updateProxy(self):
        raise NotImplementedError()
def checkHTTPResp(self, resp):
if not resp or resp == " ":
return False
try:
code = self.HTTP_PATTERN.findall(resp)[0]
except Exception, e:
return False
if code != "200":
return False
return self.HTTP_PAYLOAD in resp
def checkWhoisResp(self, resp, proxy_ip):
if not resp or resp == " ":
return False
return (resp == "%s:%s:%s" %(self.WHOIS_PAYLOAD, proxy_ip, self.WHOIS_PAYLOAD))
def check(self, client, host, port):
resp = client.buffer.getvalue()
client.buffer.close()
if self.payload_proto == "http":
is_ok = self.checkHTTPResp(resp)
elif self.payload_proto == "whois":
is_ok = self.checkWhoisResp(resp, host)
else:
raise Exception("unsupport payload protocol '%s'" %self.payload_proto)
return is_ok
def afterLoop(self, clients):
for c in clients:
if self.check(c, c.host, c.port):
print "%s:%d is ok" %(c.host, c.port)
self.updateProxy(c.host, c.port, True)
else:
print "%s:%d is bad" %(c.host, c.port)
self.updateProxy(c.host, c.port, False)
def run(self):
clients = []
socket_map = {}
for r in self.getProxyList():
try:
if self.payload_proto == "whois":
clients.append(self.client(r["host"], r["port"],
self.WHOIS_HOST, self.WHOIS_PORT, "whois", socket_map))
else:
clients.append(self.client(r["host"], r["port"],
self.HTTP_HOST, self.HTTP_PORT, "http", socket_map))
except Exception, e:
continue
if len(clients) >= self.concurrency:
asyncore.loop(timeout=5, use_poll=True, map=socket_map)
self.afterLoop(clients)
clients = []
socket_map = {}
if len(clients) < self.concurrency:
asyncore.loop(timeout=5, use_poll=True, map=socket_map)
self.afterLoop(clients)
class HTTPProxyCheck(BaseCheck):
def __init__(self, db, payload_proto, concurrency):
BaseCheck.__init__(self, db, AsyncHTTPClient, "http", concurrency)
def getProxyList(self):
for r in self.db.http.find():
yield r
def updateProxy(self, host, port, enable):
self.db.http.update({"host":host, "port":port},
{"$set":{"enable":int(enable), "last_check":datetime.now()}})
class HTTPTunnelProxyCheck(BaseCheck):
def __init__(self, db, payload_proto, concurrency):
BaseCheck.__init__(self, db, AsyncHTTPTunnelClient, payload_proto, concurrency)
def getProxyList(self):
for r in self.db.http_tunnel.find({"source":{"$in":["nd", "nf"]}}):
yield r
def updateProxy(self, host, port, enable):
self.db.http_tunnel.update({"host":host, "port":port},
{"$set":{"enable":int(enable), "last_check":datetime.now()}})
class Socks5ProxyCheck(BaseCheck):
def __init__(self, db, payload_proto, concurrency):
BaseCheck.__init__(self, db, AsyncSocks5Client, payload_proto, concurrency)
def getProxyList(self):
for r in self.db.socks5.find({"source":{"$in":["nd", "nf"]}}):
yield r
def updateProxy(self, host, port, enable):
self.db.socks5.update({"host":host, "port":port},
{"$set":{"enable":int(enable), "last_check":datetime.now()}})
def usage():
return "Usage: %s <http|http_tunnel|socks5> [options]" %(sys.argv[0])
if __name__ == "__main__":
parser = optparse.OptionParser(usage= usage())
parser.add_option("-d", "--database", action="store", dest="db", type="string",
default="rps_test", help="[rps|rps_test]")
parser.add_option("-c", "--concurrency", action="store", dest="concurrency", type="int",
default=1000)
parser.add_option("-e", "--every", action="store", dest="every", type="int", default=30,
help="run check every %default minutes")
parser.add_option("-p", "--payload_proto", action="store", dest="payload_proto", type="string",
        default="whois", help="payload protocol; defaults to whois for http_tunnel \
and socks5 proxies, http for http proxy")
options, args = parser.parse_args()
logging.basicConfig(level=LOG_LEVEL, format=LOG_FORMAT)
if len(sys.argv) < 2:
print usage()
sys.exit(1)
protocol = sys.argv[1]
if protocol == "http_tunnel":
checker = HTTPTunnelProxyCheck(options.db, options.payload_proto, options.concurrency)
elif protocol == "http":
checker = HTTPProxyCheck(options.db, options.payload_proto, options.concurrency)
elif protocol == "socks5":
checker = Socks5ProxyCheck(options.db, options.payload_proto, options.concurrency)
else:
print usage()
sys.exit(1)
schedule.every(options.every).minutes.do(checker.run).run()
while True:
schedule.run_pending()
time.sleep(1)
| 31.232877 | 100 | 0.594591 | 793 | 6,840 | 4.981084 | 0.220681 | 0.051646 | 0.017722 | 0.012658 | 0.343291 | 0.334177 | 0.316456 | 0.291646 | 0.291646 | 0.243544 | 0 | 0.009705 | 0.276901 | 6,840 | 218 | 101 | 31.376147 | 0.78892 | 0.002924 | 0 | 0.299363 | 0 | 0 | 0.088869 | 0.003666 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.012739 | 0.082803 | null | null | 0.025478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f495918acf00767d9fbeb784d409f5760173565 | 19,291 | py | Python | attic/library/kepler.py | vdods/heisenberg | 66921b80c2d035e5f10e0af4d0cec21d5470dbd1 | [
"MIT"
] | 3 | 2018-01-04T07:40:14.000Z | 2021-02-19T15:27:10.000Z | attic/library/kepler.py | vdods/heisenberg | 66921b80c2d035e5f10e0af4d0cec21d5470dbd1 | [
"MIT"
] | null | null | null | attic/library/kepler.py | vdods/heisenberg | 66921b80c2d035e5f10e0af4d0cec21d5470dbd1 | [
"MIT"
] | null | null | null | # NOTE: This changes a/b to produce a floating point approximation of that
# ratio, not the integer quotient. For integer quotient, use a//b instead.
from __future__ import division
import fourier_parameterization
import numpy as np
import multiindex
import symbolic
import sympy
import tensor
# TODO: look at and use https://en.wikipedia.org/wiki/Elliptic_orbit
class KeplerProblemSymbolicContext:
def __init__ (self, configuration_space_dimension, use_constraint=True):
"""
Use sympy to define various functions and automatically compute their derivatives, crunching
them down to (probably efficient) Python lambda functions.
"""
assert configuration_space_dimension > 0
self.X = X = configuration_space_dimension
# Indicates if the constraint should be used or not.
self.use_constraint = use_constraint
# q is position
q = symbolic.tensor('q', (X,))
# v is velocity
v = symbolic.tensor('v', (X,))
# g is gravitational constant
g = symbolic.variable('g')
# lm is the lagrange multiplier
lm = symbolic.variable('lm')
# qv is the q and v variables in one array
self.qv = qv = np.array(list(q) + list(v))
# P is all variables
self.P = P = np.array(list(q) + list(v) + [g, lm])
# U is potential energy
# U = -g / sympy.sqrt(np.sum(np.square(q)))
U = -1 / sympy.sqrt(np.sum(np.square(q)))
# K is kinetic energy -- NOTE: There is unit mass, so the momentum is the same as the velocity.
# This fact is used in defining the Hamiltonian vector field.
K = np.sum(np.square(v)) / 2
# H is total energy (Hamiltonian)
H = K + U
# L is the difference in energy (Lagrangian)
L = K - U
# H_0 is the constant value which defines the constraint (that the Hamiltonian must equal that at all times)
self.H_0 = H_0 = -1.8
# This is the constraint. The extra division is used to act as a metric on the lagrange multiplier coordinate.
C = (H - H_0)**2 / 2 / 100
# Construct the Hamiltonian vector field of H.
dH = symbolic.D(H, qv)
# Construct the symplectic form on phase space.
omega = np.zeros((2*X,2*X), dtype=np.int)
omega[0:X,X:2*X] = np.eye(X,X)
omega[X:2*X,0:X] = -np.eye(X,X)
# Symplectic gradient of H, aka the Hamiltonian vector field.
X_H = tensor.contract('ij,j', omega, dH, dtype=object)
# DL = symbolic.D(L, P)
# # DH = symbolic.D(H, P)
# DC = symbolic.D(C, P)
# This is the integrand of the action functional
Lambda_integrand = L #+ lm*C
# This is the integrand of the first variation of the action
DLambda_integrand = symbolic.D(Lambda_integrand, P)
# This is the integrand for the constraint functional
C_integrand = C
# Solving the constrained optimization problem by minimizing the norm squared of DLambda.
# DDLambda_integrand = symbolic.D(DLambda_integrand, P)
# Obj_integrand = (np.sum(np.square(DLambda_integrand))/2)#.simplify()
# DObj_integrand = symbolic.D(Obj_integrand, P)
assert not use_constraint
Obj_integrand = Lambda_integrand
DObj_integrand = DLambda_integrand
# print 'Obj_integrand =', Obj_integrand
# print ''
# print 'DObj_integrand =', DObj_integrand
# print ''
replacement_d = {'dtype=object':'dtype=float'}
# self.L = symbolic.lambdify(L, P, replacement_d=replacement_d)
# self.DL = symbolic.lambdify(DL, P, replacement_d=replacement_d)
# self.DDL = symbolic.lambdify(DDL, P, replacement_d=replacement_d)
# self.H = symbolic.lambdify(H, P, replacement_d=replacement_d)
# self.DH = symbolic.lambdify(DH, P, replacement_d=replacement_d)
self.H = symbolic.lambdify(H, P, replacement_d=replacement_d)
self.X_H = symbolic.lambdify(X_H, qv, replacement_d=replacement_d)
self.Lambda_integrand = symbolic.lambdify(Lambda_integrand, P, replacement_d=replacement_d)
self.DLambda_integrand = symbolic.lambdify(DLambda_integrand, P, replacement_d=replacement_d)
self.Obj_integrand = symbolic.lambdify(Obj_integrand, P, replacement_d=replacement_d)
self.DObj_integrand = symbolic.lambdify(DObj_integrand, P, replacement_d=replacement_d)
self.C_integrand = symbolic.lambdify(C_integrand, P, replacement_d=replacement_d)
class KeplerProblemContext(KeplerProblemSymbolicContext):
def __init__ (self, closed_time_interval, frequencies, use_constraint=True):
# Call superclass constructor.
KeplerProblemSymbolicContext.__init__(self, 2, use_constraint=use_constraint)
X = self.X
# Number of derivatives in phase space (2, i.e. position and velocity)
self.D = D = 2
self.fourier_curve_parameterization = fourier_parameterization.Planar(frequencies, np.array(range(D)), closed_time_interval)
# position_velocity_shape is (X,D): X configuration coordinates by D orders of derivative (position, velocity)
self.position_velocity_shape = position_velocity_shape = (X,D)
self.fourier_coefficients_shape = fourier_coefficients_shape = self.fourier_curve_parameterization.fourier_coefficients_shape #(X,F,2)
self.gravitational_constant_shape = gravitational_constant_shape = (1,)
self.lagrange_multipliers_shape = lagrange_multipliers_shape = (1,)
self.time_domain_parameter_count = time_domain_parameter_count = multiindex.prod(position_velocity_shape) + multiindex.prod(gravitational_constant_shape) + multiindex.prod(lagrange_multipliers_shape)
self.frequency_domain_parameter_count = frequency_domain_parameter_count = multiindex.prod(fourier_coefficients_shape) + multiindex.prod(gravitational_constant_shape) + multiindex.prod(lagrange_multipliers_shape)
def make_frequency_domain_parameters_and_views (self):
frequency_domain_parameters = np.zeros((self.frequency_domain_parameter_count,), dtype=float)
# Define names for the views. Note that when assigning into the views, the
# [:] notation is necessary, otherwise fd_fc, fd_g and fd_lm would be
# rebound to different references altogether and would no longer be views
# into frequency_domain_parameters.
fd_fc,fd_g,fd_lm = self.frequency_domain_views(frequency_domain_parameters)
return frequency_domain_parameters,fd_fc,fd_g,fd_lm
def time_domain_views (self, time_domain_parameters):
"""
Returns a tuple (td_qv,td_g,td_lm), where each of the elements are views into:
td_qv[x,d] : The position and velocity tensor. The x index indexes the configuration space (i.e. x,y,z axis)
while the d index indexes the order of derivative (i.e. 0 is position, 1 is velocity).
td_g[:] : The scalar gravitational constant.
td_lm[:] : The Lagrange multiplier.
Note that slice notation must be used to assign to these views, otherwise a new, unrelated local variable will be declared.
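Example (hypothetical buffer 'buf' with shape (time_domain_parameter_count,)):
    td_qv, td_g, td_lm = self.time_domain_views(buf)
    td_qv[:] = new_values   # writes through the view into buf
    td_qv = new_values      # rebinds the local name; buf is left unchanged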
"""
td_qv_count = multiindex.prod(self.position_velocity_shape)
td_g_count = multiindex.prod(self.gravitational_constant_shape)
td_lm_count = multiindex.prod(self.lagrange_multipliers_shape)
assert time_domain_parameters.shape == (self.time_domain_parameter_count,)
td_qv = time_domain_parameters[:td_qv_count].reshape(self.position_velocity_shape, order='F')
td_g = time_domain_parameters[td_qv_count:td_qv_count+td_g_count].reshape(self.gravitational_constant_shape, order='F')
td_lm = time_domain_parameters[td_qv_count+td_g_count:].reshape(self.lagrange_multipliers_shape, order='F')
return td_qv,td_g,td_lm
def frequency_domain_views (self, frequency_domain_parameters):
"""
Returns a tuple (fd_fc,fd_g,fd_lm), where each of the elements are views into:
fd_fc[x,f,c] : The Fourier coefficients of the curve. The x index indexes the configuration space (i.e. x,y,z axis),
the f index denotes the frequency, while c indexes which of cos or sin the coefficient is for (0 for cos, 1 for sin).
fd_g[:] : The scalar gravitational constant.
fd_lm[:] : The Lagrange multiplier.
Note that slice notation must be used to assign to these views, otherwise a new, unrelated local variable will be declared.
"""
fd_fc_count = multiindex.prod(self.fourier_coefficients_shape)
fd_g_count = multiindex.prod(self.gravitational_constant_shape)
fd_lm_count = multiindex.prod(self.lagrange_multipliers_shape)
fd_fc = frequency_domain_parameters[:fd_fc_count].reshape(self.fourier_coefficients_shape, order='F')
fd_g = frequency_domain_parameters[fd_fc_count:fd_fc_count+fd_g_count].reshape(self.gravitational_constant_shape, order='F')
fd_lm = frequency_domain_parameters[fd_fc_count+fd_g_count:].reshape(self.lagrange_multipliers_shape, order='F')
return fd_fc,fd_g,fd_lm
def curve_at_t (self, t, fc):
return np.einsum('dfxc,fc->xd', self.fourier_curve_parameterization.fourier_tensor[t,:,:,:], fc)
def curve (self, fc):
return np.einsum('tdfxc,fc->txd', self.fourier_curve_parameterization.fourier_tensor, fc)
def time_domain_variation_pullback_at_t (self, time_domain_parameter_variation, t):
"""Uses the Fourier-transform-parameterization of the curve to pull back a qv-g-lm vector to be a fc-g-lm vector."""
assert time_domain_parameter_variation.shape == (self.time_domain_parameter_count,)
td_qv,td_g,td_lm = self.time_domain_views(time_domain_parameter_variation)
retval = np.ndarray((self.frequency_domain_parameter_count,), dtype=float)
fd_fc,fd_g,fd_lm = self.frequency_domain_views(retval)
fd_fc[:] = np.einsum('xd,dfxc->fc', td_qv, self.fourier_curve_parameterization.fourier_tensor[t,:,:,:,:])
fd_g[:] = td_g
fd_lm[:] = td_lm
return retval
def time_domain_parameters_at_t (self, t, frequency_domain_parameters):
retval = np.ndarray((self.time_domain_parameter_count,), dtype=float)
td_qv,td_g,td_lm = self.time_domain_views(retval)
fd_fc,fd_g,fd_lm = self.frequency_domain_views(frequency_domain_parameters)
td_qv[:] = self.curve_at_t(t, fd_fc)
td_g[:] = fd_g
td_lm[:] = fd_lm
return retval
def Lambda (self, frequency_domain_parameters):
return np.dot(
self.fourier_curve_parameterization.half_open_time_interval_deltas,
np.fromiter((
self.Lambda_integrand(self.time_domain_parameters_at_t(t, frequency_domain_parameters))
for t in range(self.fourier_curve_parameterization.T)
), float, count=self.fourier_curve_parameterization.T)
)
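# Note: the dot product in Lambda above is a left Riemann sum over the time
# grid, Lambda ~ sum_t deltas[t] * Lambda_integrand(P(t)), approximating the
# action integral over one period of the curve.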
def DLambda_at_time (self, t, frequency_domain_parameters):
return self.time_domain_variation_pullback_at_t(self.DLambda_integrand(self.time_domain_parameters_at_t(t, frequency_domain_parameters)), t)
def DLambda (self, frequency_domain_parameters, batch_t_v=None):
if batch_t_v is None:
batch_t_v = range(self.fourier_curve_parameterization.T)
batch_size = self.fourier_curve_parameterization.T
else:
batch_size = len(batch_t_v)
return sum(self.fourier_curve_parameterization.half_open_time_interval_deltas[t]*self.DLambda_at_time(t,frequency_domain_parameters) for t in batch_t_v)
def Obj (self, frequency_domain_parameters):
return np.dot(
self.fourier_curve_parameterization.half_open_time_interval_deltas,
np.fromiter((
self.Obj_integrand(self.time_domain_parameters_at_t(t, frequency_domain_parameters))
for t in range(self.fourier_curve_parameterization.T)
), float, count=self.fourier_curve_parameterization.T)
)
def DObj_at_time (self, t, frequency_domain_parameters):
return self.time_domain_variation_pullback_at_t(self.DObj_integrand(self.time_domain_parameters_at_t(t, frequency_domain_parameters)), t)
def DObj (self, frequency_domain_parameters, batch_t_v=None):
if batch_t_v is None:
batch_t_v = range(self.fourier_curve_parameterization.T)
batch_size = self.fourier_curve_parameterization.T
else:
batch_size = len(batch_t_v)
return sum(self.fourier_curve_parameterization.half_open_time_interval_deltas[t]*self.DObj_at_time(t,frequency_domain_parameters) for t in batch_t_v)
def C (self, frequency_domain_parameters):
return np.dot(
self.fourier_curve_parameterization.half_open_time_interval_deltas,
np.fromiter((
self.C_integrand(self.time_domain_parameters_at_t(t, frequency_domain_parameters))
for t in range(self.fourier_curve_parameterization.T)
), float, count=self.fourier_curve_parameterization.T)
)
if __name__ == '__main__':
import matplotlib.pyplot as plt
import polynomial
import scipy.integrate
import scipy.signal
import warnings
warnings.filterwarnings('ignore', module='matplotlib')
kpsc = KeplerProblemSymbolicContext(2, use_constraint=False)
time = np.linspace(0.0, 6.0, 20000)
qv_0 = np.array([1.5, 0.0, 0.0, 0.5])
qv = scipy.integrate.odeint(lambda qv,t:kpsc.X_H(qv), qv_0, time)
# TODO: Use distance squared, then fit a parabola near the min and solve
# for a better approximation of the min using subsample accuracy.
jet_dist_squared = np.sum(np.square(qv - qv_0), axis=1)
local_min_index_v = [i for i in range(1, len(jet_dist_squared)-2) if jet_dist_squared[i-1] > jet_dist_squared[i] and jet_dist_squared[i] < jet_dist_squared[i+1]]
print('locally minimizing jet-distances-squared:')
for local_min_index in local_min_index_v:
print(' index: {0:3}, time: {1:17}, jet_dist_squared: {2:17}'.format(local_min_index, time[local_min_index], jet_dist_squared[local_min_index]))
assert len(local_min_index_v) > 0
assert local_min_index_v[0] > 0
estimated_period_time_index = local_min_index_v[0]
assert 5 < estimated_period_time_index < len(time)-5
if True:
# Subtracting the center of the time window is necessary for the polynomial fit to be numerically stable.
time_offset = time[estimated_period_time_index]
time_window = time[estimated_period_time_index-5:estimated_period_time_index+6] - time_offset
jet_dist_squared_window = jet_dist_squared[estimated_period_time_index-5:estimated_period_time_index+6]
samples = np.vstack((time_window, jet_dist_squared_window)).T
# Fit a quadratic polynomial
coefficients = polynomial.fit(samples, 2)
quadratically_approximated = np.vectorize(polynomial.python_function_from_coefficients(coefficients))(time_window)
max_abs_approximation_error = np.max(np.abs(quadratically_approximated - jet_dist_squared_window))
print('max_abs_approximation_error for jet_dist_squared near minimum:', max_abs_approximation_error)
assert coefficients[2] > 0.0
# Solve for the critical point, adding back in the time offset that was subtracted.
estimated_period = -0.5 * coefficients[1] / coefficients[2] + time_offset
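# (Vertex of the fitted parabola: for p(t) = c0 + c1*t + c2*t**2, solving
# p'(t*) = c1 + 2*c2*t* = 0 gives t* = -c1/(2*c2), which is a minimum since
# coefficients[2] > 0 was asserted above.)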
print('estimated_period:', estimated_period)
# Save the old time for plotting.
old_time = time
# Re-run the ODE integrator using the estimated period.
time = np.linspace(0.0, estimated_period, 10001)
# qv_0 is unchanged.
qv = scipy.integrate.odeint(lambda qv,t:kpsc.X_H(qv), qv_0, time)
# Re-sample time and qv to be sparser.
time = time[::10]
assert len(time) == 1001
qv = qv[::10,:]
assert qv.shape == (1001,4)
qv = qv[:-1,:]
assert qv.shape == (1000,4)
# estimated_period = time[estimated_period_time_index]
H_v = np.apply_along_axis(kpsc.H, 1, np.hstack((qv, np.zeros((qv.shape[0],2), dtype=float))))
row_count = 2
col_count = 3
fig,axes = plt.subplots(row_count, col_count, squeeze=False, figsize=(8*col_count,8*row_count))
axis = axes[0][0]
axis.set_title('position')
axis.scatter(qv[:,0], qv[:,1], s=1)
axis.set_aspect('equal')
axis.scatter([0], [0], color='black')
axis = axes[0][1]
axis.set_title('velocity')
axis.scatter(qv[:,2], qv[:,3], s=1)
axis.set_aspect('equal')
axis.scatter([0], [0], color='black')
axis = axes[1][0]
axis.set_title('jet distance from initial condition\nestimated period: {0}'.format(estimated_period))
for local_min_index in local_min_index_v:
axis.axvline(old_time[local_min_index])
axis.plot(old_time, jet_dist_squared)
axis = axes[1][1]
axis.set_title('Hamiltonian w.r.t. time\nmin-to-max range: {0}'.format(np.max(H_v)-np.min(H_v)))
axis.plot(time[:-1], H_v)
frequencies = np.linspace(-40, 40, 40+40+1, dtype=int)
derivatives = np.array([0])
# fp = fourier_parameterization.Planar(frequencies, derivatives, time[:estimated_period_time_index+1])
# fc = np.einsum('fctx,tx->fc', fp.inverse_fourier_tensor, qv[:estimated_period_time_index,0:2])
fp = fourier_parameterization.Planar(frequencies, derivatives, time)
fc = np.einsum('fctx,tx->fc', fp.inverse_fourier_tensor, qv[:,0:2])
pickle_filename = 'kepler.pickle'
with open(pickle_filename, 'wb') as f:
import pickle
pickle.dump(
{
'frequencies':fp.frequencies,
'period':fp.period,
'fourier_coefficient_tensor':fc
},
f
)
print('wrote "{0}"'.format(pickle_filename))
reconstructed_q = np.einsum('tdfxc,fc->dtx', fp.fourier_tensor, fc)
assert reconstructed_q.shape[0] == 1
reconstructed_q = reconstructed_q[0,:,:]
max_expected_error = 1.0e-3
max_reconstruction_errors = np.max(np.abs(reconstructed_q-qv[:,0:2]), axis=0)
print('max_reconstruction_errors:', max_reconstruction_errors)
if max_expected_error is not None:
assert np.all(max_reconstruction_errors <= max_expected_error), 'at least one component of max_reconstruction_errors ({0}) exceeded max_expected_error ({1})'.format(max_reconstruction_errors, max_expected_error)
axis = axes[0][2]
axis.set_title('reconstructed position\nmax reconstruction error: {0}'.format(np.max(max_reconstruction_errors)))
axis.plot(qv[:,0], qv[:,1], lw=5, color='green', alpha=0.2)
axis.scatter(reconstructed_q[:,0], reconstructed_q[:,1], s=1)
axis.set_aspect('equal')
axis.scatter([0], [0], color='black')
axis = axes[1][2]
axis.set_title('log abs of Fourier coefficients')
axis.semilogy(fp.frequencies, np.linalg.norm(fc, axis=1), 'o')
axis.semilogy(fp.frequencies, np.linalg.norm(fc, axis=1), lw=5, alpha=0.2)
fig.tight_layout()
filename = 'kepler.test.png'
plt.savefig(filename)
print('wrote "{0}"'.format(filename))
plt.close(fig)
7f497463579627def6a6bc8b025032e2117e0a72 | 1,029 | py | Python | flaskr/tripods/tripod.py | ardtieboy/campy | dc59c62395aa45c0d125f6187a8a3b19521469d3 | ["MIT"] | null | null | null

import pantilthat
class Tripod(object):
def __init__(self):
self.horizontal = 0
self.vertical = 0
self.step_size = 5
pantilthat.pan(0)
pantilthat.tilt(0)
def left(self):
if (-80 < self.horizontal):
self.horizontal = self.horizontal - self.step_size
print('moving left ' + str(self.horizontal))
pantilthat.pan(self.horizontal)
def right(self):
if (self.horizontal < 80):
self.horizontal = self.horizontal + self.step_size
print('moving right ' + str(self.horizontal))
pantilthat.pan(self.horizontal)
def up(self):
if (-80 < self.vertical):
self.vertical = self.vertical - self.step_size
print('moving up ' + str(self.vertical))
pantilthat.tilt(self.vertical)
def down(self):
if (self.vertical < 80):
self.vertical = self.vertical + self.step_size
print('moving down ' + str(self.vertical))
pantilthat.tilt(self.vertical)
7f4a9a45f9839d4156f378a8a84cdc5c95302052 | 837 | py | Python | tests/test_api_key.py | sodrooome/bmaclient | 8d3609a63e3e2576642ecde77245ee0b02225e31 | ["MIT"] | null | null | null

import os
import unittest
from bmaclient.utils import get_api_key_from_file
DATA_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'data')
class APIKeyTest(unittest.TestCase):
def test_get_api_key_from_file(self):
path = os.path.join(DATA_DIR, 'api_key.txt')
key = get_api_key_from_file(path)
self.assertEqual(key, 'THIS_IS_API_KEY_VALUE')
def test_get_api_key_from_empty_file(self):
path = os.path.join(DATA_DIR, 'api_key_empty.txt')
key = get_api_key_from_file(path)
self.assertEqual(key, '')
def test_get_api_key_from_empty_file_strict(self):
path = os.path.join(DATA_DIR, 'api_key_empty.txt')
with self.assertRaises(ValueError):
get_api_key_from_file(path, strict=True)
if __name__ == '__main__':
unittest.main()
7f64ce20d2bc7463fbe8a47e2e961d23f53650a4 | 14,042 | py | Python | source/StimulationScreen.py | FelixTheoret/Ergocycle | 0656e6cb43b535dd2f845420ad71742b1b85e412 | ["MIT"] | 2 | 2022-03-17T23:17:56.000Z | 2022-03-23T01:08:58.000Z

"""
Created on Wed March 30 11:00 2022
@author: Frédérique Leclerc
"""
from Screen import Screen as Screen
from PyQt5 import QtWidgets
from PyQt5.QtWidgets import *
from PyQt5.QtGui import QFont, QPixmap
from PIL import Image
#from Ergocycle.source.StartWindow import StartWindow
from StartWindow import StartWindow
from TestingWindow import TestingWindow
from InstructionWindow import InstructionWindow
from Parameters import Parameters
from StimulationWindow import StimulationWindow
from MainWindowStim import MainWindowStim
from DangerPopUp import DangerPopUp
import sys
import datetime
import time
import csv
from CommandButton import CommandButton as CommandButton
# Take the code from main_sef.py and add getters and setters
#def window():
#app = QApplication(sys.argv)
#win = StartWindow()
#win.show()
#sys.exit(app.exec_())
#window()
class StimulationScreen(Screen):
def __init__(self, event_function):
super(StimulationScreen, self).__init__(event_function)
self.event_function = event_function
self.current_menu = 0
self.danger_menu = 0
#self.now = datetime.datetime.now()
### 1.1. Manages the windows that appear on the interface. ###
def manage_active_window(self, stim_parameters):
if self.window_counter == 0:
self.current_menu = StartWindow()
self.current_menu.training_button.clicked.connect(lambda : self.event_function("start_training"))
self.current_menu.test_button.clicked.connect(lambda : self.event_function("start_test"))
self.current_menu.show()
# self.connect_buttons(self.current_menu)
elif self.window_counter == -1:
self.current_menu.close()
self.current_menu = TestingWindow()
self.current_menu.increase_amp_button.clicked.connect(lambda : self.event_function("increase_amp"))
self.current_menu.increase_freq_button.clicked.connect(lambda : self.event_function("increase_frequency"))
self.current_menu.increase_imp_button.clicked.connect(lambda : self.event_function("increase_imp"))
self.current_menu.decrease_amp_button.clicked.connect(lambda : self.event_function("decrease_amp"))
self.current_menu.decrease_freq_button.clicked.connect(lambda : self.event_function("decrease_frequency"))
self.current_menu.decrease_imp_button.clicked.connect(lambda : self.event_function("decrease_imp"))
self.current_menu.back_button.clicked.connect(lambda : self.event_function("back_button_clicked"))
self.current_menu.show()
elif self.window_counter == 1:
self.current_menu.close()
self.current_menu = MainWindowStim()
self.current_menu.submit_button.clicked.connect(lambda : self.event_function("submit_button_clicked"))
self.current_menu.submit_final_button.clicked.connect(lambda : self.event_function("submit_final_button_clicked"))
self.current_menu.show()
elif self.window_counter == -2:
self.danger_menu = DangerPopUp(stim_parameters)
self.danger_menu.show()
self.danger_menu.back_to_menu_button.clicked.connect(lambda : self.back_to_menu_button_clicked())
self.danger_menu.continue_button.clicked.connect(lambda : self.continue_button_clicked(stim_parameters))
elif self.window_counter == 2:
self.current_menu.close()
self.current_menu = InstructionWindow(stim_parameters)
self.current_menu.start_button.clicked.connect(lambda : self.event_function("start_stimulations"))
self.current_menu.show()
elif self.window_counter == 3:
self.current_menu.close()
self.current_menu = StimulationWindow(stim_parameters)
self.current_menu.increase_amplitude1_button.clicked.connect(lambda : self.event_function("increase_amplitude1"))
self.current_menu.increase_amplitude2_button.clicked.connect(lambda : self.event_function("increase_amplitude2"))
self.current_menu.increase_amplitude3_button.clicked.connect(lambda : self.event_function("increase_amplitude3"))
self.current_menu.increase_amplitude4_button.clicked.connect(lambda : self.event_function("increase_amplitude4"))
self.current_menu.increase_amplitude5_button.clicked.connect(lambda : self.event_function("increase_amplitude5"))
self.current_menu.increase_amplitude6_button.clicked.connect(lambda : self.event_function("increase_amplitude6"))
self.current_menu.increase_amplitude7_button.clicked.connect(lambda : self.event_function("increase_amplitude7"))
self.current_menu.increase_amplitude8_button.clicked.connect(lambda : self.event_function("increase_amplitude8"))
self.current_menu.decrease_amplitude1_button.clicked.connect(lambda : self.event_function("decrease_amplitude1"))
self.current_menu.decrease_amplitude2_button.clicked.connect(lambda : self.event_function("decrease_amplitude2"))
self.current_menu.decrease_amplitude3_button.clicked.connect(lambda : self.event_function("decrease_amplitude3"))
self.current_menu.decrease_amplitude4_button.clicked.connect(lambda : self.event_function("decrease_amplitude4"))
self.current_menu.decrease_amplitude5_button.clicked.connect(lambda : self.event_function("decrease_amplitude5"))
self.current_menu.decrease_amplitude6_button.clicked.connect(lambda : self.event_function("decrease_amplitude6"))
self.current_menu.decrease_amplitude7_button.clicked.connect(lambda : self.event_function("decrease_amplitude7"))
self.current_menu.decrease_amplitude8_button.clicked.connect(lambda : self.event_function("decrease_amplitude8"))
self.current_menu.increase_frequency1_button.clicked.connect(lambda : self.event_function("increase_frequency1"))
self.current_menu.increase_frequency2_button.clicked.connect(lambda : self.event_function("increase_frequency2"))
self.current_menu.increase_frequency3_button.clicked.connect(lambda : self.event_function("increase_frequency3"))
self.current_menu.increase_frequency4_button.clicked.connect(lambda : self.event_function("increase_frequency4"))
self.current_menu.increase_frequency5_button.clicked.connect(lambda : self.event_function("increase_frequency5"))
self.current_menu.increase_frequency6_button.clicked.connect(lambda : self.event_function("increase_frequency6"))
self.current_menu.increase_frequency7_button.clicked.connect(lambda : self.event_function("increase_frequency7"))
self.current_menu.increase_frequency8_button.clicked.connect(lambda : self.event_function("increase_frequency8"))
self.current_menu.decrease_frequency1_button.clicked.connect(lambda : self.event_function("decrease_frequency1"))
self.current_menu.decrease_frequency2_button.clicked.connect(lambda : self.event_function("decrease_frequency2"))
self.current_menu.decrease_frequency3_button.clicked.connect(lambda : self.event_function("decrease_frequency3"))
self.current_menu.decrease_frequency4_button.clicked.connect(lambda : self.event_function("decrease_frequency4"))
self.current_menu.decrease_frequency5_button.clicked.connect(lambda : self.event_function("decrease_frequency5"))
self.current_menu.decrease_frequency6_button.clicked.connect(lambda : self.event_function("decrease_frequency6"))
self.current_menu.decrease_frequency7_button.clicked.connect(lambda : self.event_function("decrease_frequency7"))
self.current_menu.decrease_frequency8_button.clicked.connect(lambda : self.event_function("decrease_frequency8"))
self.current_menu.increase_imp1_button.clicked.connect(lambda : self.event_function("increase_imp1"))
self.current_menu.increase_imp2_button.clicked.connect(lambda : self.event_function("increase_imp2"))
self.current_menu.increase_imp3_button.clicked.connect(lambda : self.event_function("increase_imp3"))
self.current_menu.increase_imp4_button.clicked.connect(lambda : self.event_function("increase_imp4"))
self.current_menu.increase_imp5_button.clicked.connect(lambda : self.event_function("increase_imp5"))
self.current_menu.increase_imp6_button.clicked.connect(lambda : self.event_function("increase_imp6"))
self.current_menu.increase_imp7_button.clicked.connect(lambda : self.event_function("increase_imp7"))
self.current_menu.increase_imp8_button.clicked.connect(lambda : self.event_function("increase_imp8"))
self.current_menu.decrease_imp1_button.clicked.connect(lambda : self.event_function("decrease_imp1"))
self.current_menu.decrease_imp2_button.clicked.connect(lambda : self.event_function("decrease_imp2"))
self.current_menu.decrease_imp3_button.clicked.connect(lambda : self.event_function("decrease_imp3"))
self.current_menu.decrease_imp4_button.clicked.connect(lambda : self.event_function("decrease_imp4"))
self.current_menu.decrease_imp5_button.clicked.connect(lambda : self.event_function("decrease_imp5"))
self.current_menu.decrease_imp6_button.clicked.connect(lambda : self.event_function("decrease_imp6"))
self.current_menu.decrease_imp7_button.clicked.connect(lambda : self.event_function("decrease_imp7"))
self.current_menu.decrease_imp8_button.clicked.connect(lambda : self.event_function("decrease_imp8"))
self.current_menu.pauseWatch.pressed.connect(lambda : self.event_function("pause_stimulation"))
self.current_menu.stop_button.clicked.connect(lambda : self.event_function("stop_stimulation"))
self.current_menu.show()
else:
self.current_menu.close()
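# Summary of the window_counter state machine handled above:
#   0 -> StartWindow, -1 -> TestingWindow, 1 -> MainWindowStim,
#   -2 -> DangerPopUp, 2 -> InstructionWindow, 3 -> StimulationWindow;
#   any other value simply closes the current window.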
def back_to_menu_button_clicked(self):
self.danger_menu.close()
# self.manage_active_window(stim_parameters)
self.event_function("back_to_menu")
def continue_button_clicked(self, stim_parameters):
self.window_counter = 2
self.manage_active_window(stim_parameters)
self.danger_menu.close()
self.event_function("continue_to_instructions")
### 1.2. Creates a CSV file during training in order to record the stimulation data ###
def create_csv_file(self, matrice):
self.now = datetime.datetime.now()
file = (self.now.strftime("%m-%d-%Y, %H;%M;%S"))
path = "\\home\\pi\\Downloads\\stimulation_data_" # comment this line out if you are working on your own computer
name_of_file = path+file+".csv" # comment this line out if you are working on your own computer
#name_of_file = (self.now.strftime("%m-%d-%Y, %H;%M;%S"))+" stimulations_data.csv" # uncomment if you are working on your own computer
with open(name_of_file, 'w',newline='') as f:
fieldnames = ['Date and time', 'Electrode', 'Amplitude(mA)','Frequence(Hz)', 'Durée dimpulsion(us)', 'muscle']
thewriter = csv.DictWriter(f,fieldnames)
thewriter.writeheader()
now = datetime.datetime.now()
date_time = now.strftime("%m-%d-%Y,%H:%M:%S")
for i in range(8):
muscle_name = self.get_muscle_traduction(matrice[3,i])
thewriter.writerow({'Date and time' : date_time, 'Electrode': str(i+1), 'Amplitude(mA)': str(matrice[0,i]) ,'Frequence(Hz)': str(matrice[1,i]), 'Durée dimpulsion(us)': str(matrice[2,i]), 'muscle': muscle_name})
### 1.3. Appends the modified training parameters to the same CSV file ###
def save_data_in_csv_file(self, matrice):
file = self.now.strftime("%m-%d-%Y, %H;%M;%S")
path = "\\home\\pi\\Downloads\\stimulation_data_" # comment this line out if you are working on your own computer
name_of_file = path+file+".csv" # comment this line out if you are working on your own computer
#name_of_file = (self.now.strftime("%m-%d-%Y, %H;%M;%S"))+" stimulations_data.csv" # uncomment if you are working on your own computer
with open(name_of_file, 'a+',newline='') as f:
fieldnames = ['Date and time', 'Electrode', 'Amplitude(mA)','Frequence(Hz)', 'Durée dimpulsion(us)', 'muscle']
thewriter = csv.DictWriter(f,fieldnames)
new_now = datetime.datetime.now()
date_time = new_now.strftime("%m-%d-%Y,%H:%M:%S")
if len(matrice) == 0:
for i in range(8):
thewriter.writerow({'Date and time' : date_time, 'Electrode': str(i+1), 'Amplitude(mA)': str(0) ,'Frequence(Hz)': str(0), 'Durée dimpulsion(us)': str(0), 'muscle': str(0)})
else:
for i in range(8):
muscle_name = self.get_muscle_traduction(matrice[3,i])
thewriter.writerow({'Date and time' : date_time, 'Electrode': str(i+1), 'Amplitude(mA)': str(matrice[0,i]) ,'Frequence(Hz)': str(matrice[1,i]), 'Durée dimpulsion(us)': str(matrice[2,i]), 'muscle': muscle_name})
### 1.4. Translates a muscle number into a muscle name for data recording ###
def get_muscle_traduction(self, muscle_number):
if muscle_number == 1:
muscle = "Biceps Brachii"
elif muscle_number == 2:
muscle = "Triceps Brachii"
elif muscle_number == 3:
muscle = "Deltoide Postérieur"
elif muscle_number == 4:
muscle = "Deltoide Antérieur"
else:
# 0 or any unexpected value: no muscle assigned
muscle = "Aucun"
return muscle
7f667e0c792310a6ca90389cd7acf95cb20a0aed | 3,631 | py | Python | matrix_procedures/load_dataset_class.py | jamiecook/AequilibraE | b1013d59cbeaf6fc4e1a944cf31f20460a2a4156 | ["MIT"] | null | null | null

"""
-----------------------------------------------------------------------------------------------------------
Package: AequilibraE
Name: Loads datasets into AequilibraE binary files
Purpose: Implements dataset creation
Original Author: Pedro Camargo (c@margo.co)
Contributors:
Last edited by: Pedro Camargo
Website: www.AequilibraE.com
Repository: https://github.com/AequilibraE/AequilibraE
Created: 2016-08-15 (Initially as vector loading)
Updated: 2017-10-02
Copyright: (c) AequilibraE authors
Licence: See LICENSE.TXT
-----------------------------------------------------------------------------------------------------------
"""
from PyQt4.QtCore import *
import numpy as np
from ..common_tools.worker_thread import WorkerThread
import struct
from ..aequilibrae.matrix import AequilibraEData
from ..common_tools.global_parameters import *
class LoadDataset(WorkerThread):
def __init__(self, parent_thread, layer, index_field, fields, file_name):
WorkerThread.__init__(self, parent_thread)
self.layer = layer
self.index_field = index_field
self.fields = fields
self.error = None
self.python_version = 8 * struct.calcsize("P")  # interpreter bitness (32 or 64), not the Python version
self.output = AequilibraEData()
self.output_name = file_name
def doWork(self):
feat_count = self.layer.featureCount()
self.emit(SIGNAL("ProgressMaxValue( PyQt_PyObject )"), feat_count)
# Create specification for the output file
datafile_spec = {'entries': feat_count}
if self.output_name is None:
datafile_spec['memory_mode'] = True
else:
datafile_spec['memory_mode'] = False
fields = []
types = []
idxs = []
empties = []
for field in self.layer.dataProvider().fields().toList():
if field.name() in self.fields:
if field.type() in integer_types:
types.append('<i8')
empties.append(np.iinfo(np.int64).min)
elif field.type() in float_types:
types.append('<f8')
empties.append(np.nan)
elif field.type() in string_types:
types.append('S' + str(field.length()))
empties.append('')
else:
print(field.type())
self.error = 'Field {} has a type that is not supported.'.format(str(field.name()))
break
fields.append(str(field.name()))
idxs.append(self.layer.fieldNameIndex(field.name()))
index_idx = self.layer.fieldNameIndex(self.index_field)
datafile_spec['field_names'] = fields
datafile_spec['data_types'] = types
datafile_spec['file_path'] = self.output_name
if self.error is None:
self.output.create_empty(**datafile_spec)
# Get all the data
for p, feat in enumerate(self.layer.getFeatures()):
for idx, field, empty in zip(idxs, fields, empties):
if isinstance(feat.attributes()[idx], QPyNullVariant):
self.output.data[field][p] = empty
else:
self.output.data[field][p] = feat.attributes()[idx]
self.output.index[p] = feat.attributes()[index_idx]
self.emit(SIGNAL("ProgressValue( PyQt_PyObject )"), p)
self.emit(SIGNAL("ProgressValue( PyQt_PyObject )"), int(feat_count))
self.emit(SIGNAL("finished_threaded_procedure( PyQt_PyObject )"), 'Done')
| 39.043011 | 108 | 0.56238 | 378 | 3,631 | 5.26455 | 0.407407 | 0.040201 | 0.028141 | 0.020101 | 0.059296 | 0.039196 | 0 | 0 | 0 | 0 | 0 | 0.008397 | 0.278436 | 3,631 | 92 | 109 | 39.467391 | 0.751145 | 0.015698 | 0 | 0.04918 | 0 | 0 | 0.085497 | 0.009692 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.098361 | null | null | 0.016393 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f696039cf0e21135e93815d60394b72a34a443f | 1,040 | py | Python | app/api/routes/server/plot_queue.py | cPoolChia/ChiaAutoplotter-Backend | 8d875c1f846df395ddc76e3d84b36da45ad4d557 | [
"MIT"
] | 7 | 2021-06-01T09:20:34.000Z | 2021-10-12T07:24:04.000Z | app/api/routes/server/plot_queue.py | cPoolChia/ChiaFarmerManager-Backend | 8d875c1f846df395ddc76e3d84b36da45ad4d557 | [
"MIT"
] | null | null | null | app/api/routes/server/plot_queue.py | cPoolChia/ChiaFarmerManager-Backend | 8d875c1f846df395ddc76e3d84b36da45ad4d557 | [
"MIT"
] | 1 | 2021-05-31T13:08:14.000Z | 2021-05-31T13:08:14.000Z | from typing import Any
from uuid import UUID, uuid4
import celery
from app import crud, models, schemas
from app.api import deps
from app.core.config import settings
from app.utils import auth
from app.core import tasks
from fastapi import APIRouter, Depends, HTTPException, Body, Query
from sqlalchemy.orm import Session
from fastapi_utils.cbv import cbv
from fastapi_utils.inferring_router import InferringRouter
from app.api.routes.base import BaseAuthCBV
router = InferringRouter()
@cbv(router)
class QueueCBV(BaseAuthCBV):
@router.get("/")
def get_queues_table(
self,
server: models.Server = Depends(deps.get_server_by_id),
filtration: schemas.FilterData[models.Plot] = Depends(
deps.get_filtration_data(models.Plot)
),
) -> schemas.Table[schemas.PlotQueueReturn]:
amount, items = crud.plot_queue.get_multi_by_server(
self.db, server=server, filtration=filtration
)
return schemas.Table[schemas.PlotQueueReturn](amount=amount, items=items) | 32.5 | 81 | 0.740385 | 134 | 1,040 | 5.641791 | 0.41791 | 0.055556 | 0.026455 | 0.089947 | 0.10582 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001171 | 0.178846 | 1,040 | 32 | 81 | 32.5 | 0.884075 | 0 | 0 | 0 | 0 | 0 | 0.000961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.464286 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
7f6d004764ff3c98efc96ebb1cf1f394bc32a7c1 | 440 | py | Python | example_pkg/Parrots.py | TillWeinhold/EQUSWORKSHOPTill | 95099cc8d8ebc93b53db6d9e9b4c67051fcb7763 | [
"MIT"
] | null | null | null | example_pkg/Parrots.py | TillWeinhold/EQUSWORKSHOPTill | 95099cc8d8ebc93b53db6d9e9b4c67051fcb7763 | [
"MIT"
] | null | null | null | example_pkg/Parrots.py | TillWeinhold/EQUSWORKSHOPTill | 95099cc8d8ebc93b53db6d9e9b4c67051fcb7763 | [
"MIT"
] | null | null | null | class Parrot():
def __init__(self, squawk, is_alive=True):
self._squawk = squawk  # stored under a private name so the squawk() method is not shadowed
self.is_alive = is_alive
african_grey = Parrot('Polly want a Cracker?')
norwegian_blue = Parrot('', is_alive=False)
def squawk(self):
if self.is_alive:
return self._squawk
else:
return None
african_grey.squawk()
norwegian_blue.squawk()
def set_alive(self, is_alive):
self.is_alive = is_alive | 22 | 46 | 0.652273 | 60 | 440 | 4.5 | 0.383333 | 0.207407 | 0.162963 | 0.096296 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245455 | 440 | 20 | 47 | 22 | 0.813253 | 0 | 0 | 0.133333 | 0 | 0 | 0.047619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f6f7feeb11eb05bb310cec5c813b37346f053c2 | 226 | py | Python | Codility/python/problem2.py | rayguang/ratesbuddy | ec97f85201812967bb3380bba6de41bdb223eab6 | [
"MIT"
] | null | null | null | Codility/python/problem2.py | rayguang/ratesbuddy | ec97f85201812967bb3380bba6de41bdb223eab6 | [
"MIT"
] | null | null | null | Codility/python/problem2.py | rayguang/ratesbuddy | ec97f85201812967bb3380bba6de41bdb223eab6 | [
"MIT"
] | null | null | null | def solution(S, K):
# write your code in Python 3.6
day_dict = {0:'Mon', 1:'Tue', 2:'Wed', 3:'Thu', 4:'Fri', 5:'Sat', 6:'Sun'}
return day_dict[(list(day_dict.values()).index(S) + K) % 7]  # modulo keeps the result within the week
S='Wed'
K=2
print(solution(S,K))
| 22.6 | 78 | 0.584071 | 45 | 226 | 2.866667 | 0.666667 | 0.046512 | 0.155039 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053191 | 0.168142 | 226 | 9 | 79 | 25.111111 | 0.632979 | 0.128319 | 0 | 0 | 0 | 0 | 0.123077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f7574b64532b1f7794103ece680d1785b89f64b | 210 | py | Python | tests/Auth_Test.py | PyDO-Team/PyDO-Old | a89d9d30aee72a42ce3afdd90b5b297e7b908d73 | [
"BSD-3-Clause"
] | 1 | 2021-02-23T12:54:05.000Z | 2021-02-23T12:54:05.000Z | tests/Auth_Test.py | PyDO-Team/PyDO-Old | a89d9d30aee72a42ce3afdd90b5b297e7b908d73 | [
"BSD-3-Clause"
] | null | null | null | tests/Auth_Test.py | PyDO-Team/PyDO-Old | a89d9d30aee72a42ce3afdd90b5b297e7b908d73 | [
"BSD-3-Clause"
] | 1 | 2021-02-23T12:54:07.000Z | 2021-02-23T12:54:07.000Z | from TestOutput import LogManager
class Auth_Test:
def __init__(self):
self.logger = LogManager('AuthTest')
self.logger.writeTestEvent('AuthTest', 'Test Started')
if __name__ == "__main__":
Auth_Test() | 21 | 56 | 0.747619 | 25 | 210 | 5.72 | 0.68 | 0.111888 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128571 | 210 | 10 | 57 | 21 | 0.781421 | 0 | 0 | 0 | 0 | 0 | 0.170616 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f7cecbd7d8387712e80404ab9516b060333f0e5 | 444 | py | Python | timeblock/stopwatch.py | daustin391/timeblock | 1895f95205100f2f191feccbb1bcb90672aae043 | [
"MIT"
] | null | null | null | timeblock/stopwatch.py | daustin391/timeblock | 1895f95205100f2f191feccbb1bcb90672aae043 | [
"MIT"
] | null | null | null | timeblock/stopwatch.py | daustin391/timeblock | 1895f95205100f2f191feccbb1bcb90672aae043 | [
"MIT"
] | null | null | null | """ Stopwatch
A simple stopwatch for measuring time
"""
from datetime import datetime
class Stopwatch:
"""Stopwatch object for measuring time"""
def __init__(self):
self._start = None
def start(self):
"""Starts watch by setting start time to now"""
self._start = datetime.now()
def check(self):
"""Check the time passed since watch started"""
return datetime.now() - self._start
| 22.2 | 55 | 0.635135 | 54 | 444 | 5.092593 | 0.518519 | 0.098182 | 0.116364 | 0.145455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261261 | 444 | 19 | 56 | 23.368421 | 0.838415 | 0.376126 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.125 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f7d50f6a8fff49e5e4da85906cece438d04d0c3 | 623 | py | Python | src/data/mnist_converter.py | laurabondeholst/Mapping_high_dimensional_data | 208c1024d3984cf60fc2f54b7bfdffaa83c60157 | [
"MIT"
] | null | null | null | src/data/mnist_converter.py | laurabondeholst/Mapping_high_dimensional_data | 208c1024d3984cf60fc2f54b7bfdffaa83c60157 | [
"MIT"
] | null | null | null | src/data/mnist_converter.py | laurabondeholst/Mapping_high_dimensional_data | 208c1024d3984cf60fc2f54b7bfdffaa83c60157 | [
"MIT"
] | null | null | null | """
Takes the MNIST dataset as input (images and labels separated)
and creates a new dataset containing only the digits 0 through 4
"""
import numpy as np
DATA_PATH = "data/raw/"
OUTPUT_PATH = "data/processed/mnist/"
X = np.loadtxt(DATA_PATH + "mnist2500_X.txt")
labels = np.loadtxt(DATA_PATH + "mnist2500_labels.txt")
X_new = []
labels_new = []
for i,label in enumerate(labels):
if label < 5:
labels_new.append(label)
X_new.append(X[i])
if i%100 == 0:
print(f"{i} labels passed")
np.savetxt(OUTPUT_PATH + "mnist2500_X_01234.txt",X_new)
np.savetxt(OUTPUT_PATH +"mnist2500_labels_01234.txt",labels_new) | 25.958333 | 64 | 0.693419 | 102 | 623 | 4.058824 | 0.431373 | 0.125604 | 0.062802 | 0.082126 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064078 | 0.173355 | 623 | 24 | 64 | 25.958333 | 0.739806 | 0.176565 | 0 | 0 | 0 | 0 | 0.254941 | 0.134387 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.066667 | 0 | 0.066667 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7f7df73d539af4f42bcbde355e7bdef39b43dc8a | 7,364 | py | Python | models/pytorch_gcn.py | SamGalanakis/FlowCompare | ed26e48298fe42cf9ddcc252c19b502b4a71d54e | [
"MIT"
] | 5 | 2021-05-22T23:14:19.000Z | 2022-02-19T14:11:27.000Z | models/pytorch_gcn.py | SamGalanakis/PointFlowChange | ed26e48298fe42cf9ddcc252c19b502b4a71d54e | [
"MIT"
] | 1 | 2021-06-05T11:20:53.000Z | 2021-06-05T12:30:44.000Z | models/pytorch_gcn.py | SamGalanakis/PointFlowChange | ed26e48298fe42cf9ddcc252c19b502b4a71d54e | [
"MIT"
] | null | null | null | import os
import sys
import copy
import math
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from models.nets import MLP
# Original code from: https://github.com/WangYueFt/dgcnn/blob/master/pytorch/model.py
def knn(x, k):
inner = -2*torch.matmul(x.transpose(2, 1), x)
xx = torch.sum(x**2, dim=1, keepdim=True)
pairwise_distance = -xx - inner - xx.transpose(2, 1)
# (batch_size, num_points, k)
idx = pairwise_distance.topk(k=k, dim=-1)[1]
return idx
def get_graph_feature(x, k=20, idx=None):
batch_size = x.size(0)
num_points = x.size(2)
x = x.view(batch_size, -1, num_points)
if idx is None:
idx = knn(x, k=k) # (batch_size, num_points, k)
idx_base = torch.arange(
0, batch_size, device=x.device).view(-1, 1, 1)*num_points
idx = idx + idx_base
idx = idx.view(-1)
_, num_dims, _ = x.size()
# (batch_size, num_points, num_dims) -> (batch_size*num_points, num_dims) # batch_size * num_points * k + range(0, batch_size*num_points)
x = x.transpose(2, 1).contiguous()
feature = x.view(batch_size*num_points, -1)[idx, :]
feature = feature.view(batch_size, num_points, k, num_dims)
x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
feature = torch.cat((feature-x, x), dim=3).permute(0, 3, 1, 2).contiguous()
return feature
class DGCNNembedder(nn.Module):
"""DGCNN embedder with n_points x embedding dim as output"""
def __init__(self, out_mlp_dims, emb_dim=22, dropout=0, n_neighbors=20):
super().__init__()
self.n_neighbors = n_neighbors
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(64)
self.bn3 = nn.BatchNorm2d(128)
self.bn4 = nn.BatchNorm2d(256)
self.bn5 = nn.BatchNorm1d(512)
self.conv1 = nn.Sequential(nn.Conv2d(12, 64, kernel_size=1, bias=False),
self.bn1,
nn.LeakyReLU(negative_slope=0.2))
self.conv2 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
self.bn2,
nn.LeakyReLU(negative_slope=0.2))
self.conv3 = nn.Sequential(nn.Conv2d(64*2, 128, kernel_size=1, bias=False),
self.bn3,
nn.LeakyReLU(negative_slope=0.2))
self.conv4 = nn.Sequential(nn.Conv2d(128*2, 256, kernel_size=1, bias=False),
self.bn4,
nn.LeakyReLU(negative_slope=0.2))
self.conv5 = nn.Sequential(nn.Conv1d(512, 512, kernel_size=1, bias=False),
self.bn5,
nn.LeakyReLU(negative_slope=0.2))
self.out_mlp = MLP(512, out_mlp_dims, emb_dim, torch.nn.GELU())
def forward(self, x):
batch_size = x.size(0)
x = x.permute((0, 2, 1))
x = get_graph_feature(x, k=self.n_neighbors)
x = self.conv1(x)
x1 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x1, self.n_neighbors)
x = self.conv2(x)
x2 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x2, k=self.n_neighbors)
x = self.conv3(x)
x3 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x3, k=self.n_neighbors)
x = self.conv4(x)
x4 = x.max(dim=-1, keepdim=False)[0]
x = torch.cat((x1, x2, x3, x4),dim=1)
x = self.conv5(x).permute((0, 2, 1))
x = self.out_mlp(x)
return x
class DGCNNembedderGlobal(nn.Module):
"""DGCNN embedder with single embedding vector as output"""
def __init__(self, input_dim, out_mlp_dims, emb_dim=22, n_neighbors=20):
super().__init__()
self.n_neighbors = n_neighbors
self.input_dim = input_dim
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(64)
self.bn3 = nn.BatchNorm2d(128)
self.bn4 = nn.BatchNorm2d(256)
self.bn5 = nn.BatchNorm1d(512)
self.conv1 = nn.Sequential(nn.Conv2d(self.input_dim*2, 64, kernel_size=1, bias=False),
self.bn1,
nn.LeakyReLU(negative_slope=0.2))
self.conv2 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
self.bn2,
nn.LeakyReLU(negative_slope=0.2))
self.conv3 = nn.Sequential(nn.Conv2d(64*2, 128, kernel_size=1, bias=False),
self.bn3,
nn.LeakyReLU(negative_slope=0.2))
self.conv4 = nn.Sequential(nn.Conv2d(128*2, 256, kernel_size=1, bias=False),
self.bn4,
nn.LeakyReLU(negative_slope=0.2))
self.conv5 = nn.Sequential(nn.Conv1d(512, 512, kernel_size=1, bias=False),
self.bn5,
nn.LeakyReLU(negative_slope=0.2))
self.out_mlp = MLP(512*2, out_mlp_dims, emb_dim, torch.nn.GELU())
def forward(self, x):
batch_size = x.size(0)
x = x.permute((0, 2, 1))
# (batch_size, 3, num_points) -> (batch_size, 3*2, num_points, k)
x = get_graph_feature(x, k=self.n_neighbors)
# (batch_size, 3*2, num_points, k) -> (batch_size, 64, num_points, k)
x = self.conv1(x)
# (batch_size, 64, num_points, k) -> (batch_size, 64, num_points)
x1 = x.max(dim=-1, keepdim=False)[0]
# (batch_size, 64, num_points) -> (batch_size, 64*2, num_points, k)
x = get_graph_feature(x1, k=self.n_neighbors)
# (batch_size, 64*2, num_points, k) -> (batch_size, 64, num_points, k)
x = self.conv2(x)
# (batch_size, 64, num_points, k) -> (batch_size, 64, num_points)
x2 = x.max(dim=-1, keepdim=False)[0]
# (batch_size, 64, num_points) -> (batch_size, 64*2, num_points, k)
x = get_graph_feature(x2, k=self.n_neighbors)
# (batch_size, 64*2, num_points, k) -> (batch_size, 128, num_points, k)
x = self.conv3(x)
# (batch_size, 128, num_points, k) -> (batch_size, 128, num_points)
x3 = x.max(dim=-1, keepdim=False)[0]
# (batch_size, 128, num_points) -> (batch_size, 128*2, num_points, k)
x = get_graph_feature(x3, k=self.n_neighbors)
# (batch_size, 128*2, num_points, k) -> (batch_size, 256, num_points, k)
x = self.conv4(x)
# (batch_size, 256, num_points, k) -> (batch_size, 256, num_points)
x4 = x.max(dim=-1, keepdim=False)[0]
# (batch_size, 64+64+128+256, num_points)
x = torch.cat((x1, x2, x3, x4), dim=1)
# (batch_size, 64+64+128+256, num_points) -> (batch_size, emb_dims, num_points)
x = self.conv5(x)
# (batch_size, emb_dims, num_points) -> (batch_size, emb_dims)
x1 = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
# (batch_size, emb_dims, num_points) -> (batch_size, emb_dims)
x2 = F.adaptive_avg_pool1d(x, 1).view(batch_size, -1)
x = torch.cat((x1, x2), 1) # (batch_size, emb_dims*2)
# (batch_size, 256) -> (batch_size, output_channels)
x = self.out_mlp(x)
return x
| 38.354167 | 144 | 0.562873 | 1,073 | 7,364 | 3.674744 | 0.11836 | 0.114126 | 0.050723 | 0.038042 | 0.767182 | 0.707583 | 0.667766 | 0.623383 | 0.570124 | 0.518133 | 0 | 0.070986 | 0.297936 | 7,364 | 191 | 145 | 38.554974 | 0.691683 | 0.204508 | 0 | 0.620968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048387 | false | 0 | 0.072581 | 0 | 0.169355 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f8384a311353de10a642c69e9df0c76f447b11c | 1,565 | py | Python | trisicell/ul/__init__.py | faridrashidi/trisicell | 4db89edd44c03ccb6c7d3477beff0079c3ff8035 | [
"BSD-3-Clause"
] | 2 | 2021-07-02T13:53:15.000Z | 2021-11-16T03:14:36.000Z | trisicell/ul/__init__.py | faridrashidi/trisicell | 4db89edd44c03ccb6c7d3477beff0079c3ff8035 | [
"BSD-3-Clause"
] | 58 | 2021-06-14T17:14:39.000Z | 2022-03-11T19:32:54.000Z | trisicell/ul/__init__.py | faridrashidi/trisicell | 4db89edd44c03ccb6c7d3477beff0079c3ff8035 | [
"BSD-3-Clause"
] | null | null | null | """Utils Module."""
from trisicell.ul._hclustering import (
dist_cosine_ignore_na,
dist_dendro,
dist_l1_ignore_na,
hclustering,
)
from trisicell.ul._packages import (
import_graph_tool,
import_graphviz,
import_gurobi,
import_mpi4py,
import_rpy2,
)
from trisicell.ul._trees import (
cells_rooted_at,
is_leaf,
muts_rooted_at,
partition_cells,
root_id,
to_cfmatrix,
to_mtree,
to_tree,
)
from trisicell.ul._utils import (
calc_score_tree,
cleanup,
count_flips,
dir_base,
dirbase,
executable,
get_file,
get_param,
infer_rates,
is_conflict_free,
is_conflict_free_gusfield,
log_flip,
log_input,
log_output,
mkdir,
remove,
split_mut,
stat,
timeit,
tmpdir,
tmpdirsys,
tmpfile,
tqdm_joblib,
with_timeout,
)
# __all__ must contain names as strings, otherwise `from trisicell.ul import *` raises TypeError
__all__ = (
"dist_cosine_ignore_na",
"dist_dendro",
"dist_l1_ignore_na",
"hclustering",
"import_graph_tool",
"import_graphviz",
"import_gurobi",
"import_mpi4py",
"import_rpy2",
"cells_rooted_at",
"muts_rooted_at",
"partition_cells",
"root_id",
"to_cfmatrix",
"to_mtree",
"to_tree",
"calc_score_tree",
"cleanup",
"dir_base",
"dirbase",
"get_file",
"get_param",
"infer_rates",
"is_conflict_free",
"is_conflict_free_gusfield",
"log_flip",
"log_input",
"log_output",
"mkdir",
"remove",
"stat",
"timeit",
"tmpdir",
"tmpdirsys",
"tmpfile",
"tqdm_joblib",
"with_timeout",
"is_leaf",
"executable",
"split_mut",
"count_flips",
)
| 16.302083 | 39 | 0.651118 | 187 | 1,565 | 4.967914 | 0.368984 | 0.055974 | 0.064586 | 0.038751 | 0.688913 | 0.688913 | 0.688913 | 0.688913 | 0.688913 | 0.574812 | 0 | 0.005324 | 0.279872 | 1,565 | 95 | 40 | 16.473684 | 0.818988 | 0.008307 | 0 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.152174 | 0 | 0.152174 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f83ab03f9f78c70886157722d2f935b1bea1759 | 2,137 | py | Python | utils/pktgen/sender.py | lilyhuegerich/No-hop | 2e82af6c8036312c35b3f2338c776863416ea043 | [
"MIT"
] | null | null | null | utils/pktgen/sender.py | lilyhuegerich/No-hop | 2e82af6c8036312c35b3f2338c776863416ea043 | [
"MIT"
] | 1 | 2021-02-15T17:29:03.000Z | 2021-02-15T17:29:03.000Z | utils/utils-backup/pktgen/sender.py | guiladkatz/NetAction | 436c5ac9dcf0e2064bcc9a2facca556828de6703 | [
"Apache-2.0"
] | 1 | 2021-04-18T18:24:59.000Z | 2021-04-18T18:24:59.000Z | #!/usr/bin/env python
import argparse, sys, socket, random, struct, time
from scapy.all import sendp, send, get_if_list, get_if_list, get_if_hwaddr, hexdump
from scapy.all import Packet
from scapy.all import Ether, IP, IPv6, UDP, TCP
sip_port=5060
def get_if():
iface=None
for i in get_if_list():
# find hx-eth0
if "eth0" in i:
iface = i
break
if not iface:
print("Cannot find eth0 interface")
exit(1)
return iface
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-v', type=str, help="Specify using (4)IPv4/(6)IPv6.")
parser.add_argument('--ip', type=str, help="The destination IP address.")
parser.add_argument('--loop', type=int, help="Number of loop.", default=0)
parser.add_argument('--msg', type=str, help="The message which will send to dst.",default="Hello World")
parser.add_argument('--dport', type=int, help="TCP/UDP source port.", default=1234)
parser.add_argument('--sport', type=int, help="TCP/UDP destination port.", default=random.randint(49152,65535))
args = parser.parse_args()
addr = socket.gethostbyname(args.ip)
iface = get_if()
# start to pack
if args.v == "4":
print("sending on interface {} to IP addr {}".format(iface, str(addr)))
for x in range(0, args.loop):
pkt = Ether(src=get_if_hwaddr(iface), dst='ff:ff:ff:ff:ff:ff')
pkt = pkt / IP(dst=addr) / TCP(dport=args.dport, sport=args.sport) / args.msg
# show
pkt.show2()
# send
sendp(pkt, iface=iface, verbose=False)
# sleep
time.sleep(1)
elif args.v == "6":
print("sending on interface {} to IPv6 addr {}".format(iface, str(addr)))
for x in range(0, args.loop):
pkt = Ether(src=get_if_hwaddr(iface), dst='ff:ff:ff:ff:ff:ff')
pkt = pkt / IPv6(dst=addr) / TCP(dport=args.dport, sport=args.sport) / args.msg
# show
pkt.show2()
# send
sendp(pkt, iface=iface, verbose=False)
if __name__ == '__main__':
main()
| 35.032787 | 115 | 0.602714 | 309 | 2,137 | 4.071197 | 0.346278 | 0.031797 | 0.038156 | 0.038156 | 0.372814 | 0.287758 | 0.287758 | 0.287758 | 0.287758 | 0.287758 | 0 | 0.02327 | 0.255966 | 2,137 | 60 | 116 | 35.616667 | 0.767925 | 0.035096 | 0 | 0.186047 | 0 | 0 | 0.16756 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.093023 | null | null | 0.069767 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7f83fb54d3d89551adb5762f761b61b33bff3c08 | 950 | py | Python | maze/models.py | TerryHowe/pymaze | 139f15fb8e932c1cf1e63f5c5aee7895691e993f | [
"MIT"
] | 1 | 2020-12-14T04:01:08.000Z | 2020-12-14T04:01:08.000Z | maze/models.py | TerryHowe/pymaze | 139f15fb8e932c1cf1e63f5c5aee7895691e993f | [
"MIT"
] | null | null | null | maze/models.py | TerryHowe/pymaze | 139f15fb8e932c1cf1e63f5c5aee7895691e993f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models
# Create your models here.
class Passage(models.Model):
DIRECTIONS = (
('N', 'North'),
('E', 'East'),
('S', 'South'),
('W', 'West'),
('U', 'Up'),
('D', 'Down'),
)
room_x = models.IntegerField()
room_y = models.IntegerField()
direction = models.CharField(max_length=1, choices=DIRECTIONS)
destination = models.OneToOneField('self',
null=True, blank=True,
on_delete=models.CASCADE)
@classmethod
def get_direction(cls, direction):
return dict(cls.DIRECTIONS).get(direction, 'Nowhere')
def __str__(self):
return self.__repr__()
def __repr__(self):
dest = str(self.destination.id) if self.destination else 'None'
return 'Passage(id=%d, room_x=%d, room_y=%d, direction=%s, destination=%s)' % (self.id, self.room_x, self.room_y, self.direction, dest)
class Meta:
unique_together = (('room_x', 'room_y', 'direction'),)
| 25.675676 | 138 | 0.676842 | 128 | 950 | 4.796875 | 0.515625 | 0.032573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002448 | 0.14 | 950 | 36 | 139 | 26.388889 | 0.749082 | 0.048421 | 0 | 0 | 0 | 0.037037 | 0.146504 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.074074 | 0.074074 | 0.074074 | 0.481481 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |