hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
60e7fd2a428e03eaeee1467baac8a69ce2121456 | 9,545 | py | Python | ambari-server/src/test/python/stacks/2.6/SPARK2/test_spark_livy2.py | Syndra/Ambari-source | 717526b2bf3636622212b14de0d3d298a20c7370 | [
"Apache-2.0"
] | 1 | 2021-06-24T07:59:25.000Z | 2021-06-24T07:59:25.000Z | ambari-server/src/test/python/stacks/2.6/SPARK2/test_spark_livy2.py | Syndra/Ambari-source | 717526b2bf3636622212b14de0d3d298a20c7370 | [
"Apache-2.0"
] | null | null | null | ambari-server/src/test/python/stacks/2.6/SPARK2/test_spark_livy2.py | Syndra/Ambari-source | 717526b2bf3636622212b14de0d3d298a20c7370 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
'''
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
import json
from mock.mock import MagicMock, patch
from stacks.utils.RMFTestCase import *
from only_for_platform import not_for_platform, PLATFORM_WINDOWS
@not_for_platform(PLATFORM_WINDOWS)
@patch("resource_management.libraries.functions.get_stack_version", new=MagicMock(return_value="2.5.0.0-1597"))
class TestSparkClient(RMFTestCase):
COMMON_SERVICES_PACKAGE_DIR = "SPARK2/2.0.0/package"
STACK_VERSION = "2.6"
DEFAULT_IMMUTABLE_PATHS = ['/apps/hive/warehouse', '/apps/falcon', '/mr-history/done', '/app-logs', '/tmp']
def test_configure_default(self):
self.executeScript(self.COMMON_SERVICES_PACKAGE_DIR + "/scripts/livy2_server.py",
classname = "LivyServer",
command = "start",
config_file="default.json",
stack_version = self.STACK_VERSION,
target = RMFTestCase.TARGET_COMMON_SERVICES
)
self.assert_start_default()
self.assertNoMoreResources()
def assert_start_default(self):
self.assertResourceCalled('Directory', '/var/run/livy2',
owner = 'livy',
group = 'hadoop',
create_parents = True,
mode = 0775
)
self.assertResourceCalled('Directory', '/var/log/livy2',
owner = 'livy',
group = 'hadoop',
create_parents = True,
mode = 0775
)
self.assertResourceCalled('HdfsResource', '/user/livy',
immutable_paths = self.DEFAULT_IMMUTABLE_PATHS,
security_enabled = False,
hadoop_bin_dir = '/usr/hdp/2.5.0.0-1235/hadoop/bin',
keytab = UnknownConfigurationMock(),
default_fs = 'hdfs://c6401.ambari.apache.org:8020',
hdfs_site = {u'a': u'b'},
kinit_path_local = '/usr/bin/kinit',
principal_name = UnknownConfigurationMock(),
user = 'hdfs',
owner = 'livy',
hadoop_conf_dir = '/usr/hdp/2.5.0.0-1235/hadoop/conf',
type = 'directory',
action = ['create_on_execute'], hdfs_resource_ignore_file='/var/lib/ambari-agent/data/.hdfs_resource_ignore',
dfs_type = '',
mode = 0775,
)
self.assertResourceCalled('HdfsResource', None,
immutable_paths = self.DEFAULT_IMMUTABLE_PATHS,
security_enabled = False,
hadoop_bin_dir = '/usr/hdp/2.5.0.0-1235/hadoop/bin',
keytab = UnknownConfigurationMock(),
default_fs = 'hdfs://c6401.ambari.apache.org:8020',
hdfs_site = {u'a': u'b'},
kinit_path_local = '/usr/bin/kinit',
principal_name = UnknownConfigurationMock(),
user = 'hdfs',
action = ['execute'], hdfs_resource_ignore_file='/var/lib/ambari-agent/data/.hdfs_resource_ignore',
dfs_type = '',
hadoop_conf_dir = '/usr/hdp/2.5.0.0-1235/hadoop/conf',
)
self.assertResourceCalled('HdfsResource', '/livy2-recovery',
immutable_paths = self.DEFAULT_IMMUTABLE_PATHS,
security_enabled = False,
hadoop_bin_dir = '/usr/hdp/2.5.0.0-1235/hadoop/bin',
keytab = UnknownConfigurationMock(),
default_fs = 'hdfs://c6401.ambari.apache.org:8020',
hdfs_site = {u'a': u'b'},
kinit_path_local = '/usr/bin/kinit',
principal_name = UnknownConfigurationMock(),
user = 'hdfs',
owner = 'livy',
hadoop_conf_dir = '/usr/hdp/2.5.0.0-1235/hadoop/conf',
type = 'directory',
action = ['create_on_execute'], hdfs_resource_ignore_file='/var/lib/ambari-agent/data/.hdfs_resource_ignore',
dfs_type = '',
mode = 0700,
)
self.assertResourceCalled('HdfsResource', None,
immutable_paths = self.DEFAULT_IMMUTABLE_PATHS,
security_enabled = False,
hadoop_bin_dir = '/usr/hdp/2.5.0.0-1235/hadoop/bin',
keytab = UnknownConfigurationMock(),
default_fs = 'hdfs://c6401.ambari.apache.org:8020',
hdfs_site = {u'a': u'b'},
kinit_path_local = '/usr/bin/kinit',
principal_name = UnknownConfigurationMock(),
user = 'hdfs',
action = ['execute'], hdfs_resource_ignore_file='/var/lib/ambari-agent/data/.hdfs_resource_ignore',
dfs_type = '',
hadoop_conf_dir = '/usr/hdp/2.5.0.0-1235/hadoop/conf',
)
self.assertResourceCalled('File', '/usr/hdp/current/livy2-server/conf/livy-env.sh',
content = InlineTemplate(self.getConfig()['configurations']['livy2-env']['content']),
owner = 'livy',
group = 'livy',
mode = 0644,
)
self.assertResourceCalled('PropertiesFile', '/usr/hdp/current/livy2-server/conf/livy.conf',
owner = 'livy',
key_value_delimiter = ' ',
group = 'livy',
properties = self.getConfig()['configurations']['livy2-conf'],
)
self.assertResourceCalled('File', '/usr/hdp/current/livy2-server/conf/log4j.properties',
content = '\n # Set everything to be logged to the console\n log4j.rootCategory=INFO, console\n log4j.appender.console=org.apache.log4j.ConsoleAppender\n log4j.appender.console.target=System.err\n log4j.appender.console.layout=org.apache.log4j.PatternLayout\n log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n\n\n log4j.logger.org.eclipse.jetty=WARN',
owner = 'livy',
group = 'livy',
mode = 0644,
)
self.assertResourceCalled('File', '/usr/hdp/current/livy2-server/conf/spark-blacklist.conf',
content = self.getConfig()['configurations']['livy2-spark-blacklist']['content'],
owner = 'livy',
group = 'livy',
mode = 0644,
)
self.assertResourceCalled('Directory', '/usr/hdp/current/livy2-server/logs',
owner = 'livy',
group = 'livy',
mode = 0755,
)
self.assertResourceCalled('Execute', '/usr/hdp/current/livy2-server/bin/livy-server start',
environment = {'JAVA_HOME': '/usr/jdk64/jdk1.7.0_45'},
not_if = 'ls /var/run/livy2/livy-livy-server.pid >/dev/null 2>&1 && ps -p `cat /var/run/livy2/livy-livy-server.pid` >/dev/null 2>&1',
user = 'livy'
)
| 62.796053 | 497 | 0.466841 | 836 | 9,545 | 5.186603 | 0.29067 | 0.019373 | 0.006227 | 0.008303 | 0.549816 | 0.504151 | 0.504151 | 0.495849 | 0.489852 | 0.450646 | 0 | 0.033785 | 0.435621 | 9,545 | 151 | 498 | 63.211921 | 0.771116 | 0.002095 | 0 | 0.581395 | 0 | 0.015504 | 0.245976 | 0.153065 | 0 | 0 | 0 | 0 | 0.116279 | 0 | null | null | 0 | 0.031008 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
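The row above is an Ambari RMF test: it runs the Livy start script, then replays each declared resource with `assertResourceCalled`. Outside Ambari's `RMFTestCase`, the same record-then-assert pattern can be sketched with the standard library's `unittest.mock` (the `start_livy` helper and `resources` object below are illustrative stand-ins, not Ambari APIs):

```python
from unittest import mock

# Illustrative stand-in for the resource layer an Ambari script would call.
resources = mock.Mock()

def start_livy(res):
    # "Script under test": declare the run directory, then launch the server,
    # mirroring the Directory/Execute resources asserted in the row above.
    res.Directory("/var/run/livy2", owner="livy", group="hadoop", mode=0o775)
    res.Execute("/usr/hdp/current/livy2-server/bin/livy-server start", user="livy")

start_livy(resources)

# Replay the recorded calls, analogous to assertResourceCalled(...).
resources.Directory.assert_called_once_with(
    "/var/run/livy2", owner="livy", group="hadoop", mode=0o775
)
resources.Execute.assert_called_once_with(
    "/usr/hdp/current/livy2-server/bin/livy-server start", user="livy"
)
```

Because `Mock` records every call, a forgotten or duplicated resource declaration fails the `assert_called_once_with` check, which is the same guarantee the RMF assertions provide.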
60ec05a1e04f7befa5818096872bdb308d2b1dde | 4,552 | py | Python | wgskex/worker/netlink.py | moepman/wgskex | 7a931088b5910f8034ad5a1362777e08c47c42fe | [
"0BSD"
] | 2 | 2021-01-05T23:42:35.000Z | 2021-10-03T14:12:30.000Z | wgskex/worker/netlink.py | moepman/wgskex | 7a931088b5910f8034ad5a1362777e08c47c42fe | [
"0BSD"
] | null | null | null | wgskex/worker/netlink.py | moepman/wgskex | 7a931088b5910f8034ad5a1362777e08c47c42fe | [
"0BSD"
] | null | null | null | import hashlib
import logging
import re
from dataclasses import dataclass
from datetime import datetime, timedelta
from textwrap import wrap
from typing import Dict, List
from pyroute2 import IPRoute, NDB, WireGuard
from wgskex.common.utils import mac2eui64
logger = logging.getLogger(__name__)
# TODO make loglevel configurable
logger.setLevel("DEBUG")
@dataclass
class WireGuardClient:
public_key: str
domain: str
remove: bool
@property
def lladdr(self) -> str:
m = hashlib.md5()
m.update(self.public_key.encode("ascii") + b"\n")
hashed_key = m.hexdigest()
hash_as_list = wrap(hashed_key, 2)
temp_mac = ":".join(["02"] + hash_as_list[:5])
lladdr = re.sub(r"/\d+$", "/128", mac2eui64(mac=temp_mac, prefix="fe80::/10"))
return lladdr
@property
def vx_interface(self) -> str:
return f"vx-{self.domain}"
@property
def wg_interface(self) -> str:
return f"wg-{self.domain}"
"""WireGuardClient describes complete configuration for a specific WireGuard client
Attributes:
public_key: WireGuard Public key
domain: Domain Name of the WireGuard peer
lladdr: IPv6 lladdr of the WireGuard peer
wg_interface: Name of the WireGuard interface this peer will use
vx_interface: Name of the VXLAN interface we set a route for the lladdr to
remove: Are we removing this peer or not?
"""
def wg_flush_stale_peers(domain: str) -> List[Dict]:
stale_clients = find_stale_wireguard_clients("wg-" + domain)
result = []
for stale_client in stale_clients:
stale_wireguard_client = WireGuardClient(
public_key=stale_client,
domain=domain,
remove=True,
)
result.append(link_handler(stale_wireguard_client))
return result
# pyroute2 stuff
def link_handler(client: WireGuardClient) -> Dict[str, Dict]:
results = {}
results.update({"Wireguard": wireguard_handler(client)})
try:
results.update({"Route": route_handler(client)})
except Exception as e:
results.update({"Route": {"Exception": e}})
results.update({"Bridge FDB": bridge_fdb_handler(client)})
return results
def bridge_fdb_handler(client: WireGuardClient) -> Dict:
with IPRoute() as ip:
return ip.fdb(
"del" if client.remove else "append",
# FIXME this list may be empty if the interface is not existing
ifindex=ip.link_lookup(ifname=client.vx_interface)[0],
lladdr="00:00:00:00:00:00",
dst=re.sub(r"/\d+$", "", client.lladdr),
nda_ifindex=ip.link_lookup(ifname=client.wg_interface)[0],
)
def wireguard_handler(client: WireGuardClient) -> Dict:
with WireGuard() as wg:
wg_peer = {
"public_key": client.public_key,
"persistent_keepalive": 15,
"allowed_ips": [client.lladdr],
"remove": client.remove,
}
return wg.set(client.wg_interface, peer=wg_peer)
def route_handler(client: WireGuardClient) -> Dict:
with IPRoute() as ip:
return ip.route(
"del" if client.remove else "replace",
dst=client.lladdr,
oif=ip.link_lookup(ifname=client.wg_interface)[0],
)
def find_wireguard_domains() -> List[str]:
with NDB() as ndb:
# ndb.interfaces[{"kind": "wireguard"}]] seems to trigger https://github.com/svinota/pyroute2/issues/737
iface_values = ndb.interfaces.values()
interfaces = [iface.get("ifname", "") for iface in iface_values if iface.get("kind", "") == "wireguard"]
result = [iface.removeprefix("wg-") for iface in interfaces if iface.startswith("wg-")]
return result
def find_stale_wireguard_clients(wg_interface: str) -> List[str]:
with WireGuard() as wg:
all_clients = []
infos = wg.info(wg_interface)
for info in infos:
clients = info.get_attr("WGDEVICE_A_PEERS")
if clients is not None:
all_clients.extend(clients)
three_minutes_ago = (datetime.now() - timedelta(minutes=3)).timestamp()
stale_clients = [
client.get_attr("WGPEER_A_PUBLIC_KEY").decode("utf-8")
for client in all_clients
# TODO add never connected peers to a list and remove them on next call
if 0 < (client.get_attr("WGPEER_A_LAST_HANDSHAKE_TIME") or {}).get("tv_sec", int()) < three_minutes_ago
]
return stale_clients
| 30.756757 | 115 | 0.639719 | 572 | 4,552 | 4.935315 | 0.326923 | 0.025505 | 0.039674 | 0.045342 | 0.16153 | 0.080057 | 0.066596 | 0.066596 | 0.066596 | 0.038966 | 0 | 0.013185 | 0.25022 | 4,552 | 147 | 116 | 30.965986 | 0.813947 | 0.061731 | 0 | 0.092784 | 0 | 0 | 0.076903 | 0.007349 | 0 | 0 | 0 | 0.013605 | 0 | 1 | 0.103093 | false | 0 | 0.092784 | 0.020619 | 0.340206 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
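The `lladdr` property above packs an MD5 of the WireGuard public key into a locally administered MAC and maps it into `fe80::/10`. A minimal self-contained sketch of that derivation — with a simplified inline stand-in for `mac2eui64`, which in the real code lives in `wgskex.common.utils`:

```python
import hashlib
import ipaddress
from textwrap import wrap

def lladdr_from_pubkey(public_key: str) -> str:
    """Derive a deterministic IPv6 link-local /128 from a WireGuard key.

    Simplified sketch of WireGuardClient.lladdr: MD5 the key, use the first
    five hash bytes as the tail of a locally administered MAC (first octet
    02), then build an EUI-64-style interface identifier under fe80::/10.
    """
    digest = hashlib.md5(public_key.encode("ascii") + b"\n").hexdigest()
    mac = ["02"] + wrap(digest, 2)[:5]          # 6 octets, e.g. 02:d8:e8:...
    mac[3:3] = ["ff", "fe"]                     # EUI-64: insert ff:fe mid-MAC
    mac[0] = "%02x" % (int(mac[0], 16) ^ 0x02)  # flip the universal/local bit
    groups = ["".join(mac[i:i + 2]) for i in range(0, 8, 2)]
    return str(ipaddress.IPv6Interface("fe80::" + ":".join(groups) + "/128"))
```

Deriving the address from the public key keeps the worker stateless: the same peer always maps to the same `/128`, so routes and FDB entries can be added or removed idempotently.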
60ed6b24088a86522dceda37eeecc0306f7958dc | 1,335 | py | Python | scanBase/migrations/0003_ipsection.py | wsqy/sacn_server | e91a41a71b27926fbcfbe3f22bbb6bbc61b39461 | [
"Apache-2.0"
] | null | null | null | scanBase/migrations/0003_ipsection.py | wsqy/sacn_server | e91a41a71b27926fbcfbe3f22bbb6bbc61b39461 | [
"Apache-2.0"
] | null | null | null | scanBase/migrations/0003_ipsection.py | wsqy/sacn_server | e91a41a71b27926fbcfbe3f22bbb6bbc61b39461 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-01-16 13:35
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('scanBase', '0002_auto_20180116_1321'),
]
operations = [
migrations.CreateModel(
name='IPSection',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('ip_section', models.CharField(blank=True, max_length=30, null=True, unique=True, verbose_name='ip段')),
('ip_start', models.GenericIPAddressField(blank=True, null=True, verbose_name='开始ip')),
('ip_end', models.GenericIPAddressField(blank=True, null=True, verbose_name='结束ip')),
('total', models.IntegerField(blank=True, null=True, verbose_name='总量')),
('deal_time', models.DateTimeField(blank=True, null=True, verbose_name='处理时间')),
('country', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='scanBase.CountryInfo', verbose_name='所属国家')),
],
options={
'verbose_name_plural': 'ip段信息',
'verbose_name': 'ip段信息',
},
),
]
| 40.454545 | 140 | 0.612734 | 143 | 1,335 | 5.538462 | 0.538462 | 0.125 | 0.094697 | 0.085859 | 0.209596 | 0.209596 | 0.138889 | 0.138889 | 0 | 0 | 0 | 0.03373 | 0.244944 | 1,335 | 32 | 141 | 41.71875 | 0.751984 | 0.049438 | 0 | 0 | 1 | 0 | 0.135071 | 0.018167 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.12 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
60f1a087b3bdc065cf389f51df6915d8dc0b8312 | 563 | py | Python | airtech_api/flight/models.py | chidioguejiofor/airtech-api | 45d77da0cc4230dd3cb7ab4cbb5168a9239850f5 | [
"MIT"
] | 1 | 2019-04-04T12:27:55.000Z | 2019-04-04T12:27:55.000Z | airtech_api/flight/models.py | chidioguejiofor/airtech-api | 45d77da0cc4230dd3cb7ab4cbb5168a9239850f5 | [
"MIT"
] | 34 | 2019-03-26T11:18:17.000Z | 2022-02-10T08:12:36.000Z | airtech_api/flight/models.py | chidioguejiofor/airtech-api | 45d77da0cc4230dd3cb7ab4cbb5168a9239850f5 | [
"MIT"
] | null | null | null | from airtech_api.utils.auditable_model import AuditableBaseModel
from django.db import models
# Create your models here.
class Flight(AuditableBaseModel):
class Meta:
db_table = 'Flight'
capacity = models.IntegerField(null=False)
location = models.TextField(null=False)
destination = models.TextField(null=False)
schedule = models.DateTimeField(null=False)
current_price = models.IntegerField()
type = models.CharField(
choices=(('local', 'local'), ('international', 'international')),
max_length=13,
)
| 28.15 | 73 | 0.706927 | 60 | 563 | 6.55 | 0.616667 | 0.091603 | 0.096692 | 0.122137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004357 | 0.184725 | 563 | 19 | 74 | 29.631579 | 0.851852 | 0.042629 | 0 | 0 | 0 | 0 | 0.078212 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
60f465712817804b710d37a55f88faebafc8ed3c | 1,675 | py | Python | bot_components/configurator.py | Ferlern/Arctic-Tundra | 407b8c38c31f6c930df662e87ced527b9fd26c61 | [
"MIT"
] | 3 | 2021-11-05T20:22:05.000Z | 2022-02-14T12:12:31.000Z | bot_components/configurator.py | Ferlern/Arctic-Tundra | 407b8c38c31f6c930df662e87ced527b9fd26c61 | [
"MIT"
] | null | null | null | bot_components/configurator.py | Ferlern/Arctic-Tundra | 407b8c38c31f6c930df662e87ced527b9fd26c61 | [
"MIT"
] | null | null | null | import json
from typing import TypedDict
from .bot_emoji import AdditionalEmoji
class Warn(TypedDict):
text: str
mute_time: int
ban: bool
class PersonalVoice(TypedDict):
    category: int
price: int
slot_price: int
bitrate_price: int
class System(TypedDict):
token: str
initial_extensions: list[str]
class ExperienceSystem(TypedDict):
experience_channel: int
cooldown: int
minimal_message_length: int
experience_per_message: list[int]
roles: dict[str, int]
coins_per_level_up: int
class AutoTranslation(TypedDict):
channels: list
lang: str
class Config(TypedDict):
guild: int
token: str
prefixes: list[str]
commands_channels: list[int]
mute_role: int
suggestions_channel: int
moderators_roles: list[int]
warns_system: list[Warn]
coin: str
daily: int
marry_price: int
personal_voice: PersonalVoice
experience_system: ExperienceSystem
auto_translation: AutoTranslation
additional_emoji: AdditionalEmoji
class Configurator:
def __init__(self) -> None:
self.system: System
self.config: Config
def dump(self):
with open("./bot_components/config.json", "w") as write_file:
to_dump = [self.system, self.config]
json.dump(to_dump, write_file, indent=4)
def load(self):
with open("./bot_components/config.json", "r") as write_file:
data = json.load(write_file)
self.system = System(data[0])
self.config = Config(data[1])
def reload(self):
self.dump()
self.load()
configurator = Configurator()
configurator.load()
| 21.202532 | 69 | 0.666269 | 201 | 1,675 | 5.378109 | 0.402985 | 0.029602 | 0.029602 | 0.027752 | 0.064755 | 0.064755 | 0.064755 | 0 | 0 | 0 | 0 | 0.002372 | 0.244776 | 1,675 | 78 | 70 | 21.474359 | 0.852174 | 0 | 0 | 0.033898 | 0 | 0 | 0.034627 | 0.033433 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.050847 | 0 | 0.779661 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
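`Configurator.dump` serialises the two mappings as a two-element JSON list, which is why `load` unpacks `data[0]` and `data[1]` positionally. A hedged round-trip sketch against a temporary file — the payload keys below are a small illustrative subset of the `System`/`Config` TypedDicts, not the full schema:

```python
import json
import tempfile
from pathlib import Path

# Illustrative partial payloads; a real config.json carries every key
# declared on the System and Config TypedDicts above.
system = {"token": "secret-token", "initial_extensions": ["cogs.moderation"]}
config = {"guild": 1234, "prefixes": ["!"], "coin": ":coin:"}

path = Path(tempfile.mkdtemp()) / "config.json"
path.write_text(json.dumps([system, config], indent=4))  # like Configurator.dump

loaded_system, loaded_config = json.loads(path.read_text())  # like Configurator.load
```

The positional list format is compact but order-sensitive; a `{"system": ..., "config": ...}` object would survive reordering, at the cost of changing the on-disk format.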
60f8300c3b1d0bfc0e3ab0efad7d54c27160ef0c | 1,053 | py | Python | 2017-2018/lecture-notes/python/02-algorithms_listing_8_contains_word.py | essepuntato/comp-think | 3dac317bda0eb7650adc4a92c1ccb8a4ce87a3a6 | [
"BSD-2-Clause"
] | 19 | 2017-07-03T11:55:33.000Z | 2021-10-17T10:21:24.000Z | 2017-2018/lecture-notes/python/02-algorithms_listing_8_contains_word.py | essepuntato/comp-think | 3dac317bda0eb7650adc4a92c1ccb8a4ce87a3a6 | [
"BSD-2-Clause"
] | 1 | 2017-12-21T10:52:56.000Z | 2018-06-06T13:59:13.000Z | 2017-2018/lecture-notes/python/02-algorithms_listing_8_contains_word.py | essepuntato/comp-think | 3dac317bda0eb7650adc4a92c1ccb8a4ce87a3a6 | [
"BSD-2-Clause"
] | 4 | 2017-11-13T09:29:06.000Z | 2019-05-09T03:29:49.000Z | def contains_word(first_word, second_word, bibliographic_entry):
contains_first_word = first_word in bibliographic_entry
contains_second_word = second_word in bibliographic_entry
if contains_first_word and contains_second_word:
return 2
elif contains_first_word or contains_second_word:
return 1
else:
return 0
if __name__ == "__main__":
bibliographic_entry = "Peroni, S., Osborne, F., Di Iorio, A., Nuzzolese, A. G., Poggi, F., Vitali, F., " \
"Motta, E. (2017). Research Articles in Simplified HTML: a Web-first format for " \
"HTML-based scholarly articles. PeerJ Computer Science 3: e132. e2513. " \
"DOI: https://doi.org/10.7717/peerj-cs.132"
print(contains_word("Peroni", "Osborne", bibliographic_entry))
print(contains_word("Peroni", "Asprino", bibliographic_entry))
print(contains_word("Reforgiato", "Osborne", bibliographic_entry))
print(contains_word("Reforgiato", "Asprino", bibliographic_entry))
| 47.863636 | 110 | 0.674264 | 127 | 1,053 | 5.299213 | 0.456693 | 0.213967 | 0.10104 | 0.138187 | 0.206538 | 0.206538 | 0 | 0 | 0 | 0 | 0 | 0.02934 | 0.223172 | 1,053 | 21 | 111 | 50.142857 | 0.793399 | 0 | 0 | 0 | 0 | 0.055556 | 0.320988 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0 | 0 | 0.222222 | 0.222222 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
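Since Python booleans are integers, the branching in `contains_word` collapses to a sum of the two membership tests; this sketch (with a shortened entry string) returns the same 2/1/0 values as the version above:

```python
def contains_word(first_word, second_word, bibliographic_entry):
    # True + True == 2, True + False == 1, False + False == 0.
    return (first_word in bibliographic_entry) + (second_word in bibliographic_entry)

entry = ("Peroni, S., Osborne, F., Di Iorio, A. (2017). "
         "Research Articles in Simplified HTML.")
```

Against the full bibliographic entry in the row, the four `print` calls evaluate to 2, 1, 1 and 0 respectively.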
60fe83ebf52b160eaac4df2f49fea7ababebb7f8 | 1,659 | py | Python | esmvalcore/cmor/_fixes/cmip6/cesm2.py | aperezpredictia/ESMValCore | d5bf3f459ff3a43e780d75d57b63b88b6cc8c4f2 | [
"Apache-2.0"
] | 1 | 2019-11-28T13:09:42.000Z | 2019-11-28T13:09:42.000Z | esmvalcore/cmor/_fixes/cmip6/cesm2.py | aperezpredictia/ESMValCore | d5bf3f459ff3a43e780d75d57b63b88b6cc8c4f2 | [
"Apache-2.0"
] | null | null | null | esmvalcore/cmor/_fixes/cmip6/cesm2.py | aperezpredictia/ESMValCore | d5bf3f459ff3a43e780d75d57b63b88b6cc8c4f2 | [
"Apache-2.0"
] | 1 | 2019-11-29T00:50:30.000Z | 2019-11-29T00:50:30.000Z | """Fixes for CESM2 model."""
from ..fix import Fix
from ..shared import (add_scalar_depth_coord, add_scalar_height_coord,
add_scalar_typeland_coord, add_scalar_typesea_coord)
class Fgco2(Fix):
"""Fixes for fgco2."""
def fix_metadata(self, cubes):
"""Add depth (0m) coordinate.
Parameters
----------
        cubes : iris.cube.CubeList
        Returns
        -------
        iris.cube.CubeList
"""
cube = self.get_cube_from_list(cubes)
add_scalar_depth_coord(cube)
return cubes
class Tas(Fix):
"""Fixes for tas."""
def fix_metadata(self, cubes):
"""Add height (2m) coordinate.
Parameters
----------
        cubes : iris.cube.CubeList
        Returns
        -------
        iris.cube.CubeList
"""
cube = self.get_cube_from_list(cubes)
add_scalar_height_coord(cube)
return cubes
class Sftlf(Fix):
"""Fixes for sftlf."""
def fix_metadata(self, cubes):
"""Add typeland coordinate.
Parameters
----------
        cubes : iris.cube.CubeList
        Returns
        -------
        iris.cube.CubeList
"""
cube = self.get_cube_from_list(cubes)
add_scalar_typeland_coord(cube)
return cubes
class Sftof(Fix):
"""Fixes for sftof."""
def fix_metadata(self, cubes):
"""Add typesea coordinate.
Parameters
----------
        cubes : iris.cube.CubeList
        Returns
        -------
        iris.cube.CubeList
"""
cube = self.get_cube_from_list(cubes)
add_scalar_typesea_coord(cube)
return cubes
| 20.481481 | 74 | 0.540084 | 177 | 1,659 | 4.836158 | 0.19774 | 0.084112 | 0.051402 | 0.084112 | 0.65771 | 0.570093 | 0.448598 | 0.448598 | 0.448598 | 0.448598 | 0 | 0.004496 | 0.329717 | 1,659 | 80 | 75 | 20.7375 | 0.765288 | 0.311031 | 0 | 0.521739 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.086957 | 0 | 0.608696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
60ff5c6f7092666241901b36f6825248e6f4d160 | 360 | py | Python | api/flat/urls.py | SanjarbekSaminjonov/musofirlar.backend | 23b09e90cc4e3d153063ad1768b5ae1c18ff866d | [
"Apache-2.0"
] | 1 | 2021-12-23T12:43:17.000Z | 2021-12-23T12:43:17.000Z | api/flat/urls.py | SanjarbekSaminjonov/musofirlar.backend | 23b09e90cc4e3d153063ad1768b5ae1c18ff866d | [
"Apache-2.0"
] | null | null | null | api/flat/urls.py | SanjarbekSaminjonov/musofirlar.backend | 23b09e90cc4e3d153063ad1768b5ae1c18ff866d | [
"Apache-2.0"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('', views.FlatListAPIView.as_view()),
path('create/', views.FlatCreateAPIView.as_view()),
path('<int:pk>/', views.FlatDetailAPIView.as_view()),
path('<int:pk>/update/', views.FlatUpdateAPIView.as_view()),
path('<int:pk>/delete/', views.FlatDeleteAPIView.as_view()),
]
| 30 | 64 | 0.683333 | 43 | 360 | 5.604651 | 0.44186 | 0.124481 | 0.165975 | 0.161826 | 0.186722 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122222 | 360 | 11 | 65 | 32.727273 | 0.762658 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8804c3b09c4502328bb0532182f3bbfcec72facf | 2,171 | py | Python | shop/models.py | mohammadanarul/Ecommerce-Django-YT | afecc8f41693925619b81986d979706c64175360 | [
"MIT"
] | null | null | null | shop/models.py | mohammadanarul/Ecommerce-Django-YT | afecc8f41693925619b81986d979706c64175360 | [
"MIT"
] | null | null | null | shop/models.py | mohammadanarul/Ecommerce-Django-YT | afecc8f41693925619b81986d979706c64175360 | [
"MIT"
] | null | null | null | from django.db import models
from ckeditor.fields import RichTextField
from taggit.managers import TaggableManager
# Create your models here.
from mptt.models import MPTTModel, TreeForeignKey
class Category(MPTTModel):
name = models.CharField(max_length=50, unique=True)
parent = TreeForeignKey('self', on_delete=models.CASCADE, null=True, blank=True, related_name='children')
is_active = models.BooleanField(default=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class MPTTMeta:
order_insertion_by = ['name']
class Brand(models.Model):
name = models.CharField(max_length=50)
is_active = models.BooleanField(default=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Product(models.Model):
STATUS_CHOICES = (
('NONE', 'NONE'),
('NEW', 'NEW'),
('SALE', 'SALE'),
('HOT', 'HOT'),
)
title = models.CharField(max_length=50)
price = models.DecimalField(max_digits=5, decimal_places=2)
short_description = RichTextField()
tags = TaggableManager()
description = RichTextField()
specification = RichTextField()
image = models.ImageField(upload_to='product/')
category = models.ForeignKey(Category, on_delete=models.CASCADE)
brand = models.ForeignKey(Brand, on_delete=models.CASCADE)
stack = models.IntegerField(default=5)
status = models.CharField(max_length=5, choices=STATUS_CHOICES, default='NONE')
    is_featured = models.BooleanField(default=False)
is_special = models.BooleanField(default=False)
is_active = models.BooleanField(default=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class ProductImages(models.Model):
category = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='images')
image = models.ImageField(upload_to='products/') | 38.767857 | 109 | 0.740212 | 261 | 2,171 | 6.007663 | 0.356322 | 0.030612 | 0.080357 | 0.095663 | 0.366709 | 0.273597 | 0.235332 | 0.235332 | 0.235332 | 0.235332 | 0 | 0.005444 | 0.153846 | 2,171 | 56 | 110 | 38.767857 | 0.848122 | 0.011055 | 0 | 0.1875 | 0 | 0 | 0.033085 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1875 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |

# mirari/TCS/migrations/0042_auto_20190726_0145.py
# Generated by Django 2.0.5 on 2019-07-26 06:45
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('TCS', '0041_auto_20190726_0030'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='modelo',
            options={'default_permissions': [], 'ordering': ['-id'], 'permissions': [('Can_View__Modelo', 'Ve modelos'), ('Can_Create__Modelo', 'Crea modelos'), ('Can_Update__Modelo', 'Modifica modelos'), ('Can_Delete__Modelo', 'Elimina modelos'), ('Can_Change__ModelTCS', 'Modifica modelos de equipo')], 'verbose_name': 'Modelo', 'verbose_name_plural': 'Modelos'},
        ),
    ]

# manubot/cite/tests/test_citekey_api.py
"""Tests API-level functions in manubot.cite. Both functions are found in citekey.py"""
import pytest

from manubot.cite import citekey_to_csl_item, standardize_citekey


@pytest.mark.parametrize(
    "citekey,expected",
    [
        ("doi:10.5061/DRYAD.q447c/1", "doi:10.5061/dryad.q447c/1"),
        ("doi:10.5061/dryad.q447c/1", "doi:10.5061/dryad.q447c/1"),
        ("doi:10/b6vnmd", "doi:10.1016/s0933-3657(96)00367-3"),
        ("doi:10/B6VNMD", "doi:10.1016/s0933-3657(96)00367-3"),
        (
            "doi:10/xxxxxxxxxxxxxYY",
            "doi:10/xxxxxxxxxxxxxyy",
        ),  # passthrough non-existent shortDOI
        ("pmid:24159271", "pmid:24159271"),
        ("isbn:1339919885", "isbn:9781339919881"),
        ("isbn:1-339-91988-5", "isbn:9781339919881"),
        ("isbn:978-0-387-95069-3", "isbn:9780387950693"),
        ("isbn:9780387950938", "isbn:9780387950938"),
        ("isbn:1-55860-510-X", "isbn:9781558605107"),
        ("isbn:1-55860-510-x", "isbn:9781558605107"),
    ],
)
def test_standardize_citekey(citekey, expected):
    """
    Standardize identifiers based on their source
    """
    output = standardize_citekey(citekey)
    assert output == expected


@pytest.mark.xfail(reason="https://twitter.com/dhimmel/status/950443969313419264")
def test_citekey_to_csl_item_doi_datacite():
    citekey = "doi:10.7287/peerj.preprints.3100v1"
    csl_item = citekey_to_csl_item(citekey)
    assert csl_item["id"] == "11cb5HXoY"
    assert csl_item["URL"] == "https://doi.org/10.7287/peerj.preprints.3100v1"
    assert csl_item["DOI"] == "10.7287/peerj.preprints.3100v1"
    assert csl_item["type"] == "report"
    assert (
        csl_item["title"]
        == "Sci-Hub provides access to nearly all scholarly literature"
    )
    authors = csl_item["author"]
    assert authors[0]["family"] == "Himmelstein"
    assert authors[-1]["family"] == "Greene"


def test_citekey_to_csl_item_arxiv():
    citekey = "arxiv:cond-mat/0703470v2"
    csl_item = citekey_to_csl_item(citekey)
    assert csl_item["id"] == "ES92tcdg"
    assert csl_item["URL"] == "https://arxiv.org/abs/cond-mat/0703470v2"
    assert csl_item["number"] == "cond-mat/0703470v2"
    assert csl_item["version"] == "2"
    assert csl_item["type"] == "report"
    assert csl_item["container-title"] == "arXiv"
    assert csl_item["title"] == "Portraits of Complex Networks"
    authors = csl_item["author"]
    assert authors[0]["literal"] == "J. P. Bagrow"
    assert csl_item["DOI"] == "10.1209/0295-5075/81/68004"


def test_citekey_to_csl_item_pmc():
    """
    https://api.ncbi.nlm.nih.gov/lit/ctxp/v1/pmc/?format=csl&id=3041534
    """
    citekey = "pmcid:PMC3041534"
    csl_item = citekey_to_csl_item(citekey)
    assert csl_item["id"] == "RoOhUFKU"
    assert csl_item["URL"] == "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3041534/"
    assert csl_item["container-title-short"] == "Summit Transl Bioinform"
    assert (
        csl_item["title"]
        == "Secondary Use of EHR: Data Quality Issues and Informatics Opportunities"
    )
    authors = csl_item["author"]
    assert authors[0]["family"] == "Botsis"
    assert csl_item["PMID"] == "21347133"
    assert csl_item["PMCID"] == "PMC3041534"
    assert "generated by Manubot" in csl_item["note"]
    assert "standard_id: pmcid:PMC3041534" in csl_item["note"]


def test_citekey_to_csl_item_pubmed_1():
    """
    Generated from XML returned by
    https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id=21347133&rettype=full
    """
    citekey = "pmid:21347133"
    csl_item = citekey_to_csl_item(citekey)
    assert csl_item["id"] == "y9ONtSZ9"
    assert csl_item["type"] == "article-journal"
    assert csl_item["URL"] == "https://www.ncbi.nlm.nih.gov/pubmed/21347133"
    assert csl_item["container-title"] == "Summit on translational bioinformatics"
    assert (
        csl_item["title"]
        == "Secondary Use of EHR: Data Quality Issues and Informatics Opportunities."
    )
    assert csl_item["issued"]["date-parts"] == [[2010, 3, 1]]
    authors = csl_item["author"]
    assert authors[0]["given"] == "Taxiarchis"
    assert authors[0]["family"] == "Botsis"
    assert csl_item["PMID"] == "21347133"
    assert csl_item["PMCID"] == "PMC3041534"


def test_citekey_to_csl_item_pubmed_2():
    """
    Generated from XML returned by
    https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id=27094199&rettype=full
    """
    citekey = "pmid:27094199"
    csl_item = citekey_to_csl_item(citekey)
    print(csl_item)
    assert csl_item["id"] == "alaFV9OY"
    assert csl_item["type"] == "article-journal"
    assert csl_item["URL"] == "https://www.ncbi.nlm.nih.gov/pubmed/27094199"
    assert csl_item["container-title"] == "Circulation. Cardiovascular genetics"
    assert csl_item["container-title-short"] == "Circ Cardiovasc Genet"
    assert csl_item["page"] == "179-84"
    assert (
        csl_item["title"]
        == "Genetic Association-Guided Analysis of Gene Networks for the Study of Complex Traits."
    )
    assert csl_item["issued"]["date-parts"] == [[2016, 4]]
    authors = csl_item["author"]
    assert authors[0]["given"] == "Casey S"
    assert authors[0]["family"] == "Greene"
    assert csl_item["PMID"] == "27094199"
    assert csl_item["DOI"] == "10.1161/circgenetics.115.001181"


def test_citekey_to_csl_item_pubmed_with_numeric_month():
    """
    Generated from XML returned by
    https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id=29028984&rettype=full
    See https://github.com/manubot/manubot/issues/69
    """
    citekey = "pmid:29028984"
    csl_item = citekey_to_csl_item(citekey)
    print(csl_item)
    assert csl_item["issued"]["date-parts"] == [[2018, 3, 15]]


def test_citekey_to_csl_item_pubmed_book():
    """
    Extracting CSL metadata from books in PubMed is not supported.
    Logic not implemented to parse XML returned by
    https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id=29227604&rettype=full
    """
    with pytest.raises(NotImplementedError):
        citekey_to_csl_item("pmid:29227604")


def test_citekey_to_csl_item_isbn():
    csl_item = citekey_to_csl_item("isbn:9780387950693")
    assert csl_item["type"] == "book"
    assert csl_item["title"] == "Complex analysis"
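The ISBN cases in the parametrized test above all standardize to ISBN-13. The conversion rule itself (not manubot's actual implementation, which delegates to a dedicated ISBN library) can be sketched in plain Python — prefix `978`, drop the old check digit, and recompute the ISBN-13 check digit with alternating 1/3 weights:

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert an ISBN-10 (with or without hyphens) to an ISBN-13 string."""
    digits = isbn10.replace("-", "")
    body = "978" + digits[:9]  # drop the ISBN-10 check digit, prefix 978
    # ISBN-13 check digit: weight digits alternately 1, 3, 1, 3, ...
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(body))
    check = (10 - total % 10) % 10
    return body + str(check)

# Reproduces two of the expected values from the test table above:
assert isbn10_to_isbn13("1339919885") == "9781339919881"
assert isbn10_to_isbn13("1-55860-510-X") == "9781558605107"
```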

# pyspectator/collection.py
# Note: the abstract base classes moved to collections.abc (the plain
# `collections` aliases were removed in Python 3.10).
from collections.abc import MutableMapping, Container
from datetime import datetime, timedelta

from pyvalid import accepts


class LimitedTimeTable(MutableMapping, Container):

    def __init__(self, time_span):
        self.__storage = dict()
        self.__time_span = None
        self.time_span = time_span

    @property
    def time_span(self):
        return self.__time_span

    @time_span.setter
    @accepts(object, timedelta)
    def time_span(self, value):
        self.__time_span = value

    @property
    def oldest(self):
        value = None
        if self.__len__() > 0:
            value = min(self.__storage.keys())
        return value

    @property
    def newest(self):
        value = None
        if self.__len__() > 0:
            value = max(self.__storage.keys())
        return value

    def oldest_keys(self, size):
        for key in self.__get_slice(0, size):
            yield key

    def oldest_values(self, size):
        for key in self.oldest_keys(size):
            yield self.__storage.get(key)

    def oldest_items(self, size):
        for key in self.oldest_keys(size):
            yield (key, self.__storage.get(key))

    def newest_keys(self, size):
        for key in self.__get_slice(-size, None):
            yield key

    def newest_values(self, size):
        for key in self.newest_keys(size):
            yield self.__storage.get(key)

    def newest_items(self, size):
        for key in self.newest_keys(size):
            yield (key, self.__storage.get(key))

    def __get_slice(self, start, end):
        keys = sorted(self.keys())
        return keys[start:end]

    def __getitem__(self, item):
        return self.__storage.__getitem__(item)

    @accepts(object, datetime, object)
    def __setitem__(self, key, value):
        now = datetime.now()
        if key > now:
            raise ValueError('Can\'t set item from future!')
        oldest = self.oldest
        if (oldest is not None) and (oldest != key):
            longest_time_span = now - oldest
            # Item is too old for current timetable
            if longest_time_span >= self.time_span:
                self.__delitem__(oldest)
        return self.__storage.__setitem__(key, value)

    def __delitem__(self, key):
        return self.__storage.__delitem__(key)

    def __len__(self):
        return self.__storage.__len__()

    def __iter__(self):
        return self.__storage.__iter__()

    def __contains__(self, item):
        return self.__storage.__contains__(item)


__all__ = ['LimitedTimeTable']
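The interesting behaviour of `LimitedTimeTable` is in `__setitem__`: each insert rejects future-dated keys and evicts at most the single oldest entry once it falls outside `time_span`. That eviction rule can be demonstrated with a plain dict, without the `pyvalid` dependency (a minimal stand-in, assuming the one-entry-per-insert eviction shown above):

```python
from datetime import datetime, timedelta

# Stand-in mimicking LimitedTimeTable's eviction rule with a plain dict.
storage = {}
time_span = timedelta(minutes=10)

def put(key, value):
    now = datetime.now()
    if key > now:
        raise ValueError("Can't set item from future!")
    if storage:
        oldest = min(storage)
        # Only the single oldest entry is dropped, and only if it has aged out.
        if oldest != key and now - oldest >= time_span:
            del storage[oldest]
    storage[key] = value

now = datetime.now()
put(now - timedelta(minutes=30), "old")
put(now, "new")  # this insert evicts the 30-minute-old entry
assert list(storage.values()) == ["new"]
```

Note that only one stale entry is removed per insert, so a quiet table can accumulate several expired items until new writes arrive — a deliberate trade-off that keeps each insert O(n) in the worst case rather than requiring a sweep.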

# streams/blog/migrations/0012_auto_20200928_1212.py
# Generated by Django 3.1.1 on 2020-09-28 12:12
import datetime

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('blog', '0011_blogpostpage_featured'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='blogpostpage',
            name='date',
        ),
        migrations.AddField(
            model_name='blogpostpage',
            name='created',
            field=models.DateTimeField(blank=True, default=datetime.datetime.now),
        ),
    ]

# create_read_write_1/Writing/to_csv.py
"""
Convert a DataFrame to CSV
"""
import pandas as pd

dataset = pd.DataFrame({'Frutas': ["Abacaxi", "Mamão"],
                        "Nomes": ["Éverton", "Márcia"]},
                       index=["Linha 1", "Linha 2"])

dataset.to_csv("dataset.csv")
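`to_csv` writes one header row (with an empty cell over the index column) followed by one row per index label. The same file layout can be produced without pandas; a minimal stdlib sketch, hard-coding the rows from the DataFrame above:

```python
import csv

# Rows mirror the DataFrame above: an empty header cell for the index,
# then one row per index label.
rows = [
    ["", "Frutas", "Nomes"],
    ["Linha 1", "Abacaxi", "Éverton"],
    ["Linha 2", "Mamão", "Márcia"],
]

with open("dataset.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)
```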

# test/manual/documents/test_iter_documents.py
from itertools import islice
from test import get_user_session, cassette
from test.resources.documents import delete_all_documents, create_document


def test_should_iterate_through_documents():
    session = get_user_session()
    delete_all_documents()

    with cassette('fixtures/resources/documents/iter_documents/iterate_through_documents.yaml'):
        create_document(session, 'title 1')
        create_document(session, 'title 2')
        create_document(session, 'title 3')

        docs = list(islice(session.documents.iter(page_size=2), 3))

        assert len(docs) == 3
        assert docs[0].title == 'title 1'
        assert docs[1].title == 'title 2'
        assert docs[2].title == 'title 3'
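The `islice` call above caps a lazily-paged iterator at three items, so the client never fetches more pages than it needs. The mechanics can be shown with a plain generator standing in for `session.documents.iter` (a hypothetical stand-in, not the Mendeley SDK):

```python
from itertools import islice

def iter_documents(page_size=2, total=5):
    # Stand-in for an API iterator that yields items page by page.
    for start in range(0, total, page_size):
        page = list(range(start, min(start + page_size, total)))
        yield from page

# islice stops pulling from the generator after 3 items, so with
# page_size=2 only the first two "pages" are ever materialized.
docs = list(islice(iter_documents(page_size=2), 3))
assert docs == [0, 1, 2]
```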

# test_project/settings.py
import os
import dj_database_url

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DEBUG = True

ALLOWED_HOSTS = []

ROOT_URLCONF = 'groups.tests.urls'

STATIC_URL = '/static/'

SECRET_KEY = 'krc34ji^-fd-=+r6e%p!0u0k9h$9!q*_#l=6)74h#o(jrxsx4p'

PASSWORD_HASHERS = ('django.contrib.auth.hashers.MD5PasswordHasher',)

DATABASES = {
    'default': dj_database_url.config(default='postgres://localhost/groups')
}

DEFAULT_FILE_STORAGE = 'inmemorystorage.InMemoryStorage'

INSTALLED_APPS = (
    'groups',
    'crispy_forms',
    'pagination',
    'polymorphic',

    # Put contenttypes before auth to work around test issue.
    # See: https://code.djangoproject.com/ticket/10827#comment:12
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'django.contrib.admin',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
)

MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
)

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [
            os.path.join(BASE_DIR, 'groups', 'tests', 'templates')
        ],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.contrib.auth.context_processors.auth',
                'django.template.context_processors.debug',
                'django.template.context_processors.i18n',
                'django.template.context_processors.media',
                'django.template.context_processors.request',
                'django.template.context_processors.static',
                'django.template.context_processors.tz',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

CRISPY_TEMPLATE_PACK = 'bootstrap3'

TEST_RUNNER = 'test_project.test_runner.Runner'
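The `dj_database_url.config(...)` call in `DATABASES` parses a 12-factor-style URL into Django's database dict. Its essence can be approximated with the stdlib URL parser (an illustrative sketch only — the real library also handles query-string options, engine aliases, and more):

```python
from urllib.parse import urlparse

def parse_db_url(url):
    # Hypothetical simplified parser; field names follow Django's
    # DATABASES convention, the ENGINE value is an assumption.
    p = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": p.path.lstrip("/"),
        "USER": p.username or "",
        "PASSWORD": p.password or "",
        "HOST": p.hostname or "",
        "PORT": str(p.port or ""),
    }

config = parse_db_url("postgres://localhost/groups")
assert config["NAME"] == "groups"
assert config["HOST"] == "localhost"
```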

# ble.py (MicroPython / ESP32)
import bluetooth
import time
bt = bluetooth.BLE() # singleton
bt.active(True) # activate BT stack
UART_UUID = bluetooth.UUID('6E400001-B5A3-F393-E0A9-E50E24DCCA9E')
UART_TX = (bluetooth.UUID('6E400003-B5A3-F393-E0A9-E50E24DCCA9E'), bluetooth.FLAG_READ | bluetooth.FLAG_NOTIFY,)
UART_RX = (bluetooth.UUID('6E400002-B5A3-F393-E0A9-E50E24DCCA9E'), bluetooth.FLAG_WRITE,)
UART_SERVICE = (UART_UUID, (UART_TX, UART_RX,),)
SERVICES = (UART_SERVICE,)
( (tx, rx,), ) = bt.gatts_register_services(SERVICES)
bt.gap_advertise(100)

# examples/custom-generator/customer.py
from random import randint
from sledo.generate.field_generators.base import FieldGenerator

values = (
    "Austria",
    "Belgium",
    "Bulgaria",
    "Croatia",
    "Cyprus",
    "Czech Republic",
    "Denmark",
    "Estonia",
    "Finland",
    "France",
    "Germany",
    "Greece",
    "Hungary",
    "Ireland",
    "Italy",
    "Latvia",
    "Lithuania",
    "Luxembourg",
    "Malta",
    "Netherlands",
    "Poland",
    "Portugal",
    "Romania",
    "Slovakia",
    "Slovenia",
    "Spain",
    "Sweden",
    "United States",
    "Japan",
    "United Kingdom",
    "Bangladesh",
    "Argentina",
    "China",
)

count = len(values) - 1


class CustomerAddressGenerator(FieldGenerator):
    def generate(self, **_):
        return values[randint(0, count)]
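The generator simply draws a uniform random entry from the tuple; `randint` is inclusive on both ends, so `count = len(values) - 1` makes every index reachable. The core behaviour can be exercised without the `sledo` base class (an abbreviated values list is used here for illustration):

```python
from random import randint

values = ("Austria", "Belgium", "Bulgaria")  # abbreviated list for illustration
count = len(values) - 1

def generate():
    # randint is inclusive on both ends, so indexes 0..count are all reachable
    return values[randint(0, count)]

picks = {generate() for _ in range(200)}
assert picks <= set(values)  # every pick comes from the tuple
```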

# tests/test_cecum.py
import unittest
from opencmiss.utils.zinc.finiteelement import evaluateFieldNodesetRange
from opencmiss.utils.zinc.general import ChangeManager
from opencmiss.zinc.context import Context
from opencmiss.zinc.element import Element
from opencmiss.zinc.field import Field
from opencmiss.zinc.result import RESULT_OK
from scaffoldmaker.meshtypes.meshtype_3d_cecum1 import MeshType_3d_cecum1
from scaffoldmaker.utils.zinc_utils import createFaceMeshGroupExteriorOnFace

from testutils import assertAlmostEqualList


class CecumScaffoldTestCase(unittest.TestCase):

    def test_cecum1(self):
        """
        Test creation of cecum scaffold.
        """
        parameterSetNames = MeshType_3d_cecum1.getParameterSetNames()
        self.assertEqual(parameterSetNames, ["Default", "Pig 1"])
        options = MeshType_3d_cecum1.getDefaultOptions("Pig 1")
        self.assertEqual(30, len(options))
        self.assertEqual(5, options.get("Number of segments"))
        self.assertEqual(2, options.get("Number of elements around tenia coli"))
        self.assertEqual(8, options.get("Number of elements along segment"))
        self.assertEqual(1, options.get("Number of elements through wall"))
        self.assertEqual(35.0, options.get("Start inner radius"))
        self.assertEqual(3.0, options.get("Start inner radius derivative"))
        self.assertEqual(38.0, options.get("End inner radius"))
        self.assertEqual(3.0, options.get("End inner radius derivative"))
        self.assertEqual(0.5, options.get("Corner inner radius factor"))
        self.assertEqual(0.25, options.get("Haustrum inner radius factor"))
        self.assertEqual(4.0, options.get("Segment length mid derivative factor"))
        self.assertEqual(3, options.get("Number of tenia coli"))
        self.assertEqual(5.0, options.get("Start tenia coli width"))
        self.assertEqual(0.0, options.get("End tenia coli width derivative"))
        self.assertEqual(2.0, options.get("Wall thickness"))
        ostiumOptions = options['Ileocecal junction']
        ostiumSettings = ostiumOptions.getScaffoldSettings()
        self.assertEqual(1, ostiumSettings.get("Number of vessels"))
        self.assertEqual(8, ostiumSettings.get("Number of elements around ostium"))
        self.assertEqual(1, ostiumSettings.get("Number of elements through wall"))
        self.assertEqual(20.0, ostiumSettings.get("Ostium diameter"))
        self.assertEqual(10.0, ostiumSettings.get("Vessel inner diameter"))
        self.assertEqual(60, options.get("Ileocecal junction angular position degrees"))
        self.assertEqual(0.5, options.get("Ileocecal junction position along factor"))

        context = Context("Test")
        region = context.getDefaultRegion()
        self.assertTrue(region.isValid())
        annotationGroups = MeshType_3d_cecum1.generateBaseMesh(region, options)
        self.assertEqual(2, len(annotationGroups))

        fieldmodule = region.getFieldmodule()
        self.assertEqual(RESULT_OK, fieldmodule.defineAllFaces())
        mesh3d = fieldmodule.findMeshByDimension(3)
        self.assertEqual(1492, mesh3d.getSize())
        mesh2d = fieldmodule.findMeshByDimension(2)
        self.assertEqual(5617, mesh2d.getSize())
        mesh1d = fieldmodule.findMeshByDimension(1)
        self.assertEqual(6767, mesh1d.getSize())
        nodes = fieldmodule.findNodesetByFieldDomainType(Field.DOMAIN_TYPE_NODES)
        self.assertEqual(2642, nodes.getSize())
        datapoints = fieldmodule.findNodesetByFieldDomainType(Field.DOMAIN_TYPE_DATAPOINTS)
        self.assertEqual(0, datapoints.getSize())

        coordinates = fieldmodule.findFieldByName("coordinates").castFiniteElement()
        self.assertTrue(coordinates.isValid())
        minimums, maximums = evaluateFieldNodesetRange(coordinates, nodes)
        assertAlmostEqualList(self, minimums, [-49.01658984455258, -46.89686037622053, -2.343256155753525], 1.0E-6)
        assertAlmostEqualList(self, maximums, [42.18085849205387, 54.89264119402881, 180.0], 1.0E-6)

        with ChangeManager(fieldmodule):
            one = fieldmodule.createFieldConstant(1.0)
            faceMeshGroup = createFaceMeshGroupExteriorOnFace(fieldmodule, Element.FACE_TYPE_XI3_1)
            surfaceAreaField = fieldmodule.createFieldMeshIntegral(one, coordinates, faceMeshGroup)
            surfaceAreaField.setNumbersOfPoints(4)
            volumeField = fieldmodule.createFieldMeshIntegral(one, coordinates, mesh3d)
            volumeField.setNumbersOfPoints(3)
        fieldcache = fieldmodule.createFieldcache()
        result, surfaceArea = surfaceAreaField.evaluateReal(fieldcache, 1)
        self.assertEqual(result, RESULT_OK)
        self.assertAlmostEqual(surfaceArea, 65960.20655074248, delta=1.0E-6)
        result, volume = volumeField.evaluateReal(fieldcache, 1)
        self.assertEqual(result, RESULT_OK)
        self.assertAlmostEqual(volume, 127905.28250502056, delta=1.0E-6)


if __name__ == "__main__":
    unittest.main()

# muni_portal/core/migrations/0030_remove_servicerequest_mobile_reference.py
# Generated by Django 2.2.10 on 2021-02-24 09:42
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('core', '0029_auto_20210224_0936'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='servicerequest',
            name='mobile_reference',
        ),
    ]

# dnd/mobile/urls.py
from django.conf.urls import patterns, url, include
from .views import force_desktop_version, return_to_mobile_version

app_name = 'mobile'

urlpatterns = [
    # force desktop
    url(r'^force-desktop-version/$', force_desktop_version, name='force_desktop_version'),
    # return to mobile version
    url(r'^return-to-mobile-version/$', return_to_mobile_version, name='return_to_mobile_version'),
    # index
    url(r'^', include('dnd.mobile.index.urls')),
    # character classes
    url(r'^classes/', include('dnd.mobile.character_classes.urls')),
    # feats
    url(r'^feats/', include('dnd.mobile.feats.urls')),
    # items
    url(r'^items/', include('dnd.mobile.items.urls')),
    # languages
    url(r'^languages/', include('dnd.mobile.languages.urls')),
    # monsters
    url(r'^monsters/', include('dnd.mobile.monsters.urls')),
    # races
    url(r'^races/', include('dnd.mobile.races.urls')),
    # rulebooks
    url(r'^rulebooks/', include('dnd.mobile.rulebooks.urls')),
    # rules
    url(r'^rules/', include('dnd.mobile.rules.urls')),
    # skills
    url(r'^skills/', include('dnd.mobile.skills.urls')),
    # spells
    url(r'^spells/', include('dnd.mobile.spells.urls')),
    # deities
    url(r'^deities/', include('dnd.mobile.deities.urls')),
]

# tests/gejun_sum.py
__author__ = 'jeffye'

def sum_consecutives(s):
    # Original (buggy) attempt: the early `return` statements exit on the
    # very first run of elements, so `li` is never populated.
    i = 1
    li = []
    if i < len(s):
        n = 1
        while s[i] != s[i + 1] and s[i] != s[i - 1]:
            sum = s[i]
            i = i + 1
            return sum
        while s[i] == s[i + 1]:
            n = n + 1
            sum = s[i] * n
            i = i + 1
            return sum
        li.append(sum)
    return li


def sum_consecutives_corrected(s):
    start = 0
    li = []
    n = 1
    while start < len(s):
        if start == len(s) - 1:  # last element
            li.append(s[start])
            break
        # Bounds guard added: without `start + n < len(s)`, a run of equal
        # values reaching the end of the list raises IndexError.
        elif start + n < len(s) and s[start] == s[start + n]:  # equal, record the run length
            n += 1
        else:  # run ended: sum the previous equal elements and append to li
            li.append(sum(s[start: start + n]))
            start += n
            n = 1
    return li


if __name__ == '__main__':
    test_li = [-5, -5, 7, 7, 12, 0]  # should return [-10, 14, 12, 0]
    print(sum_consecutives_corrected(test_li))
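As a quick sanity check outside the `__main__` guard, the corrected routine can be asserted against a couple of inputs (restated here, with a bounds guard for runs that reach the end of the list, so the snippet runs standalone):

```python
def sum_consecutives_checked(s):
    # Self-contained restatement: collapse each run of equal consecutive
    # values into its sum.
    start, n, li = 0, 1, []
    while start < len(s):
        if start == len(s) - 1:
            li.append(s[start])
            break
        elif start + n < len(s) and s[start] == s[start + n]:
            n += 1
        else:
            li.append(sum(s[start: start + n]))
            start += n
            n = 1
    return li

assert sum_consecutives_checked([-5, -5, 7, 7, 12, 0]) == [-10, 14, 12, 0]
assert sum_consecutives_checked([1, 1, 1]) == [3]  # run reaching the end
```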
71866e54c9be9ceced231705351ad07d4dec3246 | 244 | py | Python | src/tests/test_app_db.py | kazqvaizer/arq-sqlalchemy-boilerplate | c14596ed358a061e6eb2a380f4bd962242b123f3 | [
"MIT"
] | 6 | 2021-12-20T14:49:14.000Z | 2022-03-21T14:32:49.000Z | src/tests/test_app_db.py | kazqvaizer/arq-sqlalchemy-boilerplate | c14596ed358a061e6eb2a380f4bd962242b123f3 | [
"MIT"
] | null | null | null | src/tests/test_app_db.py | kazqvaizer/arq-sqlalchemy-boilerplate | c14596ed358a061e6eb2a380f4bd962242b123f3 | [
"MIT"
] | null | null | null | import pytest
from app.db import session_scope
pytestmark = pytest.mark.asyncio
async def test_engine_configured(env):
async with session_scope() as session:
assert str(session.bind.engine.url) == env("SQLALCHEMY_DATABASE_URI")
| 22.181818 | 77 | 0.762295 | 34 | 244 | 5.294118 | 0.735294 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151639 | 244 | 10 | 78 | 24.4 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0.094262 | 0.094262 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
718e41b1051f8c81e49363a47885bbfedb81564d | 2,027 | py | Python | external/model-preparation-algorithm/tests/conftest.py | opencv/openvino_training_extensions | f5d809741e192a2345558efc75899a475019cf98 | [
"Apache-2.0"
] | 775 | 2019-03-01T02:13:33.000Z | 2020-09-07T22:49:15.000Z | external/model-preparation-algorithm/tests/conftest.py | opencv/openvino_training_extensions | f5d809741e192a2345558efc75899a475019cf98 | [
"Apache-2.0"
] | 229 | 2019-02-28T21:37:08.000Z | 2020-09-07T15:11:49.000Z | external/model-preparation-algorithm/tests/conftest.py | opencv/openvino_training_extensions | f5d809741e192a2345558efc75899a475019cf98 | [
"Apache-2.0"
] | 290 | 2019-02-28T20:32:11.000Z | 2020-09-07T05:51:41.000Z | # Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
try:
import e2e.fixtures
from e2e.conftest_utils import * # noqa
from e2e.conftest_utils import pytest_addoption as _e2e_pytest_addoption # noqa
from e2e import config # noqa
from e2e.utils import get_plugins_from_packages
pytest_plugins = get_plugins_from_packages([e2e])
except ImportError:
_e2e_pytest_addoption = None
import config
import pytest
from ote_sdk.test_suite.pytest_insertions import *
from ote_sdk.test_suite.training_tests_common import REALLIFE_USECASE_CONSTANT
pytest_plugins = get_pytest_plugins_from_ote()
ote_conftest_insertion(default_repository_name='ote/training_extensions/external/model-preparation-algorithm')
@pytest.fixture
def ote_test_domain_fx():
return 'model-preparation-algorithm'
@pytest.fixture
def ote_test_scenario_fx(current_test_parameters_fx):
assert isinstance(current_test_parameters_fx, dict)
if current_test_parameters_fx.get('usecase') == REALLIFE_USECASE_CONSTANT:
return 'performance'
else:
return 'integration'
@pytest.fixture(scope='session')
def ote_templates_root_dir_fx():
import os.path as osp
import logging
logger = logging.getLogger(__name__)
root = osp.dirname(osp.dirname(osp.realpath(__file__)))
root = f'{root}/configs/'
logger.debug(f'overloaded ote_templates_root_dir_fx: return {root}')
return root
@pytest.fixture(scope='session')
def ote_reference_root_dir_fx():
import os.path as osp
import logging
logger = logging.getLogger(__name__)
root = osp.dirname(osp.dirname(osp.realpath(__file__)))
root = f'{root}/tests/reference/'
logger.debug(f'overloaded ote_reference_root_dir_fx: return {root}')
return root
# pytest magic
def pytest_generate_tests(metafunc):
ote_pytest_generate_tests_insertion(metafunc)
def pytest_addoption(parser):
ote_pytest_addoption_insertion(parser)
| 32.174603 | 111 | 0.750863 | 263 | 2,027 | 5.429658 | 0.334601 | 0.052521 | 0.02521 | 0.048319 | 0.436975 | 0.313725 | 0.27451 | 0.27451 | 0.158263 | 0.158263 | 0 | 0.008294 | 0.167242 | 2,027 | 62 | 112 | 32.693548 | 0.837678 | 0.049334 | 0 | 0.291667 | 0 | 0 | 0.145396 | 0.087237 | 0 | 0 | 0 | 0 | 0.020833 | 1 | 0.125 | false | 0.020833 | 0.291667 | 0.020833 | 0.520833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
7195924eb07d641386ea892a9ee9a4835feb2275 | 11,102 | py | Python | gym_flock/envs/old/flocking_position.py | katetolstaya/gym-flock | 3236d1dafcb1b9be0cf78b471672e8becb2d37af | [
"MIT"
] | 19 | 2019-07-29T22:19:58.000Z | 2022-01-27T04:38:38.000Z | gym_flock/envs/old/flocking_position.py | henghenghahei849/gym-flock | b09bdfbbe4a96fe052958d1f9e1e9dd314f58419 | [
"MIT"
] | null | null | null | gym_flock/envs/old/flocking_position.py | henghenghahei849/gym-flock | b09bdfbbe4a96fe052958d1f9e1e9dd314f58419 | [
"MIT"
] | 5 | 2019-10-03T14:44:49.000Z | 2021-12-09T20:39:39.000Z | import gym
from gym import spaces, error, utils
from gym.utils import seeding
import numpy as np
from scipy.spatial.distance import pdist, squareform
import configparser
from os import path
import matplotlib.pyplot as plt
from matplotlib.pyplot import gca
font = {'family' : 'sans-serif',
'weight' : 'bold',
'size' : 14}
class FlockingEnv(gym.Env):
def __init__(self):
config_file = path.join(path.dirname(__file__), "params_flock.cfg")
config = configparser.ConfigParser()
config.read(config_file)
config = config['flock']
self.fig = None
self.line1 = None
self.filter_len = int(config['filter_length'])
self.nx_system = 4
self.n_nodes = int(config['network_size'])
self.comm_radius = float(config['comm_radius'])
self.dt = float(config['system_dt'])
self.v_max = float(config['max_vel_init'])
self.v_bias = self.v_max # 0.5 * self.v_max
self.r_max = float(config['max_rad_init'])
self.std_dev = float(config['std_dev']) * self.dt
self.pooling = []
if config.getboolean('sum_pooling'):
self.pooling.append(np.nansum)
if config.getboolean('min_pooling'):
self.pooling.append(np.nanmin)
if config.getboolean('max_pooling'):
self.pooling.append(np.nanmax)
self.n_pools = len(self.pooling)
# number of features and outputs
self.n_features = int(config['N_features'])
self.nx = int(self.n_features / self.n_pools / self.filter_len)
self.nu = int(config['N_outputs']) # outputs
self.x_agg = np.zeros((self.n_nodes, self.nx * self.filter_len, self.n_pools))
self.x = np.zeros((self.n_nodes, self.nx_system))
self.u = np.zeros((self.n_nodes, self.nu))
self.mean_vel = np.zeros((self.n_nodes, self.nu))
# TODO
self.max_accel = 40
self.max_z = 200
# self.b = np.ones((self.n_nodes,1))
# self.action_space = spaces.Box(low=-self.max_accel, high=self.max_accel, shape=(self.n_nodes, 2), dtype=np.float32 )
# self.observation_space = spaces.Box(low=-self.max_z, high=self.max_z, shape=(
# self.n_nodes, self.nx * self.filter_len * self.n_pools) , dtype=np.float32)
self.action_space = spaces.Box(low=-self.max_accel, high=self.max_accel, shape=(2,) , dtype=np.float32 )
self.observation_space = spaces.Box(low=-self.max_z, high=self.max_z, shape=(self.n_features, ), dtype=np.float32)
self.seed()
def render(self, mode='human'):
if self.fig is None:
plt.ion()
fig = plt.figure()
ax = fig.add_subplot(111)
line1, = ax.plot(self.x[:, 0], self.x[:, 1], 'bo') # Returns a tuple of line objects, thus the comma
ax.plot([0], [0], 'kx')
plt.ylim(-1.0 * self.r_max, 1.0 * self.r_max)
plt.xlim(-1.0 * self.r_max, 1.0 * self.r_max)
a = gca()
a.set_xticklabels(a.get_xticks(), font)
a.set_yticklabels(a.get_yticks(), font)
plt.title('GNN Controller')
self.fig = fig
self.line1 = line1
self.line1.set_xdata(self.x[:, 0])
self.line1.set_ydata(self.x[:, 1])
self.fig.canvas.draw()
self.fig.canvas.flush_events()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def step(self, u):
x = self.x
x_ = np.zeros((self.n_nodes, self.nx_system))
#u = np.vstack((np.zeros((self.n_leaders, 2)), u))
# x position
x_[:, 0] = x[:, 0] + x[:, 2] * self.dt
# y position
x_[:, 1] = x[:, 1] + x[:, 3] * self.dt
# x velocity
x_[:, 2] = x[:, 2] + 0.1 * u[:, 0] * self.dt + np.random.normal(0, self.std_dev,(self.n_nodes,))
# y velocity
x_[:, 3] = x[:, 3] + 0.1 * u[:, 1] * self.dt + np.random.normal(0, self.std_dev,(self.n_nodes,))
# TODO - check the 0.1
self.x = x_
self.x_agg = self.aggregate(self.x, self.x_agg)
self.u = u
return self._get_obs(), -self.instant_cost(), False, {}
def instant_cost(self): # sum of differences in velocities
return np.sum(np.var(self.x[:, 2:4], axis=0)) #+ np.sum(np.square(self.u)) * 0.00001
#return np.sum(np.square(self.x[:,2:4] - self.mean_vel))
def _get_obs(self):
reshaped = self.x_agg.reshape((self.n_nodes, self.n_features))
clipped = np.clip(reshaped, a_min=-self.max_z, a_max=self.max_z)
return clipped #[self.n_leaders:, :]
def reset(self):
x = np.zeros((self.n_nodes, self.nx_system))
degree = 0
min_dist = 0
        while degree < 2 or min_dist < 0.1:  # resample until the initial network has degree >= 2 and no agents start too close
# randomly initialize the state of all agents
length = np.sqrt(np.random.uniform(0, self.r_max, size=(self.n_nodes,)))
angle = np.pi * np.random.uniform(0, 2, size=(self.n_nodes,))
x[:, 0] = length * np.cos(angle)
x[:, 1] = length * np.sin(angle)
bias = np.random.uniform(low=-self.v_bias, high=self.v_bias, size=(2,))
x[:, 2] = np.random.uniform(low=-self.v_max, high=self.v_max, size=(self.n_nodes,)) + bias[0]
x[:, 3] = np.random.uniform(low=-self.v_max, high=self.v_max, size=(self.n_nodes,)) + bias[1]
# compute distances between agents
x_t_loc = x[:, 0:2] # x,y location determines connectivity
a_net = squareform(pdist(x_t_loc.reshape((self.n_nodes, 2)), 'euclidean'))
# no self loops
a_net = a_net + 2 * self.comm_radius * np.eye(self.n_nodes)
# compute minimum distance between agents and degree of network
min_dist = np.min(np.min(a_net))
a_net = a_net < self.comm_radius
degree = np.min(np.sum(a_net.astype(int), axis=1))
self.mean_vel = np.mean(x[:,2:4],axis=0)
self.x = x
self.x_agg = np.zeros((self.n_nodes, self.nx * self.filter_len, self.n_pools))
self.x_agg = self.aggregate(self.x, self.x_agg)
return self._get_obs()
# def render(self, mode='human'):
# pass
def close(self):
pass
def aggregate(self, xt, x_agg):
"""
Perform aggegration operation for all possible pooling operations using helper functions get_pool and get_comms
Args:
x_agg (): Last time step's aggregated info
xt (): Current state of all agents
Returns:
Aggregated state values
"""
x_features = self.get_x_features(xt)
a_net = self.get_connectivity(xt)
for k in range(0, self.n_pools):
comm_data = self.get_comms(np.dstack((x_features, self.get_features(x_agg[:, :, k]))), a_net)
x_agg[:, :, k] = self.get_pool(comm_data, self.pooling[k])
return x_agg
def get_connectivity(self, x):
"""
Get the adjacency matrix of the network based on agent locations by computing pairwise distances using pdist
Args:
x (): current states of all agents
Returns: adjacency matrix of network
"""
x_t_loc = x[:, 0:2] # x,y location determines connectivity
a_net = squareform(pdist(x_t_loc.reshape((self.n_nodes, 2)), 'euclidean'))
a_net = (a_net < self.comm_radius).astype(float)
np.fill_diagonal(a_net, 0)
return a_net
def get_x_features(self, xt): # TODO
"""
Compute the non-linear features necessary for implementing Turner 2003
Args:
xt (): current state of all agents
Returns: matrix of features for each agent
"""
diff = xt.reshape((self.n_nodes, 1, self.nx_system)) - xt.reshape((1, self.n_nodes, self.nx_system))
r2 = np.multiply(diff[:, :, 0], diff[:, :, 0]) + np.multiply(diff[:, :, 1], diff[:, :, 1]) + np.eye(
self.n_nodes)
return np.dstack((diff[:, :, 2], np.divide(diff[:, :, 0], np.multiply(r2, r2)), np.divide(diff[:, :, 0], r2),
diff[:, :, 3], np.divide(diff[:, :, 1], np.multiply(r2, r2)), np.divide(diff[:, :, 1], r2)))
def get_features(self, agg):
"""
        Tile each agent's aggregated features from the last time step across all nodes
Args:
agg (): the aggregated matrix from the last time step
Returns: matrix of aggregated features from all nodes at current time
"""
return np.tile(agg[:, :-self.nx].reshape((self.n_nodes, 1, -1)), (1, self.n_nodes, 1)) # TODO check indexing
def get_comms(self, mat, a_net):
"""
Enforces that agents who are not connected in the network cannot observe each others' states
Args:
mat (): matrix of state information for the whole graph
a_net (): adjacency matrix for flock network (weighted networks unsupported for now)
Returns:
mat (): sparse matrix with NaN values where agents can't communicate
"""
a_net[a_net == 0] = np.nan
return mat * a_net.reshape(self.n_nodes, self.n_nodes, 1)
def get_pool(self, mat, func):
"""
Perform pooling operations on the matrix of state information. The replacement of values with NaNs for agents who
can't communicate must already be enforced.
Args:
mat (): matrix of state information
func (): pooling function (np.nansum(), np.nanmin() or np.nanmax()). Must ignore NaNs.
Returns:
information pooled from neighbors for each agent
"""
return func(mat, axis=1).reshape((self.n_nodes, self.n_features)) # TODO check this axis = 1
def controller(self):
"""
The controller for flocking from Turner 2003.
Args:
x (): the current state
Returns: the optimal action
"""
x = self.x
s_diff = x.reshape((self.n_nodes, 1, self.nx_system)) - x.reshape((1, self.n_nodes, self.nx_system))
r2 = np.multiply(s_diff[:, :, 0], s_diff[:, :, 0]) + np.multiply(s_diff[:, :, 1], s_diff[:, :, 1]) + np.eye(
self.n_nodes)
p = np.dstack((s_diff, self.potential_grad(s_diff[:, :, 0], r2), self.potential_grad(s_diff[:, :, 1], r2)))
p_sum = np.nansum(p, axis=1).reshape((self.n_nodes, self.nx_system + 2))
return np.hstack(((- p_sum[:, 4] - p_sum[:, 2]).reshape((-1, 1)), (- p_sum[:, 3] - p_sum[:, 5]).reshape(-1, 1)))
def potential_grad(self, pos_diff, r2):
"""
Computes the gradient of the potential function for flocking proposed in Turner 2003.
Args:
pos_diff (): difference in a component of position among all agents
r2 (): distance squared between agents
Returns: corresponding component of the gradient of the potential
"""
grad = -2.0 * np.divide(pos_diff, np.multiply(r2, r2)) + 2 * np.divide(pos_diff, r2)
grad[r2 > self.comm_radius] = 0
return grad
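For a single agent pair, the array expression in `potential_grad` reduces to the scalar formula below (it is the per-component derivative of the potential 1/r2 + log(r2) from Turner 2003). A stand-alone sketch, with the `comm_radius` default an illustrative assumption, shows the gradient vanishing at the potential's minimum r2 = 1:

```python
def pair_potential_grad(pos_diff, r2, comm_radius=1.0):
    # Same formula as FlockingEnv.potential_grad, for one agent pair:
    # d/dp [1/r2 + log(r2)] with r2 = p^2 + q^2 gives -2p/r2^2 + 2p/r2.
    if r2 > comm_radius:  # mirrors the cutoff grad[r2 > self.comm_radius] = 0
        return 0.0
    return -2.0 * pos_diff / (r2 * r2) + 2.0 * pos_diff / r2
```

At r2 = 1 the repulsive and attractive terms cancel exactly, which is what drives agents toward a fixed inter-agent spacing.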
| 38.682927 | 126 | 0.582418 | 1,615 | 11,102 | 3.857585 | 0.19195 | 0.036918 | 0.05297 | 0.031461 | 0.310915 | 0.257785 | 0.2374 | 0.183307 | 0.173676 | 0.16886 | 0 | 0.023033 | 0.276527 | 11,102 | 286 | 127 | 38.818182 | 0.752615 | 0.2559 | 0 | 0.07947 | 0 | 0 | 0.028278 | 0 | 0 | 0 | 0 | 0.013986 | 0 | 1 | 0.10596 | false | 0.006623 | 0.059603 | 0.006623 | 0.258278 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7196a7afa44165b6070e17839c160c5651229421 | 406 | py | Python | main/migrations/0006_labourer_allproj.py | kevinmuturi5/farm-Management-system | 61929d7998d92d56daac67c2f8ace3cc76b6ee8b | [
"MIT"
] | 1 | 2020-11-24T14:39:54.000Z | 2020-11-24T14:39:54.000Z | main/migrations/0006_labourer_allproj.py | kevinmuturi5/farm-Management-system | 61929d7998d92d56daac67c2f8ace3cc76b6ee8b | [
"MIT"
] | null | null | null | main/migrations/0006_labourer_allproj.py | kevinmuturi5/farm-Management-system | 61929d7998d92d56daac67c2f8ace3cc76b6ee8b | [
"MIT"
] | null | null | null | # Generated by Django 3.1.2 on 2020-10-18 16:07
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0005_auto_20201018_1902'),
]
operations = [
migrations.AddField(
model_name='labourer',
name='allproj',
field=models.ManyToManyField(blank=True, to='main.Listing'),
),
]
| 21.368421 | 72 | 0.603448 | 44 | 406 | 5.477273 | 0.840909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105442 | 0.275862 | 406 | 18 | 73 | 22.555556 | 0.714286 | 0.110837 | 0 | 0 | 1 | 0 | 0.150418 | 0.064067 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7197c87f66af380e5e98dd30c64711ce25f12d71 | 607 | py | Python | items/models.py | roberthtamayose/digitalmenu | 19c6633844934fd95f861674946da386411a19c9 | [
"MIT"
] | null | null | null | items/models.py | roberthtamayose/digitalmenu | 19c6633844934fd95f861674946da386411a19c9 | [
"MIT"
] | null | null | null | items/models.py | roberthtamayose/digitalmenu | 19c6633844934fd95f861674946da386411a19c9 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils import timezone
class Categoria(models.Model):
nome = models.CharField(max_length=255)
def __str__(self):
return self.nome
class Item(models.Model):
nome = models.CharField(max_length=255)
data_criacao = models.DateTimeField(default=timezone.now)
descricao = models.TextField(blank=True)
categoria = models.ForeignKey(Categoria, on_delete=models.DO_NOTHING)
ocultar = models.BooleanField(default=False)
foto = models.ImageField(blank=True, upload_to='fotos/%y/%m/')
def __str__(self):
return self.nome
| 27.590909 | 73 | 0.726524 | 78 | 607 | 5.474359 | 0.564103 | 0.046838 | 0.070258 | 0.098361 | 0.309133 | 0.309133 | 0.196721 | 0.196721 | 0 | 0 | 0 | 0.011858 | 0.166392 | 607 | 21 | 74 | 28.904762 | 0.832016 | 0 | 0 | 0.4 | 0 | 0 | 0.019769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.133333 | 0.133333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
719bca03a01e24f7c868ad83a281e40679838ca7 | 1,521 | py | Python | jupyter/settings.py | nguyenngtt/GSE---TEAM-A | 4f78c1ace051d4f2ff30a039aa481aa9b79d3242 | [
"MIT"
] | 3 | 2021-11-21T08:47:18.000Z | 2021-11-28T10:35:10.000Z | jupyter/settings.py | nguyenngtt/GSE---TEAM-A | 4f78c1ace051d4f2ff30a039aa481aa9b79d3242 | [
"MIT"
] | 6 | 2021-11-29T02:00:49.000Z | 2022-02-08T09:21:38.000Z | jupyter/settings.py | nguyenngtt/GSE---TEAM-A | 4f78c1ace051d4f2ff30a039aa481aa9b79d3242 | [
"MIT"
] | 3 | 2021-12-11T08:11:08.000Z | 2022-01-10T12:51:48.000Z | import pandas as pd
import numpy as np
import os
import logging
# suppress warnings
import warnings
warnings.filterwarnings('ignore')
from tqdm.autonotebook import tqdm
# register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm`
tqdm.pandas()
# https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html#available-options
# adjust pandas display
pd.options.display.max_columns = 30 # default 20
pd.options.display.max_rows = 200 # default 60
pd.options.display.float_format = '{:.2f}'.format
# pd.options.display.precision = 2
pd.options.display.max_colwidth = 200 # default 50; None = all
# Number of array items in summary at beginning and end of each dimension
# np.set_printoptions(edgeitems=3) # default 3
np.set_printoptions(suppress=True) # no scientific notation for small numbers
# IPython (Jupyter) setting:
# Print out every value instead of just "last_expr" (default)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import matplotlib as mpl
from matplotlib import pyplot as plt
# defaults: mpl.rcParamsDefault
rc_params = {'figure.figsize': (8, 4),
'axes.labelsize': 'large',
'axes.titlesize': 'large',
'xtick.labelsize': 'large',
'ytick.labelsize': 'large',
'savefig.dpi': 100,
'figure.dpi': 100 }
# adjust matplotlib defaults
mpl.rcParams.update(rc_params)
import seaborn as sns
sns.set_style("darkgrid")
# sns.set()
| 30.42 | 88 | 0.724523 | 201 | 1,521 | 5.40796 | 0.577114 | 0.041398 | 0.073597 | 0.052438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02057 | 0.168968 | 1,521 | 49 | 89 | 31.040816 | 0.839399 | 0.387903 | 0 | 0 | 0 | 0 | 0.149123 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.37037 | 0 | 0.37037 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
719e5a0939a4c90bfd66956e7385e51aac9d612e | 340 | py | Python | pset_functions/db_search/p1.py | mottaquikarim/pydev-psets | 9749e0d216ee0a5c586d0d3013ef481cc21dee27 | [
"MIT"
] | 5 | 2019-04-08T20:05:37.000Z | 2019-12-04T20:48:45.000Z | pset_functions/db_search/p1.py | mottaquikarim/pydev-psets | 9749e0d216ee0a5c586d0d3013ef481cc21dee27 | [
"MIT"
] | 8 | 2019-04-15T15:16:05.000Z | 2022-02-12T10:33:32.000Z | pset_functions/db_search/p1.py | mottaquikarim/pydev-psets | 9749e0d216ee0a5c586d0d3013ef481cc21dee27 | [
"MIT"
] | 2 | 2019-04-10T00:14:42.000Z | 2020-02-26T20:35:21.000Z | """
GPA Calculator
"""
# Write a function called "simple_gpa" to find the GPA when a student enters a letter grade as a string. Assign the result to a variable called "gpa".
"""
Use these conversions:
A+ --> 4.0
A --> 4.0
A- --> 3.7
B+ --> 3.3
B --> 3.0
B- --> 2.7
C+ --> 2.3
C --> 2.0
C- --> 1.7
D+ --> 1.3
D --> 1.0
D- --> 0.7
F --> 0.0
"""
| 14.166667 | 144 | 0.538235 | 70 | 340 | 2.6 | 0.471429 | 0.021978 | 0.032967 | 0.043956 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098859 | 0.226471 | 340 | 23 | 145 | 14.782609 | 0.593156 | 0.464706 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
71a1f9b966a655c142f90e8f1814eebae105ba9e | 373 | py | Python | setup.py | johnmartingodo/pyKinematicsKineticsToolbox | 4ffc99885f3c637b8c33914a4e50ccb4595fc844 | [
"MIT"
] | null | null | null | setup.py | johnmartingodo/pyKinematicsKineticsToolbox | 4ffc99885f3c637b8c33914a4e50ccb4595fc844 | [
"MIT"
] | null | null | null | setup.py | johnmartingodo/pyKinematicsKineticsToolbox | 4ffc99885f3c637b8c33914a4e50ccb4595fc844 | [
"MIT"
] | null | null | null | from setuptools import setup
setup(name="pykinematicskineticstoolbox",
version="0.0",
description="Installable python package which collects useful kinematics and kinetics functions",
author="John Martin K. Godø",
author_email="john.martin.kleven.godo@gmail.com",
license="MIT",
packages=["pykinematicskineticstoolbox"],
install_requires=["numpy"],
)
| 31.083333 | 100 | 0.753351 | 41 | 373 | 6.804878 | 0.853659 | 0.071685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006135 | 0.126005 | 373 | 11 | 101 | 33.909091 | 0.849693 | 0 | 0 | 0 | 0 | 0 | 0.533512 | 0.233244 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
71a73e1712465a4bec511db6faf72a21ab1c2e2c | 946 | py | Python | openskill/statistics.py | CalColson/openskill.py | ab61ca57fa6e60140d0a292c73440f22ceabd9a2 | [
"MIT"
] | 120 | 2021-09-03T03:06:11.000Z | 2022-03-28T05:54:54.000Z | openskill/statistics.py | CalColson/openskill.py | ab61ca57fa6e60140d0a292c73440f22ceabd9a2 | [
"MIT"
] | 48 | 2021-09-23T07:15:13.000Z | 2022-03-31T14:47:25.000Z | openskill/statistics.py | CalColson/openskill.py | ab61ca57fa6e60140d0a292c73440f22ceabd9a2 | [
"MIT"
] | 6 | 2022-01-20T16:45:28.000Z | 2022-03-28T23:48:07.000Z | import sys
import scipy.stats
normal = scipy.stats.norm(0, 1)
def phi_major(x):
return normal.cdf(x)
def phi_minor(x):
return normal.pdf(x)
def v(x, t):
xt = x - t
denom = phi_major(xt)
return -xt if (denom < sys.float_info.epsilon) else phi_minor(xt) / denom
def w(x, t):
xt = x - t
denom = phi_major(xt)
if denom < sys.float_info.epsilon:
return 1 if (x < 0) else 0
return v(x, t) * (v(x, t) + xt)
def vt(x, t):
xx = abs(x)
b = phi_major(t - xx) - phi_major(-t - xx)
if b < 1e-5:
if x < 0:
return -x - t
return -x + t
a = phi_minor(-t - xx) - phi_minor(t - xx)
return (-a if x < 0 else a) / b
def wt(x, t):
xx = abs(x)
b = phi_major(t - xx) - phi_major(-t - xx)
if b < sys.float_info.epsilon:
return 1.0
return ((t - xx) * phi_minor(t - xx) + (t + xx) * phi_minor(-t - xx)) / b + vt(
x, t
) * vt(x, t)
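The helpers above are the truncated-Gaussian correction terms used in TrueSkill-style raters; the scipy calls are just the standard normal pdf and cdf. A dependency-free sketch of `v` and `w` using only `math.erf` (function names here are illustrative):

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def v_plain(x, t):
    # Mean-update correction for a win: pdf/cdf of the margin x - t.
    xt = x - t
    denom = norm_cdf(xt)
    return -xt if denom < 1e-300 else norm_pdf(xt) / denom

def w_plain(x, t):
    # Variance-update correction: v * (v + x - t), always in (0, 1).
    xt = x - t
    vv = v_plain(x, t)
    return vv * (vv + xt)
```

At x == t the mean correction equals pdf(0)/cdf(0) = sqrt(2/pi) ≈ 0.798, a handy sanity check against the scipy-backed versions.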
| 19.306122 | 83 | 0.516913 | 174 | 946 | 2.718391 | 0.195402 | 0.05074 | 0.063425 | 0.093023 | 0.505285 | 0.505285 | 0.346723 | 0.232558 | 0.232558 | 0.143763 | 0 | 0.017214 | 0.324524 | 946 | 48 | 84 | 19.708333 | 0.723005 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.058824 | 0.058824 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
71abaaff24dc05f9c229f77e4b27cc8d68a5b7f5 | 14,189 | py | Python | src/openalea/container/graph.py | revesansparole/oacontainer | 066a15b8b1b22f857bf25ed443c5f39f4cbefb3e | [
"MIT"
] | null | null | null | src/openalea/container/graph.py | revesansparole/oacontainer | 066a15b8b1b22f857bf25ed443c5f39f4cbefb3e | [
"MIT"
] | null | null | null | src/openalea/container/graph.py | revesansparole/oacontainer | 066a15b8b1b22f857bf25ed443c5f39f4cbefb3e | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Graph : graph package
#
# Copyright or Copr. 2006 INRIA - CIRAD - INRA
#
# File author(s): Jerome Chopard <jerome.chopard@sophia.inria.fr>
#
# Distributed under the Cecill-C License.
# See accompanying file LICENSE.txt or copy at
# http://www.cecill.info/licences/Licence_CeCILL-C_V1-en.html
#
# VPlants WebSite : https://gforge.inria.fr/projects/vplants/
#
"""This module provide a simple pure python implementation
for a graph interface
does not implement copy concept
"""
from id_dict import IdDict
class GraphError(Exception):
"""
base class of all graph exceptions
"""
class InvalidEdge(GraphError, KeyError):
"""
exception raised when a wrong edge id is provided
"""
class InvalidVertex(GraphError, KeyError):
"""
exception raised when a wrong vertex id is provided
"""
class Graph(object):
"""Directed graph with multiple links
in this implementation :
- vertices are tuple of edge_in,edge_out
- edges are tuple of source,target
"""
def __init__(self, graph=None, idgenerator="set"):
"""constructor
        if graph is not None, make a copy of the topological structure of graph
(i.e. don't use the same id)
args:
- graph (Graph): the graph to copy, default=None
- idgenerator (str): type of idgenerator to use, default 'set'
"""
self._vertices = IdDict(idgenerator=idgenerator)
self._edges = IdDict(idgenerator=idgenerator)
if graph is not None:
self.extend(graph)
# ##########################################################
#
# Graph concept
#
# ##########################################################
def source(self, eid):
"""Retrieve the source vertex of an edge
args:
- eid (int): edge id
return:
- (int): vertex id
"""
try:
return self._edges[eid][0]
except KeyError:
raise InvalidEdge(eid)
def target(self, eid):
"""Retrieve the target vertex of an edge
args:
- eid (int): edge id
return:
- (int): vertex id
"""
try:
return self._edges[eid][1]
except KeyError:
raise InvalidEdge(eid)
def edge_vertices(self, eid):
"""Retrieve both source and target vertex of an edge
args:
- eid (int): edge id
return:
- (int, int): source id, target id
"""
try:
return self._edges[eid]
except KeyError:
raise InvalidEdge(eid)
def edge(self, source, target):
"""Find the matching edge with same source and same target
        return None if it doesn't succeed
args:
- source (int): source vertex
- target (int): target vertex
return:
- (int): edge id with same source and target
- (None): if search is unsuccessful
"""
if target not in self:
raise InvalidVertex(target)
for eid in self.out_edges(source):
if self.target(eid) == target:
return eid
return None
def __contains__(self, vid):
"""magic alias for `has_vertex`
"""
return self.has_vertex(vid)
def has_vertex(self, vid):
"""test whether a vertex belong to the graph
args:
- vid (int): id of vertex
return:
- (bool)
"""
return vid in self._vertices
def has_edge(self, eid):
"""test whether an edge belong to the graph
args:
- eid (int): id of edge
return:
- (bool)
"""
return eid in self._edges
def is_valid(self):
"""Test the validity of the graph
return:
- (bool)
"""
return True
# ##########################################################
#
# Vertex List Graph Concept
#
# ##########################################################
def vertices(self):
"""Iterator on all vertices
return:
- (iter of int)
"""
return iter(self._vertices)
def __iter__(self):
"""Magic alias for `vertices`
"""
return iter(self._vertices)
def nb_vertices(self):
"""Total number of vertices in the graph
return:
- (int)
"""
return len(self._vertices)
def __len__(self):
"""Magic alias for `nb_vertices`
"""
return self.nb_vertices()
def in_neighbors(self, vid):
"""Iterator on the neighbors of vid
where edges are directed from neighbor to vid
args:
- vid (int): vertex id
return:
- (iter of int): iter of vertex id
"""
if vid not in self:
raise InvalidVertex(vid)
neighbors_list = [self.source(eid) for eid in self._vertices[vid][0]]
return iter(set(neighbors_list))
def out_neighbors(self, vid):
"""Iterator on the neighbors of vid
where edges are directed from vid to neighbor
args:
- vid (int): vertex id
return:
- (iter of int): iter of vertex id
"""
if vid not in self:
raise InvalidVertex(vid)
neighbors_list = [self.target(eid) for eid in self._vertices[vid][1]]
return iter(set(neighbors_list))
def neighbors(self, vid):
"""Iterator on all neighbors of vid both in and out
args:
- vid (int): vertex id
return:
- (iter of int): iter of vertex id
"""
neighbors_list = list(self.in_neighbors(vid))
neighbors_list.extend(self.out_neighbors(vid))
return iter(set(neighbors_list))
def nb_in_neighbors(self, vid):
"""Number of in neighbors of vid
where edges are directed from neighbor to vid
args:
- vid (int): vertex id
return:
- (int)
"""
neighbors_set = list(self.in_neighbors(vid))
return len(neighbors_set)
def nb_out_neighbors(self, vid):
"""Number of out neighbors of vid
where edges are directed from vid to neighbor
args:
- vid (int): vertex id
return:
- (int)
"""
neighbors_set = list(self.out_neighbors(vid))
return len(neighbors_set)
def nb_neighbors(self, vid):
"""Total number of both in and out neighbors of vid
args:
- vid (int): vertex id
return:
- (int)
"""
neighbors_set = list(self.neighbors(vid))
return len(neighbors_set)
# ##########################################################
#
# Edge List Graph Concept
#
# ##########################################################
def _iter_edges(self, vid):
"""
internal function that perform 'edges' with vid not None
"""
link_in, link_out = self._vertices[vid]
for eid in link_in:
yield eid
for eid in link_out:
yield eid
def edges(self, vid=None):
"""Iterate on all edges connected to a given vertex.
If vid is None (default), iterate on all edges in the graph
args:
- vid (int): vertex holdings edges, default (None)
return:
- (iter of int): iterator on edge ids
"""
if vid is None:
return iter(self._edges)
if vid not in self:
raise InvalidVertex(vid)
return self._iter_edges(vid)
def nb_edges(self, vid=None):
"""Number of edges connected to a given vertex.
If vid is None (default), total number of edges in the graph
args:
- vid (int): vertex holdings edges, default (None)
return:
- (int)
"""
if vid is None:
return len(self._edges)
if vid not in self:
raise InvalidVertex(vid)
return len(self._vertices[vid][0]) + len(self._vertices[vid][1])
def in_edges(self, vid):
"""Iterate on all edges pointing to a given vertex.
args:
- vid (int): vertex target of edges
return:
- (iter of int): iterator on edge ids
"""
if vid not in self:
raise InvalidVertex(vid)
for eid in self._vertices[vid][0]:
yield eid
def out_edges(self, vid):
"""Iterate on all edges away from a given vertex.
args:
- vid (int): vertex source of edges
return:
- (iter of int): iterator on edge ids
"""
if vid not in self:
raise InvalidVertex(vid)
for eid in self._vertices[vid][1]:
yield eid
def nb_in_edges(self, vid):
"""Number of edges pointing to a given vertex.
args:
- vid (int): vertex target of edges
return:
- (int)
"""
if vid not in self:
raise InvalidVertex(vid)
return len(self._vertices[vid][0])
def nb_out_edges(self, vid):
"""Number of edges away from a given vertex.
args:
- vid (int): vertex source of edges
return:
- (int)
"""
if vid not in self:
raise InvalidVertex(vid)
return len(self._vertices[vid][1])
# ##########################################################
#
# Mutable Vertex Graph concept
#
# ##########################################################
def add_vertex(self, vid=None):
"""Add a vertex to the graph.
If vid is not provided create a new vid
args:
- vid (int): id to use. If None (default) will generate a new one
return:
- vid (int): id used for the new vertex
"""
try:
return self._vertices.add((set(), set()), vid)
except KeyError:
raise InvalidVertex(vid)
def remove_vertex(self, vid):
"""Remove a specified vertex from the graph.
Also remove all edges attached to it.
args:
- vid (int): id of vertex to remove
"""
if vid not in self:
raise InvalidVertex(vid)
link_in, link_out = self._vertices[vid]
for edge in list(link_in):
self.remove_edge(edge)
for edge in list(link_out):
self.remove_edge(edge)
del self._vertices[vid]
def clear(self):
"""Remove all vertices and edges
don't change references to objects
"""
self._edges.clear()
self._vertices.clear()
# ##########################################################
#
# Mutable Edge Graph concept
#
# ##########################################################
def add_edge(self, sid, tid, eid=None):
"""Add an edge to the graph.
If eid is not provided generate a new one.
args:
- sid (int): id of source vertex
- tid (int): id of target vertex
- eid (int): id to use. If None (default) will generate a new one
return:
- eid (int): id used for new edge
"""
if sid not in self:
raise InvalidVertex(sid)
if tid not in self:
raise InvalidVertex(tid)
try:
eid = self._edges.add((sid, tid), eid)
except KeyError:
raise InvalidEdge(eid)
self._vertices[sid][1].add(eid)
self._vertices[tid][0].add(eid)
return eid
def remove_edge(self, eid):
"""Remove a specified edge from the graph.
args:
- eid (int): id of edge to remove
"""
if not self.has_edge(eid):
raise InvalidEdge(eid)
sid, tid = self._edges[eid]
self._vertices[sid][1].remove(eid)
self._vertices[tid][0].remove(eid)
del self._edges[eid]
def clear_edges(self):
"""Remove all the edges of the graph
don't change references to objects
"""
self._edges.clear()
for vid, (in_set, out_set) in self._vertices.items():
in_set.clear()
out_set.clear()
# ##########################################################
#
# Extend Graph concept
#
# ##########################################################
def extend(self, graph):
"""Add the specified graph to self, creating new vids and eids.
args:
- graph (Graph): the graph to add
return:
- (dict of (int, int)): mapping between vertex id in graph and
vertex id in extended self
- (dict of (int, int)): mapping between edge id in graph and
edge id in extended self
"""
# vertex adding
trans_vid = {}
for vid in list(graph.vertices()):
trans_vid[vid] = self.add_vertex()
# edge adding
trans_eid = {}
for eid in list(graph.edges()):
sid = trans_vid[graph.source(eid)]
tid = trans_vid[graph.target(eid)]
trans_eid[eid] = self.add_edge(sid, tid)
return trans_vid, trans_eid
def sub_graph(self, vids):
"""Extract the sub graph induced by a set of vertices.
args:
- vids (iter of int): ids of vertices to keep
"""
raise NotImplementedError
# from copy import deepcopy
# vids = set(vids)
#
# result = deepcopy(self)
# result._vertices.clear()
# result._edges.clear()
#
# for key, edges in self._vertices.items():
# if key in vids:
# inedges, outedges = edges
# sortedinedges = set(
# [eid for eid in inedges if self.source(eid) in vids])
# sortedoutedges = set(
# [eid for eid in outedges if self.target(eid) in vids])
# result._vertices.add((sortedinedges, sortedoutedges), key)
# for eid in sortedoutedges:
# result._edges.add(self._edges[eid], eid)
#
# return result
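The edge and vertex operations above all maintain a single invariant: `self._vertices[vid]` holds a pair `(in_set, out_set)` of edge ids, and `self._edges[eid]` holds the `(source, target)` vertex pair. A self-contained sketch of that bookkeeping with plain dicts (the `TinyDiGraph` class and its counter-based id scheme are illustrative stand-ins, not the library's actual containers):

```python
from itertools import count


class TinyDiGraph:
    """Hypothetical stand-in for the (in_set, out_set) bookkeeping above."""

    def __init__(self):
        self._vid_gen = count()
        self._eid_gen = count()
        self._vertices = {}  # vid -> (set of incoming eids, set of outgoing eids)
        self._edges = {}     # eid -> (source vid, target vid)

    def add_vertex(self):
        vid = next(self._vid_gen)
        self._vertices[vid] = (set(), set())
        return vid

    def add_edge(self, sid, tid):
        eid = next(self._eid_gen)
        self._edges[eid] = (sid, tid)
        self._vertices[sid][1].add(eid)  # outgoing set of the source
        self._vertices[tid][0].add(eid)  # incoming set of the target
        return eid

    def nb_in_edges(self, vid):
        return len(self._vertices[vid][0])

    def nb_out_edges(self, vid):
        return len(self._vertices[vid][1])


g = TinyDiGraph()
a, b, c = g.add_vertex(), g.add_vertex(), g.add_vertex()
g.add_edge(a, b)
g.add_edge(a, c)
g.add_edge(b, c)
print(g.nb_out_edges(a), g.nb_in_edges(c))  # 2 2
```

Every mutation touches three places at once (the edge map, the source's out-set, the target's in-set), which is why `remove_vertex` above iterates over `list(...)` copies of the link sets while deleting.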
# File: nets/mobilenet_v2_ssd.py (repo: GT-AcerZhang/PaddlePaddle-SSD, license: Apache-2.0)
import paddle.fluid as fluid
from paddle.fluid.initializer import MSRA
from paddle.fluid.param_attr import ParamAttr
class MobileNetV2SSD:
def __init__(self, img, num_classes, img_shape):
self.img = img
self.num_classes = num_classes
self.img_shape = img_shape
def ssd_net(self, scale=1.0):
# 300x300
bottleneck_params_list = [(1, 16, 1, 1),
(6, 24, 2, 2),
(6, 32, 3, 2),
(6, 64, 4, 2),
(6, 96, 3, 1)]
# conv1
input = self.conv_bn_layer(input=self.img,
num_filters=int(32 * scale),
filter_size=3,
stride=2,
padding=1,
if_act=True)
# bottleneck sequences
in_c = int(32 * scale)
for layer_setting in bottleneck_params_list:
t, c, n, s = layer_setting
input = self.invresi_blocks(input=input, in_c=in_c, t=t, c=int(c * scale), n=n, s=s)
in_c = int(c * scale)
# 19x19
module11 = input
tmp = self.invresi_blocks(input=input, in_c=in_c, t=6, c=int(160 * scale), n=3, s=2)
# 10x10
module13 = self.invresi_blocks(input=tmp, in_c=int(160 * scale), t=6, c=int(320 * scale), n=1, s=1)
module14 = self.extra_block(module13, 256, 512, 1)
# 5x5
module15 = self.extra_block(module14, 128, 256, 1)
# 3x3
module16 = self.extra_block(module15, 128, 256, 1)
# 2x2
module17 = self.extra_block(module16, 64, 128, 1)
mbox_locs, mbox_confs, box, box_var = fluid.layers.multi_box_head(
inputs=[module11, module13, module14, module15, module16, module17],
image=self.img,
num_classes=self.num_classes,
min_ratio=20,
max_ratio=90,
min_sizes=[60.0, 105.0, 150.0, 195.0, 240.0, 285.0],
max_sizes=[[], 150.0, 195.0, 240.0, 285.0, 300.0],
aspect_ratios=[[2.], [2., 3.], [2., 3.], [2., 3.], [2., 3.], [2., 3.]],
base_size=self.img_shape[2],
offset=0.5,
flip=True)
return mbox_locs, mbox_confs, box, box_var
def conv_bn_layer(self, input, filter_size, num_filters, stride, padding, num_groups=1, if_act=True,
use_cudnn=True):
parameter_attr = ParamAttr(learning_rate=0.1, initializer=MSRA())
conv = fluid.layers.conv2d(input=input,
num_filters=num_filters,
filter_size=filter_size,
stride=stride,
padding=padding,
groups=num_groups,
use_cudnn=use_cudnn,
param_attr=parameter_attr,
bias_attr=False)
bn = fluid.layers.batch_norm(input=conv)
if if_act:
return fluid.layers.relu6(bn)
else:
return bn
def shortcut(self, input, data_residual):
return fluid.layers.elementwise_add(input, data_residual)
def inverted_residual_unit(self,
input,
num_in_filter,
num_filters,
ifshortcut,
stride,
filter_size,
padding,
expansion_factor):
num_expfilter = int(round(num_in_filter * expansion_factor))
channel_expand = self.conv_bn_layer(input=input,
num_filters=num_expfilter,
filter_size=1,
stride=1,
padding=0,
num_groups=1,
if_act=True)
bottleneck_conv = self.conv_bn_layer(input=channel_expand,
num_filters=num_expfilter,
filter_size=filter_size,
stride=stride,
padding=padding,
num_groups=num_expfilter,
if_act=True,
use_cudnn=False)
linear_out = self.conv_bn_layer(input=bottleneck_conv,
num_filters=num_filters,
filter_size=1,
stride=1,
padding=0,
num_groups=1,
if_act=False)
if ifshortcut:
out = self.shortcut(input=input, data_residual=linear_out)
return out
else:
return linear_out
def invresi_blocks(self, input, in_c, t, c, n, s):
first_block = self.inverted_residual_unit(input=input,
num_in_filter=in_c,
num_filters=c,
ifshortcut=False,
stride=s,
filter_size=3,
padding=1,
expansion_factor=t)
last_residual_block = first_block
last_c = c
for i in range(1, n):
last_residual_block = self.inverted_residual_unit(input=last_residual_block,
num_in_filter=last_c,
num_filters=c,
ifshortcut=True,
stride=1,
filter_size=3,
padding=1,
expansion_factor=t)
return last_residual_block
def conv_bn(self, input, filter_size, num_filters, stride, padding, num_groups=1, act='relu', use_cudnn=True):
parameter_attr = ParamAttr(learning_rate=0.1, initializer=MSRA())
conv = fluid.layers.conv2d(input=input,
num_filters=num_filters,
filter_size=filter_size,
stride=stride,
padding=padding,
groups=num_groups,
use_cudnn=use_cudnn,
param_attr=parameter_attr,
bias_attr=False)
return fluid.layers.batch_norm(input=conv, act=act)
def extra_block(self, input, num_filters1, num_filters2, num_groups):
# 1x1 conv
pointwise_conv = self.conv_bn(input=input,
filter_size=1,
num_filters=int(num_filters1),
stride=1,
num_groups=int(num_groups),
padding=0)
# 3x3 conv
normal_conv = self.conv_bn(input=pointwise_conv,
filter_size=3,
num_filters=int(num_filters2),
stride=2,
num_groups=int(num_groups),
padding=1)
return normal_conv
def build_ssd(img, num_classes, img_shape):
ssd_model = MobileNetV2SSD(img, num_classes, img_shape)
return ssd_model.ssd_net()
if __name__ == '__main__':
data = fluid.data(name='data', shape=[None, 3, 300, 300])
build_ssd(data, 21, img_shape=[3, 300, 300])
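The feature-map sizes noted in the comments above follow from the standard convolution output formula, out = floor((n + 2p - k) / s) + 1, applied to the 3x3, stride-2, pad-1 convolutions of the bottleneck stages and extra blocks. A quick self-contained check of the pyramid (plain Python, no PaddlePaddle required; the inline size comments in the file are approximate, the arithmetic below gives the sizes the stride-2 chain actually produces):

```python
def conv_out(n, k, s, p):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1


# conv1 plus the three stride-2 bottleneck stages: 300 -> 150 -> 75 -> 38 -> 19
n = 300
for _ in range(4):
    n = conv_out(n, k=3, s=2, p=1)
print(n)  # 19  (module11)

# from 19x19, each later head input halves via a 3x3 / stride-2 / pad-1 conv
maps = [n]
for _ in range(5):
    maps.append(conv_out(maps[-1], 3, 2, 1))
print(maps)  # [19, 10, 5, 3, 2, 1]
```

Each of the six `multi_box_head` inputs is roughly half the size of the previous one, which is what allows one prior-box scale (`min_sizes`/`max_sizes` entry) to be assigned per pyramid level.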
# File: social_auth_ragtag_id/backends.py (repo: RagtagOpen/python-social-auth-ragtag-id, license: MIT)
from social_core.backends.oauth import BaseOAuth2
class RagtagOAuth2(BaseOAuth2):
"""Ragtag ID OAuth authentication backend"""
name = "ragtag"
AUTHORIZATION_URL = "https://id.ragtag.org/oauth/authorize/"
ACCESS_TOKEN_URL = "https://id.ragtag.org/oauth/token/"
ACCESS_TOKEN_METHOD = "POST"
REVOKE_TOKEN_URL = "https://id.ragtag.org/oauth/revoke_token/"
SCOPE_SEPARATOR = " "
ID_KEY = "id"
def get_user_details(self, response):
"""Return user details from Ragtag ID account"""
return {
"username": response.get("username"),
"email": response.get("email"),
"first_name": response.get("first_name"),
"last_name": response.get("last_name"),
}
def user_data(self, access_token, *args, **kwargs):
"""Fetches user data from id.ragtag.org"""
return self.get_json(
"https://id.ragtag.org/api/me/",
headers={"Authorization": "Bearer {}".format(access_token)},
)
def auth_params(self, state=None):
params = super(RagtagOAuth2, self).auth_params(state=state)
approval_prompt = self.setting("APPROVAL_PROMPT", "auto")
if not approval_prompt == "auto":
params["approval_prompt"] = self.setting("APPROVAL_PROMPT", "")
return params
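The only non-trivial logic in `auth_params` is its conditional: `approval_prompt` is forwarded to the authorize endpoint only when the `APPROVAL_PROMPT` setting differs from `"auto"`. A standalone restatement of that branch (a plain function with no social-core dependency; the setting value is passed in directly rather than read from a strategy):

```python
def auth_params(base_params, approval_prompt="auto"):
    """Mirror of RagtagOAuth2.auth_params: only forward a non-default prompt."""
    params = dict(base_params)
    if approval_prompt != "auto":
        params["approval_prompt"] = approval_prompt
    return params


print(auth_params({"state": "xyz"}))           # {'state': 'xyz'}
print(auth_params({"state": "xyz"}, "force"))  # adds approval_prompt=force
```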
# File: SWHT/Ylm.py (repo: 2baOrNot2ba/SWHT, license: BSD-3-Clause)
"""
An implementation of spherical harmonics in Python, because scipy.special.sph_harm in scipy<=0.13 is very slow
Originally written by Jozef Vesely
https://github.com/scipy/scipy/issues/1280
"""
import numpy as np
def xfact(m):
# computes (2m-1)!!/sqrt((2m)!)
res = 1.
for i in range(1, 2*m+1):
if i % 2: res *= i # (2m-1)!!
res /= np.sqrt(i) # sqrt((2m)!)
return res
def lplm_n(l, m, x):
# associated legendre polynomials normalized as in Ylm, from Numerical Recipes 6.7
l,m = int(l),int(m)
assert 0<=m<=l and np.all(np.abs(x)<=1.)
norm = np.sqrt(2. * l + 1.) / np.sqrt(4. * np.pi)
if m == 0:
pmm = norm * np.ones_like(x)
else:
pmm = (-1.)**m * norm * xfact(m) * (1.-x**2.)**(m/2.)
if l == m:
return pmm
pmmp1 = x * pmm * np.sqrt(2.*m+1.)
if l == m+1:
return pmmp1
for ll in range(m+2, l+1):
pll = (x*(2.*ll-1.)*pmmp1 - np.sqrt( (ll-1.)**2. - m**2.)*pmm)/np.sqrt(ll**2.-m**2.)
pmm = pmmp1
pmmp1 = pll
return pll
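`lplm_n` computes the fully normalized associated Legendre function of Numerical Recipes section 6.7, so that `Ylm` below is just `lplm_n(l, m, cos(theta)) * exp(1j*m*phi)`. Writing Q_l for the semi-normalized quantity the loop actually carries (`pmm`, `pmmp1`, `pll`), the code implements:

```latex
\bar P_\ell^{\,m}(x) = \sqrt{\tfrac{2\ell+1}{4\pi}}\, Q_\ell(x),
\qquad
\bar P_\ell^{\,m}(x) = \sqrt{\frac{2\ell+1}{4\pi}\,\frac{(\ell-m)!}{(\ell+m)!}}\; P_\ell^{\,m}(x)

% seed (pmm) and first step (pmmp1):
Q_m(x) = (-1)^m\,\frac{(2m-1)!!}{\sqrt{(2m)!}}\,(1-x^2)^{m/2},
\qquad
Q_{m+1}(x) = x\,\sqrt{2m+1}\; Q_m(x)

% upward recursion (pll), for \ell = m+2, m+3, \dots:
Q_\ell(x) = \frac{x\,(2\ell-1)\, Q_{\ell-1}(x) - \sqrt{(\ell-1)^2 - m^2}\; Q_{\ell-2}(x)}
                 {\sqrt{\ell^2 - m^2}}
```

The prefactor sqrt((2l+1)/(4*pi)) for the *target* l is folded into the seed via `norm`, so the loop itself only needs the square-root coefficients above.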
def Ylm(l, m, phi, theta):
# spherical harmonics
# theta is from 0 to pi with pi/2 on equator
l,m = int(l),int(m)
assert 0 <= np.abs(m) <=l
if m > 0:
return lplm_n(l, m, np.cos(theta)) * np.exp(1J * m * phi)
elif m < 0:
return (-1.)**m * lplm_n(l, -m, np.cos(theta)) * np.exp(1J * m * phi)
return lplm_n(l, m, np.cos(theta)) * np.ones_like(phi)
def Ylmr(l, m, phi, theta):
# real spherical harmonics
# theta is from 0 to pi with pi/2 on equator
l,m = int(l),int(m)
assert 0 <= np.abs(m) <=l
if m > 0:
return lplm_n(l, m, np.cos(theta)) * np.cos(m * phi) * np.sqrt(2.)
elif m < 0:
return (-1.)**m * lplm_n(l, -m, np.cos(theta)) * np.sin(-m * phi) * np.sqrt(2.)
return lplm_n(l, m, np.cos(theta)) * np.ones_like(phi)
if __name__ == "__main__":
from scipy.special import sph_harm
from scipy.special import factorial2, factorial  # moved here from scipy.misc in modern SciPy
from timeit import Timer
def ref_xfact(m):
return factorial2(2*m-1)/np.sqrt(factorial(2*m))
print("Time: xfact(10)", Timer("xfact(10)",
"from __main__ import xfact, ref_xfact").timeit(100))
print("Time: ref_xfact(10)", Timer("ref_xfact(10)",
"from __main__ import xfact, ref_xfact").timeit(100))
print("Time: xfact(80)", Timer("xfact(80)",
"from __main__ import xfact, ref_xfact").timeit(100))
print("Time: ref_xfact(80)", Timer("ref_xfact(80)",
"from __main__ import xfact, ref_xfact").timeit(100))
print("m", "xfact", "ref_xfact")
for m in list(range(10)) + list(range(80, 90)):
a = xfact(m)
b = ref_xfact(m)
print(m, a, b)
phi, theta = np.ogrid[0:2*np.pi:10j, -np.pi/2:np.pi/2:10j]
print("Time: Ylm(1,1,phi,theta)", Timer("Ylm(1,1,phi,theta)",
"from __main__ import Ylm, sph_harm, phi, theta").timeit(10))
print("Time: sph_harm(1,1,phi,theta)", Timer("sph_harm(1,1,phi,theta)",
"from __main__ import Ylm, sph_harm, phi, theta").timeit(10))
print("l", "m", "max|Ylm-sph_harm|")
for l in range(0, 10):
for m in range(-l, l+1):
a = Ylm(l, m, phi, theta)
b = sph_harm(m, l, phi, theta)
print(l, m, np.amax(np.abs(a - b)))
# File: src/solana/rpc/responses.py (repo: broper2/solana-py, license: MIT)
"""This module contains code for parsing RPC responses."""
from dataclasses import dataclass, field
from typing import Union, Tuple, Any, Dict, List, Optional, Literal
from apischema import alias
from apischema.conversions import as_str
from solana.publickey import PublicKey
from solana.transaction import TransactionSignature
as_str(PublicKey)
TransactionErrorResult = Optional[dict]
@dataclass
class TransactionErr:
"""Container for possible transaction errors."""
err: TransactionErrorResult
@dataclass
class Context:
"""RPC result context."""
slot: int
@dataclass
class WithContext:
"""Base class for RPC result including context."""
context: Context
@dataclass
class AccountInfo:
"""Account information."""
lamports: int
owner: PublicKey
data: Union[Literal[""], Tuple[str, str], Dict[str, Any]]
executable: bool
rent_epoch: int = field(metadata=alias("rentEpoch"))
@dataclass
class AccountInfoAndContext(WithContext):
"""Account info and RPC result context."""
value: AccountInfo
@dataclass
class SubscriptionNotificationBase:
"""Base class for RPC subscription notifications."""
subscription: int
result: Any
@dataclass
class AccountNotification(SubscriptionNotificationBase):
"""Account subscription notification."""
result: AccountInfoAndContext
@dataclass
class LogItem(TransactionErr):
"""Container for logs from logSubscribe."""
signature: TransactionSignature
logs: Optional[List[str]]
@dataclass
class LogItemAndContext(WithContext):
"""Log item with RPC result context."""
value: LogItem
@dataclass
class LogsNotification(SubscriptionNotificationBase):
"""Logs subscription notification."""
result: LogItemAndContext
@dataclass
class ProgramAccount:
"""Program account pubkey and account info."""
pubkey: PublicKey
account: AccountInfo
@dataclass
class ProgramAccountAndContext(WithContext):
"""Program subscription data with RPC result context."""
value: ProgramAccount
@dataclass
class ProgramNotification(SubscriptionNotificationBase):
"""Program subscription notification."""
result: ProgramAccountAndContext
@dataclass
class SignatureErrAndContext(WithContext):
"""Signature subscription error info with RPC result context."""
value: TransactionErr
@dataclass
class SignatureNotification(SubscriptionNotificationBase):
"""Signature subscription notification."""
result: SignatureErrAndContext
@dataclass
class SlotBase:
"""Base class for slot container."""
slot: int
@dataclass
class SlotInfo(SlotBase):
"""Slot info."""
parent: int
root: int
@dataclass
class SlotNotification(SubscriptionNotificationBase):
"""Slot subscription notification."""
result: SlotInfo
@dataclass
class RootNotification(SubscriptionNotificationBase):
"""Root subscription notification."""
result: int
@dataclass
class SlotAndTimestampBase(SlotBase):
"""Base class for a slot with timestamp."""
timestamp: int
@dataclass
class FirstShredReceived(SlotAndTimestampBase):
"""First shred received update."""
type: Literal["firstShredReceived"]
@dataclass
class Completed(SlotAndTimestampBase):
"""Slot completed update."""
type: Literal["completed"]
@dataclass
class CreatedBank(SlotAndTimestampBase):
"""Created bank update."""
parent: int
type: Literal["createdBank"]
@dataclass
class SlotTransactionStats:
"""Slot transaction stats."""
num_transaction_entries: int = field(metadata=alias("numTransactionEntries"))
num_successful_transactions: int = field(metadata=alias("numSuccessfulTransactions"))
num_failed_transactions: int = field(metadata=alias("numFailedTransactions"))
max_transactions_per_entry: int = field(metadata=alias("maxTransactionsPerEntry"))
@dataclass
class Frozen(SlotAndTimestampBase):
"""Slot frozen update."""
stats: SlotTransactionStats
type: Literal["frozen"]
@dataclass
class Dead(SlotAndTimestampBase):
"""Dead slot update."""
err: str
type: Literal["dead"]
@dataclass
class OptimisticConfirmation(SlotAndTimestampBase):
"""Optimistic confirmation update."""
type: Literal["optimisticConfirmation"]
@dataclass
class Root(SlotAndTimestampBase):
"""Root update."""
type: Literal["root"]
SlotsUpdatesItem = Union[FirstShredReceived, Completed, CreatedBank, Frozen, Dead, OptimisticConfirmation, Root]
@dataclass
class SlotsUpdatesNotification(SubscriptionNotificationBase):
"""Slots updates notification."""
result: SlotsUpdatesItem
@dataclass
class VoteItem:
"""Vote data."""
hash: str
slots: List[int]
timestamp: Optional[int]
@dataclass
class VoteNotification(SubscriptionNotificationBase):
"""Vote update notification."""
result: VoteItem
SubscriptionNotification = Union[
AccountNotification,
LogsNotification,
ProgramNotification,
SignatureNotification,
SlotNotification,
RootNotification,
SlotsUpdatesNotification,
VoteNotification,
]
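Throughout this module, snake_case field names are mapped onto the camelCase keys of the RPC payload through `field(metadata=alias(...))`; `apischema` later reads the alias back out of the standard dataclass metadata when (de)serializing. A stdlib-only sketch of where that metadata lives (the local `alias` helper below is a hypothetical stand-in for `apischema.alias`):

```python
from dataclasses import dataclass, field, fields


def alias(name):
    # hypothetical stand-in: apischema's alias() also ends up in field metadata
    return {"alias": name}


@dataclass
class SlotStatsDemo:
    num_transaction_entries: int = field(metadata=alias("numTransactionEntries"))


aliases = {f.name: f.metadata.get("alias") for f in fields(SlotStatsDemo)}
print(aliases)  # {'num_transaction_entries': 'numTransactionEntries'}
```

Because the alias lives in ordinary `dataclasses` metadata, the dataclass itself stays serializer-agnostic; only the (de)serialization layer consults it.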
# File: src/tools/pch.py (repo: MaxSac/build, license: BSL-1.0)
# Status: Being ported by Steven Watanabe
# Base revision: 47077
#
# Copyright (c) 2005 Reece H. Dunn.
# Copyright 2006 Ilya Sokolov
# Copyright (c) 2008 Steven Watanabe
#
# Use, modification and distribution is subject to the Boost Software
# License Version 1.0. (See accompanying file LICENSE_1_0.txt or
# http://www.boost.org/LICENSE_1_0.txt)
##### Using Precompiled Headers (Quick Guide) #####
#
# Make precompiled mypch.hpp:
#
# import pch ;
#
# cpp-pch mypch
# : # sources
# mypch.hpp
# : # requirements
# <toolset>msvc:<source>mypch.cpp
# ;
#
# Add cpp-pch to sources:
#
# exe hello
# : main.cpp hello.cpp mypch
# ;
from b2.build import type, feature, generators
from b2.tools import builtin
type.register('PCH', ['pch'])
type.register('C_PCH', [], 'PCH')
type.register('CPP_PCH', [], 'PCH')
# Control precompiled header (PCH) generation.
feature.feature('pch',
['on', 'off'],
['propagated'])
feature.feature('pch-header', [], ['free', 'dependency'])
feature.feature('pch-file', [], ['free', 'dependency'])
class PchGenerator(generators.Generator):
"""
Base PCH generator. The 'run' method has the logic to prevent this generator
from being run unless it's being used for a top-level PCH target.
"""
def action_class(self):
return builtin.CompileAction
def run(self, project, name, prop_set, sources):
if not name:
# Unless this generator is invoked as the top-most generator for a
# main target, fail. This allows using 'H' type as input type for
# this generator, while preventing Boost.Build from trying this generator
# when not explicitly asked for.
#
# One bad example is msvc, where pch generator produces both PCH
# target and OBJ target, so if there's any header generated (like by
# bison, or by msidl), we'd try to use pch generator to get OBJ from
# that H, which is completely wrong. By restricting this generator
# only to pch main target, such problem is solved.
pass
else:
r = self.run_pch(project, name,
prop_set.add_raw(['<define>BOOST_BUILD_PCH_ENABLED']),
sources)
return generators.add_usage_requirements(
r, ['<define>BOOST_BUILD_PCH_ENABLED'])
# This rule must be overridden by the derived classes.
def run_pch(self, project, name, prop_set, sources):
pass
# NOTE: requirements are empty, default pch generator can be applied when
# pch=off.
generators.register(builtin.DummyGenerator(
"pch.default-c-pch-generator", False, [], ['C_PCH'], []))
generators.register(builtin.DummyGenerator(
"pch.default-cpp-pch-generator", False, [], ['CPP_PCH'], []))
# File: packages/pytest-simcore/src/pytest_simcore/helpers/utils_login.py (repo: GitHK/osparc-simcore-forked, license: MIT)
import re
from typing import Dict
from aiohttp import web
from yarl import URL
from simcore_service_webserver.db_models import UserRole, UserStatus
from simcore_service_webserver.login.cfg import cfg, get_storage
from simcore_service_webserver.login.registration import create_invitation
from simcore_service_webserver.login.utils import encrypt_password, get_random_string
from .utils_assert import assert_status
TEST_MARKS = re.compile(r"TEST (\w+):(.*)")
def parse_test_marks(text):
"""Checks for marks such as
TEST name:123123
TEST link:some-value
"""
marks = {}
for m in TEST_MARKS.finditer(text):
key, value = m.groups()
marks[key] = value.strip()
return marks
def parse_link(text):
link = parse_test_marks(text)["link"]
return URL(link).path
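The `TEST key:value` mark format scanned by the two helpers above can be exercised on its own; the regex and helper are restated verbatim here so the snippet runs standalone (the e-mail body is a made-up sample):

```python
import re

TEST_MARKS = re.compile(r"TEST (\w+):(.*)")


def parse_test_marks(text):
    marks = {}
    for m in TEST_MARKS.finditer(text):
        key, value = m.groups()
        marks[key] = value.strip()
    return marks


body = "Welcome!\nTEST name:alice\nTEST link:/v0/confirm/123\n"
print(parse_test_marks(body))  # {'name': 'alice', 'link': '/v0/confirm/123'}
```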
async def create_user(data=None) -> Dict:
data = data or {}
password = get_random_string(10)
params = {
"name": get_random_string(10),
"email": "{}@gmail.com".format(get_random_string(10)),
"password_hash": encrypt_password(password),
}
params.update(data)
params.setdefault("status", UserStatus.ACTIVE.name)
params.setdefault("role", UserRole.USER.name)
params.setdefault("created_ip", "127.0.0.1")
user = await cfg.STORAGE.create_user(params)
user["raw_password"] = password
return user
async def log_client_in(client, user_data=None, *, enable_check=True) -> Dict:
# creates user directly in db
user = await create_user(user_data)
# login
url = client.app.router["auth_login"].url_for()
r = await client.post(
url,
json={
"email": user["email"],
"password": user["raw_password"],
},
)
if enable_check:
await assert_status(r, web.HTTPOk, cfg.MSG_LOGGED_IN)
return user
class NewUser:
def __init__(self, params=None, app: web.Application = None):
self.params = params
self.user = None
self.db = get_storage(app) if app else cfg.STORAGE # FIXME:
async def __aenter__(self):
self.user = await create_user(self.params)
return self.user
async def __aexit__(self, *args):
await self.db.delete_user(self.user)
class LoggedUser(NewUser):
def __init__(self, client, params=None, *, check_if_succeeds=True):
super().__init__(params, client.app)
self.client = client
self.enable_check = check_if_succeeds
async def __aenter__(self):
self.user = await log_client_in(
self.client, self.params, enable_check=self.enable_check
)
return self.user
class NewInvitation(NewUser):
def __init__(self, client, guest="", host=None):
super().__init__(host, client.app)
self.client = client
self.guest = guest or get_random_string(10)
self.confirmation = None
async def __aenter__(self):
# creates host user
self.user = await create_user(self.params)
self.confirmation = await create_invitation(self.user, self.guest, self.db)
return self.confirmation
async def __aexit__(self, *args):
if await self.db.get_confirmation(self.confirmation):
await self.db.delete_confirmation(self.confirmation)
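`NewUser`, `LoggedUser` and `NewInvitation` are all async context managers that create a database object on `__aenter__` and clean it up on `__aexit__`, so each test leaves no state behind. A minimal self-contained analogue of that fixture pattern (an in-memory list stands in for the real storage backend; all names here are hypothetical):

```python
import asyncio


class NewThing:
    """Hypothetical minimal analogue of NewUser: create on enter, delete on exit."""

    def __init__(self, store):
        self.store = store
        self.obj = None

    async def __aenter__(self):
        self.obj = {"name": "temp-user"}
        self.store.append(self.obj)  # "create in db"
        return self.obj

    async def __aexit__(self, *args):
        self.store.remove(self.obj)  # cleanup even if the test body raised


async def main():
    store = []
    async with NewThing(store) as obj:
        assert obj in store  # exists for the duration of the block
    return store  # empty again after exit


print(asyncio.run(main()))  # []
```

Subclassing then only overrides `__aenter__` (as `LoggedUser` and `NewInvitation` do above) while inheriting the cleanup from the base class.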
# File: indra/tests/test_sparser.py (repo: jmuhlich/indra, license: BSD-2-Clause)
from indra import sparser
xml_str1 = '''
<article pmid="54321">
<interpretation>
<sentence-text>MEK1 phosphorylates ERK1</sentence-text>
<sem>
<ref category="phosphorylate">
<var name="agent">
<ref category="protein">
<var name="name">MP2K1_HUMAN</var>
<var name="uid">UP:MP2K1_HUMAN</var>
</ref>
</var>
<var name="substrate">
<ref category="protein">
<var name="name">MK03_HUMAN</var>
<var name="uid">UP:MK03_HUMAN</var>
</ref>
</var>
<var name="present"><ref category="present"></ref></var>
</ref>
</sem>
</interpretation>
</article>
'''
xml_str2 = '''
<article pmid="12345">
<interpretation>
<sentence-text>Hence ASPP2 can be phosphorylated at serine 827 by MAPK1 in vitro</sentence-text>
<sem>
<ref category="phosphorylate">
<var name="subordinate-conjunction">
<ref category="subordinate-conjunction"><var name="word">hence</var></ref></var>
<var name="substrate">
<ref category="protein">
<var name="name">ASPP2_HUMAN</var>
<var name="uid">UP:ASPP2_HUMAN</var>
</ref>
</var>
<var name="agent">
<ref category="protein">
<var name="context">
<ref category="in-vitro"></ref>
</var>
<var name="uid">UP:MK01_HUMAN</var>
<var name="name">MK01_HUMAN</var>
</ref>
</var>
<var name="site">
<ref category="residue-on-protein">
<var name="amino-acid">
<ref category="amino-acid"><var name="name">serine</var></ref>
</var>
<var name="position"> 827</var>
</ref>
</var>
<var name="modal"><ref category="can"></ref></var>
</ref>
</sem>
</interpretation>
</article>
'''
def test_invalid_xml():
sp = sparser.process_xml('xyz')
assert(sp is None)
def test_phosphorylation():
sp = sparser.process_xml(xml_str1)
assert(len(sp.statements) == 1)
assert(sp.statements[0].enz.name == 'MAP2K1')
assert(sp.statements[0].sub.name == 'MAPK3')
assert(len(sp.statements[0].evidence) == 1)
ev = sp.statements[0].evidence[0]
assert(ev.pmid == '54321')
assert(ev.text)
assert(ev.source_api == 'sparser')
def test_phosphorylation2():
sp = sparser.process_xml(xml_str2)
assert(len(sp.statements) == 1)
assert(sp.statements[0].enz.name == 'MAPK1')
assert(sp.statements[0].sub.name == 'TP53BP2')
assert(sp.statements[0].residue == 'S')
assert(sp.statements[0].position == '827')
assert (len(sp.statements[0].evidence) == 1)
ev = sp.statements[0].evidence[0]
assert (ev.pmid == '12345')
assert (ev.text)
assert (ev.source_api == 'sparser')
71cd46d78b6a8276fbfad5958ac1ac90396f36d3 | 685 | py | Python | examples/quickstart/run_example.py | siforrer/coreali | 261e321b546192e608edf87c47719d2173ab4645 | [
"MIT"
] | null | null | null | examples/quickstart/run_example.py | siforrer/coreali | 261e321b546192e608edf87c47719d2173ab4645 | [
"MIT"
] | null | null | null | examples/quickstart/run_example.py | siforrer/coreali | 261e321b546192e608edf87c47719d2173ab4645 | [
"MIT"
] | null | null | null | """ Simple Example using coreali to access a register model. Needs no hardware"""
# Import dependencies and compile register model with systemrdl-compiler
from systemrdl import RDLCompiler
import coreali
import numpy as np
import os
from coreali import RegisterModel
rdlc = RDLCompiler()
rdlc.compile_file(os.path.dirname(__file__)+"/../systemrdl/logger.rdl")
root = rdlc.elaborate()
# Generate hierarchical register model
rio = coreali.registerio.RegIoNoHW(np.zeros([256], np.uint8))
logger = RegisterModel(root, rio)
# Use the generated register model
logger.Ctrl.read()
logger.LogMem.write(0,[1,2,3])
logger.LogMem.read()
logger.LogMem[1].write(0,[11,12,13])
print(logger)
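`RegIoNoHW` backs register access with a plain in-memory byte buffer instead of real hardware. A toy sketch of that idea (class and method names are mine, not coreali's API):

```python
class ByteBufRegIo:
    """Toy register I/O backed by a plain bytearray (no hardware needed)."""

    def __init__(self, size):
        self.mem = bytearray(size)

    def write(self, addr, values):
        # Store each value as one byte, truncating to 8 bits.
        for i, v in enumerate(values):
            self.mem[addr + i] = v & 0xFF

    def read(self, addr, count):
        return list(self.mem[addr:addr + count])

rio = ByteBufRegIo(256)
rio.write(0x10, [1, 2, 3])
print(rio.read(0x10, 3))  # -> [1, 2, 3]
```

A register model like the one above would sit on top of such an I/O object and translate field names into addresses.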
| 28.541667 | 82 | 0.769343 | 98 | 685 | 5.326531 | 0.591837 | 0.099617 | 0.061303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02623 | 0.109489 | 685 | 23 | 83 | 29.782609 | 0.829508 | 0.318248 | 0 | 0 | 0 | 0 | 0.052402 | 0.052402 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
71d0fd4625a3f29310594a80dc408cff1d45b254 | 1,196 | py | Python | gamesystem.py | cristilianojr/JOKENPOH | 604970d4f3cfbcc5f851e993af72d3bc86926ae5 | [
"MIT"
] | 1 | 2022-02-02T15:23:00.000Z | 2022-02-02T15:23:00.000Z | gamesystem.py | cristilianojr/JOKENPOH | 604970d4f3cfbcc5f851e993af72d3bc86926ae5 | [
"MIT"
] | null | null | null | gamesystem.py | cristilianojr/JOKENPOH | 604970d4f3cfbcc5f851e993af72d3bc86926ae5 | [
"MIT"
] | null | null | null | import random
from tkinter import PhotoImage
"""
This file defines the game states
"""
def ia_chocer():
    """The AI picks one of the three moves at random."""
posibility = ['rock', 'paper', 'scissor']
value = posibility[random.randint(0, 2)]
return value
def battle_verification(player_choice, ia_choice):
state_victoryorlose = ''
if player_choice == 'rock':
if ia_choice == 'rock':
state_victoryorlose = 'draw'
elif ia_choice == 'scissor':
state_victoryorlose = 'victory'
elif ia_choice == 'paper':
state_victoryorlose = 'defeat'
elif player_choice == 'scissor':
if ia_choice == 'rock':
state_victoryorlose = 'defeat'
elif ia_choice == 'scissor':
state_victoryorlose = 'draw'
elif ia_choice == 'paper':
state_victoryorlose = 'victory'
elif player_choice == 'paper':
if ia_choice == 'rock':
state_victoryorlose = 'victory'
elif ia_choice == 'scissor':
state_victoryorlose = 'defeat'
elif ia_choice == 'paper':
state_victoryorlose = 'draw'
return state_victoryorlose
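The if/elif chain in `battle_verification` enumerates all nine pairings by hand; a table-driven sketch covers the same outcomes in a few lines (function and table names are mine, not part of the repo):

```python
# Each key beats the move it maps to.
BEATS = {'rock': 'scissor', 'scissor': 'paper', 'paper': 'rock'}

def battle(player_choice, ia_choice):
    """Same outcomes as battle_verification, driven by a lookup table."""
    if player_choice == ia_choice:
        return 'draw'
    return 'victory' if BEATS[player_choice] == ia_choice else 'defeat'

print(battle('rock', 'scissor'))  # -> victory
```

The table also makes it easy to extend the game (e.g. rock-paper-scissors-lizard-Spock) by editing data instead of branches.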
| 23 | 50 | 0.598662 | 123 | 1,196 | 5.601626 | 0.341463 | 0.287373 | 0.104499 | 0.060958 | 0.502177 | 0.502177 | 0 | 0 | 0 | 0 | 0 | 0.002372 | 0.295151 | 1,196 | 51 | 51 | 23.45098 | 0.814947 | 0.032609 | 0 | 0.6 | 0 | 0 | 0.119854 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
71dbada9e9753ae2bf8f715ece9e73e4bfe13daa | 418 | py | Python | google_search.py | Jaram2019/minwoo | d98ce8a84675281e237368cbe97d8a2120ce5840 | [
"MIT"
] | null | null | null | google_search.py | Jaram2019/minwoo | d98ce8a84675281e237368cbe97d8a2120ce5840 | [
"MIT"
] | null | null | null | google_search.py | Jaram2019/minwoo | d98ce8a84675281e237368cbe97d8a2120ce5840 | [
"MIT"
] | null | null | null | import requests
from bs4 import BeautifulSoup
import re
rq = requests.get("https://play.google.com/store/apps/category/GAME_MUSIC?hl=ko")
rqctnt = rq.content
soup = BeautifulSoup(rqctnt,"html.parser")
soup = soup.find_all(attrs={'class':'title'})
blacklist = ["앱", "영화/TV", "음악", "도서", "기기", "엔터테인먼트"]
for link in soup:
    if link.text.strip() not in blacklist:
        print(link.text.strip())
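The filtering step can be tested without any network access by pulling it into a pure function. A sketch (names are mine; the script's real blacklist entries are Korean category labels, English stand-ins are used here):

```python
def filter_titles(titles, blacklist):
    """Keep only titles that are not category labels from the blacklist."""
    return [t.strip() for t in titles if t.strip() not in blacklist]

blacklist = ['Apps', 'Movies & TV', 'Music', 'Books']
titles = [' Music ', 'Some Game ', 'Books']
print(filter_titles(titles, blacklist))  # -> ['Some Game']
```

Separating scraping from filtering like this keeps the testable logic free of `requests`/`BeautifulSoup` dependencies.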
| 24.588235 | 81 | 0.674641 | 62 | 418 | 4.516129 | 0.741935 | 0.057143 | 0.092857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002801 | 0.145933 | 418 | 16 | 82 | 26.125 | 0.781513 | 0 | 0 | 0 | 0 | 0 | 0.241627 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.076923 | 0.230769 | 0 | 0.230769 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
71e26509acc4657f75e6829c9ddc1b7eeabb62a1 | 2,530 | py | Python | Files/joinfiles.py | LeoCruzG/4chan-thread-downloader | d449e50fc7f2a6273a11da3d8ff2f46aad4951d2 | [
"MIT"
] | null | null | null | Files/joinfiles.py | LeoCruzG/4chan-thread-downloader | d449e50fc7f2a6273a11da3d8ff2f46aad4951d2 | [
"MIT"
] | null | null | null | Files/joinfiles.py | LeoCruzG/4chan-thread-downloader | d449e50fc7f2a6273a11da3d8ff2f46aad4951d2 | [
"MIT"
] | null | null | null | # Importamos la librería para leer archivos json
import json
# Abrimos el archivo master en modo lectura ('r') con todos los id de los archivos descargados
with open('master.json', 'r') as f:
# Guardamos en la variable lista el contenido de master
lista = json.load(f)
# En este ejemplo se representa cómo se asignaría a la lista archivos específicos
#lista = ['2095303', '2169202']
# Abrimos el archivo tryall.json en modo lectura ('w'), si no está creado previamente
# se crea en este momento, se puede cambiar nombre a este archivo
with open('tryall.json', 'w') as outfile:
# Iniciamos un contador para ir marcando cuántos archivos llevamos unidos
contador = 0
# Esta variable ayuda a guardar el nombre del archivo anterior para
# corroborar si no se está repitiendo con el anterior
helper = 0
# Esta variable nos indica que tenemos que escribir dentro del documento lo que hay
# dentro del archivo actual
update = True
# Recorremos toda la lista de archivos descargados
for names in lista:
# Abrimos cada archivo
with open(f'{names}.json') as infile:
# Leemos los primeras 3 líneas
infile.readline()
infile.readline()
infile.readline()
# Guardamos el contenido de la 4° que tiene el número del thread
# en una variable temportal
temp = infile.readline()
# Comprobamos si helper tiene el mismo contenido que temp
if helper != temp:
# Si es diferente se puede hacer la actualización ya que no se va
# a tener threads repetidos
update = True
# asignamos el nuevo contenido a la variable persistente
helper = temp
# Si tienen el mismo contenido entonces no se hace la actualización
else:
update = False
# Abrimos nuevamente el archivo
with open(f'{names}.json') as infile:
# Si el post no está repetido entra
if update == True:
# Se escribe el contenido completo del thread en el archivo de salida
outfile.write(infile.read())
# Se aumenta el contador ya que se escribió un documento nuevo
contador+=1
# Se imporime el contador con el nombre del archivo leído
print(contador, names)
# Se pone un salto de página para escribir el contenido del archivo siguiente
outfile.write("\n") | 42.166667 | 94 | 0.633992 | 334 | 2,530 | 4.805389 | 0.428144 | 0.02243 | 0.028037 | 0.02243 | 0.041122 | 0.041122 | 0.041122 | 0.041122 | 0 | 0 | 0 | 0.010957 | 0.314625 | 2,530 | 60 | 95 | 42.166667 | 0.914072 | 0.613043 | 0 | 0.291667 | 0 | 0 | 0.052576 | 0 | 0 | 0 | 0 | 0.016667 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.041667 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
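The deduplication above keys on the 4th line of each file and only skips an entry when it repeats the immediately preceding one. The same idea as a pure function over (thread_id, content) pairs, which is easier to test than the file-handling loop (names are mine):

```python
def concat_unique(entries):
    """Concatenate contents, skipping entries whose thread id repeats
    the immediately preceding one (mirrors the helper/update flags)."""
    out, prev = [], None
    for thread_id, content in entries:
        if thread_id != prev:
            out.append(content)
            prev = thread_id
    return '\n'.join(out)

entries = [('t1', 'a'), ('t1', 'b'), ('t2', 'c')]
print(concat_unique(entries))  # prints "a" then "c"
```

Note that, like the script, this only catches *consecutive* duplicates; a `set` of seen ids would be needed to drop repeats anywhere in the list.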
71e353fb7e64c4a4a126f497726d3763f4f1a40c | 2,860 | py | Python | pycopula/archimedean_generators.py | SvenSerneels/pycopula | 27c703ab0d25356f6e78b7cc16c8ece1ed80f871 | [
"Apache-2.0"
] | 2 | 2020-05-09T21:08:34.000Z | 2021-02-23T06:58:51.000Z | pycopula/archimedean_generators.py | SvenSerneels/pycopula | 27c703ab0d25356f6e78b7cc16c8ece1ed80f871 | [
"Apache-2.0"
] | null | null | null | pycopula/archimedean_generators.py | SvenSerneels/pycopula | 27c703ab0d25356f6e78b7cc16c8ece1ed80f871 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This file contains the generators and their inverses for common archimedean copulas.
"""
import numpy as np
def boundsConditions(x):
if x < 0 or x > 1:
raise ValueError("Unable to compute generator for x equals to {}".format(x))
def claytonGenerator(x, theta):
boundsConditions(x)
if theta == 0:
raise ValueError("The parameter of a Clayton copula must not be equal to 0.")
if theta < -1:
raise ValueError("The parameter of a Clayton copula must be greater than -1 and different from 0.")
return (1. / theta) * (x**(-theta) - 1.)
def claytonGeneratorInvert(x, theta):
if theta == 0:
raise ValueError("The parameter of a Clayton copula must not be equal to 0.")
if theta < -1:
raise ValueError("The parameter of a Clayton copula must be greater than -1 and different from 0.")
return (1. + theta * x)**(-1. / max(theta,1e-6))
def gumbelGenerator(x, theta):
boundsConditions(x)
if theta < 1:
raise ValueError("The parameter of a Gumbel copula must be greater than 1.")
return (-np.log(x))**theta
def gumbelGeneratorInvert(x, theta):
    # Accept both a scalar theta and a parameter array (as passed by fitting code)
    if np.ndim(theta) > 0:
        theta = np.ravel(theta)[0]
if theta < 1:
raise ValueError("The parameter of a Gumbel copula must be greater than 1.")
if (x < 1 and theta != 1):
raise(ValueError("The inverse Gumbel generator cannot be evaluated for negative input and theta > 1"))
return np.exp(-np.power(x,np.divide(1, theta)))
def frankGenerator(x, theta):
boundsConditions(x)
if theta == 0:
raise ValueError("The parameter of a Frank copula must not be equal to 0.")
    # Accept both a scalar theta and a parameter array, for consistency with frankGeneratorInvert
    theta = np.ravel(theta)[0] if np.ndim(theta) > 0 else theta
    return -np.log((np.exp(-theta * x) - 1) / (np.exp(-theta) - 1))
def frankGeneratorInvert(x, theta):
if theta == 0:
raise ValueError("The parameter of a Frank copula must not be equal to 0.")
return -1. / theta * np.log(1. + np.exp(-x) * (np.exp(-theta) - 1.))
def joeGenerator(x, theta):
boundsConditions(x)
if theta < 1:
raise ValueError("The parameter of a Joe copula must be greater than 1.")
return -np.log(1. - (1. - x)**theta)
def joeGeneratorInvert(x, theta):
if theta < 1:
raise ValueError("The parameter of a Joe copula must be greater than 1.")
return 1. - (1. - np.exp(-x))**(1. / max(theta,1e-6))
def aliMikhailHaqGenerator(x, theta):
boundsConditions(x)
if theta < -1 or theta >= 1:
raise ValueError("The parameter of an Ali-Mikhail-Haq copula must be between -1 included and 1 excluded.")
return np.log((1. - theta * (1. - x)) / x)
def aliMikhailHaqGeneratorInvert(x, theta):
if theta < -1 or theta >= 1:
raise ValueError("The parameter of an Ali-Mikhail-Haq copula must be between -1 included and 1 excluded.")
return (1. - theta) / (np.exp(x) - theta)
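A quick numeric sanity check that the Clayton inverse really undoes the generator for a positive theta. The pair is reimplemented standalone here (without the small-theta guard) so the check runs independently of the module:

```python
def clayton_gen(x, theta):
    # Generator: (1/theta) * (x^(-theta) - 1)
    return (1.0 / theta) * (x ** (-theta) - 1.0)

def clayton_inv(y, theta):
    # Inverse: (1 + theta * y)^(-1/theta)
    return (1.0 + theta * y) ** (-1.0 / theta)

theta = 2.5
for x in (0.1, 0.5, 0.9):
    # Round trip should recover x up to floating-point error.
    assert abs(clayton_inv(clayton_gen(x, theta), theta) - x) < 1e-12
```

The same round-trip check applied to each generator/inverse pair would make a useful unit test for this module.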
| 37.142857 | 114 | 0.642308 | 432 | 2,860 | 4.252315 | 0.199074 | 0.052259 | 0.127382 | 0.176375 | 0.641807 | 0.62221 | 0.62221 | 0.592814 | 0.592814 | 0.584105 | 0 | 0.028195 | 0.231119 | 2,860 | 76 | 115 | 37.631579 | 0.807185 | 0.044406 | 0 | 0.508772 | 0 | 0 | 0.330515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192982 | false | 0 | 0.017544 | 0 | 0.385965 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e07cfc67fee77a3b15475a7b5db3f7fe4ab08200 | 14,515 | py | Python | rbc/libfuncs.py | plures/rbc | 57c170c148000e7b56f0cda2f0dbea7bdcfa0e1b | [
"BSD-3-Clause"
] | 1 | 2019-02-15T14:14:58.000Z | 2019-02-15T14:14:58.000Z | rbc/libfuncs.py | plures/rbc | 57c170c148000e7b56f0cda2f0dbea7bdcfa0e1b | [
"BSD-3-Clause"
] | null | null | null | rbc/libfuncs.py | plures/rbc | 57c170c148000e7b56f0cda2f0dbea7bdcfa0e1b | [
"BSD-3-Clause"
] | null | null | null | """Collections of library function names.
"""
class Library:
"""Base class for a collection of library function names.
"""
@staticmethod
def get(libname, _cache={}):
if libname in _cache:
return _cache[libname]
if libname == 'stdlib':
r = Stdlib()
elif libname == 'stdio':
r = Stdio()
elif libname == 'm':
r = Mlib()
elif libname == 'libdevice':
r = Libdevice()
elif libname == 'nvvm':
r = NVVMIntrinsics()
elif libname == 'llvm':
r = LLVMIntrinsics()
elif libname == 'heavydb':
r = HeavyDB()
else:
raise ValueError(f'Unknown library {libname}')
_cache[libname] = r
return r
def __contains__(self, fname):
return self.check(fname)
def check(self, fname):
"""
Return True if library contains a function with given name.
"""
if fname in self._function_names:
return True
for func in self._function_names:
if func.endswith('.*') and fname.startswith(func[:-2]):
return True
return False
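The wildcard handling in `check` (entries ending in `.*` match any name sharing that prefix) can be seen in isolation. A minimal standalone sketch (function name is mine):

```python
def matches(fname, names):
    """Name check with trailing '.*' wildcards, as in Library.check."""
    if fname in names:
        return True
    # 'masked.store.*' matches 'masked.store.v4f32', etc.
    return any(n.endswith('.*') and fname.startswith(n[:-2]) for n in names)

print(matches('masked.store.v4f32', {'masked.store.*', 'memset'}))  # -> True
```

This is what lets the LLVM intrinsic list below cover overloaded intrinsics without enumerating every type suffix.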
class HeavyDB(Library):
name = 'heavydb'
_function_names = list('''
allocate_varlen_buffer set_output_row_size
TableFunctionManager_error_message TableFunctionManager_set_output_row_size
table_function_error
'''.strip().split())
class Stdlib(Library):
"""
Reference: http://www.cplusplus.com/reference/cstdlib/
"""
name = 'stdlib'
_function_names = list(''' atof atoi atol atoll strtod strtof strtol strtold strtoll strtoul
strtoull rand srand calloc free malloc realloc abort atexit
at_quick_exit exit getenv quick_exit system bsearch qsort abs div
labs ldiv llabs lldiv mblen mbtowc wctomb mbstowcs wcstombs '''.strip().split())
class Stdio(Library):
"""
Reference: http://www.cplusplus.com/reference/cstdio/
"""
name = 'stdio'
_function_names = list(''' remove rename tmpfile tmpnam fclose fflush fopen freopen setbuf
setvbuf fprintf fscanf printf scanf snprintf sprintf sscanf
vfprintf vfscanf vprintf vscanf vsnprintf vsprintf vsscanf fgetc
fgets fputc fputs getc getchar gets putc putchar puts ungetc fread
fwrite fgetpos fseek fsetpos ftell rewind clearerr feof ferror
perror '''.strip().split())
class Mlib(Library):
"""
References:
https://www.gnu.org/software/libc/manual/html_node/Mathematics.html
https://en.cppreference.com/w/cpp/header/cmath
"""
name = 'm'
_function_names = list('''sin sinf sinl cos cosf cosl tan tanf tanl sincos sincosf sincosl
csin csinf csinl ccos ccosf ccosl ctan ctanf ctanl asin asinf
asinl acos acosf acosl atan atanf atanl atan2 atan2f atan2l casin
casinf casinl cacos cacosf cacosl catan catanf catanl exp expf
expl exp2 exp2f exp2l exp10 exp10f exp10l log logf logl log2 log2f
log2l log10 log10f log10l logb logbf logbl ilogb ilogbf ilogbl pow
powf powl sqrt sqrtf sqrtl cbrt cbrtf cbrtl hypot hypotf hypotl
expm1 expm1f expm1l log1p log1pf log1pl clog clogf clogl clog10
clog10f clog10l csqrt csqrtf csqrtl cpow cpowf cpowl sinh sinhf
sinhl cosh coshf coshl tanh tanhf tanhl csinh csinhf csinhl ccosh
ccoshf ccoshl ctanh ctanhf ctanhl asinh asinhf asinhl acosh acoshf
acoshl atanh atanhf atanhl casinh casinhf casinhl cacosh cacoshf
cacoshl catanh catanhf catanhl erf erff erfl erfc erfcf erfcl
lgamma lgammaf lgammal tgamma tgammaf tgammal lgamma_r lgammaf_r
lgammal_r gamma gammaf gammal j0 j0f j0l j1 j1f j1l jn jnf jnl y0
y0f y0l y1 y1f y1l yn ynf ynl rand srand rand_r random srandom
initstate setstate random_r srandom_r initstate_r setstate_r
drand48 erand48 lrand48 nrand48 mrand48 jrand48 srand48 seed48
lcong48 drand48_r erand48_r lrand48_r nrand48_r mrand48_r
jrand48_r srand48_r seed48_r lcong48_r abs labs llabs fabs fabsf
fabsl cabs cabsf cabsl frexp frexpf frexpl ldexp ldexpf ldexpl
scalb scalbf scalbl scalbn scalbnf scalbnl significand
significandf significandl ceil ceilf ceill floor floorf floorl
trunc truncf truncl rint rintf rintl nearbyint nearbyintf
nearbyintl round roundf roundl roundeven roundevenf roundevenl
lrint lrintf lrintl lround lroundf lroundl llround llroundf
llroundl fromfp fromfpf fromfpl ufromfp ufromfpf ufromfpl fromfpx
fromfpxf fromfpxl ufromfpx ufromfpxf ufromfpxl modf modff modfl
fmod fmodf fmodl remainder remainderf remainderl drem dremf dreml
copysign copysignf copysignl signbit signbitf signbitl nextafter
nextafterf nextafterl nexttoward nexttowardf nexttowardl nextup
nextupf nextupl nextdown nextdownf nextdownl nan nanf nanl
canonicalize canonicalizef canonicalizel getpayload getpayloadf
getpayloadl setpayload setpayloadf setpayloadl setpayloadsig
setpayloadsigf setpayloadsigl isgreater isgreaterequal isless
islessequal islessgreater isunordered iseqsig totalorder
totalorderf totalorderl totalordermag totalorderf totalorderl fmin
fminf fminl fmax fmaxf fmaxl fminmag fminmagf fminmagl fmaxmag
fmaxmagf fmaxmagl fdim fdimf fdiml fma fmaf fmal fadd faddf faddl
fsub fsubf fsubl fmul fmulf fmull fdiv fdivf fdivl llrint llrintf
llrintl'''.strip().split())
def drop_suffix(f):
s = f.rsplit('.', 1)[-1]
if s in ['p0i8', 'f64', 'f32', 'i1', 'i8', 'i16', 'i32', 'i64', 'i128']:
f = f[:-len(s)-1]
return drop_suffix(f)
return f
def get_llvm_name(f, prefix='llvm.'):
"""Return normalized name of a llvm intrinsic name.
"""
if f.startswith(prefix):
return drop_suffix(f[len(prefix):])
return f
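The two helpers above normalize overloaded intrinsic names by stripping the `llvm.` prefix and any trailing type suffixes. A self-contained copy with a quick demonstration:

```python
# Type suffixes that LLVM appends to overloaded intrinsic names.
SUFFIXES = {'p0i8', 'f64', 'f32', 'i1', 'i8', 'i16', 'i32', 'i64', 'i128'}

def drop_suffix(f):
    s = f.rsplit('.', 1)[-1]
    if s in SUFFIXES:
        # Strip the suffix and recurse: 'memcpy.p0i8.p0i8.i64' -> 'memcpy'.
        return drop_suffix(f[:-len(s) - 1])
    return f

def get_llvm_name(f, prefix='llvm.'):
    """Return the normalized name of an intrinsic, e.g. 'llvm.sqrt.f64' -> 'sqrt'."""
    if f.startswith(prefix):
        return drop_suffix(f[len(prefix):])
    return f

print(get_llvm_name('llvm.memcpy.p0i8.p0i8.i64'))  # -> memcpy
```

The same `prefix` parameter is what lets `Libdevice.check` below reuse the helper with `prefix='__nv_'`.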
class LLVMIntrinsics(Library):
"""LLVM intrinsic function names with prefix `llvm.` removed.
Reference: https://llvm.org/docs/LangRef.html#intrinsic-functions
"""
name = 'llvm'
def check(self, fname):
if fname.startswith('llvm.'):
return Library.check(self, get_llvm_name(fname))
return False
_function_names = list(''' va_start va_end va_copy gcroot gcread gcwrite returnaddress
addressofreturnaddress sponentry frameaddress stacksave
stackrestore get.dynamic.area.offset prefetch pcmarker
readcyclecounter clear_cache instrprof.increment
instrprof.increment.step instrprof.value.profile thread.pointer
call.preallocated.setup call.preallocated.arg
call.preallocated.teardown abs smax smin umax umin memcpy
memcpy.inline memmove sqrt powi sin cos pow exp exp2 log log10
log2 fma fabs minnum maxnum minimum maximum copysign floor ceil
trunc rint nearbyint round roundeven lround llround lrint llrint
ctpop ctlz cttz fshl fshr sadd.with.overflow uadd.with.overflow
ssub.with.overflow usub.with.overflow smul.with.overflow
umul.with.overflow sadd.sat uadd.sat ssub.sat usub.sat sshl.sat
ushl.sat smul.fix umul.fix smul.fix.sat umul.fix.sat sdiv.fix
udiv.fix sdiv.fix.sat udiv.fix.sat canonicalize fmuladd
set.loop.iterations test.set.loop.iterations loop.decrement.reg
loop.decrement vector.reduce.add vector.reduce.fadd
vector.reduce.mul vector.reduce.fmul vector.reduce.and
vector.reduce.or vector.reduce.xor vector.reduce.smax
vector.reduce.smin vector.reduce.umax vector.reduce.umin
vector.reduce.fmax vector.reduce.fmin matrix.transpose
matrix.multiply matrix.column.major.load matrix.column.major.store
convert.to.fp16 convert.from.fp16 init.trampoline
adjust.trampoline lifetime.start lifetime.end invariant.start
invariant.end launder.invariant.group strip.invariant.group
experimental.constrained.fadd experimental.constrained.fsub
experimental.constrained.fmul experimental.constrained.fdiv
experimental.constrained.frem experimental.constrained.fma
experimental.constrained.fptoui experimental.constrained.fptosi
experimental.constrained.uitofp experimental.constrained.sitofp
experimental.constrained.fptrunc experimental.constrained.fpext
experimental.constrained.fmuladd experimental.constrained.sqrt
experimental.constrained.pow experimental.constrained.powi
experimental.constrained.sin experimental.constrained.cos
experimental.constrained.exp experimental.constrained.exp2
experimental.constrained.log experimental.constrained.log10
experimental.constrained.log2 experimental.constrained.rint
experimental.constrained.lrint experimental.constrained.llrint
experimental.constrained.nearbyint experimental.constrained.maxnum
experimental.constrained.minnum experimental.constrained.maximum
experimental.constrained.minimum experimental.constrained.ceil
experimental.constrained.floor experimental.constrained.round
experimental.constrained.roundeven experimental.constrained.lround
experimental.constrained.llround experimental.constrained.trunc
experimental.gc.statepoint experimental.gc.result experimental.gc.relocate
experimental.gc.get.pointer.base experimental.gc.get.pointer.offset
experimental.vector.reduce.add.* experimental.vector.reduce.fadd.*
experimental.vector.reduce.mul.* experimental.vector.reduce.fmul.*
experimental.vector.reduce.and.* experimental.vector.reduce.or.*
experimental.vector.reduce.xor.* experimental.vector.reduce.smax.*
experimental.vector.reduce.smin.* experimental.vector.reduce.umax.*
experimental.vector.reduce.umin.* experimental.vector.reduce.fmax.*
experimental.vector.reduce.fmin.*
flt.rounds var.annotation ptr.annotation annotation
codeview.annotation trap debugtrap stackprotector stackguard
objectsize expect expect.with.probability assume ssa_copy
type.test type.checked.load donothing experimental.deoptimize
experimental.guard experimental.widenable.condition load.relative
sideeffect is.constant ptrmask vscale
memcpy.element.unordered.atomic memmove.element.unordered.atomic
memset.element.unordered.atomic objc.autorelease
objc.autoreleasePoolPop objc.autoreleasePoolPush
objc.autoreleaseReturnValue objc.copyWeak objc.destroyWeak
objc.initWeak objc.loadWeak objc.loadWeakRetained objc.moveWeak
objc.release objc.retain objc.retainAutorelease
objc.retainAutoreleaseReturnValue
objc.retainAutoreleasedReturnValue objc.retainBlock
objc.storeStrong objc.storeWeak preserve.array.access.index
preserve.union.access.index preserve.struct.access.index
masked.store.* memset'''.strip().split())
class NVVMIntrinsics(Library):
"""NVVM intrinsic function names with prefix `llvm.` removed.
Reference: https://docs.nvidia.com/cuda/nvvm-ir-spec/index.html#intrinsic-functions
"""
name = 'nvvm'
def check(self, fname):
if fname.startswith('llvm.'):
return Library.check(self, get_llvm_name(fname))
return False
_function_names = list(''' memcpy memmove memset sqrt fma bswap ctpop ctlz cttz fmuladd
convert.to.fp16.f32 convert.from.fp16.f32 convert.to.fp16
convert.from.fp16 lifetime.start lifetime.end invariant.start
invariant.end var.annotation ptr.annotation annotation expect
donothing '''.strip().split())
class Libdevice(Library):
"""NVIDIA libdevice function names with prefix `__nv_` removed.
Reference: https://docs.nvidia.com/cuda/libdevice-users-guide/function-desc.html#function-desc
"""
name = 'libdevice'
def check(self, fname):
if fname.startswith('__nv_'):
return Library.check(self, get_llvm_name(fname, prefix='__nv_'))
return False
_function_names = list(''' abs acos acosf acosh acoshf asin asinf asinh asinhf atan atan2
atan2f atanf atanh atanhf brev brevll byte_perm cbrt cbrtf ceil
ceilf clz clzll copysign copysignf cos cosf cosh coshf cospi
cospif dadd_rd dadd_rn dadd_ru dadd_rz ddiv_rd ddiv_rn ddiv_ru
ddiv_rz dmul_rd dmul_rn dmul_ru dmul_rz double2float_rd
double2float_rn double2float_ru double2float_rz double2hiint
double2int_rd double2int_rn double2int_ru double2int_rz
double2ll_rd double2ll_rn double2ll_ru double2ll_rz double2loint
double2uint_rd double2uint_rn double2uint_ru double2uint_rz
double2ull_rd double2ull_rn double2ull_ru double2ull_rz
double_as_longlong drcp_rd drcp_rn drcp_ru drcp_rz dsqrt_rd
dsqrt_rn dsqrt_ru dsqrt_rz erf erfc erfcf erfcinv erfcinvf erfcx
erfcxf erff erfinv erfinvf exp exp10 exp10f exp2 exp2f expf expm1
expm1f fabs fabsf fadd_rd fadd_rn fadd_ru fadd_rz fast_cosf
fast_exp10f fast_expf fast_fdividef fast_log10f fast_log2f
fast_logf fast_powf fast_sincosf fast_sinf fast_tanf fdim fdimf
fdiv_rd fdiv_rn fdiv_ru fdiv_rz ffs ffsll finitef float2half_rn
float2int_rd float2int_rn float2int_ru float2int_rz float2ll_rd
float2ll_rn float2ll_ru float2ll_rz float2uint_rd float2uint_rn
float2uint_ru float2uint_rz float2ull_rd float2ull_rn float2ull_ru
float2ull_rz float_as_int floor floorf fma fma_rd fma_rn fma_ru
fma_rz fmaf fmaf_rd fmaf_rn fmaf_ru fmaf_rz fmax fmaxf fmin fminf
fmod fmodf fmul_rd fmul_rn fmul_ru fmul_rz frcp_rd frcp_rn frcp_ru
frcp_rz frexp frexpf frsqrt_rn fsqrt_rd fsqrt_rn fsqrt_ru fsqrt_rz
fsub_rd fsub_rn fsub_ru fsub_rz hadd half2float hiloint2double
hypot hypotf ilogb ilogbf int2double_rn int2float_rd int2float_rn
int2float_ru int2float_rz int_as_float isfinited isinfd isinff
isnand isnanf j0 j0f j1 j1f jn jnf ldexp ldexpf lgamma lgammaf
ll2double_rd ll2double_rn ll2double_ru ll2double_rz ll2float_rd
ll2float_rn ll2float_ru ll2float_rz llabs llmax llmin llrint
llrintf llround llroundf log log10 log10f log1p log1pf log2 log2f
logb logbf logf longlong_as_double max min modf modff mul24
mul64hi mulhi nan nanf nearbyint nearbyintf nextafter nextafterf
normcdf normcdff normcdfinv normcdfinvf popc popcll pow powf powi
powif rcbrt rcbrtf remainder remainderf remquo remquof rhadd rint
rintf round roundf rsqrt rsqrtf sad saturatef scalbn scalbnf
signbitd signbitf sin sincos sincosf sincospi sincospif sinf sinh
sinhf sinpi sinpif sqrt sqrtf tan tanf tanh tanhf tgamma tgammaf
trunc truncf uhadd uint2double_rn uint2float_rd uint2float_rn
uint2float_ru uint2float_rz ull2double_rd ull2double_rn
ull2double_ru ull2double_rz ull2float_rd ull2float_rn ull2float_ru
ull2float_rz ullmax ullmin umax umin umul24 umul64hi umulhi urhadd
usad y0 y0f y1 y1f yn ynf '''.strip().split())
| 46.822581 | 98 | 0.759904 | 1,884 | 14,515 | 5.737261 | 0.420913 | 0.080859 | 0.028865 | 0.006291 | 0.073365 | 0.064113 | 0.058932 | 0.042557 | 0.02979 | 0.019243 | 0 | 0.019511 | 0.180779 | 14,515 | 309 | 99 | 46.97411 | 0.889496 | 0.060558 | 0 | 0.066116 | 0 | 0.004132 | 0.824235 | 0.187986 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033058 | false | 0 | 0 | 0.004132 | 0.190083 | 0.008264 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e086489d041ecf108f68b331e71375202940ac34 | 2,768 | py | Python | .leetcode/506.relative-ranks.py | KuiyuanFu/PythonLeetCode | 8962df2fa838eb7ae48fa59de272ba55a89756d8 | [
"MIT"
] | null | null | null | .leetcode/506.relative-ranks.py | KuiyuanFu/PythonLeetCode | 8962df2fa838eb7ae48fa59de272ba55a89756d8 | [
"MIT"
] | null | null | null | .leetcode/506.relative-ranks.py | KuiyuanFu/PythonLeetCode | 8962df2fa838eb7ae48fa59de272ba55a89756d8 | [
"MIT"
] | null | null | null | # @lc app=leetcode id=506 lang=python3
#
# [506] Relative Ranks
#
# https://leetcode.com/problems/relative-ranks/description/
#
# algorithms
# Easy (53.46%)
# Likes: 188
# Dislikes: 9
# Total Accepted: 71.1K
# Total Submissions: 132.4K
# Testcase Example: '[5,4,3,2,1]'
#
# You are given an integer array score of size n, where score[i] is the score
# of the i^th athlete in a competition. All the scores are guaranteed to be
# unique.
#
# The athletes are placed based on their scores, where the 1^st place athlete
# has the highest score, the 2^nd place athlete has the 2^nd highest score, and
# so on. The placement of each athlete determines their rank:
#
#
# The 1^st place athlete's rank is "Gold Medal".
# The 2^nd place athlete's rank is "Silver Medal".
# The 3^rd place athlete's rank is "Bronze Medal".
# For the 4^th place to the n^th place athlete, their rank is their placement
# number (i.e., the x^th place athlete's rank is "x").
#
#
# Return an array answer of size n where answer[i] is the rank of the i^th
# athlete.
#
#
# Example 1:
#
#
# Input: score = [5,4,3,2,1]
# Output: ["Gold Medal","Silver Medal","Bronze Medal","4","5"]
# Explanation: The placements are [1^st, 2^nd, 3^rd, 4^th, 5^th].
#
# Example 2:
#
#
# Input: score = [10,3,8,9,4]
# Output: ["Gold Medal","5","Bronze Medal","Silver Medal","4"]
# Explanation: The placements are [1^st, 5^th, 3^rd, 2^nd, 4^th].
#
#
#
#
# Constraints:
#
#
# n == score.length
# 1 <= n <= 10^4
# 0 <= score[i] <= 10^6
# All the values in score are unique.
#
#
#
# @lc tags=Unknown
# @lc imports=start
from imports import *
# @lc imports=end
# @lc idea=start
#
# Sort by score in descending order, then assign ranks by position.
#
# @lc idea=end
# @lc group=
# @lc rank=
# @lc code=start
class Solution:
def findRelativeRanks(self, score: List[int]) -> List[str]:
s = [(-s, i) for i, s in enumerate(score)]
s.sort()
ss = ['Gold Medal', 'Silver Medal', 'Bronze Medal']
def toS(idx):
if idx >= 3:
return str(idx + 1)
return ss[idx]
res = [''] * len(score)
for idx, (_, i) in enumerate(s):
res[i] = toS(idx)
return res
# @lc code=end
# @lc main=start
if __name__ == '__main__':
print('Example 1:')
print('Input : ')
print('score = [5,4,3,2,1]')
print('Exception :')
print('["Gold Medal","Silver Medal","Bronze Medal","4","5"]')
print('Output :')
print(str(Solution().findRelativeRanks([5, 4, 3, 2, 1])))
print()
print('Example 2:')
print('Input : ')
print('score = [10,3,8,9,4]')
print('Exception :')
print('["Gold Medal","5","Bronze Medal","Silver Medal","4"]')
print('Output :')
print(str(Solution().findRelativeRanks([10, 3, 8, 9, 4])))
print()
pass
# @lc main=end | 22.322581 | 79 | 0.600795 | 437 | 2,768 | 3.784897 | 0.299771 | 0.050786 | 0.048368 | 0.009674 | 0.345224 | 0.232164 | 0.079807 | 0.079807 | 0 | 0 | 0 | 0.046154 | 0.225072 | 2,768 | 124 | 80 | 22.322581 | 0.724942 | 0.566835 | 0 | 0.25 | 0 | 0 | 0.230427 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.03125 | 0.03125 | 0 | 0.21875 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
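An equivalent standalone sketch of the same O(n log n) approach that sorts indices by descending score directly, instead of negating the scores (not part of the submitted solution):

```python
def find_relative_ranks(score):
    """Rank athletes: top three get medals, the rest get their placement."""
    # Indices ordered from highest to lowest score.
    order = sorted(range(len(score)), key=lambda i: -score[i])
    medals = ['Gold Medal', 'Silver Medal', 'Bronze Medal']
    res = [''] * len(score)
    for rank, i in enumerate(order):
        res[i] = medals[rank] if rank < 3 else str(rank + 1)
    return res

print(find_relative_ranks([10, 3, 8, 9, 4]))
# -> ['Gold Medal', '5', 'Bronze Medal', 'Silver Medal', '4']
```

Both variants do the same work; sorting index tuples `(-s, i)` as in the solution avoids the lambda at the cost of building intermediate pairs.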
e086d33095f9872583586fa709b215338a8bb617 | 8,084 | py | Python | application/core/migrations/0001_initial.py | victor-freitas/ProjetoNCS | 7c80fad11e49f4ed00eefb90638730d340d78e1f | [
"Apache-2.0"
] | null | null | null | application/core/migrations/0001_initial.py | victor-freitas/ProjetoNCS | 7c80fad11e49f4ed00eefb90638730d340d78e1f | [
"Apache-2.0"
] | 2 | 2020-06-05T18:58:17.000Z | 2021-06-10T20:50:12.000Z | application/core/migrations/0001_initial.py | victor-freitas/ProjetoNCS | 7c80fad11e49f4ed00eefb90638730d340d78e1f | [
"Apache-2.0"
] | 1 | 2018-09-17T18:14:18.000Z | 2018-09-17T18:14:18.000Z | # Generated by Django 2.0.6 on 2018-06-17 04:47
from django.db import migrations, models
class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Cliente',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('cpf_cnpj', models.IntegerField(db_column='CPF_CNPJ', unique=True)),
                ('razao', models.CharField(blank=True, db_column='RAZAO', max_length=100, null=True)),
                ('endereco', models.CharField(db_column='ENDERECO', max_length=80)),
                ('cep', models.CharField(db_column='CEP', max_length=20)),
                ('email', models.CharField(db_column='EMAIL', max_length=200)),
                ('telefone', models.CharField(db_column='TELEFONE', max_length=11)),
                ('celular', models.CharField(blank=True, db_column='CELULAR', max_length=11, null=True)),
            ],
            options={
                'db_table': 'Cliente',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Fornecedor',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('cpf_cnpj', models.IntegerField(db_column='CPF_CNPJ', unique=True)),
                ('razao', models.CharField(blank=True, db_column='RAZAO', max_length=100, null=True)),
                ('endereco', models.CharField(db_column='ENDERECO', max_length=80)),
                ('cep', models.CharField(db_column='CEP', max_length=20)),
                ('email', models.CharField(db_column='EMAIL', max_length=200)),
                ('telefone', models.CharField(db_column='TELEFONE', max_length=11)),
                ('celular', models.CharField(blank=True, db_column='CELULAR', max_length=11, null=True)),
                ('pessoa_contato', models.CharField(blank=True, db_column='PESSOA_CONTATO', max_length=100, null=True)),
            ],
            options={
                'db_table': 'Fornecedor',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Funcionario',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('nome', models.CharField(db_column='NOME', max_length=100)),
                ('cpf', models.IntegerField(db_column='CPF')),
                ('cargo', models.SmallIntegerField(db_column='CARGO')),
                ('login', models.CharField(db_column='LOGIN', max_length=100)),
                ('senha', models.CharField(db_column='SENHA', max_length=50)),
            ],
            options={
                'db_table': 'Funcionario',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Materiaprima',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('nome', models.CharField(db_column='NOME', max_length=60)),
                ('forma_emb', models.CharField(db_column='FORMA_EMB', max_length=60)),
                ('peso', models.CharField(db_column='PESO', max_length=20)),
                ('unid_medida', models.CharField(db_column='UNID_MEDIDA', max_length=50)),
                ('quantidade', models.IntegerField(db_column='QUANTIDADE')),
                ('quantidade_min', models.IntegerField(db_column='QUANTIDADE_MIN')),
                ('descricao', models.CharField(db_column='DESCRICAO', max_length=500)),
                ('data_recebimento', models.DateField(db_column='DATA_RECEBIMENTO')),
            ],
            options={
                'db_table': 'MateriaPrima',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Ordemdeproducao',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('descricao', models.CharField(db_column='Descricao', max_length=500)),
            ],
            options={
                'db_table': 'OrdemDeProducao',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Pedido',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('data_pedido', models.DateField(db_column='DATA_PEDIDO')),
                ('valor', models.CharField(blank=True, db_column='VALOR', max_length=20, null=True)),
            ],
            options={
                'db_table': 'Pedido',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Pedidomp',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('data_pedido', models.DateField(db_column='DATA_PEDIDO')),
                ('data_prevista', models.DateField(db_column='DATA_PREVISTA')),
                ('descricao', models.CharField(blank=True, db_column='DESCRICAO', max_length=500, null=True)),
                ('valor', models.CharField(blank=True, db_column='VALOR', max_length=20, null=True)),
            ],
            options={
                'db_table': 'PedidoMP',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Produto',
            fields=[
                ('id', models.AutoField(db_column='ID', primary_key=True, serialize=False)),
                ('nome', models.CharField(db_column='NOME', max_length=60)),
                ('forma_emb', models.CharField(db_column='FORMA_EMB', max_length=60)),
                ('peso', models.CharField(db_column='PESO', max_length=20)),
                ('unid_medida', models.CharField(db_column='UNID_MEDIDA', max_length=50)),
                ('preco', models.CharField(blank=True, db_column='PRECO', max_length=10, null=True)),
                ('quantidade', models.IntegerField(blank=True, db_column='QUANTIDADE', null=True)),
                ('desc_produto', models.CharField(db_column='DESC_PRODUTO', max_length=500)),
            ],
            options={
                'db_table': 'Produto',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Setor',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('nome', models.CharField(db_column='NOME', max_length=100)),
            ],
            options={
                'db_table': 'Setor',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Statusordemproducao',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('status_nome', models.CharField(db_column='STATUS_NOME', max_length=30)),
            ],
            options={
                'db_table': 'StatusOrdemProducao',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Tipoproduto',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('nome', models.CharField(db_column='NOME', max_length=100)),
            ],
            options={
                'db_table': 'TipoProduto',
                'managed': False,
            },
        ),
        migrations.CreateModel(
            name='Tiposeguimento',
            fields=[
                ('id', models.SmallIntegerField(db_column='ID', primary_key=True, serialize=False)),
                ('nome', models.CharField(db_column='NOME', max_length=100)),
            ],
            options={
                'db_table': 'TipoSeguimento',
                'managed': False,
            },
        ),
    ]
# ---- awacs/proton.py (repo: alanjjenkins/awacs, license: BSD-2-Clause) ----
# Copyright (c) 2012-2021, Mark Peek <mark@peek.org>
# All rights reserved.
#
# See LICENSE file for full license.
from .aws import Action as BaseAction
from .aws import BaseARN
service_name = "AWS Proton"
prefix = "proton"
class Action(BaseAction):
    def __init__(self, action: str = None) -> None:
        super().__init__(prefix, action)


class ARN(BaseARN):
    def __init__(self, resource: str = "", region: str = "", account: str = "") -> None:
        super().__init__(
            service=prefix, resource=resource, region=region, account=account
        )
CreateEnvironment = Action("CreateEnvironment")
CreateEnvironmentTemplate = Action("CreateEnvironmentTemplate")
CreateEnvironmentTemplateMajorVersion = Action("CreateEnvironmentTemplateMajorVersion")
CreateEnvironmentTemplateMinorVersion = Action("CreateEnvironmentTemplateMinorVersion")
CreateService = Action("CreateService")
CreateServiceTemplate = Action("CreateServiceTemplate")
CreateServiceTemplateMajorVersion = Action("CreateServiceTemplateMajorVersion")
CreateServiceTemplateMinorVersion = Action("CreateServiceTemplateMinorVersion")
DeleteAccountRoles = Action("DeleteAccountRoles")
DeleteEnvironment = Action("DeleteEnvironment")
DeleteEnvironmentTemplate = Action("DeleteEnvironmentTemplate")
DeleteEnvironmentTemplateMajorVersion = Action("DeleteEnvironmentTemplateMajorVersion")
DeleteEnvironmentTemplateMinorVersion = Action("DeleteEnvironmentTemplateMinorVersion")
DeleteService = Action("DeleteService")
DeleteServiceTemplate = Action("DeleteServiceTemplate")
DeleteServiceTemplateMajorVersion = Action("DeleteServiceTemplateMajorVersion")
DeleteServiceTemplateMinorVersion = Action("DeleteServiceTemplateMinorVersion")
GetAccountRoles = Action("GetAccountRoles")
GetEnvironment = Action("GetEnvironment")
GetEnvironmentTemplate = Action("GetEnvironmentTemplate")
GetEnvironmentTemplateMajorVersion = Action("GetEnvironmentTemplateMajorVersion")
GetEnvironmentTemplateMinorVersion = Action("GetEnvironmentTemplateMinorVersion")
GetService = Action("GetService")
GetServiceInstance = Action("GetServiceInstance")
GetServiceTemplate = Action("GetServiceTemplate")
GetServiceTemplateMajorVersion = Action("GetServiceTemplateMajorVersion")
GetServiceTemplateMinorVersion = Action("GetServiceTemplateMinorVersion")
ListEnvironmentTemplateMajorVersions = Action("ListEnvironmentTemplateMajorVersions")
ListEnvironmentTemplateMinorVersions = Action("ListEnvironmentTemplateMinorVersions")
ListEnvironmentTemplates = Action("ListEnvironmentTemplates")
ListEnvironments = Action("ListEnvironments")
ListServiceInstances = Action("ListServiceInstances")
ListServiceTemplateMajorVersions = Action("ListServiceTemplateMajorVersions")
ListServiceTemplateMinorVersions = Action("ListServiceTemplateMinorVersions")
ListServiceTemplates = Action("ListServiceTemplates")
ListServices = Action("ListServices")
ListTagsForResource = Action("ListTagsForResource")
TagResource = Action("TagResource")
UntagResource = Action("UntagResource")
UpdateAccountRoles = Action("UpdateAccountRoles")
UpdateEnvironment = Action("UpdateEnvironment")
UpdateEnvironmentTemplate = Action("UpdateEnvironmentTemplate")
UpdateEnvironmentTemplateMajorVersion = Action("UpdateEnvironmentTemplateMajorVersion")
UpdateEnvironmentTemplateMinorVersion = Action("UpdateEnvironmentTemplateMinorVersion")
UpdateService = Action("UpdateService")
UpdateServiceInstance = Action("UpdateServiceInstance")
UpdateServicePipeline = Action("UpdateServicePipeline")
UpdateServiceTemplate = Action("UpdateServiceTemplate")
UpdateServiceTemplateMajorVersion = Action("UpdateServiceTemplateMajorVersion")
UpdateServiceTemplateMinorVersion = Action("UpdateServiceTemplateMinorVersion")
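The file above is just a catalog of `Action("...")` constants built on a shared `prefix`. The pattern can be sketched self-contained with a hypothetical `MiniAction` class; the assumption (not confirmed by this file) is that the base class renders actions in the IAM `"<prefix>:<Action>"` form:

```python
# Hypothetical mini version of the prefix/Action pattern above.
# Assumption: the real BaseAction joins prefix and action with a colon,
# as IAM policy statements expect (e.g. "proton:CreateEnvironment").
class MiniAction:
    def __init__(self, prefix, action=None):
        self.prefix = prefix
        self.action = action

    def __str__(self):
        # No action given means "all actions for this service": "proton:*"
        return "%s:%s" % (self.prefix, self.action if self.action else "*")


print(MiniAction("proton", "CreateEnvironment"))  # proton:CreateEnvironment
print(MiniAction("proton"))                       # proton:*
```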
# ---- test/unit/mysql_class/slaverep_isslverror.py (repo: deepcoder42/mysql-lib, license: MIT) ----
#!/usr/bin/python
# Classification (U)

"""Program: slaverep_isslverror.py

    Description: Unit testing of SlaveRep.is_slv_error in mysql_class.py.

    Usage:
        test/unit/mysql_class/slaverep_isslverror.py

    Arguments:

"""

# Libraries and Global Variables

# Standard
import sys
import os

if sys.version_info < (2, 7):
    import unittest2 as unittest
else:
    import unittest

# Third-party

# Local
sys.path.append(os.getcwd())
import mysql_class
import lib.machine as machine
import version

__version__ = version.__version__


class UnitTest(unittest.TestCase):

    """Class: UnitTest

    Description: Class which is a representation of a unit testing.

    Methods:
        setUp -> Initialize testing environment.
        test_slv_both_true -> Test with all attrs set to True.
        test_sql_err_true -> Test with sql_err set to True.
        test_io_err_true -> Test with io_err set to True.
        test_default -> Test show_slv_state method.

    """

    def setUp(self):

        """Function: setUp

        Description: Initialization for unit testing.

        Arguments:

        """

        self.name = "Mysql_Server"
        self.server_id = 10
        self.sql_user = "mysql_user"
        self.sql_pass = "my_japd"
        self.machine = getattr(machine, "Linux")()
        self.host = "host_server"
        self.port = 3307
        self.defaults_file = "def_cfg_file"
        self.extra_def_file = "extra_cfg_file"

    def test_slv_both_true(self):

        """Function: test_slv_both_true

        Description: Test with all attrs set to True.

        Arguments:

        """

        mysqlrep = mysql_class.SlaveRep(self.name, self.server_id,
                                        self.sql_user, self.sql_pass,
                                        self.machine,
                                        defaults_file=self.defaults_file)
        mysqlrep.sql_err = "Yes"
        mysqlrep.io_err = "Yes"

        self.assertTrue(mysqlrep.is_slv_error())

    def test_sql_err_true(self):

        """Function: test_sql_err_true

        Description: Test with sql_err set to True.

        Arguments:

        """

        mysqlrep = mysql_class.SlaveRep(self.name, self.server_id,
                                        self.sql_user, self.sql_pass,
                                        self.machine,
                                        defaults_file=self.defaults_file)
        mysqlrep.sql_err = "Yes"
        mysqlrep.io_err = None

        self.assertTrue(mysqlrep.is_slv_error())

    def test_io_err_true(self):

        """Function: test_io_err_true

        Description: Test with io_err set to True.

        Arguments:

        """

        mysqlrep = mysql_class.SlaveRep(self.name, self.server_id,
                                        self.sql_user, self.sql_pass,
                                        self.machine,
                                        defaults_file=self.defaults_file)
        mysqlrep.sql_err = None
        mysqlrep.io_err = "Yes"

        self.assertTrue(mysqlrep.is_slv_error())

    def test_default(self):

        """Function: test_default

        Description: Test is_slv_error method.

        Arguments:

        """

        mysqlrep = mysql_class.SlaveRep(self.name, self.server_id,
                                        self.sql_user, self.sql_pass,
                                        self.machine,
                                        defaults_file=self.defaults_file)
        mysqlrep.sql_err = None
        mysqlrep.io_err = None

        self.assertFalse(mysqlrep.is_slv_error())


if __name__ == "__main__":
    unittest.main()
# ---- Python2/src/main.py (repo: nataddrho/digicueblue, license: MIT) ----
#!/usr/bin/env python
# Nathan Rhoades 10/13/2017

import serial
import serialport
import bgapi
import gui
import digicueblue
import traceback
import time
import threading
import sys

if sys.version_info[0] < 3:
    import Tkinter as Tk
else:
    import tkinter as Tk


class App(threading.Thread):  # thread GUI so that BGAPI can run in background

    def __init__(self, dcb):
        self.dcb = dcb
        threading.Thread.__init__(self)
        self.start()

    def callback(self):
        self.root.quit()

    def run(self):
        self.root = Tk.Tk()
        self.gui = gui.GUI(self.root, self.dcb)
        self.root.mainloop()


def main():
    try:
        f = open("comport.cfg", "r")
        comport = f.readline().strip(' ')
        f.close()
    except BaseException:
        # open comport selection gui
        serialport.launch_selection()
        return

    try:
        # open serial port and launch application
        print "Opening %s" % comport
        ser = serial.Serial(comport, 115200, timeout=1, writeTimeout=1)
        dcb = digicueblue.DigicueBlue(filename="data.csv", debugprint=False)
        app = App(dcb)
        bg = bgapi.Bluegiga(dcb, ser, debugprint=True)
    except BaseException:
        print traceback.format_exc()
        try:
            ser.close()
        except BaseException:
            pass
        text = """Please make sure the BLED112 dongle is plugged into the COM port
        specified in comport.cfg, and that no other programs are using the port.
        Use the serialport GUI to help select the correct port."""
        text = text.replace('\n', ' ')
        text = text.replace('\t', '')
        print text
        serialport.launch_selection()


if __name__ == '__main__':
    main()
# ---- webapp/ex.py (repo: jykim-rust/python, license: MIT) ----
from flask import escape
'''with open('ex') as full:
    for line in full:
        print(line, end='**')
'''

'''
a = []
with open('ex') as full:
    for line in full:
        a.append(line.split('|'))
print(a)
'''

'''
with open('ex') as full:
    for line in full.readline():
        print(line)
'''

contents = []

with open('ex') as log:
    for line in log:
        # contents.append([])
        for item in line.split('|'):
            contents.append(item)

print(contents)
# ---- sra_django_api/user/migrations/0003_auto_20180914_1242.py (repo: tflati/ncbi-search, license: MIT) ----
# Generated by Django 2.0.3 on 2018-09-14 12:42
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('snpdb', '0030_one_off_fix_cohort_sample_order'),
('analysis', '0031_auto_20210331_1826'),
]
operations = [
migrations.AddField(
model_name='varianttag',
name='genome_build',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='snpdb.genomebuild'),
),
migrations.AddField(
model_name='varianttag',
name='location',
field=models.CharField(choices=[('A', 'Analysis'), ('E', 'External Import'), ('G', 'Gene Page'), ('V', 'Variant Details')], default='A', max_length=1),
),
migrations.AlterField(
model_name='varianttag',
name='analysis',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='analysis.analysis'),
),
]
| 33.83871 | 163 | 0.613918 | 115 | 1,049 | 5.452174 | 0.556522 | 0.051037 | 0.066986 | 0.105263 | 0.318979 | 0.318979 | 0.188198 | 0.188198 | 0.188198 | 0.188198 | 0 | 0.045226 | 0.241182 | 1,049 | 30 | 164 | 34.966667 | 0.742462 | 0.042898 | 0 | 0.333333 | 1 | 0 | 0.215569 | 0.058882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0a616cc9e84aa1cf90226b12de930157c9cb478 | 2,538 | py | Python | src/Products/Five/viewlet/viewlet.py | rbanffy/Zope | ecf6770219052e7c7f8c9634ddf187a1e6280742 | [
"ZPL-2.1"
] | 289 | 2015-01-05T12:38:21.000Z | 2022-03-05T21:20:39.000Z | src/Products/Five/viewlet/viewlet.py | rbanffy/Zope | ecf6770219052e7c7f8c9634ddf187a1e6280742 | [
"ZPL-2.1"
] | 732 | 2015-02-09T23:35:57.000Z | 2022-03-31T09:10:13.000Z | src/Products/Five/viewlet/viewlet.py | rbanffy/Zope | ecf6770219052e7c7f8c9634ddf187a1e6280742 | [
"ZPL-2.1"
] | 102 | 2015-01-12T14:03:35.000Z | 2022-03-30T11:02:44.000Z | ##############################################################################
#
# Copyright (c) 2006 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Viewlet.
"""
import os
import zope.viewlet.viewlet
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
class ViewletBase(zope.viewlet.viewlet.ViewletBase):
    pass


class SimpleAttributeViewlet(zope.viewlet.viewlet.SimpleAttributeViewlet):
    pass


class simple(zope.viewlet.viewlet.simple):
    # We need to ensure that the proper __init__ is called.
    __init__ = ViewletBase.__init__


def SimpleViewletClass(template, bases=(), attributes=None, name=''):
    """A function that can be used to generate a viewlet from a set of
    information.
    """
    # Create the base class hierarchy
    bases += (simple, ViewletBase)

    attrs = {'index': ViewPageTemplateFile(template),
             '__name__': name}
    if attributes:
        attrs.update(attributes)

    # Generate a derived view class.
    class_ = type("SimpleViewletClass from %s" % template, bases, attrs)

    return class_


class ResourceViewletBase(zope.viewlet.viewlet.ResourceViewletBase):
    pass


def JavaScriptViewlet(path):
    """Create a viewlet that can simply insert a javascript link."""
    src = os.path.join(os.path.dirname(__file__), 'javascript_viewlet.pt')

    klass = type('JavaScriptViewlet',
                 (ResourceViewletBase, ViewletBase),
                 {'index': ViewPageTemplateFile(src), '_path': path})

    return klass


class CSSResourceViewletBase(zope.viewlet.viewlet.CSSResourceViewletBase):
    pass


def CSSViewlet(path, media="all", rel="stylesheet"):
    """Create a viewlet that can simply insert a stylesheet link."""
    src = os.path.join(os.path.dirname(__file__), 'css_viewlet.pt')

    klass = type('CSSViewlet',
                 (CSSResourceViewletBase, ViewletBase),
                 {'index': ViewPageTemplateFile(src),
                  '_path': path,
                  '_media': media,
                  '_rel': rel})

    return klass
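The factory functions in this module all build viewlet classes at runtime with the three-argument `type(name, bases, attrs)` call. A minimal self-contained sketch of that pattern, with no Zope dependencies and hypothetical names:

```python
# Minimal sketch of the dynamic-class pattern used by SimpleViewletClass,
# JavaScriptViewlet and CSSViewlet above. Base, make_class, MyViewlet and
# _path are illustrative names, not part of the Zope API.
class Base:
    greeting = "hello"


def make_class(name, extra_attrs):
    # type(name, bases, attrs) creates a brand-new class object at runtime;
    # extra_attrs become class attributes, just like 'index'/'_path' above.
    return type(name, (Base,), dict(extra_attrs))


Klass = make_class("MyViewlet", {"_path": "static/app.js"})
obj = Klass()
print(Klass.__name__, obj.greeting, obj._path)  # MyViewlet hello static/app.js
```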
# ---- problema21.py (repo: bptfreitas/Project-Euler, license: MIT) ----
#Let d(n) be defined as the sum of proper divisors of n (numbers less than n which divide evenly into n).
#If d(a) = b and d(b) = a, where a != b, then a and b are an amicable pair and each of a and b are called amicable numbers.
#For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and 142; so d(284) = 220.
#Evaluate the sum of all the amicable numbers under 10000.

import euler


def d(n):
    return sum(euler.get_divisors(n))


print euler.get_divisors(284)
print sum(euler.get_divisors(284))

limit = 10000
perc = 5
step = perc * limit / 100
cp = 0

a = 1
amics = []

print "Starting..."
for a in range(1, limit + 1):
    b = d(a)
    if a == d(b) and a != b:
        print "Pair:" + str(a) + " and " + str(b)
        if (a not in amics):
            amics.append(a)
        if (b not in amics):
            amics.append(b)

print "Sum of amicables:"
print sum(amics)
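The d(220)/d(284) example from the header comments can be checked with a self-contained divisor sum; `proper_divisor_sum` below is a stand-in for the repo's `euler.get_divisors` helper, which is not available here:

```python
# Self-contained check of the amicable-pair example: d(220) = 284 and
# d(284) = 220. proper_divisor_sum is a hypothetical replacement for
# sum(euler.get_divisors(n)) used in the script above.
def proper_divisor_sum(n):
    # Sum of all divisors of n strictly less than n.
    return sum(i for i in range(1, n) if n % i == 0)


print(proper_divisor_sum(220))  # 284
print(proper_divisor_sum(284))  # 220
```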
# ---- tkinter_examples/draw_chess_board.py (repo: DazEB2/SimplePyScripts, license: CC-BY-4.0) ----
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

__author__ = 'ipetrash'


from tkinter import *

root = Tk()
root.title('Chess board')

canvas = Canvas(root, width=700, height=700, bg='#fff')
canvas.pack()

fill = '#fff'
outline = '#000'
size = 88

for i in range(8):
    for j in range(8):
        x1, y1, x2, y2 = i * size, j * size, i * size + size, j * size + size
        canvas.create_rectangle(x1, y1, x2, y2, fill=fill, outline=outline)
        fill, outline = outline, fill
    fill, outline = outline, fill

root.mainloop()
# ---- reproducing/generator_datacreation/data/readers/SEReader.py (repo: UKPLab/emnlp2019-duplicate_question_detection, license: Apache-2.0) ----
import logging
import subprocess
import xml.etree.ElementTree as ET

from tqdm import tqdm

logger = logging.getLogger('root')

POST_TYPE_QUESTION = '1'
POST_TYPE_ANSWER = '2'


class SEDataReader(object):
    """
    NOTE: - a typical xml string for question in original data looks like
<row Id="4" PostTypeId="1" AcceptedAnswerId="7" CreationDate="2008-07-31T21:42:52.667" Score="543" ViewCount="34799" Body="<p>I want to use a track-bar to change a form's opacity.</p>

<p>This is my code:</p>

<pre><code>decimal trans = trackBar1.Value / 5000;
this.Opacity = trans;
</code></pre>

<p>When I build the application, it gives the following error:</p>

<blockquote>
 <p>Cannot implicitly convert type <code>'decimal'</code> to <code>'double'</code>.</p>
</blockquote>

<p>I tried using <code>trans</code> and <code>double</code> but then the control doesn't work. This code worked fine in a past VB.NET project.</p>
" OwnerUserId="8" LastEditorUserId="3151675" LastEditorDisplayName="Rich B" LastEditDate="2017-09-27T05:52:59.927" LastActivityDate="2018-02-22T16:40:13.577" Title="While applying opacity to a form, should we use a decimal or a double value?" Tags="<c#><winforms><type-conversion><decimal><opacity>" AnswerCount="13" CommentCount="1" FavoriteCount="39" CommunityOwnedDate="2012-10-31T16:42:47.213" />
"""
    num_skipped_sample_none = 0
    num_skipped_other_posttype = 0
    num_retrieved = 0

    def __init__(self, data_file_path):
        """
        data_file_path : (string) path to the posts.xml file
        """
        self.data_file_path = data_file_path

    def question_data_filter(self, sample):
        return dict((k, sample.get(k, None)) for k in ['Id', 'Title', 'Body', 'Tags', 'ParentId', 'PostTypeId'])

    def n_items_unfiltered(self):
        out = subprocess.check_output(['wc', '-l', self.data_file_path])
        return int(out.split()[0])

    def read_items(self, allowed_post_types=(POST_TYPE_QUESTION, ), min_score=0, max_year=None):
        questions_dict = dict() if POST_TYPE_ANSWER in allowed_post_types else None
        with open(self.data_file_path, 'r') as f:
            for l in tqdm(f):
                try:
                    sample = ET.fromstring(l.strip()).attrib
                except ET.ParseError as e:
                    print('(Ignoring) ERROR in parsing line (QUESTION READER):\n{}\n'.format(l.strip()))
                    sample = None
                if sample:
                    if questions_dict is not None and sample['PostTypeId'] == POST_TYPE_QUESTION:
                        filtered_sample = self.question_data_filter(sample)
                        questions_dict[filtered_sample['Id']] = filtered_sample
                    has_min_score = int(sample['Score']) >= min_score
                    has_max_year = True if max_year is None else int(sample['CreationDate'][:4]) <= max_year
                    if sample['PostTypeId'] in allowed_post_types and has_max_year and has_min_score:
                        SEDataReader.num_retrieved += 1
                        filtered_sample = self.question_data_filter(sample)
                        if sample['PostTypeId'] == POST_TYPE_ANSWER:
                            q = questions_dict.get(filtered_sample['ParentId'], None)
                            if q is None:
                                print('Skipping answer because parent question is unknown')
                                continue
                            else:
                                filtered_sample['Title'] = q.get('Title')
                                filtered_sample['Tags'] = q.get('Tags')
                        yield filtered_sample
                    else:
                        SEDataReader.num_skipped_other_posttype += 1
                else:
                    SEDataReader.num_skipped_sample_none += 1
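The core parsing step in `read_items` is that each line of a Stack Exchange `Posts.xml` dump is a self-closing `<row .../>` element whose attributes become a plain dict via `ElementTree`. A minimal illustration, using a trimmed-down hypothetical row rather than a real dump line:

```python
# Each dump line parses into an Element whose .attrib dict carries the post
# fields as strings; numeric fields like Score must be cast explicitly.
import xml.etree.ElementTree as ET

line = '<row Id="4" PostTypeId="1" Score="543" Title="Decimal vs double?" />'
sample = ET.fromstring(line.strip()).attrib

print(sample["Id"], sample["PostTypeId"], int(sample["Score"]))  # 4 1 543
```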
# ---- analysis/migrations/0032_auto_20210409_1333.py (repo: SACGF/variantgrid, license: RSA-MD) ----
# Generated by Django 3.1.3 on 2021-04-09 04:03
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('user', '0002_project'),
]
operations = [
migrations.RenameField(
model_name='project',
old_name='file_path',
new_name='base_path',
),
migrations.AlterField(
model_name='project',
name='creation_date',
field=models.DateTimeField(auto_now_add=True),
),
]
| 22.208333 | 58 | 0.572233 | 55 | 533 | 5.363636 | 0.745455 | 0.061017 | 0.108475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052055 | 0.315197 | 533 | 23 | 59 | 23.173913 | 0.756164 | 0.084428 | 0 | 0.235294 | 1 | 0 | 0.125514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0b6b0833654d657edf6b38b5e343a5dbb47c6d9 | 825 | py | Python | text.py | Kedyn/PingPong | 47e39a9d30e1a3a7b828c5b5e85b0666d67b0d7b | [
"MIT"
] | null | null | null | text.py | Kedyn/PingPong | 47e39a9d30e1a3a7b828c5b5e85b0666d67b0d7b | [
"MIT"
] | null | null | null | text.py | Kedyn/PingPong | 47e39a9d30e1a3a7b828c5b5e85b0666d67b0d7b | [
"MIT"
] | null | null | null | import pygame.font
import copy


class Text:
    """Draws a text to the screen."""

    def __init__(self, rect, size, color, screen, text):
        self.screen = screen
        self.rect = copy.deepcopy(rect)
        self.text = text
        self.color = color
        self.font = pygame.font.SysFont(None, size)

        self.text_image = None
        self.text_image_rect = None

        self.prep_img()

    def prep_img(self):
        """Turn msg into a rendered image, and center text on the button."""
        self.text_image = self.font.render(self.text, True,
                                           self.color)
        self.text_image_rect = self.text_image.get_rect()
        self.text_image_rect.center = self.rect.center

    def render(self):
        self.screen.blit(self.text_image, self.text_image_rect)
# ---- analyzer/BannerTool.py (repo: Gr1ph00n/staticwebanalyzer, license: MIT) ----
#FILE NAME: BannerTool.py
#created by: Ciro Veneruso
#purpose: banner localization
#last edited by: Ciro Veneruso
#INSTALL: BeautifulSoup
#TODO: this code is a blob, must be refactored!
import re
import mechanize
import socket
import urllib
from tools import BaseTool
from bs4 import BeautifulSoup
from pprint import pprint
from ipwhois import IPWhois, WhoisLookupError
from tld import get_tld
import urlparse
from tld.exceptions import TldIOError, TldDomainNotFound, TldBadUrl
from tools import ToolException
class BannerTool(BaseTool):
def __init__(self, config):
BaseTool.__init__(self, "BannerAnalyzer", config, needRefresh = True)
self.values = []
def run(self, browser):
try:
url = browser.url.replace('http://','')
print url+"\n"
#response = browser.open(url)
html = browser.httpResponse #response.get_data()
site_domain_name = get_tld(browser.url)
#print(site_domain_name)
soup = BeautifulSoup(html)
links = soup.findAll('a')
response_domain = ""
addr = ""
name = ""
state = ""
city = ""
description = ""
country = ""
foo_flag = 0
flag = 0
for link in links:
foo = link.findChild('img')
#print foo
if foo is not None:
foo_flag = 1
flag = 1
href = link.get('href')
if href is None:
continue
print(href+"\n")
if href.startswith('/'):
response_domain ="link interno"
print ("link interno\n")
elif href.startswith("http://"+url):
response_domain ="link interno"
print ("link interno\n")
elif href.startswith("https://"+url):
response_domain ="link interno"
print ("link interno\n")
else:
response_domain ="link esterno"
print ("link esterno... Geolocalizzazione:\n")
try:
banner_domain_name = get_tld(href)
print(banner_domain_name+"\n")
print(site_domain_name)
url = 'https://' + url if not banner_domain_name.startswith('http') else banner_domain_name.replace('http:', 'https:')
parsed = urlparse.urlparse(url)
hostname = "%s://%s" % (parsed.scheme, parsed.netloc)
url = url.split("//")[1]
url_s = url.split("/")[0]
ip = socket.gethostbyname(url_s)
#print(href)
#get ip by url
#ip = socket.gethostbyname(banner_domain_name)
#get information by ip
result = None
try:
obj = IPWhois(ip)
result = obj.lookup()
except Exception:  # 'Error' was undefined here; catch whois lookup failures
continue
addr = result['nets'][0]['address'] if result['nets'][0]['address'] != None else 'None'
name = result['nets'][0]['name'] if result['nets'][0]['name'] != None else 'None'
state = result['nets'][0]['state'] if result['nets'][0]['state'] != None else 'None'
city = result['nets'][0]['city'] if result['nets'][0]['city'] != None else 'None'
description = result['nets'][0]['description'] if result['nets'][0]['description'] != None else 'None'
country = result['nets'][0]['country'] if result['nets'][0]['country'] != None else 'None'
'''
self.values.append(["Link analyzed",href])
self.values.append(["Response",response_domain])
self.values.append(["Address", addr])
self.values.append(["Name", name])
self.values.append(["State", state])
self.values.append(["City", city])
self.values.append(["Description", description])
self.values.append(["Country", country])
print('Name: ' + name + '\n' + 'Description: ' + description + '\n' + 'Address: ' +
addr + '\n' + 'Country: ' + country + '\n' + 'State: ' + state + '\n' + 'City: ' + city)
'''
temp = {
"Url" : url,
"Address" : addr,
"Name" : name,
"State" : state,
"City" : city,
"Description" : description,
"Country" : country,
"Response" : response_domain
}
self.values.append({ "Link analyzed %s" % (href) : temp })
except TldBadUrl as e:
print ("Bad URL!")
if flag == 0:
print("There aren't extra domain banners in this site")
if foo_flag == 0:
print("There aren't banners in this site")
except WhoisLookupError as e:
raise ToolException(str(e))
return len(self.values) >= self.config.getInt("banner_count_treshold", 0)
def createModel(self):
return False, ["key","value"], self.values
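A minimal, runnable sketch of the internal/external link classification that run() above performs branch by branch; the function name and the sample site are illustrative, not part of the original tool:

```python
def classify_link(href, site):
    """Rough restatement of the href branch logic in BannerTool.run."""
    if href.startswith('/'):
        return 'link interno'
    if href.startswith('http://' + site) or href.startswith('https://' + site):
        return 'link interno'
    return 'link esterno'

print(classify_link('/about', 'example.com'))                 # link interno
print(classify_link('http://example.com/x', 'example.com'))   # link interno
print(classify_link('http://ads.net/banner', 'example.com'))  # link esterno
```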
| 31.802817 | 125 | 0.601639 | 542 | 4,516 | 4.933579 | 0.249077 | 0.044877 | 0.049364 | 0.02917 | 0.148841 | 0.133882 | 0.090501 | 0.090501 | 0.090501 | 0.07255 | 0 | 0.006454 | 0.245128 | 4,516 | 141 | 126 | 32.028369 | 0.777941 | 0.077059 | 0 | 0.118812 | 0 | 0 | 0.158855 | 0.005894 | 0 | 0 | 0 | 0.007092 | 0 | 0 | null | null | 0 | 0.118812 | null | null | 0.128713 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0bb4548a5ce53a84a9faca646cafe6530aa2def | 12,725 | py | Python | app/AccountManagment.py | fredpan/Prosopagnosia_Web_Server | b56b58eccdbbde6b158802d49f7bcc1b44b18b69 | [
"Apache-2.0"
] | null | null | null | app/AccountManagment.py | fredpan/Prosopagnosia_Web_Server | b56b58eccdbbde6b158802d49f7bcc1b44b18b69 | [
"Apache-2.0"
] | null | null | null | app/AccountManagment.py | fredpan/Prosopagnosia_Web_Server | b56b58eccdbbde6b158802d49f7bcc1b44b18b69 | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 EraO Prosopagnosia Helper Dev Team, Liren Pan, Yixiao Hong, Hongzheng Xu, Stephen Huang, Tiancong Wang
#
# Supervised by Prof. Steve Mann (http://www.eecg.toronto.edu/~mann/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import mysql.connector
import re
import time
from app.sql.config.DbConfig import db_config
from flask import render_template, redirect, url_for, request, g, session
from flask_bcrypt import Bcrypt
from app import EmailSender as email_confirmation
from app import webapp
validUsernameChar = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
# The function used to establish connection to sql database
def connect_to_database():
'''
Function used to connect to database
:return:
'''
return mysql.connector.connect(user=db_config['user'], password=db_config['password'], host=db_config['host'],
database=db_config['database'], use_pure=True)
def get_database():
'''
function used to get database
:return:
'''
db = getattr(g, '_database', None)
if db is None:
db = g._database = connect_to_database()
return db
"""
#############################################################
Login Settings
############################################################
"""
@webapp.route('/login', methods=['GET', 'POST'])
def user_login():
'''
This function handles GET/POST requests to "/login"
and renders the login page.
:return: the rendered "login_index.html"
'''
return render_template("/login_index.html", title="Welcome")
@webapp.route('/login_submit', methods=['POST'])
def login_submit():
'''
This function handles POST requests to "/login_submit". It first reads the submitted username,
password and the state of the "remember me" checkbox; based on whether "remember me" is checked,
it adjusts the session expiry time via webapp.permanent_session_lifetime. It then queries the
database for the given username. If no matching user is found, or if the submitted password does
not match the stored bcrypt hash, it re-renders "login_index.html" with an error message; if all
checks pass, it redirects to "/secure/index".
:return: /login_index.html or redirect to /secure/index
'''
session.permanent = True
bcrypt = Bcrypt(webapp)
username = request.form['username']
password = request.form['password']
remember = request.form.get('remember')
print(remember)
rememberMe = False
# if remember!=None and remember=="on":
if remember:
rememberMe = True
else:
session.clear()
webapp.permanent_session_lifetime = datetime.timedelta(milliseconds=0)
# password = bcrypt.generate_password_hash(password).decode("utf-8")
# bcrypt.check_password_hash
# connect to database
cnx = get_database()
cursor = cnx.cursor()
query = "SELECT password FROM user_info WHERE username = %s and active = 1"
cursor.execute(query, (username,))
results = cursor.fetchall()
if len(results) == 1:
hashed_pwd = results[0][0]
if bcrypt.check_password_hash(hashed_pwd, password):
session['authenticated'] = True
session['username'] = username
session['error'] = None
if rememberMe:
webapp.permanent_session_lifetime = datetime.timedelta(weeks=1)
return redirect(url_for('sensitive'))
session['username'] = username
session['error'] = "<=Error! Incorrect username or password!=>"
return render_template("/login_index.html", title="Main Page", username=username, error=session['error'])
"""
#############################################################
Sign up Settings
############################################################
"""
# Display an empty HTML form that allows users to fill the info and sign up.
@webapp.route('/signup', methods=['GET'])
def user_signup():
'''
This function handles GET requests to "/signup"
and renders the signup page.
:return: the rendered "signup_index.html"
'''
return render_template("signup_index.html", title="Join Us!")
# Create a new account and save them in the database.
@webapp.route('/signup/save', methods=['POST'])
def sign_up_save():
'''
This function handles POST requests to "/signup/save". It first reads the submitted username,
password1 and password2, then connects to the database to check whether the username already
exists. It also checks that all required fields are present, that the username and password
formats are valid, and that the two passwords match. If any of these checks fails, it re-renders
"signup_index.html" with an error message. Otherwise, it inserts the new account into the
database and renders "signup_succeed_index.html" to confirm that the account was created.
:return: "signup_index.html" or "signup_succeed_index.html"
'''
bcrypt = Bcrypt(webapp)
# need to trim the user name
username = request.form.get('username', "")
password1 = request.form.get('password1', "")
password2 = request.form.get('password2', "")
# connect to database
cnx = get_database()
cursor = cnx.cursor()
query = "SELECT COUNT(username) FROM user_info WHERE username = %s "
cursor.execute(query, (username,))
results = cursor.fetchall()
numberOfExistUser = results[0][0]
if username == "" or password1 == "" or password2 == "":
error_msg = "Error: All fields are required!"
return render_template("signup_index.html", title="Sign Up", error_msg=error_msg,
username=username, password1=password1, password2=password2)
if re.findall(r'\s+', username) != []:
error_msg = "Error: No space allowed in user name!"
return render_template("signup_index.html", title="Sign Up", error_msg=error_msg,
username=username, password1=password1, password2=password2)
if numberOfExistUser != 0:
error_msg = "Error: User name already exist!"
return render_template("signup_index.html", title="Sign Up", error_msg=error_msg,
username=username, password1=password1, password2=password2)
if not (password1 == password2):
error_msg = "Error: Two passwords not matching!"
return render_template("signup_index.html", title="Sign Up", error_msg=error_msg,
username=username, password1=password1, password2=password2)
if (len(username) > 20 or len(username) < 1) or not all(c in validUsernameChar for c in username):
print(len(username))
error_msg = "Error: Username violation, username must have length between 1 to 20, only letters and numbers allowed"
return render_template("signup_index.html", title="Sign Up", error_msg=error_msg,
username=username, password1=password1, password2=password2)
if len(password1) > 16 or len(password1) < 1:
error_msg = "Error: Password length violation"
return render_template("signup_index.html", title="Sign Up", error_msg=error_msg,
username=username, password1=password1, password2=password2)
ts = time.time()
timestamp = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
password = bcrypt.generate_password_hash(password1).decode("utf-8")
query = ''' INSERT INTO user_info (username,password,create_date,active)
VALUES (%s,%s, %s,1)
'''
cursor.execute(query, (username, password, timestamp))
cnx.commit()
# Add error catch here for sql
return render_template("signup_succeed_index.html", title="Sign Up Succeed", username=username, password=password1)
"""
#############################################################
Secure Index
############################################################
"""
@webapp.route('/secure/index', methods=['GET', 'POST'])
def sensitive():
'''
This function handles GET/POST requests to "/secure/index". It first checks whether the user
session contains the key 'authenticated' with a value of True, indicating that the user has
passed the security check. If not, the user is redirected back to '/user_login'. Otherwise, the
function performs a database lookup based on the 'username' in the client's session, stores the
user's uid and create_date in the session, and returns the page "/secured_index.html".
:return: redirect to '/user_login' or "/secured_index.html"
'''
if 'authenticated' not in session:
return redirect(url_for('user_login'))
# ==========Read user Info and sign in =========#
if session['authenticated'] == True:
# connect to database
cnx = get_database()
cursor = cnx.cursor()
query = "SELECT uid , create_date FROM user_info WHERE username = %s and active = 1"
cursor.execute(query, (session['username'],))
results = cursor.fetchall()
uid = results[0][0]
memberSince = results[0][1]
session['uid'] = uid
session['membersince'] = memberSince
return render_template("/secured_index.html", username=session['username'], membersince=session['membersince'])
else:
return redirect(url_for('user_login'))
@webapp.route('/logout', methods=['GET', 'POST'])
def logout():
'''
This function handles GET/POST requests to "/logout". It clears all contents of the current
user's session, expires the session immediately, and redirects the user to the main page.
:return: redirect to /secure/index
'''
session.clear()
webapp.permanent_session_lifetime = datetime.timedelta(milliseconds=0)
return redirect(url_for("sensitive"))
"""
#############################################################
Send Email
############################################################
"""
# Create a new account and save them in the database.
@webapp.route('/signup/send_email', methods=['POST'])
def send_email():
'''
This function handles POST requests to "/signup/send_email". It reads the user's email, username
and password and checks with a regex whether the email address is well formed. If it is, the
function calls send_email in the EmailSender module, which emails the user their registered
username and password, and re-renders "signup_succeed_index.html" with a success message. If the
address is malformed, it re-renders "signup_succeed_index.html" with an error message.
:return: "signup_succeed_index.html"
'''
# need to trim the user name
email = request.form.get('email', "")
username = request.form.get('username', "")
password = request.form.get('password', "")
if not re.match(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]*$", email):
error_msg = "Error: Not a correct email address!"
return render_template("signup_succeed_index.html", title="Sign Up Succeed", username=username,
password=password, error_msg=error_msg)
# send email
email_confirmation.send_email(email, username, password)
success_msg = "=================Email Sent!==================="
return render_template("signup_succeed_index.html", title="Sign Up Succeed", username=username, password=password,
success_msg=success_msg)
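A self-contained sketch of the email validation used in send_email above; the regex is copied verbatim from the handler, while the helper name and sample addresses are illustrative:

```python
import re

# Same permissive pattern as in send_email above.
EMAIL_RE = re.compile(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]*$")

def is_valid_email(email):
    """Return True if the address matches the permissive pattern above."""
    return EMAIL_RE.match(email) is not None

print(is_valid_email("user@example.com"))  # True
print(is_valid_email("not-an-email"))      # False
```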
| 41.9967 | 124 | 0.659018 | 1,612 | 12,725 | 5.115385 | 0.200372 | 0.029469 | 0.022071 | 0.03153 | 0.360296 | 0.309483 | 0.279529 | 0.233447 | 0.233447 | 0.223502 | 0 | 0.008575 | 0.202672 | 12,725 | 302 | 125 | 42.135762 | 0.804159 | 0.376503 | 0 | 0.272059 | 0 | 0.007353 | 0.220709 | 0.038156 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066176 | false | 0.191176 | 0.066176 | 0 | 0.272059 | 0.014706 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e0d2d2e5b75b0d0e7b995b20657ba25ef18190ee | 1,351 | py | Python | login.py | harryzcy/canvas-file-syncer | 16b98ee164df8570605b1a274c02f0dc7403730e | [
"MIT"
] | null | null | null | login.py | harryzcy/canvas-file-syncer | 16b98ee164df8570605b1a274c02f0dc7403730e | [
"MIT"
] | null | null | null | login.py | harryzcy/canvas-file-syncer | 16b98ee164df8570605b1a274c02f0dc7403730e | [
"MIT"
] | null | null | null | import time
from config import get_password, get_username
from playwright.sync_api import Page
def login(page: Page, url: str, landing_url: str):
raise RuntimeError("default login not supported")
def login_kenan_flagler(page: Page, url: str, landing_url: str) -> None:
page.goto(url)
page.wait_for_load_state('load')
if page.url.startswith(landing_url):
return
with page.expect_navigation():
page.locator("text=ONYEN Login").click()
time.sleep(0.5)
page.locator("input[type=email]").fill(get_username())
with page.expect_navigation():
page.locator("input[type=submit]").click()
time.sleep(1)
page.locator("input[type=password]").fill(get_password())
with page.expect_navigation():
page.click('input[type=submit]')
if page.url.endswith("/login"):
# 2-factor auth
page.locator("div[role=\"button\"]:has-text(\"Text\")").click()
print("Enter code: ", end="")
code = input()
code = code.strip()
page.locator("[aria-label=\"Code\"]").fill(code)
with page.expect_navigation():
page.locator("text=Verify").click()
page.locator("[aria-label=\"Don\\'t\\ show\\ this\\ again\"]").check()
page.locator("text=Yes").click()
time.sleep(0.5)
assert page.url.startswith(landing_url)
| 29.369565 | 78 | 0.635085 | 177 | 1,351 | 4.745763 | 0.40678 | 0.117857 | 0.066667 | 0.114286 | 0.334524 | 0.19881 | 0.157143 | 0 | 0 | 0 | 0 | 0.005515 | 0.194671 | 1,351 | 45 | 79 | 30.022222 | 0.766544 | 0.009623 | 0 | 0.1875 | 0 | 0 | 0.155689 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 1 | 0.0625 | false | 0.0625 | 0.09375 | 0 | 0.1875 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e0d3e0406d466ee1eb3c15b25d95347feecab756 | 1,265 | py | Python | custom_components/ge_home/entities/common/ge_water_heater.py | olds/ha_gehome | 5cb24deab64bcade45861da0497a84631845922c | [
"MIT"
] | 41 | 2021-08-02T02:15:54.000Z | 2022-03-30T11:11:42.000Z | custom_components/ge_home/entities/common/ge_water_heater.py | olds/ha_gehome | 5cb24deab64bcade45861da0497a84631845922c | [
"MIT"
] | 46 | 2021-08-03T02:20:59.000Z | 2022-03-30T11:17:15.000Z | custom_components/ge_home/entities/common/ge_water_heater.py | olds/ha_gehome | 5cb24deab64bcade45861da0497a84631845922c | [
"MIT"
] | 15 | 2021-08-31T00:21:33.000Z | 2022-03-30T12:53:21.000Z | import abc
import logging
from typing import Any, Dict, List, Optional
from homeassistant.components.water_heater import WaterHeaterEntity
from homeassistant.const import (
TEMP_FAHRENHEIT,
TEMP_CELSIUS
)
from gehomesdk import ErdCode, ErdMeasurementUnits
from ...const import DOMAIN
from .ge_erd_entity import GeEntity
_LOGGER = logging.getLogger(__name__)
class GeWaterHeater(GeEntity, WaterHeaterEntity, metaclass=abc.ABCMeta):
"""Mock temperature/operation mode supporting device as a water heater"""
@property
def heater_type(self) -> str:
raise NotImplementedError
@property
def operation_list(self) -> List[str]:
raise NotImplementedError
@property
def unique_id(self) -> str:
return f"{DOMAIN}_{self.serial_or_mac}_{self.heater_type}"
@property
def name(self) -> Optional[str]:
return f"{self.serial_or_mac} {self.heater_type.title()}"
@property
def temperature_unit(self):
measurement_system = self.appliance.get_erd_value(ErdCode.TEMPERATURE_UNIT)
if measurement_system == ErdMeasurementUnits.METRIC:
return TEMP_CELSIUS
return TEMP_FAHRENHEIT
@property
def supported_features(self):
raise NotImplementedError
| 28.111111 | 83 | 0.728854 | 144 | 1,265 | 6.194444 | 0.458333 | 0.073991 | 0.060538 | 0.078475 | 0.150224 | 0.065022 | 0.065022 | 0 | 0 | 0 | 0 | 0 | 0.192885 | 1,265 | 44 | 84 | 28.75 | 0.873653 | 0.052964 | 0 | 0.264706 | 0 | 0 | 0.079698 | 0.062081 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.235294 | 0.058824 | 0.558824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e0d9891ebc981dc1dd6b1f55b9d60d1a276b4e1d | 1,283 | py | Python | setup.py | Cloudlock/bravado | bacf49ea9d791ec9f564a3a141c77995d2f395b0 | [
"BSD-3-Clause"
] | null | null | null | setup.py | Cloudlock/bravado | bacf49ea9d791ec9f564a3a141c77995d2f395b0 | [
"BSD-3-Clause"
] | 2 | 2016-12-14T07:33:33.000Z | 2017-05-11T13:40:48.000Z | setup.py | Cloudlock/bravado | bacf49ea9d791ec9f564a3a141c77995d2f395b0 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# Copyright (c) 2013, Digium, Inc.
# Copyright (c) 2014-2016, Yelp, Inc.
import os
from setuptools import setup
import bravado
setup(
name="bravado",
# cloudlock version, no twisted dependency
version=bravado.version + "cl",
license="BSD 3-Clause License",
description="Library for accessing Swagger-enabled API's",
long_description=open(os.path.join(os.path.dirname(__file__),
"README.rst")).read(),
author="Digium, Inc. and Yelp, Inc.",
author_email="opensource+bravado@yelp.com",
url="https://github.com/Yelp/bravado",
packages=["bravado"],
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2.6",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.4",
],
install_requires=[
"bravado-core >= 4.2.2",
"yelp_bytes",
"python-dateutil",
"pyyaml",
"requests",
"six",
],
extras_require={
},
)
| 29.159091 | 71 | 0.59392 | 136 | 1,283 | 5.536765 | 0.654412 | 0.075697 | 0.099602 | 0.069057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024262 | 0.261107 | 1,283 | 43 | 72 | 29.837209 | 0.770042 | 0.101325 | 0 | 0.057143 | 0 | 0 | 0.482158 | 0.023499 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.085714 | 0 | 0.085714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0daa9911332d847f463b64cda5a0ec1b5c5fe82 | 562 | py | Python | server/website/website/migrations/0003_background_task_optimization.py | mjain2/ottertune | 011e896bf89df831fb1189b1ab4c9a7d7dca420a | [
"Apache-2.0"
] | 1 | 2019-08-16T19:35:35.000Z | 2019-08-16T19:35:35.000Z | server/website/website/migrations/0003_background_task_optimization.py | mjain2/ottertune | 011e896bf89df831fb1189b1ab4c9a7d7dca420a | [
"Apache-2.0"
] | null | null | null | server/website/website/migrations/0003_background_task_optimization.py | mjain2/ottertune | 011e896bf89df831fb1189b1ab4c9a7d7dca420a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2018-08-02 07:58
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('website', '0002_enable_compression'),
]
operations = [
migrations.AddField(
model_name='workload',
name='status',
field=models.IntegerField(choices=[(1, 'MODIFIED'), (2, 'PROCESSING'), (3, 'PROCESSED')], default=1, editable=False),
)
]
| 25.545455 | 129 | 0.631673 | 62 | 562 | 5.596774 | 0.758065 | 0.04611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057737 | 0.229537 | 562 | 21 | 130 | 26.761905 | 0.743649 | 0.120996 | 0 | 0 | 1 | 0 | 0.144603 | 0.046843 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0e302f950371ae0056fe68d4bc413f86d8a3bb5 | 1,180 | py | Python | tests/pure-req.py | rbanffy/bjoern | b177b62aa626cee97972a2e73f8543e6d86b5eb7 | [
"BSD-2-Clause"
] | 2,326 | 2015-01-05T21:20:29.000Z | 2022-03-31T15:42:24.000Z | tests/pure-req.py | rbanffy/bjoern | b177b62aa626cee97972a2e73f8543e6d86b5eb7 | [
"BSD-2-Clause"
] | 127 | 2015-04-09T03:33:17.000Z | 2022-03-11T13:43:55.000Z | tests/pure-req.py | rbanffy/bjoern | b177b62aa626cee97972a2e73f8543e6d86b5eb7 | [
"BSD-2-Clause"
] | 190 | 2015-01-20T10:54:38.000Z | 2022-02-23T05:45:45.000Z | import sys
import socket
conn = socket.create_connection(('0.0.0.0', 8080))
msgs = [
# 0 Keep-Alive, Transfer-Encoding chunked
'GET / HTTP/1.1\r\nConnection: Keep-Alive\r\n\r\n',
# 1,2,3 Close, EOF "encoding"
'GET / HTTP/1.1\r\n\r\n',
'GET / HTTP/1.1\r\nConnection: close\r\n\r\n',
'GET / HTTP/1.0\r\nConnection: Keep-Alive\r\n\r\n',
# 4 Bad Request
'GET /%20%20% HTTP/1.1\r\n\r\n',
# 5 Bug #14
'GET /%20abc HTTP/1.0\r\n\r\n',
# 6 Content-{Length, Type}
'GET / HTTP/1.0\r\nContent-Length: 11\r\n'
'Content-Type: text/blah\r\nContent-Fype: bla\r\n'
'Content-Tength: bla\r\n\r\nhello world',
# 7 POST memory leak
'POST / HTTP/1.0\r\nContent-Length: 1000\r\n\r\n%s' % ('a'*1000),
# 8,9 CVE-2015-0219
'GET / HTTP/1.1\r\nFoo_Bar: bad\r\n\r\n',
'GET / HTTP/1.1\r\nFoo-Bar: good\r\nFoo_Bar: bad\r\n\r\n'
]
conn.send(msgs[int(sys.argv[1])].encode())
while 1:
data = conn.recv(100)
if not data: break
print(repr(data))
if data.endswith(b'0\r\n\r\n'):
if input('new request? Y/n') == 'n':  # raw_input is Python 2 only
exit()
conn.send(b'GET / HTTP/1.1\r\nConnection: Keep-Alive\r\n\r\n')
| 28.780488 | 70 | 0.580508 | 232 | 1,180 | 2.935345 | 0.344828 | 0.073421 | 0.052863 | 0.064611 | 0.38326 | 0.358297 | 0.28047 | 0.215859 | 0.140969 | 0.099853 | 0 | 0.076514 | 0.202542 | 1,180 | 40 | 71 | 29.5 | 0.647184 | 0.128814 | 0 | 0 | 0 | 0.269231 | 0.557409 | 0.168793 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0e46a7c483f2f163fea7d76f7af991fbac7e1d8 | 1,167 | py | Python | stack.py | henryoliver/data-structures | eb3d709543ace5197236164998b8295e72187cb0 | [
"MIT"
] | null | null | null | stack.py | henryoliver/data-structures | eb3d709543ace5197236164998b8295e72187cb0 | [
"MIT"
] | null | null | null | stack.py | henryoliver/data-structures | eb3d709543ace5197236164998b8295e72187cb0 | [
"MIT"
] | null | null | null | class Stack:
def __init__(self):
self.stack = []
self.minMaxStack = []
# O(1) time | O(1) space
def peek(self):
if (len(self.stack)):
return self.stack[-1]
return None
# O(1) time | O(1) space
def pop(self):
if (len(self.stack)):
self.minMaxStack.pop()
return self.stack.pop()
return None
# Procedure
# O(1) time | O(1) space
def push(self, value):
minNumber = value
maxNumber = value
if (len(self.minMaxStack)):
lastMinMax = self.minMaxStack[-1]
minNumber = min(lastMinMax[0], minNumber)
maxNumber = max(lastMinMax[1], maxNumber)
self.stack.append(value)
self.minMaxStack.append((minNumber, maxNumber))
print(self.stack)
print(self.minMaxStack)
# O(1) time | O(1) space
def getMin(self):
if (len(self.minMaxStack)):
return self.minMaxStack[-1][0]
return None
# O(1) time | O(1) space
def getMax(self):
if (len(self.minMaxStack)):
return self.minMaxStack[-1][1]
return None
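A short runnable demo of the min/max stack idea implemented above; this is a trimmed copy of the class (renamed, without the debug prints) so the example is self-contained:

```python
class MinMaxStack:
    """Stack that also tracks the running (min, max) of its contents."""
    def __init__(self):
        self.stack = []
        self.min_max = []  # parallel stack of (min, max) pairs

    def push(self, value):
        lo, hi = value, value
        if self.min_max:
            lo = min(self.min_max[-1][0], lo)
            hi = max(self.min_max[-1][1], hi)
        self.stack.append(value)
        self.min_max.append((lo, hi))

    def pop(self):
        if self.stack:
            self.min_max.pop()
            return self.stack.pop()
        return None

    def get_min(self):
        return self.min_max[-1][0] if self.min_max else None

    def get_max(self):
        return self.min_max[-1][1] if self.min_max else None

s = MinMaxStack()
for v in (5, 2, 9):
    s.push(v)
print(s.get_min())  # 2
print(s.get_max())  # 9
print(s.pop())      # 9
print(s.get_max())  # 5
```
The key design point, as in the original class, is that every push records the min/max of the stack so far, making getMin/getMax O(1) instead of a scan.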
| 22.882353 | 55 | 0.533847 | 138 | 1,167 | 4.485507 | 0.210145 | 0.242326 | 0.048465 | 0.056543 | 0.416801 | 0.358643 | 0.358643 | 0.332795 | 0.332795 | 0 | 0 | 0.023316 | 0.338475 | 1,167 | 50 | 56 | 23.34 | 0.778497 | 0.107112 | 0 | 0.28125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0 | 0 | 0.46875 | 0.0625 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0e6d5767d002512b6411fc77a77af2956e16766 | 19,876 | py | Python | graph-to-graph/elf_correlate.py | mbrcknl/graph-refine | 78c74f18127db53606f18f775a5a50de86bc6b97 | [
"BSD-2-Clause",
"MIT"
] | 6 | 2017-03-17T17:57:48.000Z | 2019-12-28T13:03:23.000Z | graph-to-graph/elf_correlate.py | myuluo/graph-refine | 8924af0c61c3c863db7957e0ebac7ae1ceefad9d | [
"BSD-2-Clause",
"MIT"
] | 3 | 2020-04-07T09:15:55.000Z | 2022-03-23T01:36:26.000Z | graph-to-graph/elf_correlate.py | myuluo/graph-refine | 8924af0c61c3c863db7957e0ebac7ae1ceefad9d | [
"BSD-2-Clause",
"MIT"
] | 7 | 2020-04-16T00:32:41.000Z | 2022-03-09T05:25:23.000Z | #
# Copyright 2020, Data61, CSIRO (ABN 41 687 119 230)
#
# SPDX-License-Identifier: BSD-2-Clause
#
import re
import graph_refine.syntax as syntax
import graph_refine.problem as problem
import graph_refine.stack_logic as stack_logic
from graph_refine.syntax import true_term, false_term, mk_not
from graph_refine.check import *
import graph_refine.search as search
import elf_parser
import graph_refine.target_objects as target_objects
from imm_utils import *
from elf_file import *
from addr_utils import *
from call_graph_utils import gFuncsCalled
from dot_utils import toDot,toGraph
from addr_utils import gToPAddrP,callNodes
def loadCounts(dir_name):
#loop_counts.py must define a dict called loops_by_fs
context = {}
execfile('%s/loop_counts.py' % dir_name,context)
#we should have a dict of addr -> bound
assert 'loops_by_fs' in context
lbfs = context['loops_by_fs']
return lbfs
class immFunc (Borg):
def __init__(self,elf_fun=None,load_counts=False):
Borg.__init__(self)
if not elf_fun:
return
self.elf_fun = elf_fun
self.name = elf_fun.name
self.addr = elf_fun.addr
self.g_f = elf_fun.g_f
self.asm_fs = elfFile().asm_fs
self.imm_nodes = {}
self.bbs = {}
self.loaded_loop_counts = False
self.parse_only = False
self.loop_bounds = {}
# dict of f-> loop_heads -> (bound, description)
self.loops_by_fs = {}
#f -> p_n
self.p_entries = {}
if load_counts:
self.loaded_loops_by_fs = loadCounts(elfFile().dir_name)
self.loaded_loop_counts = True
def process(self):
if self.bbs != {}:
return
self.makeBinGraph()
self.loopheads = {}
self.findLoopheads()
lbfs = self.loops_by_fs
if self.loaded_loop_counts:
self.bin_loops_by_fs = self.loaded_loops_by_fs
print 'loaded loop counts from file'
else:
#build bin_loops_by_fs from loops_by_fs
self.bin_loops_by_fs = {}
blbf = self.bin_loops_by_fs
for f in lbfs:
blbf[f] = {}
p = self.f_problems[f]
pA = lambda x: phyAddrP(x,p)
loops = lbfs[f]
for p_head in loops:
assert pA(p_head) not in blbf
blbf[f][pA(p_head)] = loops[p_head]
def isBBHead(self,p_nf):
if not self.isRealNode(p_nf):
return False
g_n = self.phyAddr(p_nf)
if not type(g_n) == int:
return False
return g_n in self.bbs
#bin addr to bb addr
def bbAddr(self,addr):
bbs = self.bbs
for x in bbs:
if addr in bbs[x]:
return x
print 'addr: %x' % addr
assert False, 'BB not found !!'
def toPhyAddrs(self, p_nis):
return [self.phyAddr(x) for x in p_nis]
#find all possible entries of the loop for Chronos
def findLoopEntries(self, loop, f):
p = self.f_problems[f]
head = None
lp = [x for x in list(loop) if self.isRealNode( (x,f) )]
lpp = []
lp_phys = self.toPhyAddrs([(x,f) for x in lp])
for x in lp:
#loop entry, must be
#1. a basic block head and
#2. has >=1 edge from outside the loop
if (x, f ) in self.pf_deadends:
##gotta be halt / branch to halt
continue
phy_n = self.phyAddr((x,f))
node = self.imm_nodes[phy_n]
imm_ext_edges_to = [y for y in node.edges_to if (y not in lp_phys)]
if ( len(imm_ext_edges_to) >= 1 and self.isBBHead((x,f)) ):
lpp.append(x)
return lpp
def findLoopheads(self):
self.imm_loopheads = {}
#loopheads = {}
loopheads = []
#self.loopheads = loopheads
loops_by_fs = self.loops_by_fs
for (f,p) in [(f,self.f_problems[f]) for f in self.f_problems]:
p.compute_preds()
p.do_loop_analysis()
l = p.loop_data
if p.loop_heads():
loops_by_fs[f] = {}
for x in p.loop_heads():
fun,_ = self.pNToFunGN((x,f))
#dodge halt
if fun in elfFile().deadend_funcs:
continue
loopheads.append((x, f))
#the 0 worker_id will get ignored by genLoopHeads.
#FIXME: do this properly..
loops_by_fs[f][x] = (2**30,'dummy',0)
assert loopheads
for p_nf in loopheads:
p_n, f = p_nf
p = self.f_problems[f]
ll = p.loop_data[p_n][1]
z = self.findLoopEntries(ll, f)
#map from potential heads -> head, hack around chronos 'feature'
for q in z:
                assert q not in self.imm_loopheads, 'one addr cannot have more than one loop count!'
self.imm_loopheads[self.phyAddr((q,f))] = p_nf
return
def firstRealNodes(self,p_nf,visited = None,may_multi=False,may_call=False,skip_ret=False):
"""
Locate the first real node from, and including, p_addr,
or branch targets if it hits a branch before that.
Returns a list of p_nf
"""
elf_fun = self.elf_fun
p_n,f = p_nf
next_p_nf = p_nf
ret = []
if visited == None:
#print 'fRN on p_n %d, fun: %s' % (p_n,f)
visited = []
if p_nf in visited:
return []
visited.append(p_nf)
assert self.pf_deadends != None
while True:
if self.isRealNode(next_p_nf):
return [next_p_nf]
next_p_n , next_f, next_p = self.unpackPNF(next_p_nf)
if ( next_p_n == 'Ret' and f == self.name):
return [('Ret',f)]
elif next_p_n == 'Ret':
if skip_ret:
return []
assert False,'firstRealNodes reached Ret when skip_ret is False'
p_node, edges = self.pNodeConts(next_p_nf, may_call=may_call)
if edges == []:
return []
assert (edges)
if len(edges) > 1:
assert may_multi
for p_e in edges:
for ee in self.firstRealNodes(p_e ,visited = list(visited),may_multi=may_multi,may_call=may_call,skip_ret=skip_ret):
ret.append(ee)
return ret
else:
next_p_nf = edges[0]
#function p_n belongs to, g_n
def pNToFunGN(self,p_nf):
p_n,f,p = self.unpackPNF(p_nf)
tag = p.node_tags[p_n]
_, x = tag
f_name, g_n = x
return f_name,g_n
    #given p_n is an imm call, return whether it is a tail call
def isCallTailCall(self,p_nf):
# suc = p_n_cs[0]
g_n = self.phyAddr(p_nf)
return elf_parser.isDirectBranch(g_n)
def isStraightToRetToRoot(self,p_nf):
p_n,f,p = self.unpackPNF(p_nf)
if p_n == 'Ret' and f == self.name:
return True
elif p_n == 'Ret':
return False
if self.isRealNode(p_nf):
return False
if self.phyAddr(p_nf)=='RetToCaller':
return False
elif type(p_n) == int:
_,pf_conts = self.pNodeConts(p_nf)
p_conts = [x[0] for x in pf_conts]
if len(p_conts) == 1:
return self.isStraightToRetToRoot((p_conts[0],f))
return False
#whether the corresponding imm has a return edge
def isImmRootReturn(self,p_nf):
p_n,f = p_nf
if f != self.name :
return False
_, pf_conts = self.pNodeConts(p_nf)
for x in pf_conts:
if self.isStraightToRetToRoot(x):
return True
return False
#whether p_n leads straightly to RetToCaller
def isStraightToRetToCaller(self,p_nf):
p_n,f = p_nf
if p_n == 'Ret':
if f != self.name:
return True
else:
return False
if self.isRealNode(p_nf):
return False
if self.phyAddr(p_nf)=="RetToCaller":
return True
elif type(p_n) == int:
_,pf_conts = self.pNodeConts(p_nf)
p_conts = [x[0] for x in pf_conts]
if len(p_conts) == 1:
return self.isStraightToRetToCaller((p_conts[0],f))
return False
#All return except the root one
def isImmRetToCaller(self,p_nf):
g_n = self.phyAddr(p_nf)
p_n,f,p = self.unpackPNF(p_nf)
if isCall(p.nodes[p_n]):
return False
p_node,pf_conts = self.pNodeConts(p_nf)
p_conts = [x[0] for x in pf_conts]
        conts = [x for x in p_conts if type(x) == int]
#print ' p_n %s p_conts %s' % (p_n,p_conts)
n_rtc = 0
assert self.phyAddr(p_nf) == g_n
for pf_cont in pf_conts:
cont_n,cont_f = pf_cont
if not isCall(self.f_problems[cont_f].nodes[cont_n]):
if self.isStraightToRetToCaller(pf_cont):
ret = (pf_cont)
n_rtc += 1
if not ( n_rtc <= 1):
#print 'p_n %s g_n %s: n_rtc %s' % (p_n, self.phyAddr(p_n), n_rtc)
assert False
if n_rtc > 0:
return ret
return False
def funName(self,p_nf):
p_n,f = p_nf
fname = self.f_problems[f].nodes[p_n].fname
        if '.' in fname:
            #sanitize the name: replace '.' with '_'
            return fname.replace('.', '_')
        return fname
return fname
def makeProblem(self,f):
p = problem.Problem(None, 'Functions (%s)' % f)
p.add_entry_function(self.asm_fs[f], 'ASM')
p.do_analysis()
return p
def isSpecInsFunc(self,f):
"""
Returns whether f is the name of a special function
used to model special instruction
"""
return f.startswith ("instruction'")
def makeBinGraph(self):
"""
Prepare problems for all functions transitively called by self,
and turn this into a binary CFG
"""
self.f_problems = {}
if self.name not in elfFile().tcg:
print elfFile().tcg.keys()
tc_fs = list(elfFile().tcg[self.name])
for f in tc_fs + [self.name]:
assert '.' not in f
if self.isSpecInsFunc(f):
continue
p = problem.Problem(None, 'Functions (%s)' % f)
p.add_entry_function(self.asm_fs[f], 'ASM')
self.f_problems[f] = p
#print 'f %s, p.nodes: %d' % (f,len(p.nodes) )
#get its entry
assert len(p.entries) == 1
self.p_entries[f] = p.entries[0][0]
print 'all problems generated'
self.findAllDeadends()
print "all deadends found"
#now generate the bin graph
for f,p in self.f_problems.iteritems():
for p_n in p.nodes:
if type(p_n) != int:
continue
p_nf = (p_n,f)
if p_nf in self.pf_deadends:
continue
if self.isRealNode(p_nf):
#print 'adding: %s' % str(p_nf)
self.addImmNode(p_nf)
self.imm_entry = self.phyAddr(self.firstRealNodes((self.p_entries[self.name], self.name ))[0])
#print 'self.imm_entry %x' % self.imm_entry
self.bbs = findBBs(self.imm_entry,self)
def findAllDeadends(self):
self.pf_deadends = []
pf_deadends = self.pf_deadends
self.deadend_g_ns = set()
#Halt is a deadend function, and should never be called, it's equivalent to Err for our purpose
for dead_f in elfFile().deadend_funcs:
print 'dead_f %s' % dead_f
deadend_f_g_n = elfFile().funcs[dead_f].addr
self.deadend_g_ns.add (deadend_f_g_n)
print 'deadend_f_g_n 0x%x' % deadend_f_g_n
for (f,p) in self.f_problems.iteritems():
for p_n in p.nodes:
if self.isDeadend((p_n,f)):
pf_deadends.append((p_n,f))
def isDeadend(self,p_nf,visited=None):
'''
Determine if p_nf (p_n, function) is a deadend node
'''
if p_nf in self.pf_deadends:
return True
p_n, f, p = self.unpackPNF(p_nf)
if visited == None:
visited = []
if p_n == 'Err':
return True
if p_n == 'Ret':
return False
if p_nf in visited:
return True
if isCall(p.nodes[p_n]):
#walk into the callee problem
f = self.funName(p_nf)
#FIXME: dodge dummy functions
if 'instruction' in f:
return False
if f in elfFile().deadend_funcs:
return True
p_callee = self.f_problems[f]
assert len(p_callee.entries) == 1
p_callee_n = p_callee.entries[0][0]
return self.isDeadend((p_callee_n,f),visited=visited + [p_nf])
if type(p_n) == int and self.phyAddr(p_nf) == 'RetToCaller':
return False
g_n = self.phyAddr(p_nf)
if g_n in self.deadend_g_ns:
return True
#note: pNodeConts ensures we stay in the same problem
node,fconts = self.pNodeConts(p_nf)
conts = [ x[0] for x in fconts]
for p_c in conts:
assert p_c != p_n
if not self.isDeadend( (p_c,f), visited = visited + [p_nf]):
return False
#all ends are dead, thus deadend
return True
def unpackPNF(self,p_nf):
p_n,f = p_nf
p = self.f_problems[f]
return (p_n,f,p)
def phyAddr (self,p_nf) :
p_n, f , p = self.unpackPNF(p_nf)
if not isinstance(p_n,int):
return p_n
_,x = p.node_tags[p_n]
if x == 'LoopReturn':
return 'LoopReturn'
try:
f_name,g_addr = x
except:
print f
print 'tags: %s'% str(p.node_tags[p_n])
assert False
return g_addr
#must not reach Ret
def pNodeConts(self, p_nf, no_deadends=False, may_call = False):
p_n,f, p = self.unpackPNF(p_nf)
p_node = p.nodes[p_n]
if isCall(p_node):
assert may_call
fun_called = self.funName(p_nf)
p = self.f_problems[fun_called]
entry = self.p_entries[fun_called]
pf_conts = [(entry,fun_called)]
return p_node, pf_conts
assert p_n != 'Ret'
p_conts = filter(lambda x: x != 'Err', p_node.get_conts())
if no_deadends:
            p_conts = filter(lambda x: (x, f) not in self.pf_deadends, p_conts)
pf_conts = [(x , f) for x in p_conts]
return p_node,pf_conts
def isRealNode(self,p_nf):
p_n,f = p_nf
if p_n == 'Ret':
return False
g_n = self.phyAddr(p_nf)
if g_n == 'RetToCaller':
return False
elif self.isLoopReturn(p_nf):
return False
elif type(g_n) != int:
print 'g_n %s' % str(g_n)
            assert False, 'g_n expected to be of type int'
#elif g_n % 4 == 0 and not self.isLoopReturn(p_nf):
elif g_n % 4 == 0:
assert not self.isLoopReturn(p_nf)
return True
else:
return False
def isLoopReturn(self,p_nf):
p_n,f = p_nf
p = self.f_problems[f]
tag = p.node_tags[p_n]
return tag[1] == 'LoopReturn'
def addImmNode(self,p_nf):
imm_nodes = self.imm_nodes
g_n = self.phyAddr(p_nf)
p_node,pf_conts = self.pNodeConts(p_nf)
p_conts = [x[0] for x in pf_conts]
p_n,f,p = self.unpackPNF(p_nf)
#print "adding imm_node p_n: %s f: %s" % (p_n,f)
if g_n in imm_nodes:
#we have been here before
node = imm_nodes[g_n]
else:
node = immNode(g_n,rawVals(g_n))
imm_nodes[g_n] = node
dont_emit = []
p_imm_return_to_caller_edge = self.isImmRetToCaller(p_nf)
call_pn = self.getCallTarg(p_nf)
if call_pn:
fun_called = self.funName((call_pn, f))
if self.isSpecInsFunc(fun_called):
#Hack: go straight to the return node, do nothing else
next_addrs = p.nodes[call_pn].get_conts()
assert len(next_addrs) == 1
next_addr = next_addrs[0]
assert next_addr not in ['Ret','Err']
phy_next_addr = self.phyAddr((next_addr,f))
i_e = immEdge(phy_next_addr, emit = True)
node.addEdge(i_e)
return
imm_call = self.parseImmCall(p_nf)
assert not p_imm_return_to_caller_edge
g_call_targ,g_ret_addr,is_tail_call = imm_call
dont_emit.append(g_call_targ)
node.addCallRetEdges(g_call_targ, g_ret_addr,is_tail_call)
elif p_imm_return_to_caller_edge or self.isImmRootReturn(p_nf):
node.addRetEdge()
        #add edges to the imm node, ignore Err and halt
for p_targ in p_conts:
if type(p_targ) == int and (p_targ, f) not in self.pf_deadends:
if p_targ == 'Ret':
continue
edges = self.firstRealNodes((p_targ,f),may_multi=True,may_call=True,skip_ret=True)
for p_e in edges :
#dodge halt
if (p_e) in self.pf_deadends:
continue
g_e = self.phyAddr(p_e)
assert g_e != None
if g_e == 'Ret':
continue
assert g_e != 'Ret'
i_e = immEdge(g_e,emit = g_e not in dont_emit)
node.addEdge(i_e)
def retPF(self,call_p_nf):
p_n,f,p = self.unpackPNF(call_p_nf)
assert len(p.nodes[p_n].get_conts()) == 1
return ( (p.nodes[p_n].get_conts())[0] , f)
def getCallTarg(self, p_nf):
p_n,f,p = self.unpackPNF(p_nf)
_, pf_conts = self.pNodeConts(p_nf)
p_conts = map(lambda x: x[0],pf_conts)
#is Imm call iff there is a successor of kind Call in the g graph
p_n_cs = filter(lambda p_n_c:
type(p_n_c) == int
and not self.isLoopReturn(( p_n_c, f))
and isCall(self.gNode((p_n_c,f)))
, p_conts)
if not p_n_cs:
return None
assert len(p_n_cs) == 1
#return the p_n of the call node
return p_n_cs[0]
def parseImmCall(self,p_nf):
"""
Returns (entry point to the called function, return addr, is_tailcall)
"""
call_pn = self.getCallTarg(p_nf)
assert call_pn != None
p_n,f,p = self.unpackPNF(p_nf)
#print "p_n: %s, f: %s" % (p_n,f)
p_nodes = p.nodes
#find the return addr
#print "call_pn = %d" % call_pn
suc = self.firstRealNodes( (call_pn, f) ,may_multi=False,may_call=True)
pf_call_targ = suc[0]
g_call_targ = self.phyAddr(pf_call_targ)
#locate the call return address
f_caller, _ = self.pNToFunGN(p_nf)
is_tailcall = self.isCallTailCall(p_nf)
if not is_tailcall:
#return the return addr
phy_ret_addr = self.phyAddr(self.retPF((call_pn,f)))
else:
phy_ret_addr = None
assert type(phy_ret_addr) == int or is_tailcall, "g_call_targ %s phy_ret_addr %s" % (g_call_targ, phy_ret_addr)
#print 'call detected: phy_ret_addr %x' % phy_ret_addr
return (g_call_targ, phy_ret_addr,is_tailcall)
def gNode(self,p_nf):
p_n,f,p = self.unpackPNF(p_nf)
tag = p.node_tags[p_n]
f = tag[1][0]
g_n = tag[1][1]
return self.asm_fs[f].nodes[g_n]
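The `isDeadend` walk above is a recursive "all successors are dead" check; the following standalone sketch illustrates the same idea on a toy graph (hypothetical encoding with plain ints and `'Ret'`/`'Err'` markers, not this module's node types):

```python
def is_deadend(graph, n, visited=()):
    """A node is dead iff every path from it ends in Err (cycles count as dead)."""
    if n == 'Err':
        return True
    if n == 'Ret':
        return False
    if n in visited:
        return True  # a cycle with no escape is dead
    succs = graph.get(n, [])
    if not succs:
        return True  # no continuations: all (zero) ends are dead
    return all(is_deadend(graph, s, visited + (n,)) for s in succs)

g = {1: [2, 'Err'], 2: ['Err'], 3: [1, 'Ret']}
assert is_deadend(g, 2)       # only path is Err
assert is_deadend(g, 1)       # both continuations are dead
assert not is_deadend(g, 3)   # a Ret path keeps the node live
```

As in the class above, a single live continuation is enough to make a node live.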
| 33.574324 | 132 | 0.550513 | 2,886 | 19,876 | 3.563063 | 0.119543 | 0.027132 | 0.007877 | 0.008169 | 0.25158 | 0.188272 | 0.136439 | 0.126617 | 0.112516 | 0.095984 | 0 | 0.005503 | 0.350875 | 19,876 | 591 | 133 | 33.631134 | 0.791505 | 0.098209 | 0 | 0.295652 | 0 | 0 | 0.028772 | 0 | 0 | 0 | 0 | 0.003384 | 0.063043 | 0 | null | null | 0 | 0.032609 | null | null | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0f788d3a6de0a5319ec594f010d2fc7e6bb3c93 | 3,206 | py | Python | third_party/logging.py | sweeneyb/iot-core-micropython | 7fc341902fbf8fa587f0dc3aa10c0803a5e0d6a5 | [
"Apache-2.0"
] | 50 | 2019-06-12T22:30:43.000Z | 2022-02-22T18:30:26.000Z | third_party/logging.py | cashoefman/Resto-Score-for-Hack-2021-for-Positivity | cb3227c8db4b1be0e46cfe4b8d3973c1af4fe49b | [
"Apache-2.0"
] | 12 | 2019-07-05T09:39:45.000Z | 2022-03-12T05:45:01.000Z | third_party/logging.py | cashoefman/Resto-Score-for-Hack-2021-for-Positivity | cb3227c8db4b1be0e46cfe4b8d3973c1af4fe49b | [
"Apache-2.0"
] | 16 | 2019-06-13T09:12:24.000Z | 2022-03-12T05:47:22.000Z |
# MIT License
#
# Copyright (c) 2019 Johan Brichau
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import sys
CRITICAL = 50
ERROR = 40
WARNING = 30
INFO = 20
DEBUG = 10
NOTSET = 0
_level_dict = {
CRITICAL: "CRIT",
ERROR: "ERROR",
WARNING: "WARN",
INFO: "INFO",
DEBUG: "DEBUG",
}
_stream = sys.stderr
class Logger:
level = NOTSET
def __init__(self, name):
self.name = name
def _level_str(self, level):
l = _level_dict.get(level)
if l is not None:
return l
return "LVL%s" % level
def setLevel(self, level):
self.level = level
def isEnabledFor(self, level):
return level >= (self.level or _level)
def log(self, level, msg, *args):
if level >= (self.level or _level):
_stream.write("%s:%s:" % (self._level_str(level), self.name))
if not args:
print(msg, file=_stream)
else:
print(msg % args, file=_stream)
def debug(self, msg, *args):
self.log(DEBUG, msg, *args)
def info(self, msg, *args):
self.log(INFO, msg, *args)
def warning(self, msg, *args):
self.log(WARNING, msg, *args)
def error(self, msg, *args):
self.log(ERROR, msg, *args)
def critical(self, msg, *args):
self.log(CRITICAL, msg, *args)
def exc(self, e, msg, *args):
self.log(ERROR, msg, *args)
sys.print_exception(e, _stream)
def exception(self, msg, *args):
self.exc(sys.exc_info()[1], msg, *args)
_level = INFO
_loggers = {}
def getLogger(name):
if name in _loggers:
return _loggers[name]
l = Logger(name)
_loggers[name] = l
return l
def info(msg, *args):
getLogger(None).info(msg, *args)
def debug(msg, *args):
getLogger(None).debug(msg, *args)
def basicConfig(level=INFO, filename=None, stream=None, format=None):
global _level, _stream
_level = level
if stream:
_stream = stream
if filename is not None:
print("logging.basicConfig: filename arg is not supported")
if format is not None:
print("logging.basicConfig: format arg is not supported")
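The core of the `Logger.log` gate above is a threshold comparison plus `%`-formatting; a minimal standalone sketch of that logic, runnable under CPython with no MicroPython dependencies:

```python
import io

LEVEL_NAMES = {10: "DEBUG", 20: "INFO", 30: "WARN", 40: "ERROR", 50: "CRIT"}

def log(stream, threshold, level, name, msg, *args):
    # Emit only records at or above the configured threshold,
    # formatting with % only when args are supplied.
    if level >= threshold:
        stream.write("%s:%s:" % (LEVEL_NAMES[level], name))
        stream.write((msg % args if args else msg) + "\n")

buf = io.StringIO()
log(buf, 20, 10, "app", "hidden debug detail")     # below threshold, dropped
log(buf, 20, 40, "app", "failed after %d tries", 3)
print(buf.getvalue())  # ERROR:app:failed after 3 tries
```

Deferring the `%` interpolation until after the level check, as the module does, avoids formatting work for suppressed records.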
| 27.637931 | 80 | 0.647224 | 446 | 3,206 | 4.585202 | 0.343049 | 0.06846 | 0.037653 | 0.04401 | 0.114425 | 0.056724 | 0.025428 | 0 | 0 | 0 | 0 | 0.006664 | 0.251092 | 3,206 | 115 | 81 | 27.878261 | 0.845065 | 0.333437 | 0 | 0.056338 | 0 | 0 | 0.061939 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225352 | false | 0 | 0.014085 | 0.014085 | 0.338028 | 0.070423 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0f88df1e0c707f34a922f07d69b31d76bda43ee | 536 | py | Python | assessments/migrations/0003_auto_20210212_1943.py | acounsel/django_msat | 86a54e43429001cb6433e28b294d6b8a94b97e6e | [
"MIT"
] | null | null | null | assessments/migrations/0003_auto_20210212_1943.py | acounsel/django_msat | 86a54e43429001cb6433e28b294d6b8a94b97e6e | [
"MIT"
] | null | null | null | assessments/migrations/0003_auto_20210212_1943.py | acounsel/django_msat | 86a54e43429001cb6433e28b294d6b8a94b97e6e | [
"MIT"
] | null | null | null |
# Generated by Django 3.1.6 on 2021-02-12 19:43
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('assessments', '0002_auto_20210212_1904'),
]
operations = [
migrations.AlterField(
model_name='country',
name='region',
field=models.CharField(blank=True, choices=[('america', 'Americas'), ('europe', 'Europe'), ('africa', 'Africa'), ('asia', 'Asia'), ('oceania', 'Oceania')], max_length=100, null=True),
),
]
| 28.210526 | 195 | 0.600746 | 57 | 536 | 5.561404 | 0.807018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082324 | 0.229478 | 536 | 18 | 196 | 29.777778 | 0.68523 | 0.083955 | 0 | 0 | 1 | 0 | 0.220859 | 0.047035 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e0f9f1296b023c0d6516cc11729dc5f2eb6ecdd7 | 4,291 | py | Python | sdk/python/tests/dsl/metadata_tests.py | ConverJens/pipelines | a1d453af214ec9eebad73fb05845dd3499d60d00 | [
"Apache-2.0"
] | 6 | 2020-05-19T02:35:11.000Z | 2020-05-29T17:58:42.000Z | sdk/python/tests/dsl/metadata_tests.py | ConverJens/pipelines | a1d453af214ec9eebad73fb05845dd3499d60d00 | [
"Apache-2.0"
] | 1,932 | 2021-01-25T11:23:37.000Z | 2022-03-31T17:10:18.000Z | sdk/python/tests/dsl/metadata_tests.py | ConverJens/pipelines | a1d453af214ec9eebad73fb05845dd3499d60d00 | [
"Apache-2.0"
] | 11 | 2020-05-19T22:26:41.000Z | 2021-01-25T09:56:21.000Z |
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from kfp.components.structures import ComponentSpec, InputSpec, OutputSpec
import unittest
class TestComponentMeta(unittest.TestCase):
def test_to_dict(self):
component_meta = ComponentSpec(name='foobar',
description='foobar example',
inputs=[InputSpec(name='input1',
description='input1 desc',
type={'GCSPath': {
'bucket_type': 'directory',
'file_type': 'csv'
}},
default='default1'
),
InputSpec(name='input2',
description='input2 desc',
type={'TFModel': {
'input_data': 'tensor',
'version': '1.8.0'
}},
default='default2'
),
InputSpec(name='input3',
description='input3 desc',
type='Integer',
default='default3'
),
],
outputs=[OutputSpec(name='output1',
description='output1 desc',
type={'Schema': {
'file_type': 'tsv'
}},
)
]
)
golden_meta = {
'name': 'foobar',
'description': 'foobar example',
'inputs': [
{
'name': 'input1',
'description': 'input1 desc',
'type': {
'GCSPath': {
'bucket_type': 'directory',
'file_type': 'csv'
}
},
'default': 'default1'
},
{
'name': 'input2',
'description': 'input2 desc',
'type': {
'TFModel': {
'input_data': 'tensor',
'version': '1.8.0'
}
},
'default': 'default2'
},
{
'name': 'input3',
'description': 'input3 desc',
'type': 'Integer',
'default': 'default3'
}
],
'outputs': [
{
'name': 'output1',
'description': 'output1 desc',
'type': {
'Schema': {
'file_type': 'tsv'
}
},
}
]
}
self.assertEqual(component_meta.to_dict(), golden_meta)
| 42.91 | 88 | 0.316476 | 240 | 4,291 | 5.595833 | 0.466667 | 0.047655 | 0.01936 | 0.023827 | 0.486969 | 0.486969 | 0.427401 | 0.427401 | 0.427401 | 0.427401 | 0 | 0.021053 | 0.601491 | 4,291 | 99 | 89 | 43.343434 | 0.764327 | 0.127709 | 0 | 0.256098 | 0 | 0 | 0.147761 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 1 | 0.012195 | false | 0 | 0.02439 | 0 | 0.04878 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
46023337d875860607f4a1fabbf53f963a80b88e | 2,647 | py | Python | users/migrations/0008_profile_fields_optional.py | mitodl/mit-xpro | 981d6c87d963837f0b9ccdd996067fe81394dba4 | [
"BSD-3-Clause"
] | 10 | 2019-02-20T18:41:32.000Z | 2021-07-26T10:39:58.000Z | users/migrations/0008_profile_fields_optional.py | mitodl/mit-xpro | 981d6c87d963837f0b9ccdd996067fe81394dba4 | [
"BSD-3-Clause"
] | 2,226 | 2019-02-20T20:03:57.000Z | 2022-03-31T11:18:56.000Z | users/migrations/0008_profile_fields_optional.py | mitodl/mit-xpro | 981d6c87d963837f0b9ccdd996067fe81394dba4 | [
"BSD-3-Clause"
] | 4 | 2020-08-26T19:26:02.000Z | 2021-03-09T17:46:47.000Z |
# Generated by Django 2.2.3 on 2019-07-15 19:24
from django.db import migrations, models
def backpopulate_incomplete_profiles(apps, schema):
"""Backpopulate users who don't have a profile record"""
User = apps.get_model("users", "User")
Profile = apps.get_model("users", "Profile")
for user in User.objects.annotate(
has_profile=models.Exists(Profile.objects.filter(user=models.OuterRef("pk")))
).filter(has_profile=False):
Profile.objects.get_or_create(user=user)
def remove_incomplete_profiles(apps, schema):
"""Delete records that will cause rollbacks on nullable/blankable fields to fail"""
Profile = apps.get_model("users", "Profile")
Profile.objects.filter(
models.Q(birth_year__isnull=True)
| models.Q(gender__exact="")
| models.Q(job_title__exact="")
| models.Q(company__exact="")
).delete()
class Migration(migrations.Migration):
dependencies = [("users", "0007_validate_country_and_state")]
operations = [
migrations.AlterField(
model_name="profile",
name="birth_year",
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name="profile",
name="company",
field=models.CharField(blank=True, default="", max_length=128),
),
migrations.AlterField(
model_name="profile",
name="gender",
field=models.CharField(
blank=True,
choices=[
("m", "Male"),
("f", "Female"),
("o", "Other/Prefer Not to Say"),
],
default="",
max_length=10,
),
),
migrations.AlterField(
model_name="profile",
name="industry",
field=models.CharField(blank=True, default="", max_length=60),
),
migrations.AlterField(
model_name="profile",
name="job_function",
field=models.CharField(blank=True, default="", max_length=60),
),
migrations.AlterField(
model_name="profile",
name="job_title",
field=models.CharField(blank=True, default="", max_length=128),
),
migrations.AlterField(
model_name="profile",
name="leadership_level",
field=models.CharField(blank=True, default="", max_length=60),
),
migrations.RunPython(
backpopulate_incomplete_profiles, reverse_code=remove_incomplete_profiles
),
]
| 32.280488 | 87 | 0.574613 | 267 | 2,647 | 5.520599 | 0.397004 | 0.09498 | 0.118725 | 0.13772 | 0.423338 | 0.403664 | 0.28019 | 0.28019 | 0.28019 | 0.28019 | 0 | 0.01779 | 0.299207 | 2,647 | 81 | 88 | 32.679012 | 0.776819 | 0.066113 | 0 | 0.447761 | 1 | 0 | 0.09102 | 0.012597 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029851 | false | 0 | 0.014925 | 0 | 0.089552 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4602f7f46fb5876b81f698415a8de6581f32551a | 2,827 | py | Python | osr_stat_generator/generator.py | brian-thomas/osr_stat_generator | 89f6a71e17c274befa3af7222a24c34a77f1f40e | [
"MIT"
] | null | null | null | osr_stat_generator/generator.py | brian-thomas/osr_stat_generator | 89f6a71e17c274befa3af7222a24c34a77f1f40e | [
"MIT"
] | null | null | null | osr_stat_generator/generator.py | brian-thomas/osr_stat_generator | 89f6a71e17c274befa3af7222a24c34a77f1f40e | [
"MIT"
] | null | null | null |
"""
OSR (LOTFP) stat generator.
"""
import random
def d(num_sides):
"""
Represents rolling a die of size 'num_sides'.
Returns random number from that size die
"""
return random.randint(1, num_sides)
def xdy(num_dice, num_sides):
""" represents rolling num_dice of size num_sides.
Returns random number from that many dice being 'rolled'.
"""
return sum(d(num_sides) for i in range(num_dice))
class LotFP_Stat (object):
    @staticmethod
    def _get_bonus(attribute):
if attribute <= 3:
return -3
if attribute >= 4 and attribute <= 5:
return -2
if attribute >= 6 and attribute <= 8:
return -1
if attribute >= 13 and attribute <= 15:
return 1
if attribute >= 16 and attribute <= 17:
return 2
if attribute >= 18:
return 3
# the default
return 0
@property
def bonus(self): return self._bonus
@property
def name(self): return self._name
@property
def value(self): return self._value
def __str__(self):
return (f"%s : %s(%s)" % (self.name, self.value, self.bonus))
def __init__(self, name, value):
self._name = name
self._value = value
self._bonus = LotFP_Stat._get_bonus(value)
class Stat_Set(object):
"""
Define a package of OSR/DnD stats
"""
_Stat_Name = ["CON", "DEX", "INT", "WIS", "STR", "CHA"]
@property
def stats(self)->list:
return self._stats
def sum(self)->int:
# get a summed value for all stats in this set
ssum = 0
for s in self.stats:
ssum += s.value
return ssum
@property
def is_hopeless(self)->bool:
        """ Determine whether the character is hopeless, i.e. so
        poorly statted that the sum of all bonuses is less than 1.
        """
bonuses = [s.bonus for s in self._stats]
if sum(bonuses) < 1:
return True
return False
    def __str__(self)->str:
        string = ""
        for stat in self.stats:
            string = string + " " + str(stat.value) + " (" + str(stat.bonus) + ")"
        return string
def __init__(self, stats):
self._stats = []
for i in range(0,len(stats)):
self._stats.append(LotFP_Stat(Stat_Set._Stat_Name[i], stats[i]))
def generate_stats (nrof_sets:int=1, no_hopeless_char:bool=True)->list:
""" Generate stats for a character.
"""
stat_sets = []
while (nrof_sets > 0):
stats = []
for i in range (0, 6):
stats.append(xdy(3,6))
stat_set = Stat_Set(stats)
# no "hopeless" characters
if no_hopeless_char and stat_set.is_hopeless:
continue
stat_sets.append(stat_set)
nrof_sets -= 1
return stat_sets
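The modifier bands hard-coded in `LotFP_Stat._get_bonus` can be cross-checked against a compact reimplementation — a sketch assuming the standard 3–18 ability range:

```python
def bonus(score):
    # Ability modifier bands, as encoded in _get_bonus above:
    # 3 -> -3, 4-5 -> -2, 6-8 -> -1, 9-12 -> 0, 13-15 -> +1, 16-17 -> +2, 18 -> +3
    if score <= 3:
        return -3
    if score <= 5:
        return -2
    if score <= 8:
        return -1
    if score <= 12:
        return 0
    if score <= 15:
        return 1
    if score <= 17:
        return 2
    return 3

assert [bonus(s) for s in (3, 4, 8, 10, 13, 16, 18)] == [-3, -2, -1, 0, 1, 2, 3]
```

Because the bands are contiguous, a chain of upper-bound checks is enough; the class version spells out both bounds, which makes the table read more like the rulebook.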
| 22.798387 | 81 | 0.562434 | 371 | 2,827 | 4.110512 | 0.277628 | 0.031475 | 0.011803 | 0.021639 | 0.095738 | 0.076066 | 0.05377 | 0.05377 | 0.05377 | 0 | 0 | 0.017923 | 0.328971 | 2,827 | 123 | 82 | 22.98374 | 0.785978 | 0.161655 | 0 | 0.072464 | 0 | 0 | 0.014499 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.202899 | false | 0 | 0.014493 | 0.072464 | 0.492754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
460510280808ce5edf4deb0f4ea853f4de886f05 | 4,149 | py | Python | TextRank/textrank.py | nihanjali/PageRank | baea9d89fb962fd1311a61127123bf36d9d2dd38 | [
"Apache-2.0"
] | null | null | null | TextRank/textrank.py | nihanjali/PageRank | baea9d89fb962fd1311a61127123bf36d9d2dd38 | [
"Apache-2.0"
] | null | null | null | TextRank/textrank.py | nihanjali/PageRank | baea9d89fb962fd1311a61127123bf36d9d2dd38 | [
"Apache-2.0"
] | null | null | null |
import os
import sys
import copy
import collections
import nltk
import nltk.tokenize
sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
import pagerank
'''
textrank.py
-----------
This module implements TextRank, an unsupervised keyword
significance scoring algorithm. TextRank builds a weighted
graph representation of a document using words as nodes
and cooccurrence frequencies between pairs of words as edge
weights. It then applies PageRank to this graph, and
treats the PageRank score of each word as its significance.
The original research paper proposing this algorithm is
available here:
https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf
'''
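As a standalone illustration of the pipeline described above — a co-occurrence graph followed by PageRank-style power iteration — here is a toy version with plain dicts. This is a sketch of the idea, not this module's implementation (which delegates to `pagerank.powerIteration` and filters by POS tag):

```python
from collections import defaultdict, Counter

def toy_textrank(words, window=2, rsp=0.15, iterations=50):
    # Symmetric co-occurrence weights within a sliding window.
    weights = defaultdict(Counter)
    for i, w in enumerate(words):
        for j in range(i - window, i + window + 1):
            if 0 <= j < len(words) and j != i:
                weights[w][words[j]] += 1.0
    nodes = list(weights)
    score = dict((w, 1.0 / len(nodes)) for w in nodes)
    for _ in range(iterations):
        new = {}
        for w in nodes:
            # Probability mass flowing into w from each neighbour v,
            # normalised by v's total outgoing edge weight.
            rank = sum(score[v] * weights[v][w] / sum(weights[v].values())
                       for v in nodes if weights[v][w])
            new[w] = rsp / len(nodes) + (1.0 - rsp) * rank
        score = new
    return score

scores = toy_textrank("the quick fox saw the lazy fox".split())
```

Words with heavier co-occurrence degree (here "fox", which appears twice) end up with higher scores, and the teleport term `rsp` keeps the iteration well-defined even on sparse graphs.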
## TextRank #####################################################################################
def __preprocessDocument(document, relevantPosTags):
'''
This function accepts a string representation
of a document as input, and returns a tokenized
list of words corresponding to that document.
'''
words = __tokenizeWords(document)
posTags = __tagPartsOfSpeech(words)
# Filter out words with irrelevant POS tags
filteredWords = []
for index, word in enumerate(words):
word = word.lower()
tag = posTags[index]
if not __isPunctuation(word) and tag in relevantPosTags:
filteredWords.append(word)
return filteredWords
def textrank(document, windowSize=2, rsp=0.15, relevantPosTags=["NN", "ADJ"]):
'''
This function accepts a string representation
of a document and three hyperperameters as input.
It returns Pandas matrix (that can be treated
as a dictionary) that maps words in the
document to their associated TextRank significance
scores. Note that only words that are classified
as having relevant POS tags are present in the
map.
'''
# Tokenize document:
words = __preprocessDocument(document, relevantPosTags)
# Build a weighted graph where nodes are words and
# edge weights are the number of times words cooccur
# within a window of predetermined size. In doing so
    # we double count each cooccurrence, but that will not
# alter relative weights which ultimately determine
# TextRank scores.
edgeWeights = collections.defaultdict(lambda: collections.Counter())
for index, word in enumerate(words):
for otherIndex in range(index - windowSize, index + windowSize + 1):
if otherIndex >= 0 and otherIndex < len(words) and otherIndex != index:
otherWord = words[otherIndex]
edgeWeights[word][otherWord] += 1.0
# Apply PageRank to the weighted graph:
wordProbabilities = pagerank.powerIteration(edgeWeights, rsp=rsp)
    wordProbabilities = wordProbabilities.sort_values(ascending=False)
return wordProbabilities
## NLP utilities ################################################################################
def __asciiOnly(string):
return "".join([char if ord(char) < 128 else "" for char in string])
def __isPunctuation(word):
return word in [".", "?", "!", ",", "\"", ":", ";", "'", "-"]
def __tagPartsOfSpeech(words):
return [pair[1] for pair in nltk.pos_tag(words)]
def __tokenizeWords(sentence):
return nltk.tokenize.word_tokenize(sentence)
## tests ########################################################################################
def applyTextRank(fileName, title="a document"):
    print()
    print("Reading \"%s\" ..." % title)
    filePath = os.path.join(os.path.dirname(__file__), fileName)
    with open(filePath) as documentFile:
        document = documentFile.read()
    document = __asciiOnly(document)
    print("Applying TextRank to \"%s\" ..." % title)
    keywordScores = textrank(document)
    print()
    header = "Keyword Significance Scores for \"%s\":" % title
    print(header)
    print("-" * len(header))
    print(keywordScores)
    print()
def main():
applyTextRank("Cinderalla.txt", "Cinderalla")
applyTextRank("Beauty_and_the_Beast.txt", "Beauty and the Beast")
applyTextRank("Rapunzel.txt", "Rapunzel")
if __name__ == "__main__":
main()
# examples/index/context.py (rmorshea/viewdom, MIT)
from viewdom import html, render, use_context, Context
expected = '<h1>My Todos</h1><ul><li>Item: first</li></ul>'
# start-after
title = 'My Todos'
todos = ['first']
def Todo(label):
prefix = use_context('prefix')
return html('<li>{prefix}{label}</li>')
def TodoList(todos):
return html('<ul>{[Todo(label) for label in todos]}</ul>')
result = render(html('''
<{Context} prefix="Item: ">
<h1>{title}</h1>
<{TodoList} todos={todos} />
<//>
'''))
# '<h1>My Todos</h1><ul><li>Item: first</li></ul>'
# snewpdag/plugins/Copy.py (SNEWS2/snewpdag, BSD-3-Clause)
"""
Copy - copy fields into other (possibly new) fields
configuration:
on: list of 'alert', 'revoke', 'report', 'reset' (optional: def 'alert' only)
cp: ( (in,out), ... )
Field names take the form of dir1/dir2/dir3,
which in the payload will be data[dir1][dir2][dir3]
"""
import logging
from snewpdag.dag import Node
class Copy(Node):
def __init__(self, cp, **kwargs):
self.cp = []
for op in cp:
src = op[0].split('/')
dst = op[1].split('/')
self.cp.append( [src, dst[:-1], dst[-1]] )
self.on = kwargs.pop('on', [ 'alert' ])
super().__init__(**kwargs)
def copy(self, data):
for op in self.cp:
v = data # should just follow references
      missing = False
      for k in op[0]:
        if k in v:
          v = v[k]
        else:
          logging.warning('Field {} not found from source {}'.format(k, op[0]))
          missing = True
          break
      if missing:
        continue  # skip this op entirely instead of copying a stale value
      # v should now hold the value to be copied
d = data
for k in op[1]:
if k not in d:
d[k] = {}
d = d[k]
d[op[2]] = v
return data
def alert(self, data):
return self.copy(data) if 'alert' in self.on else True
def revoke(self, data):
return self.copy(data) if 'revoke' in self.on else True
def reset(self, data):
return self.copy(data) if 'reset' in self.on else True
def report(self, data):
return self.copy(data) if 'report' in self.on else True
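The path logic above (split on `/`, walk down for the source, create intermediate dicts for the destination) can be exercised on a plain dict without the `Node` machinery. A standalone sketch:

```python
def copy_field(data, src_path, dst_path):
    """Copy data[src...] into data[dst...], creating intermediate dicts."""
    v = data
    for k in src_path.split('/'):
        v = v[k]  # follow references down to the source value
    parts = dst_path.split('/')
    d = data
    for k in parts[:-1]:
        d = d.setdefault(k, {})  # create missing intermediate levels
    d[parts[-1]] = v
    return data

payload = {'a': {'b': 1}}
copy_field(payload, 'a/b', 'x/y/z')
```

After the call, `payload['x']['y']['z']` holds the same value as `payload['a']['b']`; the source is left untouched.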
# pages/aboutus.py (BuildWeek-AirBnB-Optimal-Price/application, MIT)
'''
Houses each team member's link to a
personal GitHub Pages site, website, or blog.
RJProctor
'''
# Imports from 3rd party libraries
import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
from app import app
# 1 column layout
# https://dash-bootstrap-components.opensource.faculty.ai/l/components/layout
column1 = dbc.Col(
[
dcc.Markdown(
"""
## The Team:
Select a link to learn more about each of our
team members.
"""
),
],
)
# create footer
column2 = dbc.Col(
[
dcc.Markdown(
"""
**Debbie Cohen**
https://github.com/dscohen75/dscohen75.github.io
https://medium.com/@debbiecohen_22419
            ---
**Moe Fa**
---
**Eduardo Padilla**
https://medium.com/@eprecendez
---
**R. Jeannine Proctor**
https://jproctor-rebecca.github.io/
https://medium.com/@jproctor.m.ed.tn
---
**Code Review Team Members:**
Taylor Curran,
Regina Dircio,
Robert Giuffrie,
Ryan Herr,
Brendon Hoss,
Anika Nacey,
Tomas Phillips,
Raymond Tan,
and
Rebecca Duke-Wiesenberg
"""
),
],
)
layout = dbc.Row([column1, column2])
# 3. count_words/solution.py (dcragusa/WeeklyPythonExerciseB2, MIT)
import os
from glob import iglob
from concurrent.futures import ThreadPoolExecutor
def count_words_file(path):
if not os.path.isfile(path):
return 0
with open(path) as file:
return sum(len(line.split()) for line in file)
def count_words_sequential(pattern):
return sum(map(count_words_file, iglob(pattern)))
def count_words_threading(pattern):
with ThreadPoolExecutor() as pool:
return sum(pool.map(count_words_file, iglob(pattern)))
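A quick self-contained check of the sequential version against files written to a temporary directory (the file names and contents below are made up for the demo):

```python
import os
import tempfile
from glob import iglob

def count_words_file(path):
    if not os.path.isfile(path):
        return 0
    with open(path) as file:
        return sum(len(line.split()) for line in file)

def count_words_sequential(pattern):
    return sum(map(count_words_file, iglob(pattern)))

with tempfile.TemporaryDirectory() as d:
    for name, text in [("a.txt", "one two three"), ("b.txt", "four five")]:
        with open(os.path.join(d, name), "w") as f:
            f.write(text)
    total = count_words_sequential(os.path.join(d, "*.txt"))
```

The threaded variant should return the same total; it only changes how the per-file counts are computed.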
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# tests/test_apyhgnc.py (robertopreste/apyhgnc, MIT)
# Created by Roberto Preste
import pytest
import asyncio
from pandas.testing import assert_frame_equal
from apyhgnc import apyhgnc
# apyhgnc.info
def test_info_searchableFields(searchable_fields):
result = apyhgnc.info().searchableFields
assert result == searchable_fields
def test_info_storedFields(stored_fields):
result = apyhgnc.info().storedFields
assert result == stored_fields
def test_info_url():
result = apyhgnc.info().url
assert result == "http://rest.genenames.org/info"
# apyhgnc.fetch
def test_fetch_symbol_znf3(df_fetch_symbol_znf3):
result = apyhgnc.fetch("symbol", "ZNF3")
assert_frame_equal(result, df_fetch_symbol_znf3)
def test_fetch_symbol_znf3_async(df_fetch_symbol_znf3):
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
apyhgnc.afetch("symbol", "ZNF3")
)
assert_frame_equal(result, df_fetch_symbol_znf3)
# apyhgnc.search
def test_search_all_braf(df_search_all_braf):
result = apyhgnc.search("BRAF")
assert_frame_equal(result, df_search_all_braf)
def test_search_all_braf_async(df_search_all_braf):
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
apyhgnc.asearch("BRAF")
)
assert_frame_equal(result, df_search_all_braf)
def test_search_symbol_braf(df_search_symbol_braf):
result = apyhgnc.search("symbol", "BRAF")
assert_frame_equal(result, df_search_symbol_braf)
def test_search_symbol_braf_async(df_search_symbol_braf):
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
apyhgnc.asearch("symbol", "BRAF")
)
assert_frame_equal(result, df_search_symbol_braf)
def test_search_symbols_braf_znf3(df_search_symbols_braf_znf3):
result = apyhgnc.search(symbol=["BRAF", "ZNF3"])
assert_frame_equal(result, df_search_symbols_braf_znf3)
def test_search_symbols_braf_znf3_async(df_search_symbols_braf_znf3):
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
apyhgnc.asearch(symbol=["BRAF", "ZNF3"])
)
assert_frame_equal(result, df_search_symbols_braf_znf3)
def test_search_symbol_and_status(df_search_symbol_and_status):
result = apyhgnc.search(symbol="BRAF", status="Approved")
assert_frame_equal(result, df_search_symbol_and_status)
def test_search_symbol_and_status_async(df_search_symbol_and_status):
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
apyhgnc.asearch(symbol="BRAF", status="Approved")
)
assert_frame_equal(result, df_search_symbol_and_status)
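Each `*_async` test above drives a coroutine to completion with `run_until_complete`. The shape of that sync-over-async pattern, minus the network call, looks like this (`afetch_demo` is a hypothetical stand-in for `apyhgnc.afetch`):

```python
import asyncio

async def afetch_demo(symbol):
    # Stand-in for an async HTTP call such as apyhgnc.afetch.
    await asyncio.sleep(0)
    return {"symbol": symbol, "status": "Approved"}

def fetch_demo(symbol):
    # Synchronous wrapper: drive the coroutine to completion,
    # which is what loop.run_until_complete does in the tests.
    return asyncio.run(afetch_demo(symbol))

record = fetch_demo("BRAF")
```

`asyncio.run` creates and closes its own event loop, which is the modern replacement for the `get_event_loop()` / `run_until_complete()` pair used in the tests.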
# bdaq/tools/extract_enums.py (magnium/pybdaq, MIT)
import os.path
import argparse
from xml.etree import ElementTree as ET
class ExtractedEnum(object):
def __init__(self, tag_name, value_names):
self.tag_name = tag_name
self.value_names = value_names
def write_pxd(self, file_):
file_.write("\n ctypedef enum {}:\n".format(self.tag_name))
for name in self.value_names:
file_.write(" " * 8 + "{}\n".format(name))
def write_pyx(self, file_):
file_.write("\nclass {}(enum.Enum):\n".format(self.tag_name))
for name in self.value_names:
file_.write(" " * 4 + "{0} = _c.{0}\n".format(name))
@staticmethod
def from_xml(element, typedefs):
value_names = [v.attrib["name"] for v in element.findall("EnumValue")]
return ExtractedEnum(
typedefs[element.attrib["id"]],
value_names)
def find_enums(file_or_path):
# parse XML
tree = ET.parse(file_or_path)
# extract typedefs
typedefs = {}
for element in tree.findall("Typedef"):
typedefs[element.attrib["type"]] = element.attrib["name"]
# extract enums
enums = []
for element in tree.findall("Enumeration"):
enums.append(ExtractedEnum.from_xml(element, typedefs))
    print("Found {} enums to extract.".format(len(enums)))
return enums
def write_cython(pyx_file, pxd_file, enums):
# write pxd file header
pxd_file.write("# GENERATED FILE; DO NOT MODIFY\n\n")
pxd_file.write(
'cdef extern from "bdaqctrl.h" namespace "Automation::BDaq":')
# write pyx file header
pyx_file.write("# GENERATED FILE; DO NOT MODIFY\n\n")
pyx_file.write("import enum\n\n")
pyx_file.write("cimport wrapper_enums_c as _c\n\n")
# write enums
for extracted in enums:
        print("Extracting definition of {}...".format(extracted.tag_name))
extracted.write_pyx(pyx_file)
extracted.write_pxd(pxd_file)
    print("Done extracting definitions.")
def main():
# parse script arguments
parser = argparse.ArgumentParser(
description="Extract enum definitions from header.")
parser.add_argument(
"--xml-in",
default="bdaqctrl.h.xml",
help="path to gccxml result")
parser.add_argument(
"--path-out",
default=".",
help="path to output directory")
args = parser.parse_args()
# extract enums
enums = find_enums(args.xml_in)
out_pyx_path = os.path.join(args.path_out, "wrapper_enums.pyx")
out_pxd_path = os.path.join(args.path_out, "wrapper_enums_c.pxd")
    with open(out_pyx_path, "w") as out_pyx_file:
        with open(out_pxd_path, "w") as out_pxd_file:
write_cython(out_pyx_file, out_pxd_file, enums)
if __name__ == "__main__":
main()
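The XML shape this script consumes (gccxml-style `Typedef` entries whose `type` attribute points at an `Enumeration` id) can be exercised with a tiny synthetic document. The ids and names below are made up:

```python
from xml.etree import ElementTree as ET

XML = """
<GCC_XML>
  <Typedef id="_1" type="_2" name="ErrorCode"/>
  <Enumeration id="_2">
    <EnumValue name="Success"/>
    <EnumValue name="Failure"/>
  </Enumeration>
</GCC_XML>
"""

root = ET.fromstring(XML)
# Same joins as find_enums(): typedef "type" -> name, then enum "id" -> typedef name.
typedefs = {e.attrib["type"]: e.attrib["name"] for e in root.findall("Typedef")}
enums = {
    typedefs[e.attrib["id"]]: [v.attrib["name"] for v in e.findall("EnumValue")]
    for e in root.findall("Enumeration")
}
```

This mirrors the two passes in `find_enums` and `ExtractedEnum.from_xml` without needing a real `bdaqctrl.h.xml` dump.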
# Objetos/biblioteca.py (SebaB29/Python, MIT)
class Libro:
    def __init__(self, titulo, autor):
        """Store the book's title and author."""
        self.titulo = titulo
        self.autor = autor
    def obtener_titulo(self):
        """Return the book's title as a string."""
        return str(self.titulo)
    def obtener_autor(self):
        """Return the book's author as a string."""
        return str(self.autor)
class Biblioteca:
    def __init__(self):
        """Create a library with an empty collection."""
        self.coleccion = set()
    def agregar_libro(self, libro):
        """Add a book (as a title/author pair) to the collection."""
        self.coleccion.add((libro.titulo, libro.autor))
    def sacar_libro(self, titulo, autor):
        """Remove a book from the collection and return its description."""
        if (titulo, autor) not in self.coleccion:
            raise Exception("The book is not in the collection")
        self.coleccion.remove((titulo, autor))
        return f"Libro: {titulo}, Autor: {autor}"
    def contiene_libro(self, titulo, autor):
        """Return True if the collection contains the given book."""
        return (titulo, autor) in self.coleccion
libro = Libro("HyP", "JK")
libro1 = Libro("La Isla M", "JCortazar")
libro2 = Libro("El tunel", "Sabato")
biblio = Biblioteca()
biblio.agregar_libro(libro)
biblio.agregar_libro(libro1)
biblio.agregar_libro(libro2)
print(biblio.contiene_libro("HyP", "JK"))
print(biblio.sacar_libro("HyP", "JK"))
print(biblio.contiene_libro("HyP", "JK"))
# 11.-Operaciones_entero_con_float_python.py (emiliocarcanobringas/11.-Operaciones_entero_con_float_python, MIT)
# This program shows the sum of two variables of type int and float
print("This program shows the sum of two variables of type int and float")
print("It also shows that the variable holding the result is of type float")
numero1 = 7
numero2 = 3.1416
sumadeambos = numero1 + numero2
print("The result of the sum is: ")
print(sumadeambos)
print(type(sumadeambos))
# This program was written by Emilio Carcaño Bringas
# datadog_cluster_agent/tests/test_datadog_cluster_agent.py (tdimnet/integrations-core, BSD-3-Clause)
# (C) Datadog, Inc. 2021-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
from typing import Any, Dict
from datadog_checks.base.stubs.aggregator import AggregatorStub
from datadog_checks.datadog_cluster_agent import DatadogClusterAgentCheck
from datadog_checks.dev.utils import get_metadata_metrics
NAMESPACE = 'datadog.cluster_agent'
METRICS = [
'admission_webhooks.certificate_expiry',
'admission_webhooks.mutation_attempts',
'admission_webhooks.mutation_errors',
'admission_webhooks.reconcile_errors',
'admission_webhooks.reconcile_success',
'admission_webhooks.webhooks_received',
'aggregator.flush',
'aggregator.processed',
'api_requests',
'cluster_checks.busyness',
'cluster_checks.configs_dangling',
'cluster_checks.configs_dispatched',
'cluster_checks.failed_stats_collection',
'cluster_checks.nodes_reporting',
'cluster_checks.rebalancing_decisions',
'cluster_checks.rebalancing_duration_seconds',
'cluster_checks.successful_rebalancing_moves',
'cluster_checks.updating_stats_duration_seconds',
'datadog.rate_limit_queries.limit',
'datadog.rate_limit_queries.period',
'datadog.rate_limit_queries.remaining',
'datadog.rate_limit_queries.reset',
'datadog.requests',
'external_metrics',
'external_metrics.datadog_metrics',
'external_metrics.delay_seconds',
'external_metrics.processed_value',
'secret_backend.elapsed',
'go.goroutines',
'go.memstats.alloc_bytes',
'go.threads',
]
def test_check(aggregator, instance, mock_metrics_endpoint):
# type: (AggregatorStub, Dict[str, Any]) -> None
check = DatadogClusterAgentCheck('datadog_cluster_agent', {}, [instance])
# dry run to build mapping for label joins
check.check(instance)
check.check(instance)
for metric in METRICS:
aggregator.assert_metric(NAMESPACE + '.' + metric)
aggregator.assert_metric_has_tag_prefix(NAMESPACE + '.' + metric, 'is_leader:')
aggregator.assert_all_metrics_covered()
aggregator.assert_metrics_using_metadata(get_metadata_metrics())
# pyeccodes/defs/grib2/tables/15/3_11_table.py (ecmwf/pyeccodes, Apache-2.0)
def load(h):
return ({'abbr': 0, 'code': 0, 'title': 'There is no appended list'},
{'abbr': 1,
'code': 1,
'title': 'Numbers define number of points corresponding to full coordinate '
'circles (i.e. parallels), coordinate values on each circle are '
'multiple of the circle mesh, and extreme coordinate values given '
'in grid definition (i.e. extreme longitudes) may not be reached in '
'all rows'},
{'abbr': 2,
'code': 2,
'title': 'Numbers define number of points corresponding to coordinate lines '
'delimited by extreme coordinate values given in grid definition '
'(i.e. extreme longitudes) which are present in each row'},
{'abbr': 3,
'code': 3,
'title': 'Numbers define the actual latitudes for each row in the grid. The '
'list of numbers are integer values of the valid latitudes in '
'microdegrees (scaled by 10-6) or in unit equal to the ratio of the '
'basic angle and the subdivisions number for each row, in the same '
'order as specified in the scanning mode flag',
'units': 'bit no. 2'},
{'abbr': None, 'code': 255, 'title': 'Missing'})
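Used as a code table, `load` supports simple lookups. A hedged sketch with the table abbreviated to two entries (`h` is unused by this table, so `None` works as an argument):

```python
def load(h):
    # Abbreviated copy of the table above; the real load() returns every entry.
    return ({'abbr': 0, 'code': 0, 'title': 'There is no appended list'},
            {'abbr': None, 'code': 255, 'title': 'Missing'})

def title_for(code):
    # Return the title for a code, or None if the code is not in the table.
    return next((e['title'] for e in load(None) if e['code'] == code), None)
```

Codes not present in the table (for example 7) simply fall through to `None`.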
# python/Recursion.py (itzsoumyadip/vs, Unlicense)
## to change recursion limit
import sys
print(sys.getrecursionlimit()) #Return the current value of the recursion limit
#1000
## change the limit
sys.setrecursionlimit(2000) # change value of the recursion limit
#2000
i=0
def greet():
global i
i+=1
print('hello', i)
greet()
greet()  # prints hello up to ~1996, then raises RecursionError
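The crash noted in the comment above can be reproduced safely by catching `RecursionError` and restoring the limit afterwards (Python 3):

```python
import sys

def depth(n=0):
    # Recurse until the interpreter's stack limit raises RecursionError.
    try:
        return depth(n + 1)
    except RecursionError:
        return n

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(500)
try:
    reached = depth()
finally:
    sys.setrecursionlimit(old_limit)
```

The depth actually reached is a little below the configured limit because the frames already on the stack count toward it, which is why the script above stops near 1996 rather than exactly 2000.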
# tests/test_clients.py (rodrigoapereira/python-hydra-sdk, MIT)
# Copyright (C) 2017 O.S. Systems Software LTDA.
# This software is released under the MIT License
import unittest
from hydra import Hydra, Client
class ClientsTestCase(unittest.TestCase):
def setUp(self):
self.hydra = Hydra('http://localhost:4444', 'client', 'secret')
self.client = Client(
name='new-client',
secret='client-secret',
scopes=['devices', 'products'],
redirect_uris=['http://localhost/callback'],
)
def test_can_create_client(self):
client = self.hydra.clients.create(self.client)
self.addCleanup(self.hydra.clients.delete, client_id=client.id)
self.assertEqual(client.name, 'new-client')
self.assertEqual(client.secret, 'client-secret')
self.assertEqual(client.scopes, ['devices', 'products'])
self.assertEqual(client.redirect_uris, ['http://localhost/callback'])
def test_can_get_client(self):
client_id = self.hydra.clients.create(self.client).id
self.addCleanup(self.hydra.clients.delete, client_id=client_id)
client = self.hydra.clients.get(client_id)
self.assertEqual(client.id, client_id)
def test_can_update_client(self):
client = self.hydra.clients.create(self.client)
self.addCleanup(self.hydra.clients.delete, client_id=client.id)
self.assertEqual(client.name, 'new-client')
client.name = 'new-client-name'
self.hydra.clients.update(client)
self.assertEqual(client.name, 'new-client-name')
def test_can_delete_client(self):
client = self.hydra.clients.create(self.client)
self.addCleanup(self.hydra.clients.delete, client_id=client.id)
self.assertIsNotNone(self.hydra.clients.get(client.id))
self.hydra.clients.delete(client.id)
self.assertIsNone(self.hydra.clients.get(client.id))
def test_can_list_all_clients(self):
client1 = self.hydra.clients.create(self.client)
self.addCleanup(self.hydra.clients.delete, client_id=client1.id)
client2 = self.hydra.clients.create(self.client)
self.addCleanup(self.hydra.clients.delete, client_id=client2.id)
clients = [c.id for c in self.hydra.clients.all()]
self.assertIn(client1.id, clients)
self.assertIn(client2.id, clients)
# test/PR_test/unit_test/backend/test_binary_crossentropy.py (Phillistan16/fastestimator, Apache-2.0)
# Copyright 2020 The FastEstimator Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import unittest
import numpy as np
import tensorflow as tf
import torch
import fastestimator as fe
class TestBinarayCrossEntropy(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.tf_true = tf.constant([[1], [0], [1], [0]])
cls.tf_pred = tf.constant([[0.9], [0.3], [0.8], [0.1]])
cls.torch_true = torch.tensor([[1], [0], [1], [0]])
cls.torch_pred = torch.tensor([[0.9], [0.3], [0.8], [0.1]])
def test_binaray_crossentropy_average_loss_true_tf(self):
obj1 = fe.backend.binary_crossentropy(y_pred=self.tf_pred, y_true=self.tf_true).numpy()
obj2 = 0.19763474
self.assertTrue(np.allclose(obj1, obj2))
def test_binaray_crossentropy_average_loss_false_tf(self):
obj1 = fe.backend.binary_crossentropy(y_pred=self.tf_pred, y_true=self.tf_true, average_loss=False).numpy()
obj2 = np.array([0.10536041, 0.3566748, 0.22314338, 0.10536041])
self.assertTrue(np.allclose(obj1, obj2))
def test_binaray_crossentropy_average_from_logit_average_loss_true_tf(self):
obj1 = fe.backend.binary_crossentropy(y_pred=self.tf_pred,
y_true=self.tf_true,
from_logits=True,
average_loss=True).numpy()
obj2 = 0.57775164
self.assertTrue(np.allclose(obj1, obj2))
def test_binaray_crossentropy_average_from_logit_average_loss_false_tf(self):
obj1 = fe.backend.binary_crossentropy(y_pred=self.tf_pred,
y_true=self.tf_true,
from_logits=True,
average_loss=False).numpy()
obj2 = np.array([0.34115386, 0.8543553, 0.37110066, 0.7443967])
self.assertTrue(np.allclose(obj1, obj2))
def test_binaray_crossentropy_average_loss_true_torch(self):
obj1 = fe.backend.binary_crossentropy(y_pred=self.torch_pred, y_true=self.torch_true).numpy()
obj2 = 0.19763474
self.assertTrue(np.allclose(obj1, obj2))
def test_binaray_crossentropy_average_loss_false_torch(self):
obj1 = fe.backend.binary_crossentropy(y_pred=self.torch_pred, y_true=self.torch_true,
average_loss=False).numpy()
obj2 = np.array([0.10536041, 0.3566748, 0.22314338, 0.10536041])
self.assertTrue(np.allclose(obj1, obj2))
def test_binaray_crossentropy_average_from_logit_average_loss_true_torch(self):
obj1 = fe.backend.binary_crossentropy(y_pred=self.torch_pred,
y_true=self.torch_true,
from_logits=True,
average_loss=True).numpy()
obj2 = 0.57775164
self.assertTrue(np.allclose(obj1, obj2))
def test_binaray_crossentropy_average_from_logit_average_loss_false_torch(self):
obj1 = fe.backend.binary_crossentropy(y_pred=self.torch_pred,
y_true=self.torch_true,
from_logits=True,
average_loss=False).numpy()
obj2 = np.array([0.34115386, 0.8543553, 0.37110066, 0.7443967])
self.assertTrue(np.allclose(obj1, obj2))
| 48.77381 | 115 | 0.608494 | 508 | 4,097 | 4.683071 | 0.228346 | 0.064733 | 0.047079 | 0.087432 | 0.705338 | 0.699454 | 0.699454 | 0.688525 | 0.681799 | 0.681799 | 0 | 0.080324 | 0.276788 | 4,097 | 83 | 116 | 49.361446 | 0.722578 | 0.161826 | 0 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140351 | 1 | 0.157895 | false | 0 | 0.087719 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1cb6d29da4f211a81a8c27dc9b9e2bda5b85f6c6 | 619 | py | Python | ats_hex.py | kyeser/scTools | c4c7dee0c41c8afe1da6350243df5f9d9b929c7f | [
"MIT"
] | null | null | null | ats_hex.py | kyeser/scTools | c4c7dee0c41c8afe1da6350243df5f9d9b929c7f | [
"MIT"
] | null | null | null | ats_hex.py | kyeser/scTools | c4c7dee0c41c8afe1da6350243df5f9d9b929c7f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from scTools import interval, primeForm
from scTools.rowData import ats
from scTools.scData import *

count = 1
for w in ats:
    prime = primeForm(w[0:6])
    print('%3d\t' % count, end=' ')
    for x in w:
        print('%X' % x, end=' ')
    print(' ', end=' ')
    intervals = interval(w)
    for y in intervals:
        print('%X' % y, end=' ')
    print('\t%2d\t' % sc6.index(prime), end=' ')
    # Label the special hexachord set classes
    if prime in (sc6[1], sc6[7], sc6[8], sc6[20], sc6[32], sc6[35]):
        print('AC')
    elif prime == sc6[17]:
        print('AT')
    else:
        print()
    count += 1
# ===== OpenCV/bookIntroCV_008_binarizacao.py (fotavio16/PycharmProjects, MIT) =====
'''
Book: "Introdução a Visão Computacional com Python e OpenCV 3"
(Introduction to Computer Vision with Python and OpenCV 3)
Image repository:
https://github.com/opencv/opencv/tree/master/samples/data
'''
import cv2
import numpy as np
from matplotlib import pyplot as plt
#import mahotas
VERMELHO = (0, 0, 255)
VERDE = (0, 255, 0)
AZUL = (255, 0, 0)
AMARELO = (0, 255, 255)
BRANCO = (255,255,255)
CIANO = (255, 255, 0)
PRETO = (0, 0, 0)
img = cv2.imread('ponte2.jpg')  # Flag 1 = Color, 0 = Gray, -1 = Unchanged
img = img[::2, ::2]  # Downscale the image by keeping every other pixel

# Binarization with a fixed threshold
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
suave = cv2.GaussianBlur(img, (7, 7), 0)  # apply blur to smooth the image
(T, bin) = cv2.threshold(suave, 160, 255, cv2.THRESH_BINARY)
(T, binI) = cv2.threshold(suave, 160, 255, cv2.THRESH_BINARY_INV)
'''
resultado = np.vstack([
np.hstack([suave, bin]),
np.hstack([binI, cv2.bitwise_and(img, img, mask = binI)])
])
'''
resultado = np.vstack([
np.hstack([img, suave]),
np.hstack([bin, binI])
])
cv2.imshow("Image binarization", resultado)
cv2.waitKey(0)

# Adaptive thresholding
bin1 = cv2.adaptiveThreshold(suave, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 21, 5)
bin2 = cv2.adaptiveThreshold(suave, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 21, 5)
resultado = np.vstack([
np.hstack([img, suave]),
np.hstack([bin1, bin2])
])
cv2.imshow("Adaptive image binarization", resultado)
cv2.waitKey(0)

# Thresholding with Otsu and Riddler-Calvard (needs mahotas; kept commented out)
'''
T = mahotas.thresholding.otsu(suave)
temp = img.copy()
temp[temp > T] = 255
temp[temp < 255] = 0
temp = cv2.bitwise_not(temp)
T = mahotas.thresholding.rc(suave)
temp2 = img.copy()
temp2[temp2 > T] = 255
temp2[temp2 < 255] = 0
temp2 = cv2.bitwise_not(temp2)
resultado = np.vstack([
np.hstack([img, suave]),
np.hstack([temp, temp2])
])
cv2.imshow("Binarization with the Otsu and Riddler-Calvard methods", resultado)
cv2.waitKey(0)
'''
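The commented-out section above reaches for the external mahotas package to get an Otsu threshold. For reference, here is a hedged pure-Python sketch of what Otsu's method computes (OpenCV exposes the same idea natively via the `cv2.THRESH_OTSU` flag on `cv2.threshold`); the function name and sample pixel values are illustrative, not from the original script.

```python
def otsu_threshold(pixels):
    # Pure-Python sketch of Otsu's method over 8-bit pixel values:
    # choose the threshold t that maximizes between-class variance.
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w0 = 0      # pixel count below the current threshold
    sum0 = 0    # intensity sum below the current threshold
    for t in range(1, 256):
        w0 += hist[t - 1]
        sum0 += (t - 1) * hist[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue  # one class is empty; skip degenerate splits
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t


# Two well-separated intensity populations -> threshold lands between them.
print(otsu_threshold([10] * 50 + [200] * 50))  # → 11
```

With real images, the result should agree with `cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)`.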
# ===== accounts/migrations/0001_initial.py (vikifox/CMDB, Apache-2.0) =====
# -*- coding: utf-8 -*-
# Generated by Django 1.11.20 on 2019-04-18 05:56
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('cmdb', '0001_initial'),
('appconf', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='UserInfo',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('password', models.CharField(max_length=128, verbose_name='password')),
('last_login', models.DateTimeField(blank=True, null=True, verbose_name='last login')),
('username', models.CharField(db_index=True, max_length=40, unique=True)),
('email', models.EmailField(max_length=255)),
('is_active', models.BooleanField(default=False)),
('is_superuser', models.BooleanField(default=False)),
('nickname', models.CharField(blank=True, max_length=64, null=True)),
('ldap_name', models.CharField(blank=True, max_length=64)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='PermissionList',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=64)),
('url', models.CharField(max_length=255)),
],
),
migrations.CreateModel(
name='RoleList',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=64)),
('delivery', models.ManyToManyField(blank=True, to='appconf.Project')),
('permission', models.ManyToManyField(blank=True, to='accounts.PermissionList')),
('webssh', models.ManyToManyField(blank=True, to='cmdb.HostGroup')),
],
),
migrations.AddField(
model_name='userinfo',
name='role',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='accounts.RoleList'),
),
]
# ===== sort_insertion.py (rachitmishra/45, MIT) =====
"""
Insertion Sort

Approach: Loop
Complexity: O(n^2)
"""


def sort_insertion(input_arr):
    print("-" * 26)
    print("input  " + str(input_arr))
    print("-" * 26)
    ln = len(input_arr)
    i = 1  # assume the first element is already sorted
    while i < ln:  # n passes
        c = input_arr[i]
        p = i
        while p > 0 and input_arr[p - 1] > c:  # shifts up to n times per pass
            input_arr[p] = input_arr[p - 1]
            p -= 1
        input_arr[p] = c
        i += 1
        print("pass " + str(i) + " " + str(input_arr))
    print("-" * 26)
    print("result " + str(input_arr))
    print("-" * 26)


if __name__ == '__main__':
    arr = [21, 4, 1, 3, 9, 20, 25, 6, 21, 14]
    sort_insertion(arr)
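A hypothetical variant of the same idea (not part of the original file): the slot for each element can be found with a binary search via the standard-library `bisect` module, though shifting list elements on insert keeps the overall cost at O(n^2).

```python
from bisect import insort


def sort_insertion_bisect(values):
    # Illustrative variant: build a new sorted list by inserting each value
    # in order. insort finds the slot in O(log n), but the list insertion
    # itself still shifts elements, so the total work remains O(n^2).
    result = []
    for v in values:
        insort(result, v)
    return result


print(sort_insertion_bisect([21, 4, 1, 3, 9, 20, 25, 6, 21, 14]))
# → [1, 3, 4, 6, 9, 14, 20, 21, 21, 25]
```

Unlike `sort_insertion` above, this version does not mutate its input.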
# ===== tests/rest/test_rest.py (sapshah-cisco/cobra, Apache-2.0) =====
# Copyright 2015 Cisco Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
from future import standard_library
standard_library.install_aliases()
from builtins import str
from builtins import range
from builtins import object
import http.client
import os
import pytest
import random
import string
import time
import xml.etree.ElementTree as ET
import logging
from cobra.internal.codec.jsoncodec import toJSONStr, fromJSONStr
from cobra.internal.codec.xmlcodec import toXMLStr, fromXMLStr
import cobra.mit.access
import cobra.mit.request
import cobra.mit.session
cobra = pytest.importorskip("cobra")
cobra.model = pytest.importorskip("cobra.model")
cobra.model.fv = pytest.importorskip("cobra.model.fv")
import cobra.model.pol
import cobra.model.infra
import cobra.services
pytestmark = pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="You must specify at least one --apic " +
"option on the CLI")
slow = pytest.mark.slow
http.client.HTTPConnection.debuglevel = 1
logging.basicConfig(level=logging.DEBUG)
fakeDevicePackageZip = 'Archive.zip'
realDevicePackageZip = 'asa-device-pkg.zip'
@pytest.fixture(params=pytest.config.getvalue('apic'))
def moDir(request):
url, user, password, secure = request.param
secure = False if secure == 'False' else True
session = cobra.mit.session.LoginSession(url, user, password,
secure=secure)
md = cobra.mit.access.MoDirectory(session)
md.login()
return md
class Test_rest_configrequest(object):
def test_createtenant(self, moDir, tenantname):
"""
create a tenant and commit it
"""
dcid = str(time.time()).replace('.', '')
polUni = cobra.model.pol.Uni('')
tenant = cobra.model.fv.Tenant(polUni, tenantname[0])
configRequest = cobra.mit.request.ConfigRequest()
configRequest.addMo(tenant)
configRequest.subtree = 'full'
configRequest.id = dcid
mos = moDir.commit(configRequest)
assert mos
mo = mos[0]
assert len(mos) > 0
assert str(mo.dn) == str(tenant.dn)
assert len(list(mo.children)) >= 1
def test_lookupcreatedtenant(self, moDir, tenantname):
tenant = moDir.lookupByDn('uni/tn-{0}'.format(tenantname[0]))
assert tenant
def test_deletetenant(self, moDir, tenantname):
tenant = moDir.lookupByDn('uni/tn-{0}'.format(tenantname[0]))
tenant.delete()
configRequest = cobra.mit.request.ConfigRequest()
configRequest.addMo(tenant)
r = moDir.commit(configRequest)
assert r == []
tenant = moDir.lookupByDn('uni/tn-{0}'.format(tenantname[0]))
assert not tenant
class Test_rest_classquery(object):
def test_classquery_shorthand_filter(self, moDir):
"""
check that lookupByClass is able to lookup tenant common and only one
item is returned
"""
commonTn = moDir.lookupByClass(
'fvTenant', propFilter='eq(fvTenant.name, "common")')
assert len(commonTn) == 1
commonTn = commonTn[0]
assert str(commonTn.dn) == 'uni/tn-common'
def test_classquery_normal(self, moDir):
"""
check that a class query with no special properties succeeds
we should get at least three tenants (infra, mgmt, common)
"""
classQuery = cobra.mit.request.ClassQuery('fvTenant')
commonTn = moDir.query(classQuery)
def findtn(tnlist, tnname):
for tn in tnlist:
if tn.name == tnname:
return True
return False
assert findtn(commonTn, 'common')
assert findtn(commonTn, 'infra')
assert findtn(commonTn, 'mgmt')
def test_classquery_filter(self, moDir):
"""
check that a class query with a property filter works
"""
classQuery = cobra.mit.request.ClassQuery('fvTenant')
classQuery.propFilter = 'eq(fvTenant.name, "common")'
commonTn = moDir.query(classQuery)
commonTn = commonTn[0]
assert str(commonTn.dn) == 'uni/tn-common'
def test_classquery_subtree(self, moDir):
"""
check that a class query with a subtree response
"""
classQuery = cobra.mit.request.ClassQuery('fvTenant')
classQuery.subtree = 'full'
classQuery.propFilter = 'eq(fvTenant.name, "common")'
commonTn = moDir.query(classQuery)
commonTn = commonTn[0]
assert str(commonTn.dn) == 'uni/tn-common'
# expect at least 3 child objects
assert len(list(commonTn.children)) >= 3
assert str(commonTn.BD['default'].dn) == 'uni/tn-common/BD-default'
@pytest.mark.parametrize("cls,subtree", [
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")(('fvTenant', 'full')),
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")(('infraInfra', 'no')),
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")(('fvAEPg', 'full')),
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")(('infraFuncP', 'full')),
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")(('fabricNode', 'no')),
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")(('topSystem', 'full')),
])
def test_classquery_many(self, moDir, cls, subtree):
classQuery = cobra.mit.request.ClassQuery(cls)
classQuery.subtree = subtree
# classQuery.propFilter='eq(fvTenant.name, "common")'
mos = moDir.query(classQuery)
assert len(mos) > 0
def test_classquery_verifyxml(self, moDir):
"""
verify that the XML returned by lookupByClass is valid
"""
commonTn = moDir.lookupByClass(
'fvTenant', propFilter='eq(fvTenant.name, "common")')
commonTn = commonTn[0]
xml = ET.fromstring(toXMLStr(commonTn))
assert xml.tag == 'fvTenant'
def test_classquery_negative(self, moDir):
"""
generate a random tenant name and ensure that we dont find a match for it
"""
tenantName = ''.join(random.choice(string.ascii_lowercase)
for i in range(64))
tenant = moDir.lookupByClass(
'fvTenant', propFilter='eq(fvTenant.name, "{0}")'.format(tenantName))
assert len(tenant) == 0
class Test_rest_dnquery(object):
def test_dnquery_normal(self, moDir, dn):
dnQuery = cobra.mit.request.DnQuery(dn)
dnQuery.subtree = 'full'
commonTn = moDir.query(dnQuery)
assert len(commonTn) == 1
commonTn = commonTn[0]
assert str(commonTn.dn) == str(dn)
# expect at least 3 child objects
assert len(list(commonTn.children)) >= 3
assert str(commonTn.BD['default'].dn) == 'uni/tn-common/BD-default'
def test_dnquery_shorthand(self, moDir, dn):
commonTn = moDir.lookupByDn(dn)
assert str(commonTn.dn) == str(dn)
class Test_rest_getLoginDomains(object):
def test_getDomains(self, apic):
"""Verify that the getLoginDomains() method works.
"""
url, user, password, secure = apic
secure = False if secure == 'False' else True
session = cobra.mit.session.LoginSession(url, user, password,
secure=secure)
session.getLoginDomains()
assert session.domains != []
def test_loginDomains_setting(self, apic):
"""Verify that the loginDomain can be set."""
url, user, password, secure = apic
secure = False if secure == 'False' else True
session = cobra.mit.session.LoginSession(url, user, password,
secure=secure)
session.getLoginDomains()
session.loginDomain = session.domains[0]
assert session.loginDomain == session.domains[0]
class Test_rest_login(object):
def test_login_positive(self, apic):
"""
verify that the login function works
"""
url, user, password, secure = apic
secure = False if secure == 'False' else True
session = cobra.mit.session.LoginSession(url, user, password,
secure=secure)
moDir = cobra.mit.access.MoDirectory(session)
moDir.login()
assert moDir._session
def test_login_negative(self, apic):
"""
verify that the invalid logins throw an exception
"""
url, user, password, secure = apic
secure = False if secure == 'False' else True
session = cobra.mit.session.LoginSession(url, user, 'wrongpassword',
secure=secure)
moDir = cobra.mit.access.MoDirectory(session)
with pytest.raises(cobra.mit.session.LoginError):
moDir.login()
@slow
def test_login_timeout(self, apic):
"""
verify that the session times out properly
"""
url, user, password, secure = apic
secure = False if secure == 'False' else True
session = cobra.mit.session.LoginSession(url, user, password,
secure=secure)
moDir = cobra.mit.access.MoDirectory(session)
moDir.login()
start = time.time()
pki = moDir.lookupByDn('uni/userext/pkiext/webtokendata')
refreshTime = pki.webtokenTimeoutSeconds
sleepTime = float(refreshTime) - (time.time() - start)
sleepTime += 1.0 # one second buffer, for good measure
time.sleep(sleepTime)
with pytest.raises(cobra.mit.request.QueryError):
moDir.lookupByClass('pkiWebTokenData')
def test_login_get_timeout(self, apic):
"""
verify that the session times out properly
"""
url, user, password, secure = apic
secure = False if secure == 'False' else True
session = cobra.mit.session.LoginSession(url, user, password,
secure=secure)
moDir = cobra.mit.access.MoDirectory(session)
moDir.login()
assert moDir._session.refreshTime > int(time.time())
assert moDir._session.refreshTimeoutSeconds > 0
def test_rest_login_reauth(self, apic):
"""Verify that the reauth call returns a different session cookie."""
url, user, password, secure = apic
secure = False if secure == 'False' else True
session = cobra.mit.session.LoginSession(url, user, password,
secure=secure)
moDir = cobra.mit.access.MoDirectory(session)
moDir.login()
orig_cookie = session.cookie
# sleep for 5 seconds to ensure we get a different cookie.
time.sleep(5)
moDir.reauth()
assert orig_cookie != session.cookie
class Test_rest_tracequery(object):
@pytest.mark.parametrize("cls", [
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")('fvEpP'),
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")('vlanCktEp'),
pytest.mark.skipif(pytest.config.getvalue('apic') == [],
reason="no --apic")('actrlRule'),
])
def test_tracequery(self, moDir, cls):
"""
Query every leaf in the fabric for some concrete objects and try to
find at least one response. If we don't get that, we fail
"""
traceResponse = 0
nodes = moDir.lookupByClass(
'fabricNode', propFilter='eq(fabricNode.role,"leaf"')
assert len(nodes) > 0
for node in nodes:
a = cobra.mit.request.TraceQuery(node.dn, cls)
print(a.getUrl(moDir._session))
mos = moDir.query(a)
for mo in mos:
print(mo.dn)
traceResponse += len(mos)
assert traceResponse > 0
class Test_services_devicepackage(object):
fakePackage = os.path.join(os.path.dirname(os.path.realpath(__file__)),
fakeDevicePackageZip)
realPackage = os.path.join(os.path.dirname(os.path.realpath(__file__)),
realDevicePackageZip)
def test_packagevalidate(self):
"""
Make sure that invalid device packages throw an exception when validation
is enabled
"""
with pytest.raises(AttributeError):
cobra.services.UploadPackage(self.fakePackage, validate=True)
def test_packagedonotvalidate(self):
"""
Make sure that if validation is not enabled, no exception is thrown
"""
packageUpload = cobra.services.UploadPackage(self.fakePackage)
assert packageUpload.devicePackagePath == self.fakePackage
def test_uploadpackage(self, moDir):
"""
ensure that the device package upload returns a 200
"""
packageUpload = cobra.services.UploadPackage(self.realPackage,
validate=True)
r = moDir.commit(packageUpload)
assert r == []
def test_validateupload(self, moDir):
"""
make sure that the uploaded device package is found
"""
uni = cobra.model.pol.Uni('')
infra = cobra.model.infra.Infra(uni)
vnsQuery = cobra.mit.request.DnQuery(infra.dn)
vnsQuery.propFilter = 'eq(vnsMDev.vendor,"CISCO")'
vnsQuery.queryTarget = 'subtree'
vnsQuery.classFilter = 'vnsMDev'
packages = moDir.query(vnsQuery)
assert len(packages) > 0
package = packages[0]
assert package.vendor == 'CISCO'
assert package.model == 'ASA'
# for package in packages:
# print '\n'.join(['%s:\t%s' % (k,getattr(package,k)) for k in package.meta.props.names])
# print package.dn
# ===== setup.py (ninezerozeronine/raytracing-one-weekend, MIT) =====
from setuptools import setup, find_packages
setup(
name="raytracing-one-weekend",
version="0.0.0",
author="Andy Palmer",
author_email="contactninezerozeronine@gmail.com",
description="A raytracer achievable in a weekend.",
url="https://github.com/ninezerozeronine/raytracing-one-weekend",
install_requires=[
"Pillow",
"numpy",
],
packages=find_packages('src'),
package_dir={'': 'src'},
)
# ===== divsum_stats.py (fjruizruano/SatIntExt, MIT) =====
#!/usr/bin/python
import sys
from subprocess import call
print("divsum_stats.py ListOfDivsumFiles\n")
try:
files = sys.argv[1]
except IndexError:
    files = input("Introduce RepeatMasker's list of Divsum files with library size (tab separated): ")
files = open(files).readlines()
to_join = []
header = "Coverage for each repeat class and divergence (Kimura)\n"
results = {}
for line in files:
line = line.split("\t")
file = line[0]
size = int(line[1])
data = open(file).readlines()
matrix_start = data.index(header)
matrix = data[matrix_start+1:]
li= []
names_line = matrix[0]
info = names_line.split()
for fam in info:
li.append([fam])
info_len = len(li)
for line in matrix[1:]:
info = line.split()
for i in range(0,info_len):
li[i].append(info[i])
out = open(file+".counts","w")
out.write("Sequence\tAbundance\n")
stats = open(file+".stats","w")
stats.write("Sequence\tDivergence\tTotalAbundance\tMaxAbundance\tMaxPeak\tRPS\tDIVPEAK\n")
for el in li[1:]:
numbers = el[1:]
numbers = [int(x) for x in numbers]
numbers_prop = [1.0*x/size for x in numbers]
prop_dict = {}
prop_li = []
for prop in range(0,len(numbers_prop)):
prop_dict[prop] = numbers_prop[prop]
prop_li.append(numbers_prop[prop])
prop_dict_sorted = sorted(prop_dict.items(), key=lambda x: x[1], reverse=True)
total = sum(numbers_prop)
top = prop_dict_sorted[0]
top_div = top[0]
top_ab = top[1]
peak = []
if top_div >= 2:
for div in range(top_div-2,top_div+3):
peak.append(prop_dict[div])
else:
for div in range(0,5):
peak.append(prop_dict[div])
sum_peak = sum(peak)
rps = sum_peak/total
divpeak = top_div
out.write(el[0]+"\t"+str(sum(numbers))+"\n")
all_divs = []
for d in li[0][1:]:
all_divs.append(int(d)+0.5)
div_sumproduct = 0
for x,y in zip(all_divs,prop_li):
div_sumproduct += x * y
divergence = div_sumproduct/total
data = "%s\t%s\t%s\t%s\t%s\t%s\t%s\n" % (el[0],str(divergence),str(total),str(top_ab),str(sum_peak),str(rps),str(divpeak))
stats.write(data)
data2 = "%s\t%s\t%s\t%s\t%s\t%s\t%s\n" % (file, str(divergence),str(total),str(top_ab),str(sum_peak),str(rps),str(divpeak))
if el[0] in results:
results[el[0]].append(data2)
else:
results[el[0]] = [data2]
out.close()
stats.close()
to_join.append(file+".counts")
out = open("results.txt", "w")
for el in sorted(results):
info = results[el]
out.write("%s\tDivergence\tTotalAbundance\tMaxAbundance\tMaxPeak\tRPS\tDIVPEAK\n" % (el))
for i in info:
out.write(i)
out.write("\n\n\n")
out.close()
call("join_multiple_lists.py %s" % (" ".join(to_join)), shell=True)
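The per-repeat statistics computed above reduce to two quantities: DIVPEAK, the divergence bin with the highest abundance, and RPS, the fraction of total abundance falling within ±2 bins of that peak (bins 0–4 when the peak sits nearer the origin than 2 bins). A small hedged helper isolating that logic; the function name is not from the original script.

```python
def peak_stats(abundances):
    # DIVPEAK: index (divergence bin) with the highest abundance.
    # RPS: abundance within +/-2 bins of the peak, divided by total
    # abundance, falling back to bins 0-4 when the peak index is < 2,
    # mirroring the loop in the script above.
    total = sum(abundances)
    divpeak = max(range(len(abundances)), key=lambda d: abundances[d])
    if divpeak >= 2:
        window = abundances[divpeak - 2:divpeak + 3]
    else:
        window = abundances[0:5]
    return divpeak, sum(window) / total


print(peak_stats([0, 1, 5, 1, 0, 0]))  # → (2, 1.0)
```

An RPS near 1.0 means the repeat's divergence profile is concentrated in a single narrow peak.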
# ===== source/blog/migrations/0004_postcomments.py (JakubGutowski/PersonalBlog, BSD-3-Clause) =====
# Generated by Django 2.0.5 on 2018-07-02 19:46
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('blog', '0003_blogpost_author'),
]
operations = [
migrations.CreateModel(
name='PostComments',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('nick', models.CharField(max_length=20)),
('comment', models.CharField(max_length=140)),
('post', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='blog.BlogPost')),
],
),
]
# ===== submissions/aising2019/a.py (m-star18/atcoder, Unlicense) =====
import sys
read = sys.stdin.buffer.read
readline = sys.stdin.buffer.readline
readlines = sys.stdin.buffer.readlines
sys.setrecursionlimit(10 ** 7)
n = int(readline())
h = int(readline())
w = int(readline())
print((n - h + 1) * (n - w + 1))
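The closed form printed above, (n - h + 1) * (n - w + 1), counts the positions of an h×w rectangle inside an n×n grid: each valid placement is fixed by its top-left corner. A hedged brute-force cross-check (the helper name is illustrative):

```python
def count_placements(n, h, w):
    # Brute-force cross-check of the closed form: enumerate every valid
    # top-left corner of an h-by-w rectangle inside an n-by-n grid.
    count = 0
    for top in range(n - h + 1):        # valid row offsets
        for left in range(n - w + 1):   # valid column offsets
            count += 1
    return count


for n, h, w in [(3, 2, 3), (5, 1, 1), (10, 4, 7)]:
    assert count_placements(n, h, w) == (n - h + 1) * (n - w + 1)
print(count_placements(3, 2, 3))  # → 2
```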
# ===== config/cf.py (rbsdev/config-client, Apache-2.0) =====
from typing import Any, Dict, KeysView
import attr
from config.auth import OAuth2
from config.cfenv import CFenv
from config.spring import ConfigClient
@attr.s(slots=True)
class CF:
cfenv = attr.ib(
type=CFenv, factory=CFenv, validator=attr.validators.instance_of(CFenv),
)
oauth2 = attr.ib(type=OAuth2, default=None)
client = attr.ib(type=ConfigClient, default=None)
def __attrs_post_init__(self) -> None:
if not self.oauth2:
self.oauth2 = OAuth2(
access_token_uri=self.cfenv.configserver_access_token_uri(),
client_id=self.cfenv.configserver_client_id(),
client_secret=self.cfenv.configserver_client_secret(),
)
if not self.client:
self.client = ConfigClient(
address=self.cfenv.configserver_uri(),
app_name=self.cfenv.application_name,
profile=self.cfenv.space_name.lower(),
)
self.oauth2.configure()
@property
def vcap_services(self):
return self.cfenv.vcap_services
@property
def vcap_application(self):
return self.cfenv.vcap_application
def get_config(self) -> None:
header = {"Authorization": f"Bearer {self.oauth2.token}"}
self.client.get_config(headers=header)
@property
def config(self) -> Dict:
return self.client.config
def get_attribute(self, value: str) -> Any:
return self.client.get_attribute(value)
def get_keys(self) -> KeysView:
return self.client.get_keys()
# ===== src/git_portfolio/use_cases/config_repos.py (staticdev/github-portfolio, MIT) =====
"""Config repositories use case."""
from __future__ import annotations

import git_portfolio.config_manager as cm
import git_portfolio.domain.gh_connection_settings as cs
import git_portfolio.responses as res


class ConfigReposUseCase:
    """Gitp config repositories use case."""

    def __init__(self, config_manager: cm.ConfigManager) -> None:
        """Initializer."""
        self.config_manager = config_manager

    def execute(
        self, github_config: cs.GhConnectionSettings, selected_repos: list[str]
    ) -> res.Response:
        """Configuration of git repositories."""
        self.config_manager.config.github_access_token = github_config.access_token
        self.config_manager.config.github_hostname = github_config.hostname
        self.config_manager.config.github_selected_repos = selected_repos
        self.config_manager.save_config()
        return res.ResponseSuccess("gitp repositories successfully configured.")
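The use case receives its `ConfigManager` through the constructor, which makes the flow easy to exercise with a test double. A rough sketch of that flow with hypothetical stubs (`StubConfig`, `StubConfigManager`, and `ConfigReposUseCaseSketch` are illustrations, not the project's classes):

```python
from dataclasses import dataclass, field


@dataclass
class StubConfig:
    github_access_token: str = ""
    github_hostname: str = ""
    github_selected_repos: list = field(default_factory=list)


class StubConfigManager:
    """Test double standing in for cm.ConfigManager; records saves."""

    def __init__(self) -> None:
        self.config = StubConfig()
        self.saved = False

    def save_config(self) -> None:
        self.saved = True


class ConfigReposUseCaseSketch:
    """Same shape as ConfigReposUseCase.execute, with plain arguments."""

    def __init__(self, config_manager) -> None:
        self.config_manager = config_manager

    def execute(self, access_token, hostname, selected_repos):
        # Copy settings onto the config object, persist, report success.
        self.config_manager.config.github_access_token = access_token
        self.config_manager.config.github_hostname = hostname
        self.config_manager.config.github_selected_repos = selected_repos
        self.config_manager.save_config()
        return "gitp repositories successfully configured."


manager = StubConfigManager()
result = ConfigReposUseCaseSketch(manager).execute("token", "github.com", ["repo1"])
```

After `execute` returns, the stub manager's `saved` flag and config fields let a test verify both the mutation and the persistence call without touching the filesystem.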
# --- test/test_logic.py (mateuszkowalke/sudoku_game, MIT) ---

import pytest
from ..logic import Board, empty_board, example_board, solved_board


class TestBoard:
    def test_create_board(self):
        board = Board(example_board)
        assert board.tiles == example_board

    def test_solve_board(self):
        board = Board(example_board)
        board.solve()
        assert board.tiles == solved_board

    def test_check_if_possible(self):
        board = Board(example_board)
        assert not board.check_if_possible(0, 0, 4)
        assert board.check_if_possible(0, 0, 9)

    def test_check_solution(self):
        board = Board(solved_board)
        assert board.check_solution()

    def test_new_board(self):
        board = Board(empty_board)
        board.new_board(example_board)
        assert board.tiles == example_board

    def test_lock_tiles(self):
        board = Board(example_board)
        board.lock_tiles()
        assert board.check_if_tile_locked(0, 1)
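The `logic` module these tests import is not part of this file. For reference, the standard sudoku rule that `check_if_possible` exercises — a value may be placed only if it does not already appear in the same row, column, or 3×3 box — can be sketched as a free function (a hypothetical reconstruction, not the project's code):

```python
def check_if_possible(tiles, row, col, value):
    """Return True if `value` can legally be placed at (row, col) on a
    9x9 grid of ints where 0 marks an empty cell."""
    # Row and column constraints.
    if any(tiles[row][c] == value for c in range(9)):
        return False
    if any(tiles[r][col] == value for r in range(9)):
        return False
    # 3x3 box constraint: top-left corner of the enclosing box.
    br, bc = 3 * (row // 3), 3 * (col // 3)
    return not any(
        tiles[r][c] == value
        for r in range(br, br + 3)
        for c in range(bc, bc + 3)
    )


grid = [[0] * 9 for _ in range(9)]
grid[0][1] = 4
print(check_if_possible(grid, 0, 0, 4))  # blocked by the 4 in the same row
```

With this shape in mind, the `test_check_if_possible` assertions above read naturally: in `example_board`, placing a 4 at (0, 0) must collide with an existing 4, while a 9 must not.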
# --- scripts/vcf_filter.py (bunop/cyvcf, MIT) ---

#!/usr/bin/env python
import sys
import argparse
import pkg_resources

import vcf
from vcf.parser import _Filter

parser = argparse.ArgumentParser(description='Filter a VCF file',
    formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument('input', metavar='input', type=str, nargs=1,
    help='File to process (use - for STDIN)')
parser.add_argument('filters', metavar='filter', type=str, nargs='+',
    help='Filters to use')
parser.add_argument('--no-short-circuit', action='store_true',
    help='Do not stop filter processing on a site if a single filter fails.')
parser.add_argument('--output', action='store', default=sys.stdout,
    help='Filename to output (default stdout)')
parser.add_argument('--no-filtered', action='store_true',
    help='Remove failed sites')


if __name__ == '__main__':
    # TODO: allow filter specification by short name
    # TODO: flag that writes filter output into INFO column
    # TODO: argument use implies filter use
    # TODO: parallelize
    # TODO: prevent plugins raising an exception from crashing the script

    # dynamically build the list of available filters
    filters = {}
    filter_help = '\n\navailable filters:'
    for p in pkg_resources.iter_entry_points('vcf.filters'):
        filt = p.load()
        filters[filt.name] = filt
        filt.customize_parser(parser)
        filter_help += '\n  %s:\t%s' % (filt.name, filt.description)
    parser.description += filter_help

    # parse command line args
    args = parser.parse_args()
    inp = vcf.Reader(open(args.input[0]))  # open(), not Python 2's file()

    # build filter chain
    chain = []
    for name in args.filters:
        f = filters[name](args)
        chain.append(f)
        inp.filters[f.filter_name()] = _Filter(f.filter_name(), f.description)
    oup = vcf.Writer(args.output, inp)

    # apply filters
    short_circuit = not args.no_short_circuit

    for record in inp:
        for filt in chain:
            result = filt(record)
            if result:
                record.add_filter(filt.filter_name())
                if short_circuit:
                    break
        if (not args.no_filtered) or (record.FILTER == '.'):
            oup.write_record(record)
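The record loop at the end is the heart of the script: every filter in the chain runs against each record, a truthy return value marks a failure, and unless `--no-short-circuit` was given, processing stops at the first failure. The same control flow in isolation (dict records and lambda filters are stand-ins for PyVCF's objects):

```python
def apply_filters(records, chain, short_circuit=True):
    """Run each (name, filter) pair over each record; a filter returns a
    truthy value on failure. Collect failure names per record, stopping
    at the first hit when short_circuit is True, mirroring the main loop
    of vcf_filter.py above."""
    results = []
    for record in records:
        failed = []
        for name, filt in chain:
            if filt(record):
                failed.append(name)
                if short_circuit:
                    break
        results.append((record, failed))
    return results


# Hypothetical filters keyed by the fields of a dict "record".
chain = [
    ("lowqual", lambda r: r["qual"] < 30),
    ("depth", lambda r: r["depth"] < 10),
]
records = [{"qual": 50, "depth": 20}, {"qual": 10, "depth": 5}]
```

With short-circuiting on, the second record fails `lowqual` and never reaches `depth`; with it off, both failure names are recorded — which is exactly the trade-off the `--no-short-circuit` flag exposes (complete annotation versus less work per site).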
# --- features/cpp/simple/test.py (xbabka01/retdec-regression-tests, MIT) ---

from regression_tests import *
class TestBase(Test):
    def test_for_main(self):
        assert self.out_c.has_funcs('main') or self.out_c.has_funcs('entry_point')

    def test_check_main_is_not_ctor_or_dtor(self):
        for c in self.out_config.classes:
            assert "main" not in c.constructors
            assert "main" not in c.destructors


class TestAll(TestBase):
    settings = TestSettings(
        input=files_in_dir('inputs/symbols'),
        args='-k'
    )

    def test_for_string(self):
        # printf() is used -> '\n' at the end of the string
        # puts() is used -> no '\n' at the end of the string
        assert self.out_c.has_string_literal_matching(r'ClassA::ClassA(\\n)?')
        assert self.out_c.has_string_literal_matching(r'%i %i(\\n)?')
        assert self.out_c.has_string_literal_matching(r'~ClassA::ClassA(\\n)?')

    def test_for_vtables(self):
        assert self.out_config.vtable_count == 1
        vtable = self.out_config.vtables[0]
        assert vtable.item_count == 1
        assert "doSomething" in vtable.items[0].target_name

    def test_for_classes(self):
        assert self.out_config.classes_count == 1
        c = self.out_config.classes[0]
        assert len(c.constructors) == 2
        assert len(c.destructors) == 2
        assert len(c.virtualMethods) == 1


class TestAllStripped(TestBase):
    settings = TestSettings(
        input=files_in_dir('inputs/stripped'),
        args='-k'
    )

    def test_for_vtables(self):
        assert self.out_config.vtable_count == 1
        vtable = self.out_config.vtables[0]
        assert vtable.item_count == 1
        assert vtable.items[0].target_name  # there is some (non-empty) function name

    def test_for_classes(self):
        assert self.out_config.classes_count == 1
        c = self.out_config.classes[0]
        assert len(c.virtualMethods) == 1
        assert len(c.constructors) == 2
        assert len(c.destructors) == 2


class TestMsvc(TestBase):
    settings = TestSettings(
        input='inputs/msvc/simple-msvc-release.ex',
        args='-k'
    )
    settings_d = TestSettings(
        input='inputs/msvc/simple-msvc-debug.ex',
        args='-k'
    )

    def test_for_string(self):
        assert self.out_c.has_string_literal('ClassA::ClassA\\n')
        assert self.out_c.has_string_literal('~ClassA::ClassA\\n')
        assert self.out_c.has_string_literal('%i %i\\n')

    def test_for_vtables(self):
        assert self.out_config.vtable_count == 2
        vtable1 = self.out_config.vtables[0]
        assert vtable1.item_count == 1
        vtable2 = self.out_config.vtables[1]  # second vtable; original indexed [0] twice
        assert vtable2.item_count == 1
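Judging by these assertions, `has_string_literal_matching` treats its argument as a regex over the string literals of the decompiled C, so `(\\n)?` accepts both the `printf` form (trailing escaped newline) and the `puts` form (none). A rough stdlib sketch of such a check — a hypothetical helper, not retdec's implementation:

```python
import re


def has_string_literal_matching(c_source: str, pattern: str) -> bool:
    """Scan double-quoted literals in C source text and report whether any
    fully matches `pattern`. Escapes are compared textually, as in the
    tests above: '\\n' in the pattern matches a literal backslash-n."""
    literals = re.findall(r'"((?:[^"\\]|\\.)*)"', c_source)
    return any(re.fullmatch(pattern, lit) for lit in literals)


src = 'puts("ClassA::ClassA"); printf("%i %i\\n", a, b);'
```

Against this `src`, both `r'ClassA::ClassA(\\n)?'` (no newline, `puts` form) and `r'%i %i(\\n)?'` (escaped newline, `printf` form) match, while `r'~ClassA::ClassA(\\n)?'` does not — the same distinction the comments in `TestAll.test_for_string` describe.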