blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 281 | content_id stringlengths 40 40 | detected_licenses listlengths 0 57 | license_type stringclasses 2
values | repo_name stringlengths 6 116 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 313
values | visit_date timestamp[us] | revision_date timestamp[us] | committer_date timestamp[us] | github_id int64 18.2k 668M ⌀ | star_events_count int64 0 102k | fork_events_count int64 0 38.2k | gha_license_id stringclasses 17
values | gha_event_created_at timestamp[us] | gha_created_at timestamp[us] | gha_language stringclasses 107
values | src_encoding stringclasses 20
values | language stringclasses 1
value | is_vendor bool 2
classes | is_generated bool 2
classes | length_bytes int64 4 6.02M | extension stringclasses 78
values | content stringlengths 2 6.02M | authors listlengths 1 1 | author stringlengths 0 175 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
24d47ac4ca8378babe954e43126c1e0381473304 | b19bc472fcbd636a3c169144ec358fe8f4c1f206 | /encapsulation/calculatorStatic.py | 4f1e37a3cd151a2a4fcbfc3150cc2b853d3582dd | [] | no_license | stellapark401/python-oop | b471f86b877503620c515f972b64b9c9e64826d6 | 7a2e923dc6b0718b16b5a1b27b5e837421897d31 | refs/heads/master | 2023-04-27T00:08:54.463086 | 2021-05-28T10:30:53 | 2021-05-28T10:30:53 | 368,100,021 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 991 | py | class CalculatorStatic(object):
def __init__(self, first, second):
self.first = first
self.second = second
def add(self):
return self.first + self.second
def sub(self):
return self.first - self.second
def mul(self):
return self.first * self.second
def div(self):
if self.second != 0:
return self.first / self.second
else:
return 'This calculation can not be done.'
@staticmethod
def mAdd():
cs = CalculatorStatic(3, 0)
print(cs.add())
@staticmethod
def mSub():
cs = CalculatorStatic(3, 0)
print(cs.sub())
@staticmethod
def mMul():
cs = CalculatorStatic(3, 0)
print(cs.mul())
@staticmethod
def mDiv():
cs = CalculatorStatic(3, 0)
print(cs.div())
if __name__ == '__main__':
CalculatorStatic.mAdd()
CalculatorStatic.mSub()
CalculatorStatic.mMul()
CalculatorStatic.mDiv()
| [
"stellapark401@gmail.com"
] | stellapark401@gmail.com |
d6791080322a2b56b63de5ea3f1504caa713340c | d226fea1b2220616141600fb8bc5b251f5c9c460 | /.ipynb_checkpoints/parse-checkpoint.py | a9dccd2d2735498c011d9802382f7274a1c1d237 | [] | no_license | colobas/track-and-field-data | 25210572dedb59871352c7eae19b4523011ea1ca | 94239ef440b1b7cae3dc68332169fedcfb06a6f4 | refs/heads/master | 2020-05-22T20:51:27.284444 | 2019-08-03T16:37:29 | 2019-08-03T16:37:29 | 186,515,327 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 76 | py | # %%
with open("parsed.txt") as f:
txt = f.read()
# %%
print(txt)
# %%
| [
"mail@gpir.es"
] | mail@gpir.es |
a570bfa4558c82748471c60e1e492652bbd4c4b6 | a66460a46611483dfbdc94c7996893f427e60d97 | /ansible/my_env/lib/python2.7/site-packages/ansible/modules/network/aci/aci_domain_to_vlan_pool.py | 41bc3c7d07d911e29c5a39ab423b1aaf06d43a9d | [
"MIT"
] | permissive | otus-devops-2019-02/yyashkin_infra | 06b57807dde26f94f501828c07503d6bf1d70816 | 0cd0c003884155ac922e3e301305ac202de7028c | refs/heads/master | 2020-04-29T02:42:22.056724 | 2019-05-15T16:24:35 | 2019-05-15T16:24:35 | 175,780,718 | 0 | 0 | MIT | 2019-05-15T16:24:36 | 2019-03-15T08:37:35 | HCL | UTF-8 | Python | false | false | 10,592 | py | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Dag Wieers <dag@wieers.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'certified'}
DOCUMENTATION = r'''
---
module: aci_domain_to_vlan_pool
short_description: Bind Domain to VLAN Pools (infra:RsVlanNs)
description:
- Bind Domain to VLAN Pools on Cisco ACI fabrics.
notes:
- The C(domain) and C(vlan_pool) parameters should exist before using this module.
The M(aci_domain) and M(aci_vlan_pool) can be used for these.
- More information about the internal APIC class B(infra:RsVlanNs) from
L(the APIC Management Information Model reference,https://developer.cisco.com/docs/apic-mim-ref/).
author:
- Dag Wieers (@dagwieers)
version_added: '2.5'
options:
domain:
description:
- Name of the domain being associated with the VLAN Pool.
aliases: [ domain_name, domain_profile ]
domain_type:
description:
- Determines if the Domain is physical (phys) or virtual (vmm).
choices: [ fc, l2dom, l3dom, phys, vmm ]
pool:
description:
- The name of the pool.
aliases: [ pool_name, vlan_pool ]
pool_allocation_mode:
description:
- The method used for allocating VLANs to resources.
choices: [ dynamic, static]
required: yes
aliases: [ allocation_mode, mode ]
state:
description:
- Use C(present) or C(absent) for adding or removing.
- Use C(query) for listing an object or multiple objects.
choices: [ absent, present, query ]
default: present
vm_provider:
description:
- The VM platform for VMM Domains.
- Support for Kubernetes was added in ACI v3.0.
- Support for CloudFoundry, OpenShift and Red Hat was added in ACI v3.1.
choices: [ cloudfoundry, kubernetes, microsoft, openshift, openstack, redhat, vmware ]
extends_documentation_fragment: aci
'''
EXAMPLES = r'''
- name: Bind a VMM domain to VLAN pool
aci_domain_to_vlan_pool:
host: apic
username: admin
password: SomeSecretPassword
domain: vmw_dom
domain_type: vmm
pool: vmw_pool
pool_allocation_mode: dynamic
vm_provider: vmware
state: present
delegate_to: localhost
- name: Remove a VMM domain to VLAN pool binding
aci_domain_to_vlan_pool:
host: apic
username: admin
password: SomeSecretPassword
domain: vmw_dom
domain_type: vmm
pool: vmw_pool
pool_allocation_mode: dynamic
vm_provider: vmware
state: absent
delegate_to: localhost
- name: Bind a physical domain to VLAN pool
aci_domain_to_vlan_pool:
host: apic
username: admin
password: SomeSecretPassword
domain: phys_dom
domain_type: phys
pool: phys_pool
pool_allocation_mode: static
state: present
delegate_to: localhost
- name: Bind a physical domain to VLAN pool
aci_domain_to_vlan_pool:
host: apic
username: admin
password: SomeSecretPassword
domain: phys_dom
domain_type: phys
pool: phys_pool
pool_allocation_mode: static
state: absent
delegate_to: localhost
- name: Query an domain to VLAN pool binding
aci_domain_to_vlan_pool:
host: apic
username: admin
password: SomeSecretPassword
domain: phys_dom
domain_type: phys
pool: phys_pool
pool_allocation_mode: static
state: query
delegate_to: localhost
register: query_result
- name: Query all domain to VLAN pool bindings
aci_domain_to_vlan_pool:
host: apic
username: admin
password: SomeSecretPassword
domain_type: phys
state: query
delegate_to: localhost
register: query_result
'''
RETURN = r'''
current:
description: The existing configuration from the APIC after the module has finished
returned: success
type: list
sample:
[
{
"fvTenant": {
"attributes": {
"descr": "Production environment",
"dn": "uni/tn-production",
"name": "production",
"nameAlias": "",
"ownerKey": "",
"ownerTag": ""
}
}
}
]
error:
description: The error information as returned from the APIC
returned: failure
type: dict
sample:
{
"code": "122",
"text": "unknown managed object class foo"
}
raw:
description: The raw output returned by the APIC REST API (xml or json)
returned: parse error
type: string
sample: '<?xml version="1.0" encoding="UTF-8"?><imdata totalCount="1"><error code="122" text="unknown managed object class foo"/></imdata>'
sent:
description: The actual/minimal configuration pushed to the APIC
returned: info
type: list
sample:
{
"fvTenant": {
"attributes": {
"descr": "Production environment"
}
}
}
previous:
description: The original configuration from the APIC before the module has started
returned: info
type: list
sample:
[
{
"fvTenant": {
"attributes": {
"descr": "Production",
"dn": "uni/tn-production",
"name": "production",
"nameAlias": "",
"ownerKey": "",
"ownerTag": ""
}
}
}
]
proposed:
description: The assembled configuration from the user-provided parameters
returned: info
type: dict
sample:
{
"fvTenant": {
"attributes": {
"descr": "Production environment",
"name": "production"
}
}
}
filter_string:
description: The filter string used for the request
returned: failure or debug
type: string
sample: ?rsp-prop-include=config-only
method:
description: The HTTP method used for the request to the APIC
returned: failure or debug
type: string
sample: POST
response:
description: The HTTP response from the APIC
returned: failure or debug
type: string
sample: OK (30 bytes)
status:
description: The HTTP status from the APIC
returned: failure or debug
type: int
sample: 200
url:
description: The HTTP url used for the request to the APIC
returned: failure or debug
type: string
sample: https://10.11.12.13/api/mo/uni/tn-production.json
'''
from ansible.module_utils.network.aci.aci import ACIModule, aci_argument_spec
from ansible.module_utils.basic import AnsibleModule
VM_PROVIDER_MAPPING = dict(
cloudfoundry='CloudFoundry',
kubernetes='Kubernetes',
microsoft='Microsoft',
openshift='OpenShift',
openstack='OpenStack',
redhat='Redhat',
vmware='VMware',
)
def main():
argument_spec = aci_argument_spec()
argument_spec.update(
domain=dict(type='str', aliases=['domain_name', 'domain_profile']), # Not required for querying all objects
domain_type=dict(type='str', required=True, choices=['fc', 'l2dom', 'l3dom', 'phys', 'vmm']), # Not required for querying all objects
pool=dict(type='str', aliases=['pool_name', 'vlan_pool']), # Not required for querying all objects
pool_allocation_mode=dict(type='str', required=True, aliases=['allocation_mode', 'mode'], choices=['dynamic', 'static']),
state=dict(type='str', default='present', choices=['absent', 'present', 'query']),
vm_provider=dict(type='str', choices=['cloudfoundry', 'kubernetes', 'microsoft', 'openshift', 'openstack', 'redhat', 'vmware']),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=[
['domain_type', 'vmm', ['vm_provider']],
['state', 'absent', ['domain', 'domain_type', 'pool']],
['state', 'present', ['domain', 'domain_type', 'pool']],
],
)
domain = module.params['domain']
domain_type = module.params['domain_type']
pool = module.params['pool']
pool_allocation_mode = module.params['pool_allocation_mode']
vm_provider = module.params['vm_provider']
state = module.params['state']
# Report when vm_provider is set when type is not virtual
if domain_type != 'vmm' and vm_provider is not None:
module.fail_json(msg="Domain type '{0}' cannot have a 'vm_provider'".format(domain_type))
# ACI Pool URL requires the allocation mode for vlan and vsan pools (ex: uni/infra/vlanns-[poolname]-static)
pool_name = pool
if pool is not None:
pool_name = '[{0}]-{1}'.format(pool, pool_allocation_mode)
# Compile the full domain for URL building
if domain_type == 'fc':
domain_class = 'fcDomP'
domain_mo = 'uni/fc-{0}'.format(domain)
domain_rn = 'fc-{0}'.format(domain)
elif domain_type == 'l2dom':
domain_class = 'l2extDomP'
domain_mo = 'uni/l2dom-{0}'.format(domain)
domain_rn = 'l2dom-{0}'.format(domain)
elif domain_type == 'l3dom':
domain_class = 'l3extDomP'
domain_mo = 'uni/l3dom-{0}'.format(domain)
domain_rn = 'l3dom-{0}'.format(domain)
elif domain_type == 'phys':
domain_class = 'physDomP'
domain_mo = 'uni/phys-{0}'.format(domain)
domain_rn = 'phys-{0}'.format(domain)
elif domain_type == 'vmm':
domain_class = 'vmmDomP'
domain_mo = 'uni/vmmp-{0}/dom-{1}'.format(VM_PROVIDER_MAPPING[vm_provider], domain)
domain_rn = 'vmmp-{0}/dom-{1}'.format(VM_PROVIDER_MAPPING[vm_provider], domain)
# Ensure that querying all objects works when only domain_type is provided
if domain is None:
domain_mo = None
aci_mo = 'uni/infra/vlanns-{0}'.format(pool_name)
aci = ACIModule(module)
aci.construct_url(
root_class=dict(
aci_class=domain_class,
aci_rn=domain_rn,
module_object=domain_mo,
target_filter={'name': domain},
),
child_classes=['infraRsVlanNs'],
)
aci.get_existing()
if state == 'present':
aci.payload(
aci_class=domain_class,
class_config=dict(name=domain),
child_configs=[
{'infraRsVlanNs': {'attributes': {'tDn': aci_mo}}},
]
)
aci.get_diff(aci_class=domain_class)
aci.post_config()
elif state == 'absent':
aci.delete_config()
aci.exit_json()
if __name__ == "__main__":
main()
| [
"theyashkins@gmail.com"
] | theyashkins@gmail.com |
00fabe3ab510b8ce79191b5b18d581764060b961 | b7c5fef61e8c1752286e01c9cf99db81108aa5c4 | /유형/구현/상하좌우/상하좌우.py | 232c193f91611c8e88f820639bc8d32cbdc681b5 | [] | no_license | hyena0608/CoTe | bb12cee08c2be4d63b1ecbefd232384fc1c536aa | 4b2ed46a926e7f1ca90aa2ae07485bc4061869b6 | refs/heads/master | 2023-08-18T10:24:47.616765 | 2021-10-04T11:57:30 | 2021-10-04T11:57:30 | 395,150,124 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 334 | py | n = int(input())
plans = input().split()
x, y = 1, 1
move_types = ['L', 'R', 'U', 'D']
dx = [0, 0, -1, 1]
dy = [-1, 1, 0, 0]
for plan in plans:
for i in range(len(move_types)):
if move_types[i] == plan:
x += dx[i]
y += dy[i]
if x < 1 or y < 1 or x > n or y >n:
x -= dx[i]
y -= dy[i]
print(x, y) | [
"82203978+hyena0608@users.noreply.github.com"
] | 82203978+hyena0608@users.noreply.github.com |
eef83d995d71d7cfa84e901c195269cede28bfb4 | df93acce3d8157243e0bbd22779de2084b634ce8 | /utility.py | 9fc94ad43772ec30deffde115e3a11bf7a1a9972 | [] | no_license | endy-see/PaddleEast | 70be10759220f78bb00efd6126055d63c1d8f927 | e80e6be9f708401e2c044c5a3a7e30aef4b3518d | refs/heads/master | 2022-03-22T07:40:41.652741 | 2019-12-09T03:16:33 | 2019-12-09T03:18:19 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,454 | py | """
Contains common utility functions.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import distutils.util
import argparse
import functools
from config import *
import collections
from collections import deque
import datetime
import paddle.fluid as fluid
from logging import getLogger
def add_arguments(argname, type, default, help, argparser, **kwargs):
"""
Add argparse's argument.
Usage:
parser = argparse.ArgumentParser()
add_arguments("name", str, "Jonh", "User name.", parser)
args = parser.parse_args()
"""
type = distutils.util.strtobool if type == bool else type
argparser.add_argument(
"--" + argname,
default=default,
type=type,
help=help + ' Default: %(default)s.',
**kwargs
)
def parse_args():
"""return all args"""
parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)
# ENV
add_arg('parallel', bool, True, "Whether use parallel.")
add_arg('use_gpu', bool, True, "Whether use GPU.")
add_arg('model_save_dir', str, 'output', "The path to save model.")
add_arg('pretrained_model', str, 'res50', "The init model path.")
add_arg('dataset', str, 'coco2017', "ICDAR2015.")
add_arg('data_dir', str, 'dataset/icdar', "The data root path.")
add_arg('class_num', str, '81', "Class number.")
add_arg('use_pyreader', bool, True, "Use pyreader.")
add_arg('padding_minibatch', bool, False, "If False, only resize image and not pad,image shape is different between GPUs in one mini-batch. If True, image shape is the same in one mini-batch.")
#SOLVER
add_arg('learning_rate', float, 0.01, "Learning rate(default 8 GPUs.)")
add_arg('max_iter', int, 180000, "Iter number.")
add_arg('log_window', int, 20, "Log smooth window, set 1 for debug, set 20 for train.")
# EAST todo
# TRAIN VAL INFER
add_arg('im_per_batch', int, 1, "Minibatch size.")
add_arg('max_size', int, 1333, "The resized image max height.")
add_arg('scale', int, [800], "The resized image height.")
add_arg('batch_size_per_im', int, 512, "East batch size.")
add_arg('pixel_means', float, [102.9801, 115.9465, 122.7717], "pixel mean.")
add_arg('nms_thresh', float, 0.5, "NMS threshold.")
add_arg('score_thresh', float, 0.05, "Score threshold for NMS.")
add_arg('snapshot_stride', int, 2000, "Save model every snapshot stride.")
# SINGLE EVAL AND DRAW
add_arg('draw_threshold', float, 0.8, "Confidence threshold to draw bbox.")
add_arg('image_path', str, 'dataset/icdar/val', "The image path used to inference and visualize.")
args = parser.parse_args()
file_name = sys.argv[0]
if 'train' in file_name:
merge_cfg_from_args(args, 'train')
else:
merge_cfg_from_args(args, 'val')
return args
def print_arguments(args):
"""
Print argparse's arguments
Usage:
parser = argparse.ArgumentParser()
parser.add_argument("name", default="John", type=str, help="User name.")
args = parser.parse_args()
print_arguments(args)
"""
print("----------- Configuration Arguments -----------")
for arg, value in sorted(six.iteritems(vars(args))):
print("%s, %s" % (arg, value))
print("------------------------------------------------")
class SmoothedValue(object):
"""
Track a series of value and provide access to smoothed values over a window or the global series average.
"""
def __init__(self, window_size):
self.deque = deque(maxlen=window_size)
def add_value(self, value):
self.deque.append(value)
def get_median_value(self):
return np.median(self.deque)
class TrainingStats(object):
def __init__(self, window_size, stats_keys):
self.smoothed_losses_and_matrics = {
key: SmoothedValue(window_size) for key in stats_keys
}
def update(self, stats):
for k, v in self.smoothed_losses_and_matrics.items():
v.add_value(stats[k])
def get(self, extras=None):
stats = collections.OrderedDict()
if extras:
for k, v in extras.items():
stats[k] = v
for k, v in self.smoothed_losses_and_matrics.items():
stats[k] = round(v.get_median_value(), 3)
return stats
def log(self, extras=None):
d = self.get(extras)
strs = ', '.join(str(dict({x: y})).strip('{}') for x, y in d.items())
return strs
def now_time():
return datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')
def check_gpu(use_gpu):
"""
Log error and exit when set use_gpu=true in paddlepaddle cpu version
:param use_gpu:
:return:
"""
err = "Config use_gpu cannot be set as true while you are " \
"using paddlepaddle cpu version ! \nPlease try: \n" \
"\t1. Install paddlepaddle-gpu to run model on GPU \n" \
"\t2. Set use_gpu as false in config file to run " \
"model on CPU"
try:
if use_gpu and not fluid.is_compiled_with_cuda():
logger = getLogger('MachineErrorLog')
logger.error(err)
sys.exit(1)
except Exception as e:
pass
| [
"luyang13@baidu.com"
] | luyang13@baidu.com |
144507f6e4c79f36c38c7b5f1c05dda29b7d318a | c938d1dec12a773cf97d847bdfd2c969af038731 | /day-02/part-1/david.py | 76def08c1d5052be393bb6678a66e9631e36f2f3 | [
"MIT"
] | permissive | SilvestrePerret/adventofcode-2020 | c35ffac8271bc40d8e8e5522a30970708c86e250 | c73ab4f03a66153dca5f8f0d9a7d5d010bb9eb22 | refs/heads/master | 2023-08-25T16:00:04.539307 | 2021-10-20T13:40:16 | 2021-10-20T13:40:16 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 687 | py | from tool.runners.python import SubmissionPy
class DavidSubmission(SubmissionPy):
def parse_line(self, line):
rule, password = line.split(": ")
count, letter = rule.split(" ")
lower_bound, upper_bound = map(int, count.split("-"))
return lower_bound, upper_bound, letter, password
def is_valid(self, line):
lower_bound, upper_bound, letter, password = self.parse_line(line)
return lower_bound <= password.count(letter) <= upper_bound
def run(self, s):
"""
:param s: input in string format
:return: solution flag
"""
return sum(1 for line in s.split("\n") if self.is_valid(line))
| [
"david-ds@users.noreply.github.com"
] | david-ds@users.noreply.github.com |
92279c565a333bce578b68e5b5ad1f260f4c7b8d | 0160b7f33574c59b5a835cf88945c8c20485c17f | /clothes/migrations/0010_auto_20181224_1001.py | 166b2d27ab63fab08b80741798ee40166b7c3f71 | [] | no_license | Caebixus/ComfyPenguin | a7731e8a8e96223c659963087817941873c4543a | b9e3ef5ab1a335b04897f572644db5d9987a379a | refs/heads/master | 2022-03-02T01:33:47.671335 | 2019-11-13T20:21:13 | 2019-11-13T20:21:13 | 166,684,586 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,254 | py | # Generated by Django 2.1.3 on 2018-12-24 10:01
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('clothes', '0009_auto_20181224_1000'),
]
operations = [
migrations.AlterField(
model_name='product_clothes',
name='item_asin',
field=models.CharField(max_length=40, null=True),
),
migrations.AlterField(
model_name='product_clothes',
name='item_category',
field=models.CharField(choices=[('Sweatshirt', 'Sweatshirt'), ('Hoodies', 'Hoodies'), ('Jumper', 'Jumper')], max_length=40, null=True),
),
migrations.AlterField(
model_name='product_clothes',
name='item_currency',
field=models.CharField(choices=[('EUR', 'EUR'), ('GPB', 'GPB'), ('USD', 'USD')], max_length=3, null=True),
),
migrations.AlterField(
model_name='product_clothes',
name='item_gender',
field=models.CharField(choices=[('Male', 'Male'), ('Female', 'Female'), ('Unisex', 'Unisex')], max_length=40, null=True),
),
migrations.AlterField(
model_name='product_clothes',
name='item_height',
field=models.IntegerField(default=72, help_text='Use height in centimeters', null=True),
),
migrations.AlterField(
model_name='product_clothes',
name='item_price',
field=models.DecimalField(decimal_places=2, help_text='Price of product even with decimals', max_digits=10, null=True),
),
migrations.AlterField(
model_name='product_clothes',
name='item_title',
field=models.CharField(max_length=35, null=True),
),
migrations.AlterField(
model_name='product_clothes',
name='item_url',
field=models.URLField(help_text='To add size to link on Amazon - use Short links', max_length=150, null=True),
),
migrations.AlterField(
model_name='product_clothes',
name='item_width',
field=models.IntegerField(default=52, help_text='Use width in centimeters', null=True),
),
]
| [
"ma.davidlangr@gmail.com"
] | ma.davidlangr@gmail.com |
6987d8b64af2daa1596f4ec3babc18a794df0226 | 060818af56141e5e352606179422fe1c25e86c0b | /pipe.py | 9fde0a3848d52b43b5949f9aacb467f3cdf91e03 | [] | no_license | xxchaosxx210/TextureGame | 7d5d2cad65a9f59ca3707f41065f74a9e712ebe0 | 25f33d6e615ce60c8affae737a3c63013402e444 | refs/heads/master | 2023-02-22T13:14:13.266687 | 2021-01-28T09:58:24 | 2021-01-28T09:58:24 | 332,324,775 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,595 | py | from kivy.uix.widget import Widget
from kivy.uix.image import Image
from kivy.properties import NumericProperty
from kivy.properties import ObjectProperty
from kivy.properties import ListProperty
from kivy.clock import Clock
class Pipe(Widget):
# Numeric Attributes
GAP_SIZE = NumericProperty(100)
CAP_SIZE = NumericProperty(20) # Height of pipe_cap.png
pipe_centre = NumericProperty(0)
bottom_body_position = NumericProperty(0)
bottom_cap_position = NumericProperty(0)
top_body_position = NumericProperty(0)
top_cap_position = NumericProperty(0)
# Texture
pipe_body_texture = ObjectProperty(None)
lower_pipe_tex_coords = ListProperty((0, 0, 1, 0, 1, 1, 0, 1))
top_pipe_tex_coords = ListProperty((0, 0, 1, 0, 1, 1, 0, 1))
def __init__(self, **kwargs):
super(Pipe, self).__init__(**kwargs)
self.pipe_body_texture = Image(source="pipe_body.png").texture
self.pipe_body_texture.wrap = "repeat"
def on_size(self, *args):
# handling texture repeat when window is resized
lower_body_size = self.bottom_cap_position - self.bottom_body_position
self.lower_pipe_tex_coords[5] = lower_body_size / 20.
self.lower_pipe_tex_coords[7] = lower_body_size / 20.
top_body_size = self.top - self.top_body_position
self.top_pipe_tex_coords[5] = top_body_size/20.0
self.top_pipe_tex_coords[7] = top_body_size/20.0
def on_pipe_centre(self, *args):
# call the on_size event in a different thread
Clock.schedule_once(self.on_size, 0) | [
"pmillar1279@googlemail.com"
] | pmillar1279@googlemail.com |
8c1c4700fb5c07dad874faed9390600a71797e7e | 698bf0c7e447184a921ead50b6a3cb2d35d63549 | /store/urls.py | 675ae2bd4c6e6ef8e41b87c58901d3b33d213c02 | [] | no_license | sherzod1997/ecommerce | 5bec9ebbcb8dd4d12e989c7a68e10a2b095f2814 | b916ca4a1bc42c4f36b8215c3d94c001aef90cbd | refs/heads/main | 2023-08-10T23:30:37.135813 | 2021-09-15T12:19:15 | 2021-09-15T12:19:15 | 397,697,496 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 334 | py | from django.urls import path
from . import views
urlpatterns=[
path('',views.store,name="store"),
path('cart/',views.cart,name="cart"),
path('checkout/',views.checkout,name="checkout"),
path('update_item/',views.updateItem,name="update_item"),
path('process_order/',views.processOrder,name="process_order"),
]
| [
"mr.sherzod1997@gmail.com"
] | mr.sherzod1997@gmail.com |
658951e0e2912cb0850ebd0ce917ab88204143f9 | 30940f44a2800ef0200fbd1b19f354864e980e05 | /Python/과제/201632023_14/lab5_11.py | 25baaec9f7ad67f4eea625caf4b40d6933c055f5 | [] | no_license | H2SU/skhu-report | 942ca8cfd93d77595588d0e5e84355bceded9143 | 533a6f31b52cb7dd7617b4b3da41f116cf468d22 | refs/heads/master | 2021-01-20T22:24:07.326121 | 2018-06-25T05:57:43 | 2018-06-25T05:57:43 | 101,819,995 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,277 | py | """lab5_11
주제 : 분수
작성일 : 17. 11. 05.
작성자 : 201632023 이지훈
"""
class fraction:
def __init__(self,numer,denom):
"""
분수 초기화
:param numer: 분모
:param denom: 분자
"""
self.numer = numer
self.denom = denom
def prnt(self):
"""
출력하는 메소드
:return: 없음
"""
return "출력 : " + str(self.denom) + "/" + str(self.numer)
def __str__(self):
"""
:return: 문자열
"""
return "출력 : "+str(self.denom)+"/"+str(self.numer)
def __add__(self,other):
"""
두 분수를 더해주는 메소드
:param other: 더할 분수
:return: 두 분자의 합의 결과
"""
add = self.numer * other.denom + self.denom * other.numer
c = self.numer * other.numer
if(add == 0):
return fraction(0,0)
for i in range(2,c):
if(add%i == 0 and c%i == 0):
add = add//i
c = c//i
i = 2
s = fraction(c,add)
return s
def __sub__(self, other):
"""
두 분수를 더해주는 메소드
:param other: 더할 분수
:return: 두 분자의 합의 결과
"""
sub = self.denom * other.numer - self.numer * other.denom
c = self.numer * other.numer
if (sub == 0):
return fraction(0,0)
for i in range(2,c):
if(sub%i == 0 and c%i == 0):
sub = sub//i
c = c//i
i = 2
s = fraction(c, sub)
return s
def __eq__(self, other):
"""
두 분수가 같은지 비교해주는 메소드
:param other: 비교할 메소드
:return: True or False
"""
if(self.denom * other.numer == self.numer * other.denom):
return True
else:
return False
def __ne__(self, other):
"""
두 분수가 같지 않은지 비교해주는 메소드
:param other: 비교할 메소드
:return: True or False
"""
if (self.denom * other.numer != self.numer * other.denom):
return True
else:
return False
def __lt__(self, other):
"""
두 분수중 other가 큰지 비교해 주는 메소드
:param other: 비교할 메소드
:return: True or False
"""
if (self.denom * other.numer < self.numer * other.denom):
return True
else:
return False
def __gt__(self, other):
"""
두 분수중 self가 큰지 비교해 주는 메소드
:param other: 비교할 메소드
:return: True or False
"""
if (self.denom * other.numer > self.numer * other.denom):
return True
else:
return False
c1 = fraction(4,2)
c2 = fraction(7,4)
print("1번 분수",c1)
print("2번 분수",c2)
print("덧셈",c1+c2)
print("뺄셈",c1 - c2)
print("두 분수가 같은가? :",c1 == c2)
print("뒤쪽 분수가 더 큰가? :",c1 < c2)
print("앞쪽 분수가 더 큰가? :",c1 > c2)
print("두 분수가 같지 않은가? :",c1 != c2)
| [
"hisu138@gmail.com"
] | hisu138@gmail.com |
59fc04f65d00227209003ea2836478c904403fdb | 34f365117eb1d846fa922c24f3fc650188ce9746 | /bin/bed2countString.py | a029e30899b4529ca74ab313a5ed6655280c54f6 | [
"MIT"
] | permissive | PinarSiyah/NGStoolkit | 53ac6d87a572c498414a246ae051785b40fbc80d | b360da965c763de88c9453c4fd3d3eb7a61c935d | refs/heads/master | 2021-10-22T04:49:51.153970 | 2019-03-08T08:03:28 | 2019-03-08T08:03:28 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 744 | py | #!/usr/bin/env python
import os
import sys
import bed
import generalUtils
import argparse
import random
parser = argparse.ArgumentParser(description='count strings')
parser.add_argument('-i', required= True, help='input')
parser.add_argument('-o', required= True, help='output')
parser.add_argument('-fasta', required= True, help='fasta reference')
parser.add_argument('-string', required= True, help='string to count')
args = parser.parse_args()
bedFile = args.i
output = args.o
fasta = args.fasta
string = args.string
bedObject = bed.bed(bedFile)
for bedline in bedObject.read():
count = bedline.countString(fasta, string)
fields = bedline.fields()
fields.append(str(count))
line = bedline.fields2line(fields)
out.write(line + '\n') | [
"adebali@users.noreply.github.com"
] | adebali@users.noreply.github.com |
436dc03754786b630e14afac489c70e306ef75a0 | ff67c6f021a303c657273a51e46903527438efda | /eahub-fight-service/fights/tests.py | 44520ddcb5f16b619fd1f0455bb4ef76b9454920 | [] | no_license | bsmi021/aws_rpggame | 4f7192319c0d3a5f4d9924ccb41b25f71de44c57 | 618c275abc244dbe8eb0ab7c5948a2330debbc29 | refs/heads/master | 2023-02-14T21:35:31.201624 | 2019-09-05T09:01:19 | 2019-09-05T09:01:19 | 198,040,288 | 0 | 0 | null | 2021-01-05T13:16:24 | 2019-07-21T09:57:58 | Python | UTF-8 | Python | false | false | 4,936 | py | from datetime import datetime
from models import FightModel, Enemy, Character, AttackModel, CharacterFightModel
from utils import ModelEncoder
import json
import multiprocessing
import os
from uuid import uuid4
import random
import time
os.environ['REGION'] = 'us-east-2'
os.environ['ENV'] = '1'
def calc_dodge(dodge_chance):
return random.randint(1, 100) * .01 <= dodge_chance
def calc_block(block_chance):
return random.randint(1, 100) * .01 <= block_chance
def calc_critical(crit_chance):
return random.randint(1, 100) * .01 <= crit_chance
def calc_missed():
return random.randint(1, 100) * .01 <= .042
def calc_attack(player, fight):
attack_amt = random.randint(player.min_damage, player.max_damage)
is_missed = False
is_blocked = False
is_dodged = False
is_critical = False
blocked_amt = 0
is_missed = calc_missed()
if is_missed:
attack_amt = 0
is_critical = calc_critical(.21)
if is_critical and not is_missed:
attack_amt = round(attack_amt * 2.5)
if fight.enemy.can_block:
is_blocked = calc_block(fight.enemy.block_pct)
if is_blocked and not is_missed:
blocked = round(attack_amt * fight.enemy.block_amt)
blocked_amt = attack_amt - blocked
attack_amt = blocked
if fight.enemy.can_dodge:
if not is_blocked and not is_missed:
is_dodged = calc_dodge(fight.enemy.dodge_pct)
if is_dodged:
attack_amt = 0
return {'attack_amt': attack_amt,
'is_blocked': is_blocked,
'is_dodged': is_dodged,
'is_missed': is_missed,
'is_critical': is_critical,
'block_amt': blocked_amt}
def sim_player(player, fight):
while fight.enemy.status == 'ALIVE':
time.sleep((player.attack_speed * .001))
response = calc_attack(player, fight)
attack_amt = response['attack_amt']
if attack_amt > 0:
fight.get(fight.id)
fight.update_enemy_hp(attack_amt)
attack = AttackModel(source_id=player.id, source_tp='C', target_id=fight.enemy.id, target_tp='E',
fight_id=fight.id, attack_amt=attack_amt, is_blocked=response[
'is_blocked'],
block_amt=response['block_amt'], is_critical=response['is_critical'],
is_missed=response['is_missed'], is_dodged=response['is_dodged'], t_prev_hp=fight.enemy.prev_hp,
t_curr_hp=fight.enemy.curr_hp, attack_ts=datetime.utcnow())
attack.save()
# fight.get(fight.id)
# print(dict(fight))
print(
f'Player {player.id} attacks for {attack_amt}. Waiting {(player.attack_speed * .001)}')
def get_fight(id):
fight = FightModel.get(id)
while fight.enemy.status == 'ALIVE':
# time.sleep(1)
fight = FightModel.get(id)
print(
f'Prev HP: {fight.enemy.prev_hp} | Current HP: {fight.enemy.curr_hp}.')
if __name__ == "__main__":
os.environ['ENV'] = '1'
if not FightModel.exists():
FightModel.create_table(read_capacity_units=5,
write_capacity_units=100, wait=True)
if not AttackModel.exists():
AttackModel.create_table(read_capacity_units=5,
write_capacity_units=5,
wait=True)
if not CharacterFightModel.exists():
CharacterFightModel.create_table(read_capacity_units=5, write_capacity_units=5, wait=True)
for result in AttackModel.scan():
print(json.dumps(result, cls=ModelEncoder))
# print(AttackModel.scan())
fight = FightModel(id=str(uuid4()))
enemy = Enemy(id="1", can_block=True, can_dodge=True, block_pct=0.05,
block_amt=.54, dodge_pct=0.07, base_hp=2500, curr_hp=2500, prev_hp=2500)
char1 = Character(id="1", attack_speed=1600, crit_chance=0.17,
curr_hp=1000, prev_hp=1500, min_damage=55, max_damage=128)
char2 = Character(id="2", attack_speed=2700, crit_chance=0.27,
curr_hp=1250, prev_hp=3000, min_damage=65, max_damage=155)
char3 = Character(id="3", attack_speed=1800, crit_chance=0.19,
curr_hp=1300, prev_hp=1800, min_damage=51, max_damage=132)
char4 = Character(id="4", attack_speed=2100, crit_chance=0.21,
curr_hp=1199, prev_hp=3500, min_damage=35, max_damage=149)
fight.enemy = enemy
fight.characters = [char1, char2, char3, char4]
fight.save()
print(dict(fight))
party = []
for player in fight.characters:
p = multiprocessing.Process(target=sim_player, args=(player, fight, ))
party.append(p)
px = multiprocessing.Process(target=get_fight, args=(fight.id, ))
party.append(px)
for p in party:
p.start()
| [
"bsmi021@gmail.com"
] | bsmi021@gmail.com |
6eb0ec8e8aa7e761bcb11d41fee80a92377d7ab6 | ab0b09a351a76591be563a8b44c1b3969cf6de3b | /entity_extract/model/loss_function/cross_entropy_loss.py | 8f8ae4d7c95c551da9672302b8bf7738377a8f44 | [
"MIT"
] | permissive | 2019hong/PyTorch_BERT_Pipeline_IE | 617c8f4eca11fe496558760fb97e3b0b4da8aa48 | 9ee66bc9ceaed42e996e9b2414612de3fc0b23bb | refs/heads/main | 2023-04-20T22:16:16.178337 | 2021-05-09T15:21:42 | 2021-05-09T15:21:42 | 366,223,899 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,140 | py | import torch
from torch import nn
from utils.arguments_parse import args
class cross_entropy(nn.Module):
def __init__(self):
super().__init__()
self.loss_func = torch.nn.CrossEntropyLoss(reduction="none")
def forward(self,start_logits,end_logits,start_label, end_label,seq_mask):
start_label, end_label = start_label.view(size=(-1,)), end_label.view(size=(-1,))
start_loss = self.loss_func(input=start_logits.view(size=(-1, 2)), target=start_label)
end_loss = self.loss_func(input=end_logits.view(size=(-1, 2)), target=end_label)
sum_loss = start_loss + end_loss
sum_loss *= seq_mask.view(size=(-1,))
avg_se_loss = torch.sum(sum_loss) / seq_mask.size()[0]
# avg_se_loss = torch.sum(sum_loss) / bsz
return avg_se_loss
class cross_entropy(nn.Module):
def __init__(self):
super().__init__()
self.loss_func = torch.nn.CrossEntropyLoss()
def forward(self,logits,label):
label = label.view(size=(-1,))
loss = self.loss_func(input=logits.view(size=(-1, 56)), target=label)
return loss | [
"suolyer@qq.com"
] | suolyer@qq.com |
1cf3355e7a1f66058f64474f928216d3bb0a873d | dcdcd6f2ba42ea1be06885a7bf1cf15a277e5aa6 | /daya/Roomate_Notebook/backup/roomate_db.py | b973917b6c7da479d80127c12701c0d669dd084a | [] | no_license | dayanand18/Eagle_eye | 9abfe9953c2c9214a8821fa44d1364ecd5a43bec | 2f94b160ca76aa3515d57d6e1164ee661733748b | refs/heads/master | 2021-05-21T20:23:40.910983 | 2020-04-03T19:37:34 | 2020-04-03T19:37:34 | 252,787,829 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,302 | py | import sqlite3
import os
from datetime import datetime
class rommate_db:
def __init__(self):
self.db = sqlite3.connect('Roomates.db')
def create_roomates_table(self):
try:
query = "CREATE TABLE roomates(user_mail text PRIMARY KEY NOT NULL,full_name VARCHAR(200) NOT NULL, company_name VARCHAR(150),password text NOT NULL,user_role INTEGER NOT NULL, last_login VARCHAR(200) NOT NULL,user_status INTEGER DEFAULT 0 NOT NULL)"
self.db.execute(query)
except Exception as E:
print(E)
def insert_user(self,full_name,email,company,password):
try:
query = "INSERT or IGNORE INTO roomates(user_mail,full_name,company_name,password,user_role,last_login) VALUES(?,?,?,?,?,?)"
last_login = str(datetime.now()).split('.')[0]
parameters = (email,full_name,company,password,3,last_login)
self.db.cursor()
self.db.execute(query,parameters)
self.db.commit()
return True
except Exception as errr:
if str(errr).__contains__('no such table: roomates'):
self.create_roomates_table()
self.insert_user(full_name,email,company,password)
return False
except:
return False
def get_emails(self):
usr_maial_lst = []
try:
query = f"SELECT user_mail FROM roomates"
self.db.cursor()
self.data = self.db.execute(query)
usr_maial_lst = self.data.fetchall()
except:
return usr_maial_lst
finally:
return usr_maial_lst
def search_user(self,mail):
try:
query = f"SELECT * FROM roomates WHERE user_mail='{mail}'"
self.db.cursor()
self.data = self.db.execute(query)
return self.data.fetchall()
except Exception as errr:
if str(errr).__contains__('no such table: roomates'):
self.create_roomates_table()
return []
except:
self.db.rollback()
def update_user(self,mail,col_name,val):
try:
query = f"UPDATE roomates SET {col_name}='{val}' WHERE user_mail='{mail}'"
self.db.cursor()
self.db.execute(query)
self.db.commit()
except Exception as errr:
print (errr)
if str(errr).__contains__('no such table: roomates'):
self.create_roomates_table()
return False
except:
self.db.rollback()
def update_user_status(self,mail,status):
try:
query = f"UPDATE roomates SET user_status={status} WHERE user_mail='{mail}'"
self.db.cursor()
self.db.execute(query)
self.db.commit()
return True
except:
self.db.rollback()
return False
def delete_box(self,user):
try:
query = f"DELETE FROM roomates WHERE user_mail='{user}'"
self.db.cursor()
self.db.execute(query)
self.db.commit()
return True
except:
return False
def delete_db(self):
try:
query = "DROP TABLE roomates"
self.db.execute(query)
os.remove('Roomates.db')
except:
pass
# query = "ALTER TABLE roomates ADD user_status INTEGER DEFAULT 0 NOT NULL"
# Last_login = str(datetime.now()).split('.')[0]
# self.dbr.update_user(user,'last_login',str(Last_login))
# query = "SELECT * FROM roomates"
# db = sqlite3.connect('Roomates.db')
# db.cursor()
# data = db.execute(query)
# print(data.fetchall())
#a = rommate_db()
# # #a.delete_db()
#print(a.update_user_status('dayaayb@gmail.com',1))
#print(a.search_user('shiva.vh3@gmail.com'))
# query = "UPDATE roomates SET user_status=1 WHERE user_mail='shashikanth@gmail.com'"
#
# db = sqlite3.connect('Roomates.db')
# db.cursor()
# db.execute(query)
# db.commit()
# query = "DELETE FROM roomates WHERE user_mail='shiva.vh3@gmail.com'"
#
# db = sqlite3.connect('Roomates.db')
# db.cursor()
# db.execute(query)
# db.commit()
| [
"noreply@github.com"
] | noreply@github.com |
d6fc728012cdfc6a1f03ddba580a8d01a63504c5 | e3c8d2997a425bfd22e33cb074229b563ad7c0eb | /EXOSIMS/PlanetPopulation/JupiterTwin.py | c22e8a587d0bad1122903246625861e0973e5819 | [
"BSD-3-Clause"
] | permissive | semaphoreP/EXOSIMS | 3a5f9722a0fdfdf3d233bef398488fb920f6fba9 | 9b2c25ff0bf1e6378af08a95b04c9e51ef4f1340 | refs/heads/master | 2020-04-04T11:55:05.258189 | 2019-05-27T18:19:23 | 2019-05-27T18:19:23 | 155,907,999 | 0 | 0 | BSD-3-Clause | 2018-11-03T23:49:47 | 2018-11-02T18:41:34 | Python | UTF-8 | Python | false | false | 3,339 | py | from EXOSIMS.Prototypes.PlanetPopulation import PlanetPopulation
import numpy as np
import astropy.units as u
class JupiterTwin(PlanetPopulation):
"""
Population of Jupiter twins (11.209 R_Earth, 317.83 M_Eearth, 1 p_Earth)
On eccentric orbits (0.7 to 1.5 AU)*5.204.
Numbers pulled from nssdc.gsfc.nasa.gov/planetary/factsheet/jupiterfact.html
This implementation is intended to enforce this population regardless
of JSON inputs. The only inputs that will not be disregarded are erange
and constrainOrbits.
"""
def __init__(self, eta=1, erange=[0.,0.048], constrainOrbits=True, **specs):
#eta is probability of planet occurance in a system. I set this to 1
specs['erange'] = erange
specs['constrainOrbits'] = constrainOrbits
aEtoJ = 5.204
RpEtoJ = 11.209
MpEtoJ = 317.83
pJ = 0.538# 0.538 from nssdc.gsfc.nasa.gov
# specs being modified in JupiterTwin
specs['eta'] = eta
specs['arange'] = [1*aEtoJ,1*aEtoJ]#0.7*aEtoJ, 1.5*aEtoJ]
specs['Rprange'] = [1*RpEtoJ,1*RpEtoJ]
specs['Mprange'] = [1*MpEtoJ,1*MpEtoJ]
specs['prange'] = [pJ,pJ]
specs['scaleOrbits'] = True
self.RpEtoJ = RpEtoJ
self.pJ = pJ
PlanetPopulation.__init__(self, **specs)
def gen_plan_params(self, n):
"""Generate semi-major axis (AU), eccentricity, geometric albedo, and
planetary radius (earthRad)
Semi-major axis and eccentricity are uniformly distributed with all
other parameters constant.
Args:
n (integer):
Number of samples to generate
Returns:
a (astropy Quantity array):
Semi-major axis in units of AU
e (float ndarray):
Eccentricity
p (float ndarray):
Geometric albedo
Rp (astropy Quantity array):
Planetary radius in units of earthRad
"""
n = self.gen_input_check(n)
# generate samples of semi-major axis
ar = self.arange.to('AU').value
# check if constrainOrbits == True for eccentricity
if self.constrainOrbits:
# restrict semi-major axis limits
arcon = np.array([ar[0]/(1.-self.erange[0]), ar[1]/(1.+self.erange[0])])
a = np.random.uniform(low=arcon[0], high=arcon[1], size=n)*u.AU
tmpa = a.to('AU').value
# upper limit for eccentricity given sma
elim = np.zeros(len(a))
amean = np.mean(ar)
elim[tmpa <= amean] = 1. - ar[0]/tmpa[tmpa <= amean]
elim[tmpa > amean] = ar[1]/tmpa[tmpa>amean] - 1.
elim[elim > self.erange[1]] = self.erange[1]
elim[elim < self.erange[0]] = self.erange[0]
# uniform distribution
e = np.random.uniform(low=self.erange[0], high=elim, size=n)
else:
a = np.random.uniform(low=ar[0], high=ar[1], size=n)*u.AU
e = np.random.uniform(low=self.erange[0], high=self.erange[1], size=n)
# generate geometric albedo
p = self.pJ*np.ones((n,))
# generate planetary radius
Rp = np.ones((n,))*u.earthRad*self.RpEtoJ
return a, e, p, Rp
| [
"drk94@cornell.edu"
] | drk94@cornell.edu |
213892dd324e13ace2fbf3794ad46a2d864cd2cf | 4ae6a7123cae3fbca4b23d9691fe066e2807f91d | /mitapa/wsgi.py | dde4ec1bd0c953a35384256ae8f0995ce6efcb85 | [] | no_license | wolvelopez/mitapa2 | 910667aa77b2194e5fab323b5e93bc41adc7842b | 47c191007d09e2412284381c861dbcfe2fb988d4 | refs/heads/master | 2021-01-22T23:58:59.373786 | 2012-10-09T11:30:30 | 2012-10-09T11:30:30 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,134 | py | """
WSGI config for mitapa project.
This module contains the WSGI application used by Django's development server
and any production WSGI deployments. It should expose a module-level variable
named ``application``. Django's ``runserver`` and ``runfcgi`` commands discover
this application via the ``WSGI_APPLICATION`` setting.
Usually you will have the standard Django WSGI application here, but it also
might make sense to replace the whole Django WSGI application with a custom one
that later delegates to the Django one. For example, you could introduce WSGI
middleware here, or combine a Django application with an application of another
framework.
"""
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mitapa.settings")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
| [
"wolvelopez@gmail.com"
] | wolvelopez@gmail.com |
ee11cd3dba463874062b05e9d15b89031fff7f5d | 2ac0c7162faf05bd20fe3123dd654861b529069d | /web scraping3 | 9030ab9e54f915db1edf99a1366654d7659946c0 | [] | no_license | deep141997/web-scraping | 04caf47c5b9ad310f2854c30a0a87a06bee28775 | 385b8b6793245c968102145d4de810652a0a8ec7 | refs/heads/master | 2020-03-31T07:10:41.931575 | 2018-10-14T21:17:53 | 2018-10-14T21:17:53 | 152,010,941 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,260 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Sep 13 02:05:15 2018
@author: deepak
"""
import requests
from bs4 import BeautifulSoup
# when we run request.get it return a response object
page=requests.get("http://dataquestio.github.io/web-scraping-pages/simple.html")
print(page.status_code) #response oject has a property status code and status_code=200 page succesfully downloaded
#print(page.content) # read data in html format
soup=BeautifulSoup(page.content,"html.parser")
#print(soup.prettify)
#print(soup.title)
# to fetch the data of paragraph
paragraph=soup.find('p')
print(paragraph.text)
for i,pg in enumerate(soup.find_all('p')):
print(i,pg.text)
# all tags are nested we can select all the elements at the first level
# we have to use children method of soup it will generate list so we have to use list
#print(soup.children) list type
for item in list(soup.children):
print(type(item)) #<class 'bs4.element.Doctype'>
#<class 'bs4.element.NavigableString'>
#<class 'bs4.element.Tag'>
tags=list(soup.children)[2]
print(tags)
print(type(tags))
tags_html=list(tags.children)
print(tags_html)
print(type(tags_html))
for tag in tags_html:
print(tag.get_text()) | [
"noreply@github.com"
] | noreply@github.com | |
d777370b107b7228ee335378e703d909a421f3be | 41b75966f379fee64b8b5ccf999423025a7fa327 | /todoapp/urls.py | 2ffce4223ad72fcd7d69302f52268b0967f37ba0 | [] | no_license | varshadesaimath/heroku_django | d206219ca1e422fd1328bc6349059a08b4e244cc | bb93e63cc66e73f7b03990e91c2ce6774fd01740 | refs/heads/master | 2023-08-15T09:04:52.023736 | 2020-07-16T20:56:13 | 2020-07-16T20:56:13 | 279,820,987 | 0 | 0 | null | 2021-09-22T19:26:58 | 2020-07-15T09:11:19 | Python | UTF-8 | Python | false | false | 954 | py | """todoapp URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.0/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
# from django.contrib import admin
# from django.urls import path
# urlpatterns = [
# path('admin/', admin.site.urls),
# ]
from django.conf.urls import url
from django.contrib import admin
from todolist.views import index
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$', index, name="TodoList"),
] | [
"54346850+varshadesaimath@users.noreply.github.com"
] | 54346850+varshadesaimath@users.noreply.github.com |
ce1273390252530299cbc324c115e5978b11087a | 41e660c3f0f30480cbd813f7c0f11922e1ad2cd4 | /Desafio43.py | 7679ef41040dc3bdf6a65a76ad37364e880c5e21 | [] | no_license | emersonsemidio/python | a5fd973d7d66a4cd76d6a159997045aa615de495 | 84f3bda7bd52f348d7f754618f07a7f02fb4e3fe | refs/heads/master | 2023-05-06T23:50:14.163639 | 2021-05-29T16:34:05 | 2021-05-29T16:34:05 | 263,699,462 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 394 | py | a = float(input('Digite seu peso em KG '))
b = float(input('Digite sua altura em metros '))
c = a / b**2
if c > 0 and c <= 18.5:
print('Abaixo do peso ')
elif c > 18.5 and c <= 25:
print('Peso ideal ')
elif c > 25 and c <=30:
print('Sobrepeso ')
elif c > 30 and c <=40:
print('Obesidade ')
elif c > 40:
print('Obesidade mórbida')
print('\033[1mObrigado por participar!!!') | [
"emersonsemidio.01@gmail.com"
] | emersonsemidio.01@gmail.com |
e1dc4656c4add693f2dbb8e3c9a87990852101c8 | dccdb71dd75560ffeb076cedbd78b36e33b3adf2 | /EstruturaSequencial/ex02_numero.py | 5a2f85ce99f95f27163e7a478296cece3c9b0762 | [] | no_license | IngridFCosta/exerciciosPythonBrasil | 6fa669896cae135142101c522be7a919bda583b7 | 5c84d70f720b45a260320b08a5103bad5ce78339 | refs/heads/master | 2023-04-22T02:34:40.206350 | 2021-05-06T17:14:50 | 2021-05-06T17:14:50 | 291,265,097 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 194 | py | """2-Faça um Programa que peça um número e então mostre a mensagem O número informado foi [número]."""
numero=int(input('Escreva um número: '))
print(f'O número informado foi {numero}') | [
"49729290+IngridFCosta@users.noreply.github.com"
] | 49729290+IngridFCosta@users.noreply.github.com |
b442cc230d11c9178077d54f985460362a855a1f | 332b31b3dcc06ab377dfc726ba28b08ff3c7f8ea | /manage.py | 4ce1ada190976a8a47b5bab4df880ccd8c27c038 | [] | no_license | AlfiyaZi/Secret_Santa | 2763427e564ac872a23425038446d43f9f926809 | d57478b26b29244e3139f635eddf5adef983ca3e | refs/heads/master | 2020-07-27T22:12:28.262118 | 2016-02-17T00:42:48 | 2016-02-17T00:42:48 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 255 | py | #!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Secret_Santa.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
| [
"eugenemu55@gmail.com"
] | eugenemu55@gmail.com |
4fef97dd30ffbae5b485f9ef8ad01ff1307e6aba | 6667cb2a98799830d63c41603534eb5616084e0f | /addfaces.py | d65b449af92e501fadcd4a43ac2c9d58ea09204c | [] | no_license | milan0410/Smart-Lab | e5d50743a30555f62e7f7b92199714d06849da1a | 7e115fbd1e6756de4d62d8f037f8d93519b55542 | refs/heads/main | 2023-08-02T00:57:41.213997 | 2021-09-26T13:15:37 | 2021-09-26T13:15:37 | 410,556,023 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 645 | py | import boto3
import time
import cv2
bucket = 'facesmit'
cam=cv2.VideoCapture(0)
count=0
name=input("Enter the name of person:")
idno=input("Enter the id no:")
s3=boto3.client('s3',aws_access_key_id='<AWS_ACCESS_ID>',aws_secret_access_key='<AWS_SECRET_ACCESS_KEY>',
region_name='ap-south-1')
while count<60:
_,image=cam.read()
count=count+1
cv2.imwrite("new22.jpg",image)
s3.upload_file("new22.jpg",bucket,'{}_{}/{}.jpg'.format(name,idno,count))
print("Uploaded image {} of {}".format(count,name))
print("{} added sucessfully to database".format(name))
cam.release()
print("Succesfully added {}".format(name)) | [
"milan.vish10@gmail.com"
] | milan.vish10@gmail.com |
6c236271f1bc427a86e7a260cf99dc4942db983b | 9496bbf93018bed9cab7766b9983e37122f24a1d | /see_data_shape.py | 6e80e478a991942963f3074f59cdee0d430223c7 | [] | no_license | ioyy900205/Transform-your-data | 453fe83af235ff0c392edcd8da161083dd3e6363 | d19176444407df50f091cf4b4da14bc59564c198 | refs/heads/main | 2023-01-06T23:10:38.668075 | 2020-11-12T01:48:08 | 2020-11-12T01:48:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 194 | py | import cv2
# 修改为你的图片地址
img_path = ' 修改为你的图片地址 '
img = cv2.imread(img_path)
size = img.shape
print(size)
print('width:',size[1])
print('height:',size[0])
| [
"603329354@qq.com"
] | 603329354@qq.com |
4ac4d7c107972f26cee6658eebc94261410509f9 | 8471c456a81c33acee32247ebcd681b60a316791 | /tests/arp/test_arpall.py | e8c60dd659bb6eb7656d21baffb6bf10b27a5775 | [
"LicenseRef-scancode-generic-cla",
"Apache-2.0"
] | permissive | lguohan/sonic-mgmt | cd8df9249af447ce268ca541a552dcf2c61f57ab | 2eaea7c823ef224b7bd7553f91f27fe9d9160b2a | refs/heads/master | 2021-11-11T01:44:59.393111 | 2021-01-28T03:14:02 | 2021-01-28T03:14:02 | 66,689,725 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,634 | py | import logging
import pytest
import time
from datetime import datetime
from tests.arp.arp_utils import clear_dut_arp_cache
from tests.ptf_runner import ptf_runner
from tests.common.helpers.assertions import pytest_assert
from tests.common.fixtures.ptfhost_utils import copy_ptftests_directory # lgtm[py/unused-import]
pytestmark = [
pytest.mark.topology('t1')
]
logger = logging.getLogger(__name__)
def test_arp_unicast_reply(common_setup_teardown):
duthost, ptfhost, int_facts, intf1, intf2, intf1_indice, intf2_indice = common_setup_teardown
# Start PTF runner and send correct unicast arp packets
clear_dut_arp_cache(duthost)
params = {
'acs_mac': int_facts['ansible_interface_facts'][intf1]['macaddress'],
'port': intf1_indice
}
log_file = "/tmp/arptest.VerifyUnicastARPReply.{0}.log".format(datetime.now().strftime("%Y-%m-%d-%H:%M:%S"))
ptf_runner(ptfhost, 'ptftests', "arptest.VerifyUnicastARPReply", '/root/ptftests', params=params, log_file=log_file)
# Get DUT arp table
switch_arptable = duthost.switch_arptable()['ansible_facts']
pytest_assert(switch_arptable['arptable']['v4']['10.10.1.3']['macaddress'] == '00:06:07:08:09:00')
pytest_assert(switch_arptable['arptable']['v4']['10.10.1.3']['interface'] == intf1)
def test_arp_expect_reply(common_setup_teardown):
duthost, ptfhost, int_facts, intf1, intf2, intf1_indice, intf2_indice = common_setup_teardown
params = {
'acs_mac': int_facts['ansible_interface_facts'][intf1]['macaddress'],
'port': intf1_indice
}
# Start PTF runner and send correct arp packets
clear_dut_arp_cache(duthost)
log_file = "/tmp/arptest.ExpectReply.{0}.log".format(datetime.now().strftime("%Y-%m-%d-%H:%M:%S"))
ptf_runner(ptfhost, 'ptftests', "arptest.ExpectReply", '/root/ptftests', params=params, log_file=log_file)
switch_arptable = duthost.switch_arptable()['ansible_facts']
pytest_assert(switch_arptable['arptable']['v4']['10.10.1.3']['macaddress'] == '00:06:07:08:09:0a')
pytest_assert(switch_arptable['arptable']['v4']['10.10.1.3']['interface'] == intf1)
def test_arp_no_reply_other_intf(common_setup_teardown):
duthost, ptfhost, int_facts, intf1, intf2, intf1_indice, intf2_indice = common_setup_teardown
# Check DUT won't reply ARP and install ARP entry when ARP request coming from other interfaces
clear_dut_arp_cache(duthost)
intf2_params = {
'acs_mac': int_facts['ansible_interface_facts'][intf2]['macaddress'],
'port': intf2_indice
}
log_file = "/tmp/arptest.SrcOutRangeNoReply.{0}.log".format(datetime.now().strftime("%Y-%m-%d-%H:%M:%S"))
ptf_runner(ptfhost, 'ptftests', "arptest.SrcOutRangeNoReply", '/root/ptftests', params=intf2_params, log_file=log_file)
switch_arptable = duthost.switch_arptable()['ansible_facts']
for ip in switch_arptable['arptable']['v4'].keys():
pytest_assert(ip != '10.10.1.4')
def test_arp_no_reply_src_out_range(common_setup_teardown):
duthost, ptfhost, int_facts, intf1, intf2, intf1_indice, intf2_indice = common_setup_teardown
params = {
'acs_mac': int_facts['ansible_interface_facts'][intf1]['macaddress'],
'port': intf1_indice
}
# Check DUT won't reply ARP and install ARP entry when src address is not in interface subnet range
clear_dut_arp_cache(duthost)
log_file = "/tmp/arptest.SrcOutRangeNoReply.{0}.log".format(datetime.now().strftime("%Y-%m-%d-%H:%M:%S"))
ptf_runner(ptfhost, 'ptftests', "arptest.SrcOutRangeNoReply", '/root/ptftests', params=params, log_file=log_file)
switch_arptable = duthost.switch_arptable()['ansible_facts']
for ip in switch_arptable['arptable']['v4'].keys():
pytest_assert(ip != '10.10.1.22')
def test_arp_garp_no_update(common_setup_teardown):
duthost, ptfhost, int_facts, intf1, intf2, intf1_indice, intf2_indice = common_setup_teardown
params = {
'acs_mac': int_facts['ansible_interface_facts'][intf1]['macaddress'],
'port': intf1_indice
}
# Test Gratuitous ARP behavior, no Gratuitous ARP installed when arp was not resolved before
clear_dut_arp_cache(duthost)
log_file = "/tmp/arptest.GarpNoUpdate.{0}.log".format(datetime.now().strftime("%Y-%m-%d-%H:%M:%S"))
ptf_runner(ptfhost, 'ptftests', "arptest.GarpNoUpdate", '/root/ptftests', params=params, log_file=log_file)
switch_arptable = duthost.switch_arptable()['ansible_facts']
for ip in switch_arptable['arptable']['v4'].keys():
pytest_assert(ip != '10.10.1.7')
# Test Gratuitous ARP update case, when received garp, no arp reply, update arp table if it was solved before
log_file = "/tmp/arptest.ExpectReply.{0}.log".format(datetime.now().strftime("%Y-%m-%d-%H:%M:%S"))
ptf_runner(ptfhost, 'ptftests', "arptest.ExpectReply", '/root/ptftests', params=params, log_file=log_file)
switch_arptable = duthost.switch_arptable()['ansible_facts']
pytest_assert(switch_arptable['arptable']['v4']['10.10.1.3']['macaddress'] == '00:06:07:08:09:0a')
pytest_assert(switch_arptable['arptable']['v4']['10.10.1.3']['interface'] == intf1)
time.sleep(2)
log_file = "/tmp/arptest.GarpUpdate.{0}.log".format(datetime.now().strftime("%Y-%m-%d-%H:%M:%S"))
ptf_runner(ptfhost, 'ptftests', "arptest.GarpUpdate", '/root/ptftests', params=params, log_file=log_file)
switch_arptable = duthost.switch_arptable()['ansible_facts']
pytest_assert(switch_arptable['arptable']['v4']['10.10.1.3']['macaddress'] == '00:00:07:08:09:0a')
pytest_assert(switch_arptable['arptable']['v4']['10.10.1.3']['interface'] == intf1)
| [
"noreply@github.com"
] | noreply@github.com |
d81066e451cb0e509457c9eddf1945cfcadf946c | b6efc7ef83f51c31a218064a4c2fd05ecdc318ad | /util.py | 72b9b6a64795121eafffed4db2f522f76b5affbf | [] | no_license | FRC3184/frc2018 | 12000bab0c9608ef339cd53dfaee28aa3b584df1 | 19846ebf27c99d58075e9f2085e87512c3f67023 | refs/heads/master | 2021-03-27T16:32:32.763553 | 2018-05-16T05:06:56 | 2018-05-16T05:06:56 | 116,581,979 | 3 | 0 | null | 2018-05-09T21:44:18 | 2018-01-07T16:43:21 | Python | UTF-8 | Python | false | false | 154 | py | import wpilib
def get_basedir():
if wpilib.hal.isSimulation():
basedir = ""
else:
basedir = "/home/lvuser/py"
return basedir | [
"nick@nickschatz.com"
] | nick@nickschatz.com |
0db0f6433a05f9f4ea5f7ba64cb24c7302c6043a | 403139e10b660e325992ab9ac556ea2c74308b7a | /inbound_scanning.py | dbe04da13596a9a35ab119d85c2c60a3cda6307b | [] | no_license | timking451/inbound | 5c95344ff45f94f089738dd3d03223bb99e0ef81 | a54c19daaeced69c9de1ff5f49009916b99b9423 | refs/heads/master | 2023-07-13T10:51:32.045976 | 2021-01-05T16:18:42 | 2021-01-05T16:18:42 | 251,129,043 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,659 | py | # Import dependencies
import time
import csv
import collections as col
import pandas as pd
import re
import shelve
import pprint
import os
#from playsound import playsound
os.system("rclone copy dropbox:inbound ~/dropbox")
# Create and format the dataframe
df = pd.read_csv("~/dropbox/MasterCrossReference.txt")
df['OrderQty'] = df['OrderQty'].astype(int)
df = df.astype(str)
df['Count'] = ""
df['OK'] = "---"
df = df.sort_values(by=['Deliverable Unit'], ascending=False)
os.system("^C")
# Create empty list for UPC scans
scanned_items = []
def save_scans(data):
shelfFile = shelve.open('scanned_items')
shelfFile['scanned_items'] = data
shelfFile.close()
def load_scans():
shelfFile = shelve.open('scanned_items')
return shelfFile['scanned_items']
def tote_report(df):
for ind in df.index:
if df['OrderQty'][ind] > df['Count'][ind]:
df['OK'][ind] = "SHORT"
elif df['OrderQty'][ind] < df['Count'][ind]:
df['OK'][ind] = "OVER"
else:
df['OK'][ind] = "OK"
df = df.drop(df[df['OK'] == 'OK'].index)
df['UPC'] = ''
df.sort_values(by=['OK'])
tote = re.compile(r'^T')
df = df[df['Deliverable Unit'].str.match(tote) == True]
df.to_excel('~/dropbox/tote_report.xlsx')
os.system("rclone copy ~/dropbox dropbox:inbound")
def opti_report(df):
for ind in df.index:
if df['OrderQty'][ind] > df['Count'][ind]:
df['OK'][ind] = "SHORT"
elif df['OrderQty'][ind] < df['Count'][ind]:
df['OK'][ind] = "OVER"
else:
df['OK'][ind] = "OK"
df = df.drop(df[df['OK'] == 'OK'].index)
df['UPC'] = ''
df.sort_values(by=['OK'])
opti = re.compile(r'^\d')
df = df[df['Deliverable Unit'].str.match(opti) == True]
df.to_excel('~/dropbox/opti_report.xlsx')
os.system("rclone copy ~/dropbox dropbox:inbound")
#Basic interface decision tree.
#Most common loop is scanning items.
#Also accepts inputs for report generation, etc.
while True:
print('Please scan an item. Enter "help" for more options.')
scanned = input()
scanned = scanned.lstrip('0')
if scanned == 'exit':
break
elif scanned == 'load':
scanned_items = load_scans()
elif scanned == 'totes':
tote_report(df)
elif scanned == 'optis':
opti_report(df)
# elif scanned == "check":
# print("Please scan the item you wish to check")
# check = input()
# check = check.lstrip("0")
# c = col.Counter(scanned_items)
# # MAP the c values onto the Count column
# df['Count'] = df['UPC'].map(c)
# print("*************************************")
# print("Here's what I know about that item:")
# a = df.loc[df.index[df['UPC'] == check]].transpose()
# pprint.pprint(a)
# print("*************************************")
elif scanned == "undo":
scanned_items.pop()
print("Item removed")
elif scanned == 'help':
print("'load': Load the previously saved scanned items list")
print("'totes': Generate the totes report")
print("'optis': Generate the optis report")
# print("'check': Check the status of an item")
print("'undo': Remove the most recently scanned item")
print("'exit': Exit the program")
else:
try:
scanned_items.append(scanned)
#df['Count'][df['UPC'] == scanned] = scanned_items.count(scanned)
# c is a Counter object, counting occurences of each UPC in scanned_items
# it is a dictionary with the UPC as the key and the count as the value
c = col.Counter(scanned_items)
# MAP the c values onto the Count column
df['Count'] = df['UPC'].map(c)
# Reset everything to strings to help with later equivalency tests
df = df.astype(str)
a = df.loc[df.index[df['UPC'] == scanned]].transpose()
pprint.pprint(a)
#print(f"Item count: {scanned_items.count(scanned)}")
if len(scanned_items)%5 == 0:
save_scans(scanned_items)
#playsound('beep.wav')
except ValueError:
print("That item is not expected.")
# This is how you iterate over a dataframe.
# Don't forget to use the .index on your dataframe when initializing
# the for loop.
for ind in df.index:
if df['OrderQty'][ind] > df['Count'][ind]:
df['OK'][ind] = "SHORT"
elif df['OrderQty'][ind] < df['Count'][ind]:
df['OK'][ind] = "OVER"
else:
df['OK'][ind] = "OK"
#Export one big excel file with filters and formatting
| [
"timking451@gmail.com"
] | timking451@gmail.com |
9f7eadf31ab3bbe42dc462af256cd2a22ab92657 | 8435d1a2de86e4d9a4ca834ba59661975b7b573f | /UnrealMinesweeper/Main/ImageLoader.py | 6b917b1a7d0dd4650590ada06ad9b48f9662767c | [] | no_license | DominiqueDevinci/UnrealMinesweeper | c32cacb419bee6ef0fa2cd93b49ee2e3cb4a292a | 8ba92437029ea87c7c19d44b2a7382f65ca8c311 | refs/heads/master | 2022-01-06T14:23:38.337013 | 2019-06-12T09:35:56 | 2019-06-12T09:35:56 | 113,218,517 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 274 | py | from PyQt4.QtGui import *
from PyQt4.QtCore import *
class ImageLoader:
iconFlag=None
iconMine=None
@staticmethod
def init():
ImageLoader.iconFlag=QIcon(QPixmap("flag.png"))
ImageLoader.iconMine=QIcon(QPixmap("mine.png"))
| [
"dominique.blaze@devinci.fr"
] | dominique.blaze@devinci.fr |
f67140e2667ebecdc52c29d6f61ed211346dc736 | e0c92a1688e633e0d65c2b5c5844856626fa5116 | /problems/st.py | deb6f9ea1d8b76c7a7e10d1940ddedf74ed158b9 | [] | no_license | avee137/interview | 9bf4ac9fe09909ed0b25a5f24e075e58442d1491 | 5435a7ed21a407bff3abdea8bbc29689f9cb5ea0 | refs/heads/master | 2021-09-06T19:12:42.895486 | 2018-02-10T06:55:08 | 2018-02-10T06:55:08 | 115,280,824 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 538 | py | import sys
class stack():
st = []
def __init__(self):
self.st = []
def push(self, num):
self.st.append(num)
def pop(self):
return self.st.pop()
def print_alternate(self):
p = 1
while len(self.st) != 0:
n = self.st.pop()
if p%2 == 1:
print n
p += 1
def main ():
s = stack ()
for line in [1,2,3,4]:
num = int(line)
s.push(num)
s.print_alternate()
main()
| [
"avee137@gmail.com"
] | avee137@gmail.com |
ffa90cad695dc1a89745ab2dd5a57a1006b45f38 | 3716e91c0a18a2cf0b5807cc673d95a7539b008c | /Forest/BackwoodsFork.py | f295233eb6d3ecd0680e498a956ebd483a4ff8fb | [] | no_license | kiwiapple87/CodeCombat-1 | 47f0fa6d75d6d3e9fb9c28feeb6fe2648664c1aa | ce0201e5ed099193ca40afd3b7abeee5a3732387 | refs/heads/master | 2021-05-01T16:38:03.575842 | 2016-08-25T11:13:26 | 2016-08-25T11:13:26 | 66,552,813 | 1 | 0 | null | 2016-08-25T11:39:20 | 2016-08-25T11:39:18 | null | UTF-8 | Python | false | false | 818 | py | # https://codecombat.com/play/level/backwoods-fork
# Incoming ogres!
# Use the checkAndAttack function to make your code easy to read.
# This function has a parameter.
# An parameter is a way of passing information into a function.
def checkAndAttack(target):
# The 'target' parameter is just a variable!
# It contains the argument when the function was called.
if target:
hero.attack(target)
hero.moveXY(43, 34)
while True:
hero.moveXY(58, 52)
topEnemy = hero.findNearestEnemy()
checkAndAttack(topEnemy)
hero.moveXY(58, 16)
topEnemy = hero.findNearestEnemy()
checkAndAttack(topEnemy)
# Move to the bottom X mark.
# Create a variable named bottomEnemy and find the nearest enemy.
# Use the checkAndAttack function, and include the bottomEnemy variable.
| [
"vadim-job-hg@yandex.ru"
] | vadim-job-hg@yandex.ru |
7c01e4cf91496b5e1526475731e737f0d9e39a5b | ee9d6a53eb2f4f506f02b553cf397e0570a37540 | /qa/rpc-tests/listtransactions.py | 155381b39f22465072f24113a4bec855f0a02845 | [
"MIT"
] | permissive | adcoin-project/AdCoin-old | 399f7091810defd742f0370435dea60a559feefe | ab202457b17ec7014e0afc862edfc3f6da88fb67 | refs/heads/master | 2021-09-26T11:02:02.943144 | 2018-10-29T20:04:13 | 2018-10-29T20:04:13 | 97,236,000 | 3 | 0 | MIT | 2019-02-18T00:09:45 | 2017-07-14T13:21:52 | C++ | UTF-8 | Python | false | false | 10,252 | py | #!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
# Exercise the listtransactions API
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *
from test_framework.mininode import CTransaction, COIN
from io import BytesIO
def txFromHex(hexstring):
tx = CTransaction()
f = BytesIO(hex_str_to_bytes(hexstring))
tx.deserialize(f)
return tx
class ListTransactionsTest(BitcoinTestFramework):
def __init__(self):
super().__init__()
self.num_nodes = 4
self.setup_clean_chain = False
def setup_nodes(self):
#This test requires mocktime
enable_mocktime()
return start_nodes(self.num_nodes, self.options.tmpdir)
def run_test(self):
# Simple send, 0 to 1:
txid = self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 0.1)
self.sync_all()
assert_array_result(self.nodes[0].listtransactions(),
{"txid":txid},
{"category":"send","account":"","amount":Decimal("-0.1"),"confirmations":0})
assert_array_result(self.nodes[1].listtransactions(),
{"txid":txid},
{"category":"receive","account":"","amount":Decimal("0.1"),"confirmations":0})
# mine a block, confirmations should change:
self.nodes[0].generate(1)
self.sync_all()
assert_array_result(self.nodes[0].listtransactions(),
{"txid":txid},
{"category":"send","account":"","amount":Decimal("-0.1"),"confirmations":1})
assert_array_result(self.nodes[1].listtransactions(),
{"txid":txid},
{"category":"receive","account":"","amount":Decimal("0.1"),"confirmations":1})
# send-to-self:
txid = self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 0.2)
assert_array_result(self.nodes[0].listtransactions(),
{"txid":txid, "category":"send"},
{"amount":Decimal("-0.2")})
assert_array_result(self.nodes[0].listtransactions(),
{"txid":txid, "category":"receive"},
{"amount":Decimal("0.2")})
# sendmany from node1: twice to self, twice to node2:
send_to = { self.nodes[0].getnewaddress() : 0.11,
self.nodes[1].getnewaddress() : 0.22,
self.nodes[0].getaccountaddress("from1") : 0.33,
self.nodes[1].getaccountaddress("toself") : 0.44 }
txid = self.nodes[1].sendmany("", send_to)
self.sync_all()
assert_array_result(self.nodes[1].listtransactions(),
{"category":"send","amount":Decimal("-0.11")},
{"txid":txid} )
assert_array_result(self.nodes[0].listtransactions(),
{"category":"receive","amount":Decimal("0.11")},
{"txid":txid} )
assert_array_result(self.nodes[1].listtransactions(),
{"category":"send","amount":Decimal("-0.22")},
{"txid":txid} )
assert_array_result(self.nodes[1].listtransactions(),
{"category":"receive","amount":Decimal("0.22")},
{"txid":txid} )
assert_array_result(self.nodes[1].listtransactions(),
{"category":"send","amount":Decimal("-0.33")},
{"txid":txid} )
assert_array_result(self.nodes[0].listtransactions(),
{"category":"receive","amount":Decimal("0.33")},
{"txid":txid, "account" : "from1"} )
assert_array_result(self.nodes[1].listtransactions(),
{"category":"send","amount":Decimal("-0.44")},
{"txid":txid, "account" : ""} )
assert_array_result(self.nodes[1].listtransactions(),
{"category":"receive","amount":Decimal("0.44")},
{"txid":txid, "account" : "toself"} )
multisig = self.nodes[1].createmultisig(1, [self.nodes[1].getnewaddress()])
self.nodes[0].importaddress(multisig["redeemScript"], "watchonly", False, True)
txid = self.nodes[1].sendtoaddress(multisig["address"], 0.1)
self.nodes[1].generate(1)
self.sync_all()
assert(len(self.nodes[0].listtransactions("watchonly", 100, 0, False)) == 0)
assert_array_result(self.nodes[0].listtransactions("watchonly", 100, 0, True),
{"category":"receive","amount":Decimal("0.1")},
{"txid":txid, "account" : "watchonly"} )
#Litecoin: Disabled RBF
#self.run_rbf_opt_in_test()
# Check that the opt-in-rbf flag works properly, for sent and received
# transactions.
def run_rbf_opt_in_test(self):
# Check whether a transaction signals opt-in RBF itself
def is_opt_in(node, txid):
rawtx = node.getrawtransaction(txid, 1)
for x in rawtx["vin"]:
if x["sequence"] < 0xfffffffe:
return True
return False
# Find an unconfirmed output matching a certain txid
def get_unconfirmed_utxo_entry(node, txid_to_match):
utxo = node.listunspent(0, 0)
for i in utxo:
if i["txid"] == txid_to_match:
return i
return None
# 1. Chain a few transactions that don't opt-in.
txid_1 = self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 1)
assert(not is_opt_in(self.nodes[0], txid_1))
assert_array_result(self.nodes[0].listtransactions(), {"txid": txid_1}, {"bip125-replaceable":"no"})
sync_mempools(self.nodes)
assert_array_result(self.nodes[1].listtransactions(), {"txid": txid_1}, {"bip125-replaceable":"no"})
# Tx2 will build off txid_1, still not opting in to RBF.
utxo_to_use = get_unconfirmed_utxo_entry(self.nodes[1], txid_1)
# Create tx2 using createrawtransaction
inputs = [{"txid":utxo_to_use["txid"], "vout":utxo_to_use["vout"]}]
outputs = {self.nodes[0].getnewaddress(): 0.999}
tx2 = self.nodes[1].createrawtransaction(inputs, outputs)
tx2_signed = self.nodes[1].signrawtransaction(tx2)["hex"]
txid_2 = self.nodes[1].sendrawtransaction(tx2_signed)
# ...and check the result
assert(not is_opt_in(self.nodes[1], txid_2))
assert_array_result(self.nodes[1].listtransactions(), {"txid": txid_2}, {"bip125-replaceable":"no"})
sync_mempools(self.nodes)
assert_array_result(self.nodes[0].listtransactions(), {"txid": txid_2}, {"bip125-replaceable":"no"})
# Tx3 will opt-in to RBF
utxo_to_use = get_unconfirmed_utxo_entry(self.nodes[0], txid_2)
inputs = [{"txid": txid_2, "vout":utxo_to_use["vout"]}]
outputs = {self.nodes[1].getnewaddress(): 0.998}
tx3 = self.nodes[0].createrawtransaction(inputs, outputs)
tx3_modified = txFromHex(tx3)
tx3_modified.vin[0].nSequence = 0
tx3 = bytes_to_hex_str(tx3_modified.serialize())
tx3_signed = self.nodes[0].signrawtransaction(tx3)['hex']
txid_3 = self.nodes[0].sendrawtransaction(tx3_signed)
assert(is_opt_in(self.nodes[0], txid_3))
assert_array_result(self.nodes[0].listtransactions(), {"txid": txid_3}, {"bip125-replaceable":"yes"})
sync_mempools(self.nodes)
assert_array_result(self.nodes[1].listtransactions(), {"txid": txid_3}, {"bip125-replaceable":"yes"})
# Tx4 will chain off tx3. Doesn't signal itself, but depends on one
# that does.
utxo_to_use = get_unconfirmed_utxo_entry(self.nodes[1], txid_3)
inputs = [{"txid": txid_3, "vout":utxo_to_use["vout"]}]
outputs = {self.nodes[0].getnewaddress(): 0.997}
tx4 = self.nodes[1].createrawtransaction(inputs, outputs)
tx4_signed = self.nodes[1].signrawtransaction(tx4)["hex"]
txid_4 = self.nodes[1].sendrawtransaction(tx4_signed)
assert(not is_opt_in(self.nodes[1], txid_4))
assert_array_result(self.nodes[1].listtransactions(), {"txid": txid_4}, {"bip125-replaceable":"yes"})
sync_mempools(self.nodes)
assert_array_result(self.nodes[0].listtransactions(), {"txid": txid_4}, {"bip125-replaceable":"yes"})
# Replace tx3, and check that tx4 becomes unknown
tx3_b = tx3_modified
tx3_b.vout[0].nValue -= int(Decimal("0.004") * COIN) # bump the fee
tx3_b = bytes_to_hex_str(tx3_b.serialize())
tx3_b_signed = self.nodes[0].signrawtransaction(tx3_b)['hex']
txid_3b = self.nodes[0].sendrawtransaction(tx3_b_signed, True)
assert(is_opt_in(self.nodes[0], txid_3b))
assert_array_result(self.nodes[0].listtransactions(), {"txid": txid_4}, {"bip125-replaceable":"unknown"})
sync_mempools(self.nodes)
assert_array_result(self.nodes[1].listtransactions(), {"txid": txid_4}, {"bip125-replaceable":"unknown"})
# Check gettransaction as well:
for n in self.nodes[0:2]:
assert_equal(n.gettransaction(txid_1)["bip125-replaceable"], "no")
assert_equal(n.gettransaction(txid_2)["bip125-replaceable"], "no")
assert_equal(n.gettransaction(txid_3)["bip125-replaceable"], "yes")
assert_equal(n.gettransaction(txid_3b)["bip125-replaceable"], "yes")
assert_equal(n.gettransaction(txid_4)["bip125-replaceable"], "unknown")
# After mining a transaction, it's no longer BIP125-replaceable
self.nodes[0].generate(1)
assert(txid_3b not in self.nodes[0].getrawmempool())
assert_equal(self.nodes[0].gettransaction(txid_3b)["bip125-replaceable"], "no")
assert_equal(self.nodes[0].gettransaction(txid_4)["bip125-replaceable"], "unknown")
if __name__ == '__main__':
ListTransactionsTest().main()
| [
"peter@codebuffet.co"
] | peter@codebuffet.co |
71ef43c37a39aabf11677782c933b7f2ca517fd1 | c20950b60480f927764400bd243722214f611213 | /nplus1_nomore/example_1/urls.py | 2dd0242100df105b2a1d0956006a4b9a204628e1 | [] | no_license | samuelmovi/nplus1_nomore | ad77d02c0ca7537307f86380064291e0cc7186f1 | 1f3811069c394d9fc213a75d5be380be0a607212 | refs/heads/master | 2023-07-07T21:59:57.364048 | 2021-08-30T17:45:10 | 2021-08-30T17:45:10 | 400,259,204 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 357 | py | from django.urls import path
from . import views
app_name = 'example_1'
urlpatterns = [
path('report_list_bad/', views.ReportsListBadView.as_view(), name='reports-bad'),
path('report_list_good/', views.ReportsListGoodView.as_view(), name='reports-good'),
path('report_list_best/', views.ReportsListBestView.as_view(), name='reports-best'),
]
| [
"samuel.molina@enreda.coop"
] | samuel.molina@enreda.coop |
7a1cb814c241270232d1ad276289e7b8c9a49cbb | af06f557d31fdfb6ef948dd5acebf74aa8144e2a | /jun/models.py | 4014db1d69cd55e1de9e1b0baba7e5695d95eff2 | [] | no_license | junkwon-dev/NET_Comm_web | 1d7c6921460830973b972c24536252dd9f591890 | c1549efaa2cf2288ca5b9f0f1fa4a4af2131069e | refs/heads/master | 2022-12-17T02:58:18.058715 | 2021-06-09T14:18:34 | 2021-06-09T14:18:34 | 225,137,248 | 0 | 0 | null | 2022-12-08T03:15:06 | 2019-12-01T09:39:11 | JavaScript | UTF-8 | Python | false | false | 372 | py | from django.db import models
# Create your models here.
class Blog_jun(models.Model):
title = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
body = models.TextField()
files =models.FileField(null=True,upload_to='files/')
isFile=models.BooleanField(default=False)
def summary(self):
return self.body[:30] | [
"jun@junui-MacBookPro.local"
] | jun@junui-MacBookPro.local |
a4d90d95aa7d47485090ac92009ccbef391cdfec | 5dfb9ca5e0c8cb4cb7a7a92d6f6a34b34a841869 | /LeetCodeSolutions/python/513_Find_Bottom_Left_Tree_Value.py | 808427669e6aa627ab9f1b0b0ec2c32986e01b6e | [
"MIT"
] | permissive | ChuanleiGuo/AlgorithmsPlayground | 2f71d29e697a656562e3d2a2de783d964dc6a325 | 90b6287b742c8bfd3797540c408d679be2821a40 | refs/heads/master | 2021-01-11T18:30:43.218959 | 2018-11-19T02:20:31 | 2018-11-19T02:20:31 | 79,550,052 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 432 | py | # Definition for a binary tree node.
class TreeNode(object):
def __init__(self, x):
self.val = x
self.left = None
self.right = None
class Solution(object):
def findBottomLeftValue(self, root):
"""
:type root: TreeNode
:rtype: int
"""
queue = [root]
for node in queue:
queue += filter(None, [node.right, node.left])
return node.val
| [
"chuanleiguo@gmail.com"
] | chuanleiguo@gmail.com |
e5f11a25aafb0f4285218d90725715b8808028b1 | f3539368956788ccd1ec21ac3889b3cdb1302555 | /EDA.py | 7919fed58ea224555d8fdbb5b7b721fd22458d56 | [] | no_license | tcwhalen/UrbanSoundPS | 440854eac903ac6fe77143e54bfa5128a33cd604 | fb2f7d82c369c99773ea780065e4430ef446c92c | refs/heads/master | 2023-02-04T23:50:58.932680 | 2020-12-24T05:31:42 | 2020-12-24T05:31:42 | 324,072,036 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,507 | py | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.5.2
# kernelspec:
# display_name: Python 3.6.9 64-bit
# name: python36964bit56390c8b731a4898893e0d66a8b6194d
# ---
# +
venv = "urbsound8k/bin/activate_this.py"
exec(open(venv).read())
import numpy as np
import scipy
import pandas as pd
import matplotlib.pyplot as plt
import librosa
import librosa.display
import math
import seaborn as sns
import IPython.display
pi = math.pi
# custom functions
from phaseshift import phasediff
# +
# sound1, FS1 = librosa.load('data/fold1/7383-3-0-0.wav') # dog
# sound1, FS1 = librosa.load('data/fold1/15564-2-0-0.wav') $ crowd
# sound1, FS1 = librosa.load('data/fold1/17913-4-0-1.wav') # construction
sound1, FS1 = librosa.load('data/fold1/19026-1-0-0.wav') # car horn, good ex of phase diff
wind = 2**11 # in samples
step = 2**9 # in samples
rayleigh = FS1/wind
nyquist = FS1/2
step_sec = step/FS1 # in sec
# freqs = [rayleigh*(i+1) for i in range(round(wind/2))]
ft1 = librosa.stft(sound1, window=scipy.signal.hann, n_fft=wind, hop_length=step, center=0)
psd1 = abs(ft1)**2
db1 = librosa.power_to_db(psd1)
phdiff, psd_rect, times, freqs = phasediff(sound1, windsize=wind, stepsize=step, sampfreq=FS1)
# -
# IPython.display.Audio(data=sound1, rate=FS1)
librosa.display.specshow(db1, y_axis='linear')
# + tags=["outputPrepend"]
librosa.display.specshow(phdiff, y_axis='linear')
# -
| [
"timcwhalen@gmail.com"
] | timcwhalen@gmail.com |
cb85e2e2c3443015432d4914f2efa2bbb155f8ab | 77625e7e13e23b6422aaa9f0d0e4edfa38d3c972 | /oca/tests/test_user_pool.py | a6daad4ce3388b4ef4872869097896350f241b76 | [
"Apache-2.0"
] | permissive | lukaszo/python-oca | 82809870b233a7b119e29f20421404fda94df632 | b2bb79ea0badc8e54fee6e09a185c3576a9808b3 | refs/heads/master | 2021-01-10T20:23:00.839900 | 2015-06-19T20:53:36 | 2015-06-19T20:53:36 | 1,144,937 | 6 | 6 | null | 2015-06-21T20:58:15 | 2010-12-06T23:53:07 | Python | UTF-8 | Python | false | false | 475 | py | # -*- coding: UTF-8 -*-
import os
from mock import Mock
import oca
class TestUserPool:
def setUp(self):
self.client = oca.Client('test:test')
self.xml = open(os.path.join(os.path.dirname(oca.__file__),
'tests/fixtures/userpool.xml')).read()
def test_info(self):
self.client.call = Mock(return_value=self.xml)
pool = oca.UserPool(self.client)
pool.info()
assert len(pool) == 2
| [
"lukaszoles@gmail.com"
] | lukaszoles@gmail.com |
ad93028a480b94013dd1f453bba7655d76b70bee | e97db3e24252eedf9fc85d9093c320e39822025e | /defensa/img/aprendizaje/clases3.py | c5632f6dcc6812c9eadd194dbc35504e4c54184c | [] | no_license | facundoq/thesis-writeup | 5217c84b6bf6e80d590dd92fd930522a9688f135 | fdea6c2b6c15ce0dea4f988df20fe9556e77d657 | refs/heads/master | 2021-01-22T08:53:12.896185 | 2017-12-11T15:29:31 | 2017-12-11T15:29:31 | 14,572,449 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,475 | py | import sys
import os
sys.path.append(os.path.realpath('../'))
from base_plot import *
from pylab import *
from math import exp,sqrt
import numpy as np
import matplotlib as mp
def gaussian2d(ux,uy,sx,sy,samples):
cov = array([[sx,0],[0,sy]])
x,y = np.random.multivariate_normal([ux,uy],cov ,size=samples).T
return(x,y)
def gaussian2d_isotropic(ux,uy,scale,samples):
return gaussian2d(ux,uy,scale,scale,samples)
def gaussian2d_unit(ux,uy,samples):
return gaussian2d_isotropic(ux,uy,1,samples)
def default_axes(r=10):
ylim([-r,r])
xlim([-r,r])
def graph_clases3():
(x,y)=gaussian2d_unit(0,0,20)
scatter(x,y)
def graph_perceptron_entrenamiento():
topology=141
samples=30
sd=2
(x1,y1)=gaussian2d_isotropic(-3,4,sd,samples)
(x2,y2)=gaussian2d_isotropic(3,-4,sd,samples)
w=array([ -0.5,-2,-0.5,1 ])
away=array([ [0,-2],[0,-2],[0,-3],[1,-3] ])
x=np.arange(-10.0,10.0,0.1)
for sp in xrange(4):
subplot(topology+sp,aspect='equal')
default_axes(10)
scatter(x1,y1,marker='+')
scatter(x2,y2,marker='_')
plot(x,x*w[sp])
arrow_size=4
side = 1 if sp<2 else -1;
direction= [w[sp]*side,-1*side]
better_arrow([0,0],direction,arrow_size,label='$\mathbf{w}$',position=away[sp])
if __name__=='__main__':
figure(figsize=(20,10))
#graph_clases3()
#savepngfig('clases3')
graph_perceptron_entrenamiento()
savepngfig('perceptron_entrenamiento')
show()
| [
"facundoq@gmail.com"
] | facundoq@gmail.com |
29a9f7ca66d7a1f7db10087bb7a92e53f63d1578 | 9f2eb69625bd0204f2e7d39b56592c92afe5e07b | /photos/templatetags/photos_carousel.py | 4abdcc1b232221d1e5a4af2442141d9859331e42 | [] | no_license | Cris123m/machtecBeta | f8b426a0f996b189a8595ef179639facf77c0056 | 7c01653d5876c34974e0e8b6beb8e93da86604c8 | refs/heads/master | 2020-09-06T23:11:45.904167 | 2020-03-09T00:22:45 | 2020-03-09T00:22:45 | 220,584,788 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 182 | py | from django import template
from photos.models import Photo
register=template.Library()
@register.simple_tag
def get_photo_list():
photos=Photo.objects.all()
return photos | [
"cristofer123mach@gmail.com"
] | cristofer123mach@gmail.com |
200995910948f77d1809031eabf7ce748480c34b | 32f7e4300e36b2a280daf94e6b887f02887c0c8f | /ref_impl/demo_tiny_ctc.py | 012434e6cc018306f5fdf0a64ef9dda9c1f726d3 | [
"MIT"
] | permissive | isaziconsulting/rnn2d | e09fe095ffb576c49c618f313cee70e5368ac1cf | e983bc441a4e3a5fb5f632ae30a3b4b20c5a63e1 | refs/heads/master | 2021-01-20T12:28:00.628269 | 2017-05-05T12:19:56 | 2017-05-05T12:19:56 | 90,363,724 | 0 | 0 | null | 2017-05-05T10:11:49 | 2017-05-05T10:11:49 | null | UTF-8 | Python | false | false | 1,396 | py | import numpy as np
import theano
import theano.tensor as T
from scipy.ndimage import imread
from returnn import Model, Collapse, Loss, SGD
np.random.seed(1234)
img_list = [ imread('../assets/a.png').astype(np.float32),
imread('../assets/ab.png').astype(np.float32) ]
y = np.array([1, 1, 2], dtype=np.int32)
ylen = np.array([1, 2], dtype=np.int32)
max_h, max_w, max_c = 0, 0, 0
for img in img_list:
max_h = max(max_h, img.shape[0])
max_w = max(max_w, img.shape[1])
max_c = max(max_c, img.shape[2])
z = np.array([(img.shape[0], img.shape[1]) for img in img_list],
dtype = np.float32)
x = np.zeros((max_h, max_w, len(img_list), max_c)).astype(np.float32)
for i, img in enumerate(img_list):
x[:img.shape[0], :img.shape[1], i, :img.shape[2]] = (255.0 - img) / 255.0
model = Collapse(Model(max_c, [5], collapse_type='sum'), collapse_type='sum')
loss = Loss(model)
sgd = SGD(loss)
# Round parameters for an easier visualization
for p in model.parameters:
v = p.get_value()
v = np.round(v, 1)
p.set_value(v)
z = np.array([(max_h, max_w) for img in img_list],
dtype = np.float32)
for i in xrange(20):
output, sizes = model(x, z)
l = sgd.train(x, z, y, ylen)
print 'ITER = %02d %9.5f %9.5f %9.5f %9.5f %9.5f %9.5f' % (
i, output.sum(), output.mean(), output.std(),
output.min(), output.max(), l)
| [
"joapuipe@gmail.com"
] | joapuipe@gmail.com |
e3feca8063a41c8988af243eef5ea642b26fe9c5 | 4a74875c7366a19b7189fcb89fa0fa27abc4309e | /data_pipeline/processor/oracle_where_parser.py | 60562db03590c59c92acd5b85aa9bc775183f86c | [
"Apache-2.0"
] | permissive | saubury-iag/data_pipeline | d865d66d25eeb4ea6c6a655ae934bfe83c0efa06 | 4ad04198ed48c643045113c6e2c3e0848adbdec6 | refs/heads/master | 2021-07-23T08:43:46.754162 | 2017-11-01T05:05:23 | 2017-11-01T05:05:23 | 108,808,749 | 0 | 0 | null | 2017-10-30T06:06:41 | 2017-10-30T06:06:41 | null | UTF-8 | Python | false | false | 5,101 | py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
###############################################################################
# Module: oracle_where_parser
# Purpose: Parses Where component of LogMiner SQL statements
#
# Notes:
#
###############################################################################
from enum import Enum
import data_pipeline.constants.const as const
from .oracle_base_parser import OracleBaseParser
WhereState = Enum('WhereState', 'start key operator value')
class OracleWhereParser(OracleBaseParser):
def __init__(self):
super(OracleWhereParser, self).__init__()
self._primarykey_list = None
self._curr_key = None
self._statement = None
def _set_primary_keys_from_string(self, pkstr, table_name):
pkstr = pkstr.strip()
if pkstr:
if pkstr == const.NO_KEYFIELD_STR:
self._primarykey_list = []
else:
self._primarykey_list = [pk.strip().upper()
for pk in pkstr.split(const.COMMA)]
self._logger.debug(
"[{table}] Set primary keys = {pks}"
.format(table=table_name, pks=self._primarykey_list))
return self._primarykey_list
def set_primary_keys(self, value):
self._primarykey_list = value
def set_statement(self, value):
self._statement = value
def parse(self):
self._char_buff = const.EMPTY_STRING
self._parsing_state = WhereState.start
while self._read_cursor < len(self._commit_statement):
if self._can_consume_char():
self._consume()
self._transition_where_parsing_state()
self._next()
def _can_consume_char(self):
return (self._parsing_state == WhereState.key or
self._parsing_state == WhereState.value)
def _transition_where_parsing_state(self):
if self._parsing_state == WhereState.start:
self._transition_from_where_start_parsing()
elif self._parsing_state == WhereState.key:
self._transition_from_where_key_parsing()
elif self._parsing_state == WhereState.operator:
self._transition_from_where_operator_parsing()
elif self._parsing_state == WhereState.value:
self._transition_from_where_value_parsing()
def _transition_from_where_start_parsing(self):
if self._at_key_word_boundary():
self._empty_buffer()
self._parsing_state = WhereState.key
def _transition_from_where_key_parsing(self):
if self._at_key_word_boundary():
self._curr_key = self._flush_buffer()
self._parsing_state = WhereState.operator
def _transition_from_where_operator_parsing(self):
is_null_index = self._commit_statement.find(const.IS_NULL,
self._read_cursor)
if self._read_cursor == is_null_index:
# Prefer using None over const.IS_NULL in case a
# key's value is actually 'IS NULL'
self._statement.add_condition(self._curr_key, None)
self._read_cursor += len(const.IS_NULL)
self._empty_buffer()
self._parsing_state = WhereState.start
elif self._at_value_word_boundary():
self._empty_buffer()
self._parsing_state = WhereState.value
def _transition_from_where_value_parsing(self):
if self._at_escaped_single_quote():
self._next()
elif self._at_value_word_boundary():
where_value = self._flush_buffer()
self._statement.add_condition(self._curr_key, where_value)
self._parsing_state = WhereState.start
def _can_add_primary_key(self):
return (self._primarykey_list is None or
const.NO_KEYFIELD_STR in self._primarykey_list or
self._curr_key.upper() in self._primarykey_list)
def _at_key_word_boundary(self):
return self._curr_char == const.DOUBLE_QUOTE
def _at_value_word_boundary(self):
return self._curr_char == const.SINGLE_QUOTE
def _flush_buffer(self):
# Always remove last char which is only useful when
# there are escaped single-quotes
return super(OracleWhereParser, self)._flush_buffer()[:-1]
| [
"simon.aubury@iag.com.au"
] | simon.aubury@iag.com.au |
64d2caa1c34922afd36e3cf82f53d1a0d89456f7 | 3f90c218ebcef6826808b7fd7ec3bd41a457074e | /thade/migrations/0019_auto_20210727_1606.py | 911404e5911de8808506abd0726637a7279da178 | [] | no_license | webclinic017/ProjectThade | a698b057a27046633a3d5f066732d4ac45a91b2a | 4c9d8ef7db9fa733910a4043748f8fba9724baba | refs/heads/main | 2023-08-19T04:56:20.898792 | 2021-07-31T09:18:11 | 2021-07-31T09:18:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 854 | py | # Generated by Django 3.2.5 on 2021-07-27 16:06
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('thade', '0018_alter_botlog_signal'),
]
operations = [
migrations.RemoveField(
model_name='bot',
name='investment_vnd',
),
migrations.RemoveField(
model_name='botlog',
name='investment_vnd',
),
migrations.AddField(
model_name='botlog',
name='decimal_investment_vnd',
field=models.DecimalField(decimal_places=4, default=0, max_digits=16),
preserve_default=False,
),
migrations.AlterField(
model_name='bot',
name='fee',
field=models.DecimalField(decimal_places=6, max_digits=12),
),
]
| [
"55542012+Khoa-bit@users.noreply.github.com"
] | 55542012+Khoa-bit@users.noreply.github.com |
555e33f4e61e3d7eb4e039e106af5677c3baf2fc | 128bf9519a131483776c07adbf868e4efb0a42f8 | /ex_02.1_fatorial_com_for.py | 286f49c39254f3be5d377fa06a9d1d7cd3a404c3 | [] | no_license | SydPedro/Exercicios_Sidney_Univesp | 8feb9142c24d1f81ec8c60b74cdf3596a36c48a5 | ede520b0947115a2b56c3b7b1a0c5502403a1cef | refs/heads/main | 2023-06-23T19:51:17.107918 | 2021-07-26T13:32:08 | 2021-07-26T13:32:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 187 | py | # Programa que calcula n!
n = int(input("Digite o número: "))
fatorial = 1
aux = 0
for i in range(1,n+1):
fatorial = fatorial*i
print("O fatorial de", n, "é",fatorial)
| [
"noreply@github.com"
] | noreply@github.com |
13d583aeb3e77a61c3d17a1da5edf9c4f1182864 | 9515eb99a4108c4ef18f3ff6ef7ccd2713bb990d | /djangotest/settings.py | 616c48544d154cb41884301790883aca11fede43 | [] | no_license | EARad/DjangoTest | 64baab6de43e1eb02f98f289f18b4838fd4d9fb0 | 40dc8d12ba9b9c94001c10a48880bb3fd03be902 | refs/heads/main | 2023-08-08T01:10:03.135560 | 2021-09-19T15:18:58 | 2021-09-19T15:18:58 | 408,159,762 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,497 | py | """
Django settings for djangotest project.
Generated by 'django-admin startproject' using Django 3.2.7.
For more information on this file, see
https://docs.djangoproject.com/en/3.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.2/ref/settings/
"""
import os
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'django-insecure-$ytagcno**6&vg48eok*jue$q9cw!kv_gsvv=vo86&1j$oyon6'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'django.contrib.flatpages',
'fpages'
]
SITE_ID = 1
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
]
ROOT_URLCONF = 'djangotest.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'djangotest.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.2/howto/static-files/
STATIC_URL = '/static/'
# Default primary key field type
# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
STATICFILES_DIRS = [
BASE_DIR/'static'
] | [
"Lezikmail@gmail.com"
] | Lezikmail@gmail.com |
8ca49f178eb6ef50c6e90616f3fdceecf2c9f828 | a4c78931b5b8ee4a1d5394ee101f25448360944c | /venv/bin/easy_install-2.7 | 1cc2c0ab549e298b771169578f2bdbd592c3cc7f | [] | no_license | igorvinnicius/python_ddd_flask | aeeade48cddbb163bbb89a5038e56cc4d15a5307 | 3ff861211caffe5855b9c4c859553982949945fa | refs/heads/master | 2020-04-02T03:50:51.945846 | 2016-07-19T01:50:35 | 2016-07-19T01:50:35 | 63,641,757 | 10 | 1 | null | null | null | null | UTF-8 | Python | false | false | 257 | 7 | #!/home/igor/igorvinicius/venv/bin/python2
# -*- coding: utf-8 -*-
import re
import sys
from setuptools.command.easy_install import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(main())
| [
"ondasdechoque@cursopilatesbrasil.com.br"
] | ondasdechoque@cursopilatesbrasil.com.br |
b6a4f9b7cbadd66159005b0291e79b5af56cca7c | d257a407a570710c809ee3d448ee47becb91dc30 | /Longest Palindromic Substring/lps.py | 4ead72f44181d491d05d6b290ebb78971e300c48 | [] | no_license | SuerpX/SuperX_leetcode | aa934651143f8e199a6a1d09d7e2630036ca4b22 | b45747c4270643136eccfe4dc70dbd77256b1c71 | refs/heads/master | 2020-03-13T16:23:07.186244 | 2018-08-03T08:40:16 | 2018-08-03T08:40:16 | 131,196,527 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,001 | py | class Solution(object):
def longestPalindrome(self, s):
"""
:type s: str
:rtype: str
"""
maxS = 0
maxE = 0
longest = -1
l = len(s)
for i in range(l):
'''
if i == 0 or i == l - 1:
continue
'''
p1 = i - 1
p2 = i + 1
long = 1
while p1 >= 0 and p2 < l and s[p1] == s[p2]:
long += 2
p1 -= 1
p2 += 1
if longest < long:
longest = long
maxS = p1 + 1
maxE = p2 - 1
p1 = i
p2 = i + 1
long = 0
while p1 >= 0 and p2 < l and s[p1] == s[p2]:
long += 2
p1 -= 1
p2 += 1
if longest < long:
longest = long
maxS = p1 + 1
maxE = p2 - 1
return s[maxS:maxE + 1]
| [
"SuerpX@users.noreply.github.com"
] | SuerpX@users.noreply.github.com |
3665f21a557fabf7fa2f88468985df0174088928 | f4ef2be443dbb19d8da787fb7c3c0296db8c5306 | /environments/informationtechnology-205813/jenkins/webhook/webhooks.py | 82249e7aab836c53dd9dd288b3e76c1199bb1b94 | [
"MIT"
] | permissive | NarrativeCompany/narrative-infrastructure | 16f76fe86dbffacddcfa948a7f58412ac53cd0b3 | 6e656296dda84f6ebbdefbc8179624a6061788fd | refs/heads/master | 2020-12-05T16:13:58.306768 | 2020-01-07T16:59:33 | 2020-01-07T16:59:33 | 232,168,829 | 2 | 2 | null | null | null | null | UTF-8 | Python | false | false | 8,984 | py | # -*- coding: utf-8 -*-
#
# Copyright (C) 2014, 2015, 2016 Carlos Jenkins <carlos@jenkins.co.cr>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import base64
import logging
from sys import stdout,stderr, hexversion
# logging.basicConfig(stream=stdout)
import hmac
from hashlib import sha1
from json import loads, dumps
from subprocess import Popen, PIPE
from tempfile import mkstemp
import os
from os import access, X_OK, remove, fdopen
from os.path import isfile, abspath, normpath, dirname, join, basename
import requests
from ipaddress import ip_address, ip_network
from flask import Flask, request, abort
application = Flask(__name__)
# https://api.slack.com/methods/users.list/test
GIT_USER_MAP = { "bonovoxly": "<@UALFALVHV>",
"brianlenz": "<@U6U06HHC4>",
"bsantare": "<@UBGBGKAH1>",
"brooxmagnetic": "<@UAFBZPRPB>",
"Jeff-Narrative": "<@UALN1163Y>",
"JonmarkWeber": "<@UALQBPWCE>",
"lorilhope": "<@U6UQCSETW>",
"michaelf318is": "<@U6TFTNUEM>",
"paulonarrative": "<@U9QH69YR3>",
"platypusrex": "<@UAQDZRRQF>",
"PWAlessi": "<@UB11V5HDZ>",
"RosemaryONeill1": "<@U6PJ6USTU>",
"ted-oneill": "<@U6NUAHPEU>"
}
@application.route('/', methods=['GET', 'POST'])
def index():
"""
Main WSGI application entry.
"""
log = logging.getLogger('werkzeug')
log.setLevel(logging.INFO)
# ch = logging.StreamHandler()
# ch.setLevel(logging.INFO)
#log.addHandler(ch)
path = normpath(abspath(dirname(__file__)))
header_ip = request.headers.get('X-Real-IP')
header_forwarded_for = request.headers.get('X-Forwarded-For')
log.info(request.headers)
log.info('X-Real-IP: {}'.format(header_ip))
log.info('X-Forwarded-For: {}'.format(header_ip))
# Only POST is implemented
if request.method != 'POST':
abort(501)
# Load config
with open(join(path, 'conf/config.json'), 'r') as cfg:
config = loads(cfg.read())
# hooks = config.get('hooks_path', join(path, 'hooks'))
# Allow Github IPs only
if config.get('github_ips_only', True):
src_ip = ip_address(
u'{}'.format(request.access_route[0]) # Fix stupid ipaddress issue
)
whitelist = requests.get('https://api.github.com/meta').json()['hooks']
for valid_ip in whitelist:
if src_ip in ip_network(valid_ip):
break
else:
log.error('IP {} not allowed'.format(
src_ip
))
abort(403)
# Enforce secret
if os.environ["SECRET"]:
secret = os.environ["SECRET"]
else:
secret = config.get('enforce_secret', '')
if secret:
# Only SHA1 is supported
header_signature = request.headers.get('X-Hub-Signature')
if header_signature is None:
log.error('header_signature missing.')
abort(403)
sha_name, signature = header_signature.split('=')
if sha_name != 'sha1':
log.error('sha not sha1.')
abort(501)
# HMAC requires the key to be bytes, but data is string
secret_bytes= bytes(secret, 'latin-1')
mac = hmac.new(secret_bytes, request.data, digestmod='sha1')
# Python prior to 2.7.7 does not have hmac.compare_digest
if hexversion >= 0x020707F0:
if not hmac.compare_digest(str(mac.hexdigest()), str(signature)):
log.error('hmac.compare_digest failed.')
log.error(str(mac.hexdigest()))
log.error(str(signature))
abort(403)
else:
# What compare_digest provides is protection against timing
# attacks; we can live without this protection for a web-based
# application
if not str(mac.hexdigest()) == str(signature):
log.error('mac.hexdigest not equal to string signature.')
abort(403)
# Implement ping
event = request.headers.get('X-GitHub-Event', 'ping')
if event == 'ping':
return dumps({'msg': 'pong'})
# Gather data
try:
payload = request.get_json()
except Exception:
log.warning('Request parsing failed')
abort(400)
# Determining the branch is tricky, as it only appears for certain event
# types an at different levels
branch = None
try:
# Case 1: a ref_type indicates the type of ref.
# This true for create and delete events.
if 'ref_type' in payload:
if payload['ref_type'] == 'branch':
branch = payload['ref']
# Case 2: a pull_request object is involved. This is pull_request and
# pull_request_review_comment events.
elif 'pull_request' in payload:
# This is the TARGET branch for the pull-request, not the source
# branch
branch = payload['pull_request']['base']['ref']
elif event in ['push']:
# Push events provide a full Git ref in 'ref' and not a 'ref_type'.
branch = payload['ref'].split('/', 2)[2]
branch_origin = "origin/{}".format(branch)
# add custom fields
payload['branch_origin'] = branch_origin
if payload['pusher']['name'] in GIT_USER_MAP:
print("Slack user found: {}".format(payload['pusher']['name']))
payload['slack_user'] = GIT_USER_MAP[payload['pusher']['name']]
else:
print("Slack user not found: {}".format(payload['pusher']['name']))
payload['slack_user'] = payload['pusher']['name']
except KeyError:
# If the payload structure isn't what we expect, we'll live without
# the branch name
pass
# All current events have a repository, but some legacy events do not,
# so let's be safe
name = payload['repository']['name'] if 'repository' in payload else None
meta = {
'name': name,
'branch': branch,
'event': event
}
log.info('Metadata:\n{}'.format(dumps(meta)))
# Skip push-delete
if event == 'push' and payload['deleted']:
log.info('Skipping push-delete event for {}'.format(dumps(meta)))
return dumps({'status': 'skipped'})
# # Possible hooks
# scripts = []
# if branch and name:
# scripts.append(join(hooks, '{event}-{name}-{branch}'.format(**meta)))
# if name:
# scripts.append(join(hooks, '{event}-{name}'.format(**meta)))
# scripts.append(join(hooks, '{event}'.format(**meta)))
# scripts.append(join(hooks, 'all'))
# log the paylod
log.info('Payload:\n{}'.format(dumps(payload)))
if os.environ["TOKEN"]:
token = os.environ["TOKEN"]
url = "".join(['http://jenkins:8080/generic-webhook-trigger/invoke?token=', token])
else:
url = config.get('jenkins_url', 'http://jenkins:8080/generic-webhook-trigger/invoke')
headers = {'Content-type': 'application/json', 'Accept': 'text/plain', 'User-Agent': 'jenkins-webhook-python'}
r = requests.post(url, data=dumps(payload), headers=headers)
# Ignoring this section
# # Check permissions
# scripts = [s for s in scripts if isfile(s) and access(s, X_OK)]
# if not scripts:
# return dumps({'status': 'nop'})
# # Save payload to temporal file
# osfd, tmpfile = mkstemp()
# with fdopen(osfd, 'w') as pf:
# pf.write(dumps(payload))
# # Run scripts
# ran = {}
# for s in scripts:
# proc = Popen(
# [s, tmpfile, event],
# stdout=PIPE, stderr=PIPE
# )
# stdout, stderr = proc.communicate()
# ran[basename(s)] = {
# 'returncode': proc.returncode,
# 'stdout': stdout.decode('utf-8'),
# 'stderr': stderr.decode('utf-8'),
# }
# # Log errors if a hook failed
# if proc.returncode != 0:
# logging.error('{} : {} \n{}'.format(
# s, proc.returncode, stderr
# ))
# # Remove temporal file
# remove(tmpfile)
info = config.get('return_scripts_info', False)
if not info:
return dumps({'status': 'done'})
# output = dumps(ran, sort_keys=True, indent=4)
# logging.info(output)
return '200 OK'
if __name__ == '__main__':
application.run(debug=True, host='0.0.0.0')
| [
"brian@socialstrata.com"
] | brian@socialstrata.com |
f59f3f27e0a792a148ccd1c41537d50bd23ab19e | fc65aa5bf399ba8e762ac63e3886d6f21e7898b0 | /graphical_101pong/graph_pong.py | 1793455a99bc6cd31d81f55e9c462d32585b4ca6 | [] | no_license | pastequeninja/101_pong | 5d8cfc7cd7f2517f627b8baa7db2ef54249becb7 | 08a1b929ba70a298bffe99245c56039f99a8fa5d | refs/heads/master | 2021-07-03T20:23:45.625176 | 2020-10-08T15:07:27 | 2020-10-08T15:07:27 | 185,639,035 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,920 | py | import pygame
from pygame.locals import *
from sys import exit
pygame.init()
white = (255, 255, 255)
SCREEN_SIZE = (1920, 1080)
screen = pygame.display.set_mode(SCREEN_SIZE, 0, 32)
pygame.display.set_caption("Pygame Demo")
background = pygame.image.load("medias/fond_noir.jpg").convert()
bat_1 = pygame.image.load("medias/rectangle.jpeg").convert_alpha()
bat_2 = pygame.image.load("medias/rectangle.jpeg").convert_alpha()
ball = pygame.image.load("medias/balle.png").convert_alpha()
FONT = "medias/Minecraft.ttf"
WHITE = (255, 255, 255)
menu_background = pygame.image.load("medias/theme_menu.jpg").convert()
start_button = pygame.image.load("medias/start.jpg").convert()
ball_x, ball_y = 960, 540
x_2, y_2 = 1755, 540
x_1, y_1 = 110, 540
MOVE_RIGHT = 1
MOVE_LEFT = 2
MOVE_DOWN = 3
MOVE_UP = 4
direction_r = 0
direction_l = 0
points_player1 = 0
points_player2 = 0
x_vec = -5
y_vec = -5
while True:
while (points_player1 < 5 and points_player2 < 5):
if ball_x <= 0:
ball_x = 960
ball_y = 540
points_player2 += 1
if ball_x >= 1920 - 70:
ball_x = 960
ball_y = 540
points_player1 += 1
if ball_y == 0:
y_vec = -y_vec
ball_y += 5
if ball_y >= 1018:
y_vec = -y_vec
ball_y -= 5
if ball_x == 155 and ball_y >= y_1 - 34 / 2 and ball_y <= y_1 + 410 - 34 / 2:
x_vec = -x_vec
ball_x += 5
if ball_x == 1700 and ball_y >= y_2 - 34 / 2 and ball_y <= y_2 + 410 - 34 / 2:
x_vec = -x_vec
ball_x -= 5
if ball_y > 0 and ball_y < 1018:
ball_x += x_vec
ball_y += y_vec
for event in pygame.event.get():
if event.type == QUIT:
exit()
if event.type == KEYDOWN:
if event.key == K_z:
direction_l = MOVE_LEFT
if event.key == K_UP:
direction_r = MOVE_UP
if event.key == K_DOWN:
direction_r = MOVE_DOWN
if event.key == K_s:
direction_l = MOVE_RIGHT
elif event.type == KEYUP:
if event.key == K_z:
direction_l = 0
if event.key == K_DOWN:
direction_r = 0
if event.key == K_UP:
direction_r = 0
if event.key == K_s:
direction_l = 0
if(direction_l == MOVE_LEFT and y_1 > 0):
y_1-=5
if(direction_l == MOVE_RIGHT and y_1 < 1080 - 410):
y_1+=5
if(direction_r == MOVE_UP and y_2 > 0):
y_2-=5
if(direction_r == MOVE_DOWN and y_2 < 1080 - 410):
y_2+=5
screen.blit(background, (0, 0))
screen.blit(bat_1, (x_1, y_1))
screen.blit(bat_2, (x_2, y_2))
screen.blit(ball, (ball_x, ball_y))
pygame.display.update()
for event in pygame.event.get():
if (points_player1 == 5):
screen.blit(background, (0, 0))
pygame.display.update()
font = pygame.font.Font(FONT, 17)
surface_text = font.render("PLAYER 1 WINS !!!", True, WHITE)
text_rect = screen.get_rect()
text_rect.midtop = (1800, 500)
if event.type == QUIT:
exit()
screen.blit(surface_text, text_rect)
pygame.display.flip()
else:
screen.blit(background, (0, 0))
pygame.display.update()
font = pygame.font.Font(FONT, 17)
surface_text = font.render("PLAYER 2 WINS !!!", True, WHITE)
text_rect = screen.get_rect()
text_rect.midtop = (1800, 500)
if event.type == QUIT:
exit()
screen.blit(surface_text, text_rect)
pygame.display.flip()
| [
"Pasteque@localhost.localdomain"
] | Pasteque@localhost.localdomain |
dbf2facb2076d5ef35376878f876e8055cc06a3b | 4dfacb9653c3a72b45361b4df5827cb6117e86e0 | /app.py | 97dacf5a43285932944a289e1849358c2965d14d | [] | no_license | srahulkumar8380/final_major_project | a53ac368e73a3a4c3b08f404c02586bb5351e201 | 388e1bf3aea25530073bf488cd036ea5282e1c45 | refs/heads/master | 2023-03-23T14:43:59.853393 | 2020-06-06T08:06:07 | 2020-06-06T08:06:07 | 269,910,598 | 0 | 0 | null | 2021-03-20T04:14:08 | 2020-06-06T07:55:30 | Python | UTF-8 | Python | false | false | 2,796 | py |
from flask import Flask, render_template, request
# libraries for making count matrix and similarity matrix
import numpy as np
import pandas as pd
from scipy.spatial import distance
from sklearn.neighbors import NearestNeighbors
from sklearn.externals import joblib
import pickle
app = Flask(__name__)
model = pickle.load(open('model.pkl', 'rb'))
nutrition_df = pd.read_csv('Test.csv', header=0)
@app.route('/predict',methods=['POST'])
def predict():
'''
For rendering results on HTML GUI
'''
# print(request.get_data('Fat'))
print("hello")
#print(request.form.to_dict())
print(request.json)
print(request.json['Fat'])
#print(request.get_json()["Fat"])
#print(request.args.get('Fat'))
lis=[]
t=[]
lis.append(int(request.json['Weight']))
lis.append(int(request.json['Calories']))
lis.append(int(request.json['Fat']))
lis.append(int(request.json['Sodium']))
lis.append(int(request.json['Carbo']))
lis.append(int(request.json['Cholesterol']))
lis.append(int(request.json['Protein']))
lis.append(int(request.json['Calcium']))
lis.append(int(request.json['Potassium']))
lis.append(int(request.json['Iron']))
lis.append(int(request.json['VitaminA']))
lis.append(int(request.json['VitaminC']))
lis.append(int(request.json['A']))
lis.append(int(request.json['B']))
lis.append(int(request.json['C']))
lis.append(int(request.json['D']))
lis.append(int(request.json['E']))
lis.append(int(request.json['F']))
lis.append(int(request.json['G']))
lis.append(int(request.json['H']))
lis.append(int(request.json['I']))
lis.append(int(request.json['J']))
lis.append(int(request.json['K']))
lis.append(int(request.json['L']))
lis.append(int(request.json['M']))
lis.append(int(request.json['N']))
lis.append(int(request.json['O']))
lis.append(int(request.json['P']))
t.append(lis)
#t=[[48.2, 180, 4.5, 150, 28, 0,10,0.22,0.26,0.009,3,0.036]]
#int_features = [int(x) for x in request.form.values()]
#int_features.reshape(1,-1)
#print(int_features)
#final_features = np.array(int_features)
distances, indices = model.kneighbors(t, n_neighbors=5)
temp=[]
for i in indices[0]:
dic={}
#print (nutrition_df.loc[i]['Brand'])
dic['Brand']=nutrition_df.loc[i]['Brand']
dic['FoodName']=nutrition_df.loc[i]['FoodName']
dic['Category']=nutrition_df.loc[i]['Category']
temp.append(dic)
#recommended_products = [nutrition_df.loc[i]['FoodName'] for i in indices[0]]
return str(temp)
if __name__ == "__main__":
app.run(debug=True)
| [
"noreply@github.com"
] | noreply@github.com |
b15f2abae8cd733f498374b1a2d0c477cd073e9a | de24f83a5e3768a2638ebcf13cbe717e75740168 | /moodledata/vpl_data/482/usersdata/309/110136/submittedfiles/Av2_Parte4.py | 4caf25849b6aea491973c78fc097c44d88370f19 | [] | no_license | rafaelperazzo/programacao-web | 95643423a35c44613b0f64bed05bd34780fe2436 | 170dd5440afb9ee68a973f3de13a99aa4c735d79 | refs/heads/master | 2021-01-12T14:06:25.773146 | 2017-12-22T16:05:45 | 2017-12-22T16:05:45 | 69,566,344 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 320 | py | # -*- coding: utf-8 -*-
mt=[]
lin=int(input("Digite a quantidade de linhas da matriz:"))
colu=int(input("Digite a quantidade de colunas da matriz:"))
for i in range (lin,0,-1):
lista=[]
for j in range (colu,0,-1):
lista[j]=(int(input("Digite um elemento para sua matriz:")))
mt[i]=lista
print(mt)
| [
"rafael.mota@ufca.edu.br"
] | rafael.mota@ufca.edu.br |
77ff1c2b53c17c875b84bb4f6b5a2944e786527e | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /backup/user_207/ch136_2020_04_01_11_57_23_991207.py | d8760caf5639fb9f950467abccac43d652a78617 | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,282 | py | import random
dinheiros = 10
d1 = random.randint(1,6)
d2 = random.randint(1,6)
d3 = random.randint(1,6)
soma = d1+d2+d3
print (soma)
JOGO = False
#fase de DICAS
print ("DICAS: \n Você possui {0} dinheiros." .format(dinheiros))
if dinheiros <=0:
print ("Você perdeu!")
pergunta1 = input("Você quer uma dica? ")
if pergunta1 == 'sim':
dinheiros -= 1
perguntaA = int(input ("Me diga um valor possível para a soma "))
perguntaB = int(input ("Me diga outro valor possível para a soma "))
perguntaC = int(input ("Me diga outro valor possível para a soma "))
if perguntaA == soma:
print ("Está entre os 3")
elif perguntaB == soma:
print ("Está entre os 3")
elif perguntaC == soma:
print ("Está entre os 3")
else:
print ("Não está entre os 3")
JOGO = True
while JOGO:
if dinheiros <=0:
print ("Você perdeu!")
JOGO = False
else:
print(" {0} dinheiros disponível" .format(dinheiros))
resposta = int(input ("Qual o valor da soma?"))
if resposta == soma:
dinheiros += 5*dinheiros
print ("Você ganhou o jogo com {0} dinheiros" .format (dinheiros))
JOGO = False
else:
dinheiros -= 1
| [
"you@example.com"
] | you@example.com |
630a999276d7e52a67c87f6a994dcbbd59c24900 | bc862ae748617c3035382d2313b487880d80c132 | /70_climbing_stairs.py | bbfc9bc600d0143edd73db75d332be95ba9bd93d | [] | no_license | jfbeyond/Coding-Practice-Python | 31fba612d92807ed3c473c0613e9e708e5d813d7 | 92f10fafb35d7d10e3c15e4d28c8f25d93dd8077 | refs/heads/master | 2020-05-22T10:17:55.789521 | 2019-05-12T22:11:34 | 2019-05-12T22:11:34 | 186,307,112 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 160 | py |
# 70 Climbing stairs
# Fibonacci series
def climbingStairs(5):
a,b = 1,2
for i in range(n-1):
a, b = b, a+b
return a
# O(n) O(1) | [
"noreply@github.com"
] | noreply@github.com |
6af1771661cfd3bc1ef147fed8d2917bac5edd6f | e06aa5b21204f037d0e505724aa2d1779708ec6f | /Pendulum/Utils.py | 18a19e6f857ee0283b8b974a3bf4cc8c37bd41a9 | [] | no_license | GreenWizard2015/openai-gym-exercise | bd745a8df6df00db404bf94db246084792fd87ec | b61438daab023637e5f1cfd3165a585e553ba394 | refs/heads/master | 2023-04-14T11:21:09.704888 | 2021-05-01T22:34:42 | 2021-05-01T22:34:42 | 315,734,155 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,258 | py | from Pendulum.PendulumEnvironment import PendulumEnvironment
import time
from Utils import emulateBatch
def showAgentPlay(agent, speed=.01, env=PendulumEnvironment):
env = env()
env.reset()
while not env.done:
env.apply(agent.process(env.state))
env.render()
time.sleep(speed)
env.hide()
return
def testAgent(agent, memory, episodes, processor=None, env=PendulumEnvironment):
testEnvs = [env() for _ in range(episodes)]
for replay, isDone in emulateBatch(testEnvs, agent):
replay = replay if processor is None else processor(replay)
memory.addEpisode(replay, terminated=not isDone)
return [x.score for x in testEnvs]
def trackScores(scores, metrics, levels=[.1, .5, .9]):
if 'scores' not in metrics:
metrics['scores'] = {}
def series(name):
if name not in metrics['scores']:
metrics['scores'][name] = []
return metrics['scores'][name]
########
N = len(scores)
orderedScores = list(sorted(scores, reverse=True))
totalScores = sum(scores) / N
print('Avg. test score: %.1f' % (totalScores))
series('avg.').append(totalScores)
for level in levels:
series('top %.0f%%' % (level * 100)).append(orderedScores[int(N * level)])
return | [
"sizeof.2011@gmail.com"
] | sizeof.2011@gmail.com |
91c9f763be41a1796d8ed796786b9dcee2e2c73f | f8e8faa75bb5dbae436c3dfba4040daf47bbce2c | /dosagelib/plugins/arcamax.py | a1c0b16597d65d72ae043795a956699c1d7bacb6 | [
"MIT"
] | permissive | acaranta/dosage | 784e00b1de8dd10e9664614ca32df67f6249e05e | 6e14e8709b3b213fdc07a2106464860e1cb99481 | refs/heads/master | 2021-01-12T16:16:52.813816 | 2020-01-12T23:36:46 | 2020-01-12T23:36:46 | 71,962,021 | 0 | 0 | MIT | 2018-12-27T09:48:37 | 2016-10-26T03:17:58 | Python | UTF-8 | Python | false | false | 6,345 | py | # -*- coding: utf-8 -*-
# Copyright (C) 2004-2008 Tristan Seligmann and Jonathan Jacobs
# Copyright (C) 2012-2014 Bastian Kleineidam
# Copyright (C) 2015-2019 Tobias Gruetzmacher
from __future__ import absolute_import, division, print_function
from ..scraper import _ParserScraper
class Arcamax(_ParserScraper):
imageSearch = '//img[@id="comic-zoom"]'
prevSearch = '//a[@class="prev"]'
def __init__(self, name, path):
super(Arcamax, self).__init__('Arcamax/' + name)
self.url = 'http://www.arcamax.com/thefunnies/' + path + '/'
@classmethod
def getmodules(cls):
return (
# do not edit anything below since these entries are generated from
# scripts/arcamax.py
# START AUTOUPDATE
# 9ChickweedLane has a duplicate in GoComics/9ChickweedLane
# Agnes has a duplicate in GoComics/Agnes
# AndyCapp has a duplicate in GoComics/AndyCapp
# Archie has a duplicate in Creators/Archie
cls('ArcticCircle', 'arcticcircle'),
# AskShagg has a duplicate in GoComics/AskShagg
cls('BabyBlues', 'babyblues'),
# BallardStreet has a duplicate in GoComics/BallardStreet
# BarneyAndClyde has a duplicate in GoComics/BarneyAndClyde
cls('BarneyGoogleAndSnuffySmith', 'barneygoogle'),
# BC has a duplicate in GoComics/BC
cls('BeetleBailey', 'beetlebailey'),
cls('Bizarro', 'bizarro'),
# BleekerTheRechargeableDog has a duplicate in GoComics/BleekerTheRechargeableDog
cls('Blondie', 'blondie'),
cls('Boondocks', 'boondocks'),
cls('BrilliantMindOfEdisonLee', 'brilliantmindofedisonlee'),
# Candorville has a duplicate in GoComics/Candorville
cls('CarpeDiem', 'carpediem'),
# Cathy has a duplicate in GoComics/Cathy
# ChipBok has a duplicate in GoComics/ChipBok
# ChuckleBros has a duplicate in GoComics/ChuckleBros
# ClayBennett has a duplicate in GoComics/ClayBennett
cls('Crankshaft', 'crankshaft'),
# CulDeSac has a duplicate in GoComics/CulDeSac
cls('Curtis', 'curtis'),
# DaddysHome has a duplicate in GoComics/DaddysHome
# DarrinBell has a duplicate in GoComics/DarrinBell
cls('DennisTheMenace', 'dennisthemenace'),
# DiamondLil has a duplicate in GoComics/DiamondLil
cls('DinetteSet', 'thedinetteset'),
# DogEatDoug has a duplicate in GoComics/DogEatDoug
# DogsOfCKennel has a duplicate in GoComics/DogsOfCKennel
# Doonesbury has a duplicate in GoComics/Doonesbury
cls('Dustin', 'dustin'),
cls('FamilyCircus', 'familycircus'),
# FloAndFriends has a duplicate in GoComics/FloAndFriends
# ForBetterOrForWorse has a duplicate in GoComics/ForBetterOrForWorse
# ForHeavensSake has a duplicate in GoComics/ForHeavensSake
# FortKnox has a duplicate in GoComics/FortKnox
# FreeRange has a duplicate in GoComics/FreeRange
# Garfield has a duplicate in GoComics/Garfield
# GetFuzzy has a duplicate in GoComics/GetFuzzy
# Heathcliff has a duplicate in GoComics/Heathcliff
# HerbAndJamaal has a duplicate in GoComics/HerbAndJamaal
cls('HiAndLois', 'hiandlois'),
cls('JerryKingCartoons', 'humorcartoon'),
# LisaBenson has a duplicate in GoComics/LisaBenson
# LittleDogLost has a duplicate in GoComics/LittleDogLost
# LongStoryShort has a duplicate in Creators/LongStoryShort
# LooseParts has a duplicate in GoComics/LooseParts
# Luann has a duplicate in GoComics/Luann
cls('MallardFillmore', 'mallardfillmore'),
cls('Marvin', 'marvin'),
cls('MasterStrokesGolfTips', 'masterstrokes'),
cls('MeaningOfLila', 'meaningoflila'),
# MichaelRamirez has a duplicate in GoComics/MichaelRamirez
# MikeDuJour has a duplicate in GoComics/MikeDuJour
# MikeLester has a duplicate in GoComics/MikeLester
# MikeLuckovich has a duplicate in GoComics/MikeLuckovich
# Momma has a duplicate in GoComics/Momma
cls('MotherGooseAndGrimm', 'mothergooseandgrimm'),
cls('Mutts', 'mutts'),
# NestHeads has a duplicate in GoComics/NestHeads
# NickAnderson has a duplicate in GoComics/NickAnderson
# NonSequitur has a duplicate in GoComics/NonSequitur
# OneBigHappy has a duplicate in GoComics/OneBigHappy
# Peanuts has a duplicate in GoComics/Peanuts
# PearlsBeforeSwine has a duplicate in GoComics/PearlsBeforeSwine
# Pickles has a duplicate in GoComics/Pickles
# RedAndRover has a duplicate in GoComics/RedAndRover
# ReplyAll has a duplicate in GoComics/ReplyAll
cls('RhymesWithOrange', 'rhymeswithorange'),
# Rubes has a duplicate in GoComics/Rubes
# RudyPark has a duplicate in GoComics/RudyPark
# Rugrats has a duplicate in Creators/Rugrats
# ScaryGary has a duplicate in GoComics/ScaryGary
# Shoe has a duplicate in GoComics/Shoe
# SigneWilkinson has a duplicate in GoComics/SigneWilkinson
# SpeedBump has a duplicate in GoComics/SpeedBump
# SteveBenson has a duplicate in GoComics/SteveBenson
# SteveBreen has a duplicate in GoComics/SteveBreen
# StrangeBrew has a duplicate in GoComics/StrangeBrew
cls('TakeItFromTheTinkersons', 'takeitfromthetinkersons'),
# TheBarn has a duplicate in GoComics/TheBarn
cls('TheLockhorns', 'thelockhorns'),
# TheOtherCoast has a duplicate in GoComics/TheOtherCoast
# WeePals has a duplicate in GoComics/WeePals
# WizardOfId has a duplicate in GoComics/WizardOfId
# WorkingItOut has a duplicate in GoComics/WorkingItOut
# Wumo has a duplicate in GoComics/WuMo
# ZackHill has a duplicate in GoComics/ZackHill
cls('Zits', 'zits'),
# END AUTOUPDATE
)
| [
"tobias-git@23.gs"
] | tobias-git@23.gs |
78a68bd9736199f2bd1d9890aa96ee623dcf42aa | ae9ec0383abbd32afc903cdbcba7be76e876dfd0 | /community/community/wsgi.py | bc1aa0feb1eb231dbe6d910b83b64a80afd789f7 | [
"MIT"
] | permissive | aeikenberry/community | 3f3ddf111259017826912345f9328b74026252e4 | 89434cc053c2d90d4ab495669dedc186f1926f71 | refs/heads/master | 2021-01-01T05:41:07.405013 | 2015-01-14T20:49:12 | 2015-01-14T20:49:12 | 29,264,938 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,566 | py | """
WSGI config for community project.
This module contains the WSGI application used by Django's development server
and any production WSGI deployments. It should expose a module-level variable
named ``application``. Django's ``runserver`` and ``runfcgi`` commands discover
this application via the ``WSGI_APPLICATION`` setting.
Usually you will have the standard Django WSGI application here, but it also
might make sense to replace the whole Django WSGI application with a custom one
that later delegates to the Django one. For example, you could introduce WSGI
middleware here, or combine a Django application with an application of another
framework.
"""
import os
from os.path import abspath, dirname
from sys import path
SITE_ROOT = dirname(dirname(abspath(__file__)))
path.append(SITE_ROOT)
# We defer to a DJANGO_SETTINGS_MODULE already in the environment. This breaks
# if running multiple sites in the same mod_wsgi process. To fix this, use
# mod_wsgi daemon mode with each site in its own daemon process, or use
# os.environ["DJANGO_SETTINGS_MODULE"] = "jajaja.settings"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "community.settings.production")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
| [
"aaron@Aarons-Mac-Pro.local"
] | aaron@Aarons-Mac-Pro.local |
39345363d19292552026ee086e433fd56e66706d | b7a395e4ceb1ccb858f29840c5f52fcc64cd114a | /aletheialib/options/tools.py | 6847434e6c1bc5cfb87737056334d6364f00a2d7 | [
"MIT"
] | permissive | daniellerch/aletheia | d9805a0fffea675ea3ae941c22771db842c5f898 | bf7aa3aee145a667fdca6178b8365d9f8fa76ab2 | refs/heads/master | 2023-09-01T11:53:46.699484 | 2023-08-30T06:06:30 | 2023-08-30T06:06:30 | 101,512,532 | 148 | 42 | MIT | 2022-08-09T07:41:49 | 2017-08-26T20:50:34 | Python | UTF-8 | Python | false | false | 6,973 | py | import os
import sys
doc = "\n" \
" Tools:\n" \
" - hpf: High-pass filter.\n" \
" - print-diffs: Differences between two images.\n" \
" - print-dct-diffs: Differences between the DCT coefficients of two JPEG images.\n" \
" - print-pixels: Print a range of píxels.\n" \
" - print-coeffs: Print a range of JPEG coefficients.\n" \
" - rm-alpha: Opacity of the alpha channel to 255.\n" \
" - plot-histogram: Plot histogram.\n" \
" - plot-histogram-diff: Plot histogram of differences.\n" \
" - plot-dct-histogram: Plot DCT histogram.\n" \
" - eof-extrat: Extract the data after EOF.\n" \
" - print-metadata: Print Exif metadata.\n" \
# {{{ hpf
def hpf():
if len(sys.argv)!=4:
print(sys.argv[0], "hpf <input-image> <output-image>\n")
print("")
sys.exit(0)
import aletheialib.attacks
aletheialib.attacks.high_pass_filter(sys.argv[2], sys.argv[3])
sys.exit(0)
# }}}
# {{{ print_diffs
def print_diffs():
if len(sys.argv)!=4:
print(sys.argv[0], "print-diffs <cover image> <stego image>\n")
print("")
sys.exit(0)
import aletheialib.utils
import aletheialib.attacks
cover = aletheialib.utils.absolute_path(sys.argv[2])
stego = aletheialib.utils.absolute_path(sys.argv[3])
if not os.path.isfile(cover):
print("Cover file not found:", cover)
sys.exit(0)
if not os.path.isfile(stego):
print("Stego file not found:", stego)
sys.exit(0)
aletheialib.attacks.print_diffs(cover, stego)
sys.exit(0)
# }}}
# {{{ print_dct_diffs
def print_dct_diffs():
if len(sys.argv)!=4:
print(sys.argv[0], "print-dtc-diffs <cover image> <stego image>\n")
print("")
sys.exit(0)
import aletheialib.attacks
import aletheialib.utils
cover = aletheialib.utils.absolute_path(sys.argv[2])
stego = aletheialib.utils.absolute_path(sys.argv[3])
if not os.path.isfile(cover):
print("Cover file not found:", cover)
sys.exit(0)
if not os.path.isfile(stego):
print("Stego file not found:", stego)
sys.exit(0)
name, ext = os.path.splitext(cover)
if ext.lower() not in [".jpeg", ".jpg"] or not os.path.isfile(cover):
print("Please, provide a JPEG image!\n")
sys.exit(0)
name, ext = os.path.splitext(stego)
if ext.lower() not in [".jpeg", ".jpg"] or not os.path.isfile(stego):
print("Please, provide a JPEG image!\n")
sys.exit(0)
aletheialib.attacks.print_dct_diffs(cover, stego)
sys.exit(0)
# }}}
# {{{ rm_alpha
def rm_alpha():
if len(sys.argv)!=4:
print(sys.argv[0], "rm-alpha <input-image> <output-image>\n")
print("")
sys.exit(0)
import aletheialib.utils
import aletheialib.attacks
aletheialib.attacks.remove_alpha_channel(sys.argv[2], sys.argv[3])
sys.exit(0)
# }}}
# {{{ plot_histogram
def plot_histogram():
if len(sys.argv)<3:
print(sys.argv[0], "plot-histogram <image>\n")
print("")
sys.exit(0)
import imageio
import aletheialib.utils
from matplotlib import pyplot as plt
fn = aletheialib.utils.absolute_path(sys.argv[2])
I = imageio.imread(fn)
data = []
if len(I.shape) == 1:
data.append(I.flatten())
else:
for i in range(I.shape[2]):
data.append(I[:,:,i].flatten())
plt.hist(data, range(0, 255), color=["r", "g", "b"])
plt.show()
sys.exit(0)
# }}}
# {{{ plot_dct_histogram
def plot_dct_histogram():
if len(sys.argv)<3:
print(sys.argv[0], "plot-dct-histogram <image>\n")
print("")
sys.exit(0)
import aletheialib.utils
import aletheialib.jpeg
from matplotlib import pyplot as plt
fn = aletheialib.utils.absolute_path(sys.argv[2])
name, ext = os.path.splitext(fn)
if ext.lower() not in [".jpeg", ".jpg"] or not os.path.isfile(fn):
print("Please, provide a JPEG image!\n")
sys.exit(0)
I = aletheialib.jpeg.JPEG(fn)
channels = ["r", "g", "b"]
dct_list = []
for i in range(I.components()):
dct = I.coeffs(i).flatten()
dct_list.append(dct)
#counts, bins = np.histogram(dct, range(-5, 5))
#plt.plot(bins[:-1], counts, channels[i])
plt.hist(dct_list, range(-10, 10), rwidth=1, color=["r", "g", "b"])
plt.show()
sys.exit(0)
# }}}
# {{{ print_pixels()
def print_pixels():
if len(sys.argv)!=5:
print(sys.argv[0], "print-pixels <image> <width start>:<width end> <height start>:<height end>\n")
print("Example:")
print(sys.argv[0], "print-pixels test.png 400:410 200:220\n")
print("")
sys.exit(0)
import imageio
I = imageio.imread(sys.argv[2])
w = sys.argv[3].split(":")
h = sys.argv[4].split(":")
ws = int(w[0])
we = int(w[1])
hs = int(h[0])
he = int(h[1])
if len(I.shape) == 2:
print("Image shape:", I.shape[:2])
print(I[hs:he, ws:we])
else:
print("Image shape:", I.shape[:2])
for ch in range(I.shape[2]):
print("Channel:", ch)
print(I[hs:he, ws:we, ch])
print()
# }}}
# {{{ print_coeffs()
def print_coeffs():
if len(sys.argv)!=5:
print(sys.argv[0], "print-coeffs <image> <width start>:<width end> <height start>:<height end>\n")
print("Example:")
print(sys.argv[0], "print-coeffs test.jpg 400:410 200:220\n")
print("")
sys.exit(0)
w = sys.argv[3].split(":")
h = sys.argv[4].split(":")
ws = int(w[0])
we = int(w[1])
hs = int(h[0])
he = int(h[1])
fn, ext = os.path.splitext(sys.argv[2])
if ext[1:].lower() not in ["jpg", "jpeg"]:
print("ERROR: Please, provide a JPEG image")
sys.exit(0)
import aletheialib.utils
from aletheialib.jpeg import JPEG
img = aletheialib.utils.absolute_path(sys.argv[2])
im_jpeg = JPEG(img)
for i in range(im_jpeg.components()):
coeffs = im_jpeg.coeffs(i)
print("Image shape:", coeffs.shape)
print("Channel:", i)
print(coeffs[hs:he, ws:we])
print()
# }}}
# {{{ eof_extract
def eof_extract():
if len(sys.argv)!=4:
print(sys.argv[0], "eof-extract <input-image> <output-data>\n")
print("")
sys.exit(0)
if not os.path.isfile(sys.argv[2]):
print("Please, provide a valid image!\n")
import aletheialib.attacks
aletheialib.attacks.eof_extract(sys.argv[2], sys.argv[3])
sys.exit(0)
# }}}
# {{{ print_metadata
def print_metadata():
if len(sys.argv)!=3:
print(sys.argv[0], "print-metadata <input-image>\n")
print("")
sys.exit(0)
if not os.path.isfile(sys.argv[2]):
print("Please, provide a valid image!\n")
import aletheialib.attacks
aletheialib.attacks.exif(sys.argv[2])
sys.exit(0)
# }}}
| [
"dlerch@gmail.com"
] | dlerch@gmail.com |
83f565b10fbddf2f16dfcf1bde0e4d1c5c45b7cc | 551c036552a3654069f2dfabaf426e3a7cd244c5 | /core/inference.py | c63ffe2e0f4a726ffb923a6caa3fb319918c5912 | [] | no_license | afterimagex/CRNN-Tensorflow | 4fc0ca4e87578e8f39087a5be68195ebb9cf040a | 30e752ae72e7bce9a04f39e790c25dd3a486bbd8 | refs/heads/master | 2020-05-23T13:55:56.320013 | 2019-06-27T07:03:00 | 2019-06-27T07:03:00 | 186,789,657 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,318 | py | # ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import cv2
import numpy as np
import tensorflow as tf
from core.config import cfg
from models import get_models
from utils.prettytable import PrettyTable
from core.character import Character
class TestServer(object):
def __init__(self, coordinator):
self.output_dir = coordinator.checkpoints_dir()
self.decoder = Character().from_txt(cfg.CHARACTER_TXT)
def evaluate(self, data_loader):
with tf.device('/cpu:0'):
input_images, input_labels, input_widths = data_loader.read_with_bucket_queue(
batch_size=cfg.TEST.BATCH_SIZE,
num_threads=cfg.TEST.THREADS,
num_epochs=1,
shuffle=False)
with tf.device('/gpu:0'):
logits = get_models(cfg.MODEL.BACKBONE)(cfg.MODEL.NUM_CLASSES).build(input_images, False)
seqlen = tf.cast(tf.floor_div(input_widths, 2), tf.int32, name='sequence_length')
softmax = tf.nn.softmax(logits, dim=-1, name='softmax')
decoded, log_prob = tf.nn.ctc_greedy_decoder(softmax, seqlen)
distance = tf.reduce_mean(tf.edit_distance(tf.cast(decoded[0], tf.int32), input_labels))
saver = tf.train.Saver(tf.global_variables())
gpu_options = tf.GPUOptions(allow_growth=True)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, allow_soft_placement=True)) as sess:
saver.restore(sess, tf.train.latest_checkpoint(self.output_dir))
sess.run(tf.local_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
cnt = 0
dm = 1e-5
while not coord.should_stop():
dt = sess.run([distance, ])[0]
cnt += 1
dm = (dm + dt) / cnt
if cfg.TEST.VIS:
dd, il, ii = sess.run([decoded, input_labels, input_images])
gts = self.decoder.sparse_to_strlist(il.indices, il.values, cfg.TEST.BATCH_SIZE)
pts = self.decoder.sparse_to_strlist(dd[0].indices, dd[0].values, cfg.TEST.BATCH_SIZE)
tb = PrettyTable()
tb.field_names = ['Index', 'GroundTruth', 'Predict', '{:.3f}/{:.3f}'.format(dt, dm)]
for i in range(len(gts)):
tb.add_row([i, gts[i], pts[i], ''])
print(tb)
else:
print('EditDistance: {:.3f}/{:.3f}'.format(dt, dm))
except tf.errors.OutOfRangeError:
print('Epochs Complete!')
finally:
coord.request_stop()
coord.join(threads)
class InferServer(object):
def __init__(self, coordinator, images_dir, **kwargs):
self.images_dir = images_dir
self.output_dir = coordinator.checkpoints_dir()
self.decoder = Character().from_txt(cfg.CHARACTER_TXT)
self._allow_soft_placement = kwargs.get('allow_soft_placement', True)
self._log_device_placement = kwargs.get('log_device_placenebt', False)
self.device = kwargs.get('device', '/cpu:0')
self.model_path = kwargs.get('model_path', None)
self.graph = tf.Graph()
self.sess = tf.Session(graph=self.graph, config=tf.ConfigProto(allow_soft_placement=self._allow_soft_placement,
log_device_placement=self._log_device_placement))
self.model = self._load_weights(self.model_path)
def _load_weights(self, weights=None):
with self.graph.as_default():
with tf.device(self.device):
input_images = tf.placeholder(tf.uint8, shape=[None, 32, None, 3], name='input_images')
input_widths = tf.placeholder(tf.uint8, shape=[None], name='input_widths')
with tf.device('/gpu:0'):
logits = get_models(cfg.MODEL.BACKBONE)(cfg.MODEL.NUM_CLASSES).build(input_images, False)
seqlen = tf.cast(tf.floor_div(input_widths, 2), tf.int32, name='sequence_length')
softmax = tf.nn.softmax(logits, dim=-1, name='softmax')
decoded, log_prob = tf.nn.ctc_greedy_decoder(softmax, seqlen)
prob = -tf.divide(tf.cast(log_prob, tf.int32), seqlen[0])
saver = tf.train.Saver(tf.global_variables())
if weights is None:
saver.restore(self.sess, tf.train.latest_checkpoint(self.output_dir))
else:
saver.restore(self.sess, weights)
return {'input_images': input_images, 'input_widths': input_widths, 'decoded': decoded, 'prob': prob}
@staticmethod
def resize_images_and_pad(images):
ws = list(map(lambda x: int(np.ceil(32.0 * x.shape[1] / x.shape[0])), images))
wmax = max(ws)
wmax = wmax if wmax % 32 == 0 else (wmax // 32 + 1) * 32
data = np.zeros(shape=(len(ws), 32, wmax, 3), dtype=np.uint8)
for i, img in enumerate(images):
tmp = cv2.resize(img, (ws[i], 32))
data[i, :32, : ws[i], :] = tmp
length = np.array([wmax], dtype=np.int32).repeat(len(ws))
return data, length
def predict(self, images):
imgs, widths = self.resize_images_and_pad(images)
output, prob = self.sess.run([self.model['decoded'], self.model['prob']],
feed_dict={self.model['input_images']: imgs,
self.model['input_widths']: widths})
context = self.decoder.sparse_to_strlist(output.indices, output.values, len(imgs))
return context, prob.reshape(-1) | [
"563853580@qq.com"
] | 563853580@qq.com |
d3db2ba962d1265de8c6accd504465761e870948 | ece123467d071eb3e993760fd53ee86ab03f33f1 | /tutorapp/main.py | 1206cb9f7fa9c19508267c7d0991eccb3e0dd025 | [
"MIT"
] | permissive | wann100/tutorapp | 69411bfb20795b2af7a4e44031180a82069e6230 | 27da5846cc36776c8d48d0db828c7e362b140da2 | refs/heads/master | 2020-03-28T20:26:25.031422 | 2018-09-17T05:09:30 | 2018-09-17T05:09:30 | 149,070,636 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 78,514 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2007 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# diingibuted under the License is distringibuted on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from header import *
import model
import au
JINJA_ENVIRONMENT = jinja2.Environment(
loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
extensions=['jinja2.ext.autoescape'])
# application settings
account_id = 860764059 # your app's account_id
access_token = 'PRODUCTION_7d635086b74e2bb90d8a13c0046bee6adba703b362d06e27645a12a5da553d59' # your app's access_token
client_id =147330
client_secret ="75d19ee2f1"
production = True
# set production to True for live environments
wepay = au.WePay(production, access_token)
class GeneralSettings(webapp2.RequestHandler):
def get(self):
settings = ndb.gql("select * from Settings")
for item in settings:
settings = item
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
date = datetime.now()
if not(user):
self.redirect('/')
return
if not(user.admin):
self.redirect('/calendar.html')
return
classlist = ''
deps = ndb.gql("SELECT * FROM Departments order by name")
for dep in deps:
classlist += '<li><a href="#">'+dep.name+'</a>'
classlist2 = ''
classes = ndb.gql("SELECT * FROM Classes where department = '"+dep.code+"'")
for myclass in classes:
classlist2 += '<li>'+dep.code+str(myclass.coursenumber)+': '+myclass.classname+'</li>'
if classlist2 != '':
classlist += '<ul>'+classlist2+'</ul>'
classlist += '</li>'
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'tab1': 'General Settings',
'tab1url': '/gensettings.html',
'tab2': 'Manage Tutors',
'tab2url': '/managetutors.html',
'tab3': 'Student Groups',
'tab3url': '/gensettings.html',
'tab4': 'Manage Resources',
'tab4url': '/adminresources.html',
'rate': settings.tutorrate,
'time':str(date.date()),
'checked':au.printstudentsetrate(settings),
'classes': classlist,
}
template = JINJA_ENVIRONMENT.get_template('gensettings.html')
self.response.write(template.render(template_values))
class AdminResources(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if not(user.admin):
self.redirect('/calendar.html')
return
classlist = ''
deps = ndb.gql("SELECT * FROM Departments order by name")
for dep in deps:
classlist += '<li><a href="#">'+dep.name+'</a>'
classlist2 = ''
classes = ndb.gql("SELECT * FROM Classes where department = '"+dep.code+"'")
for myclass in classes:
classlist2 += '<li>'+dep.code+str(myclass.coursenumber)+': '+myclass.classname+'</li>'
if classlist2 != '':
classlist += '<ul>'+classlist2+'</ul>'
classlist += '</li>'
resources = ''
resourcelist = ndb.gql("SELECT * FROM Resources ORDER BY order")
for resource in resourcelist:
resources += resource.html
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'tab1': 'General Settings',
'tab1url': '/gensettings.html',
'tab2': 'Manage Tutors',
'tab2url': '/gensettings.html',
'tab3': 'Student Groups',
'tab3url': '/gensettings.html',
'tab4': 'Manage Resources',
'tab4url': '/adminresources.html',
'classes': classlist,
'resources': resources
}
template = JINJA_ENVIRONMENT.get_template('adminresources.html')
self.response.write(template.render(template_values))
class GroupCalendar(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if user.admin:
self.redirect('/gensettings.html')
return
if not(user.studentgroupname):
self.redirect('/calendar.html')
return
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'calendar': '',
'tab1': 'My Calendar',
'tab1url': '/groupcalendar.html',
'tab2': 'My Students',
'tab2url': '/mystudents.html',
'tab3': 'Payment Requests',
'tab3url': '/paymentrequest.html',
'tab4': 'Contact Us',
'tab4url': '/contactus.html'
}
template = JINJA_ENVIRONMENT.get_template('calendar.html')
self.response.write(template.render(template_values))
class contactus (webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if user.admin:
self.redirect('/gensettings.html')
return
if not(user.studentgroupname):
self.redirect('/calendar.html')
return
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'calendar': '',
'tab1': 'My Calendar',
'tab1url': '/groupcalendar.html',
'tab2': 'My Students',
'tab2url': '/mystudents.html',
'tab3': 'Payment Requests',
'tab3url': '/paymentrequest.html',
'tab4': 'Contact Us',
'tab4url': '/contactus.html'
}
template = JINJA_ENVIRONMENT.get_template('contact.html')
self.response.write(template.render(template_values))
class sendmail(webapp2.RequestHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
au.sendmessage(user.email,user.fullname,self.request.get("message"))
class MyStudents(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if user.admin:
self.redirect('/gensettings.html')
return
if not(user.studentgroupname):
self.redirect('/calendar.html')
return
requeststring = ''
requests = ndb.gql("SELECT * FROM StudentGroupRequests WHERE groupname = '"+user.studentgroupname+"'")
for request in requests:
requeststring += au.getStudentGroupRequestForm(request)
studentstring = ''
students = ndb.gql("SELECT * FROM StudentGroups WHERE groupname = '"+user.studentgroupname+"'")
for studentgroup in students:
student = model.Users.get_by_id(studentgroup.student)
studentstring +='<tr><td class="info_fields">'
studentstring +='<span>Student Name: <span style="color:#0093e7;">'+student.fullname+'</span> |'
studentstring += ' Student Email: <span style="color:#0093e7;">'+student.email+'</span></span></td></tr>'
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'requests': requeststring,
'students': studentstring,
'groupname':str(user.studentgroupname),
'tab1': 'My Calendar',
'tab1url': '/groupcalendar.html',
'tab2': 'My Students',
'tab2url': '/mystudents.html',
'tab3': 'Payment Requests',
'tab3url': '/paymentrequest.html',
'tab4': 'Contact Us',
'tab4url': '/contactus.html'
}
template = JINJA_ENVIRONMENT.get_template('mystudents.html')
self.response.write(template.render(template_values))
class AcceptStudent(webapp2.RequestHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if user.admin:
self.redirect('/gensettings.html')
return
if not(user.studentgroupname):
self.redirect('/calendar.html')
return
request = model.StudentGroupRequests.get_by_id(int(self.request.get('id')))
studentgroup = model.StudentGroups(
student = request.student,
groupname = request.groupname)
studentgroup.put()
request.key.delete()
time.sleep(.5)
self.redirect('/mystudents.html')
class PaymentRequest(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if user.admin:
self.redirect('/gensettings.html')
return
if not(user.studentgroupname):
self.redirect('/calendar.html')
return
appts = ndb.gql("SELECT * FROM Appointments WHERE studentgroupname = '"+user.studentgroupname+"'")
apptstring = ''
for appt in appts:
apptstring += au.getGroupAppointmentInfo(appt)
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'appointments': apptstring,
'tab1': 'My Calendar',
'tab1url': '/groupcalendar.html',
'tab2': 'My Students',
'tab2url': '/mystudents.html',
'tab3': 'Payment Requests',
'tab3url': '/paymentrequest.html',
'tab4': 'Contact Us',
'tab4url': '/contactus.html'
}
template = JINJA_ENVIRONMENT.get_template('paymentrequest.html')
self.response.write(template.render(template_values))
class Calendar(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if user.admin:
self.redirect('/gensettings.html')
return
if user.studentgroupname:
self.redirect('/groupcalendar.html')
return
tab2 = 'My Tutors'
if user.tutor:
tab2 = 'My Tutees'
calendar = ''
qry = ndb.gql("SELECT * FROM Appointments WHERE student = "+str(user.key.integer_id()))
for appoint in qry:
calendar+=au.getCalendarFromStudentAppointment(appoint)
if user.tutor:
tutorclasses = ndb.gql("SELECT * FROM TutorClasses WHERE tutor = "+str(user.key.integer_id()))
query = ""
for tutorclass in tutorclasses:
if query != "":
query += ","
query += str(tutorclass.key.integer_id())
if query != "":
qry = ndb.gql("SELECT * FROM Appointments WHERE tutorclassid in ("+query+")")
for appoint in qry:
calendar+=au.getCalendarFromTutorAppointment(appoint)
tab2url = '/'+tab2.lower().replace(' ','')+'.html'
tab1 = 'My Calendar'
tab1url = 'calendar.html'
tab3 = 'My Resources'
tab3url = 'myresources.html'
tab4 = 'My Profile'
tab4url = 'myprofile.html'
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'calendar': calendar,
'tab1': tab1,
'tab1url': tab1url,
'tab2': tab2,
'tab2url': tab2url,
'tab3': tab3,
'tab3url': tab3url,
'tab4': tab4,
'tab4url': tab4url
}
template = JINJA_ENVIRONMENT.get_template('calendar.html')
self.response.write(template.render(template_values))
class Landing(webapp2.RequestHandler):
def get(self):
template_values = {
}
template = JINJA_ENVIRONMENT.get_template('landing.html')
self.response.write(template.render(template_values))
def getMajors(user):
string = "Undecided"
if user.major1 != "":
query = "SELECT * FROM Departments WHERE code in ('"+user.major1+"'"
if user.major2 != "":
query += ",'"+user.major2+"'"
if user.major3 != "":
query += ",'"+user.major3+"'"
query += ")"
iterator = ndb.gql(query)
for dept in iterator:
if string == "Undecided":
string = dept.name
else:
string += ",<br/>"+dept.name
return string
def getMinors(user):
string = "None"
if user.minor1 != "":
query = "SELECT * FROM Departments WHERE code in ('"+user.minor1+"'"
if user.minor2 != "":
query += ",'"+user.minor2+"'"
if user.minor3 != "":
query += ",'"+user.minor3+"'"
query += ")"
iterator = ndb.gql(query)
for dept in iterator:
if string == "None":
string = dept.name
else:
string += ",<br/>"+dept.name
return string
class tutorsearch(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
template_values = {
'tutors': tutorFullSearch()
}
template = JINJA_ENVIRONMENT.get_template('tutorsearch.html')
self.response.write(template.render(template_values))
def appointmentSearch(user):
string = ''
appointmentlist = ndb.gql("select * from Appointments where student = "+str(user.key.integer_id())+" AND studentgroupname = ''")
requests =ndb.gql("select * from Requests where student = "+str(user.key.integer_id()))
for appointment in appointmentlist:
if not(appointment.paid):
string += au.getAppointmentInfo(appointment)
#if requests:
# for request in requests:
# string+= au.getPendingInfo(request)
return string
def TutorappointmentSearch(user):
date = datetime.now()
string = ''
tutorclasses = ndb.gql("select * from TutorClasses where tutor = "+str(user.key.integer_id()))
query = ''
count = 0
for tutorclass in tutorclasses:
if query != '':
query += ','
query += str(tutorclass.key.integer_id())
count+=1
if count == 0:
return "No Appointments Found."
query = 'select * from Appointments where tutorclassid in ('+query+')'
appointmentlist = ndb.gql(query)
for appointment in appointmentlist :
#if not(appointment.paid):
day1= appointment.end.date()
day2 =date.date()
delta = day2 - day1
totaldays = abs(delta.days)
if(totaldays <=7):
string += au.tutorApprovedAppointment(appointment)
return string
def tutorFullSearch():
string = ''
tutorlist = ndb.gql("select * from Users where tutor = true")
for tutor in tutorlist:
if(tutor.available == True):
string += au.getTutorSearchInfo(tutor)
return string
def getfuturetutors():
string=''
query= ndb.gql("SELECT * FROM Users WHERE appliedastutor = True")
for Users in query:
string += au.printfuturtutors(Users)
# if(string == ''):
# string+='No tutor applications'
return string
def getclassrequest():
string=''
query= ndb.gql("SELECT * FROM TutorClasses WHERE approved= False")
for Tutorclasses in query:
string += au.printrequestedclasses(Tutorclasses)
# if(string == ''):
# string+='No Class applications'
return string
class managetutors(webapp2.RequestHandler):
def get(self):
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'tab1': 'General Settings',
'tab1url': '/gensettings.html',
'tab2': 'Manage Tutors',
'tab2url': '/managetutors.html',
'tab3': 'Student Groups',
'tab3url': '/gensettings.html',
'tab4': 'Manage Resources',
'tab4url': '/adminresources.html',
'futuretutors':getfuturetutors(),
'classes':getclassrequest()
}
template = JINJA_ENVIRONMENT.get_template('managetutors.html')
self.response.write(template.render(template_values))
class myprofile(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
wepayaccountinfo = ''
addsponsor =''
minimumstring=''
available =''
if not(user):
self.redirect('/')
return
status = 'Student'
avatar = 'statics/images/profilepic.png'
if user.avatar:
avatar = '/viewPhoto/'+user.avatar.__str__()
searchfunction = 'searchStudentClasses()'
tutorapply = "<button id='areyousurebutton'>REQUEST TO BE A TUTOR</button><br />"
if user.available:
available =getAvailabilities(user)
if (user.appliedastutor):
tutorapply = "<p>Your request to be a tutor has been sent.</p>"
if (user.tutor):
status = 'Tutor'
minimumstring = '<p style="font-size: 0.85em; font-weight: normal; border: 1px solid #f5f5f5; background-color: #f5f5f5; padding: 10px 20px;"><span style="color: #f68e56; font-weight: bold;">NOTE:</span> "<span style="color: #0093e7;">ONLY </span>" ADD CLASSES THAT YOU ARE AN EXPERT AT TEACHING</p>'
searchfunction = 'searchTutorClasses()'
if (user.wepayuid =='0' or user.wepayuid ==''):
wepayaccountinfo = au.wepayinformation(user)
elif (user.wepayuid !='0' or user.wepayuid !=''):
wepayaccountinfo= '<tr><td width=10% colspan="1">For more information about your wepay transactions <a href="https://wepay.com">CLICK HERE </a> </td></tr>'
tutorapply = ''
tab2 = 'My Tutors'
if user.tutor:
tab2 = 'My Tutees'
studentgroups = ndb.gql("SELECT * FROM Users WHERE studentgroupname != ''")
studentgroupstring = ''
if not user.tutor:
addsponsor='<div id='+'studentgroupselect'+'>REGISTER WITH A STUDENT GROUP</div>'
for studentgroup in studentgroups:
studentgroupstring += '<option value="'+studentgroup.studentgroupname+'">'+studentgroup.studentgroupname+'</option>'
majors = getMajors(user)
minors = getMinors(user)
years = ['Freshman','Sophomore','Junior','Senior']
tab2url = '/'+tab2.lower().replace(' ','')+'.html'
tab1 = 'My Calendar'
tab1url = 'calendar.html'
tab3 = 'My Resources'
tab3url = 'myresources.html'
tab4 = 'My Profile'
tab4url = 'myprofile.html'
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'firstname': user.firstname,
'lastname': user.lastname,
'fullname': user.fullname,
'status': status,
'email': user.email,
'year': years[min(user.year-1,3)],
'majors': majors,
'minors': minors,
'classes': getClasses(user),
'sideclasses': getClassesSmall(user),
'availability': available,
'minavailability': getSideAvailabilities(user),
'departments': model.getDepartmentSelect(),
'userid': str(user.key.integer_id()),
#'studentgroups': studentgroupstring,
'searchfunction': searchfunction,
'avatar': avatar,
'tutorapply': tutorapply,
'tab1': tab1,
'tab1url': tab1url,
'tab2': tab2,
'tab2url': tab2url,
'tab3': tab3,
'tab3url': tab3url,
'tab4': tab4,
'tab4url': tab4url,
'wepayaccountinfo':wepayaccountinfo,
'client_id':client_id,
'fullname': user.fullname,
'email': user.email,
#'addsponsor':addsponsor,
'minimumstring':minimumstring,
'phonenumber':user.phone,
'available':au.vacationbutton(user)
}
template = JINJA_ENVIRONMENT.get_template('myprofile.html')
self.response.write(template.render(template_values))
class ViewPhotoHandler(blobstore_handlers.BlobstoreDownloadHandler):
def get(self, photo_key):
if not blobstore.get(photo_key):
self.error(404)
else:
self.send_blob(photo_key)
def getSideAvailabilities(user):
qry = ndb.gql("SELECT * FROM Availabilities WHERE user = "+str(user.key.integer_id())+" ORDER BY dayofweek, start")
string = ''
avails = []
days = ['Su','M','T','W','Th','F','Sa']
for avail in qry:
avails.append(avail)
for i in range(7):
daystring = '<tr><td valign="top">'+days[i]+':</td><td><table border="0" cellspacing="0" cellpadding="0">'
timestring = ''
for avail in avails:
if avail.dayofweek == i:
timestring+='<tr><td><span class="timepref">'
timestring+=au.formattime(avail.start)+' - '+au.formattime(avail.end)
timestring+='</span></td></tr>'
if timestring != '':
string+=daystring+timestring+'</table></td></tr>'
return string
def getAvailabilities(user):
qry = ndb.gql("SELECT * FROM Availabilities WHERE user = "+str(user.key.integer_id())+" ORDER BY dayofweek, start")
string = ''
avails = []
days = ['Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday']
for avail in qry:
avails.append(avail)
for i in range(7):
daystring = '<tr><td class="col_title">'+days[i]+':</td><td></td>'
daystring+= '<td class="col_title">Delete?</td></tr>'
timestring = ''
for avail in avails:
if avail.dayofweek == i:
timestring+='<tr><td class="avail_time">'
timestring+=au.formattime(avail.start)+' - '+au.formattime(avail.end)
timestring+='</td><td></td>'
timestring+='</select></td><td class="class_status" align="center"><img class="del-avail" onclick="setAvailToDelete('+str(avail.key.integer_id())+');" src="statics/images/minus.png" /></td></tr>'
if timestring != '':
string+=daystring+timestring+'<tr><td></td><td></td><td></td></tr>'
return string
def getClassesSmall(user):
qry = ndb.gql("SELECT * FROM StudentClasses WHERE student = "+user.key.integer_id().__str__())
string = ''
classnum = 1
colors = ['black','orange','blue']
for mystudentclass in qry:
myclass = model.Classes.get_by_id(mystudentclass.class_id)
string+='<span id="mini'+str(mystudentclass.key.integer_id())+'" class="status_'+colors[mystudentclass.status]+'"></span>'+myclass.department+' '+myclass.coursenumber.__str__()+'<br />'
return string
def getClasses(user):
if user.tutor:
return getTutorClasses(user)
qry = ndb.gql("SELECT * FROM StudentClasses WHERE student = "+user.key.integer_id().__str__())
string = ''
classnum = 1
for mystudentclass in qry:
myclass = model.Classes.get_by_id(mystudentclass.class_id)
if myclass:
string += au.getClassHTML(myclass,classnum,mystudentclass.status,mystudentclass.key.integer_id())
classnum += 1
return string +'</tr>'
def getTutorClasses(user):
qry = ndb.gql("SELECT * FROM TutorClasses WHERE tutor = "+user.key.integer_id().__str__())
string = ''
classnum = 1
for mytutorclass in qry:
myclass = model.Classes.get_by_id(mytutorclass.class_id)
if mytutorclass.approved:
if myclass:
string += au.getTutorClassHTML(myclass,classnum,mytutorclass.rate,mytutorclass.key.integer_id())
classnum += 1
return string +'</tr>'
#write a function for just the tutors email!
class mytutors(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
tab2 = 'My Tutors'
if user.tutor:
tab2 = 'My Tutees'
status = 'Student'
if (user.tutor):
status = 'Tutor'
avatar = 'statics/images/profilepic.png'
if user.avatar:
avatar = '/viewPhoto/'+user.avatar.__str__()
studentgroupstring = ''
studentgroups = ndb.gql("SELECT * FROM StudentGroups WHERE student = "+str(user.key.integer_id()))
for studentgroup in studentgroups:
studentgroupstring += '<option>'+studentgroup.groupname+'</option>'
years = ['Freshman','Sophomore','Junior','Senior']
tab2url = '/'+tab2.lower().replace(' ','')+'.html'
tab1 = 'My Calendar'
tab1url = 'calendar.html'
tab3 = 'My Resources'
tab3url = 'myresources.html'
tab4 = 'My Profile'
tab4url = 'myprofile.html'
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'firstname': user.firstname,
'lastname': user.lastname,
'fullname': user.fullname,
'status': status,
'majors': getMajors(user),
'minors': getMinors(user),
'year': years[min(user.year-1,3)],
'avatar': avatar,
'tutors': appointmentSearch(user)+tutorFullSearch(),
'studentgroups': studentgroupstring,
'tab1': tab1,
'tab1url': tab1url,
'tab2': tab2,
'tab2url': tab2url,
'tab3': tab3,
'tab3url': tab3url,
'tab4': tab4,
'tab4url': tab4url
}
template = JINJA_ENVIRONMENT.get_template('mytutors.html')
self.response.write(template.render(template_values))
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
status = 'Student'
if (user.tutor):
status = 'Tutor'
tab2 = 'My Tutors'
if user.tutor:
tab2 = 'My Tutees'
avatar = 'statics/images/profilepic.png'
if user.avatar:
avatar = '/viewPhoto/'+user.avatar.__str__()
studentgroupstring = ''
studentgroups = ndb.gql("SELECT * FROM StudentGroups WHERE student = "+str(user.key.integer_id()))
for studentgroup in studentgroups:
studentgroupstring += '<option>'+studentgroup.groupname+'</option>'
years = ['Freshman','Sophomore','Junior','Senior']
tab2url = '/'+tab2.lower().replace(' ','')+'.html'
tab1 = 'My Calendar'
tab1url = 'calendar.html'
tab3 = 'My Resources'
tab3url = 'myresources.html'
tab4 = 'My Profile'
tab4url = 'myprofile.html'
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'firstname': user.firstname,
'lastname': user.lastname,
'fullname': user.fullname,
'status': status,
'majors': getMajors(user),
'minors': getMinors(user),
'year': years[min(user.year-1,3)],
'avatar': avatar,
'studentgroups': studentgroupstring,
'tutors': tutorQuery(self.request.get("mysearch")),
'tab1': tab1,
'tab1url': tab1url,
'tab2': tab2,
'tab2url': tab2url,
'tab3': tab3,
'tab3url': tab3url,
'tab4': tab4,
'tab4url': tab4url
}
template = JINJA_ENVIRONMENT.get_template('mytutors.html')
self.response.write(template.render(template_values))
class mytutees(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if not(user.tutor):
self.redirect('/mytutors.html')
return
tab2 = 'My Tutees'
status = 'Tutor'
avatar = 'statics/images/profilepic.png'
if user.avatar:
avatar = '/viewPhoto/'+user.avatar.__str__()
years = ['Freshman','Sophomore','Junior','Senior']
tab2url = '/'+tab2.lower().replace(' ','')+'.html'
tab1 = 'My Calendar'
tab1url = 'calendar.html'
tab3 = 'My Resources'
tab3url = 'myresources.html'
tab4 = 'My Profile'
tab4url = 'myprofile.html'
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'firstname': user.firstname,
'lastname': user.lastname,
'fullname': user.fullname,
'status': status,
'majors': getMajors(user),
'minors': getMinors(user),
'year': years[min(user.year-1,3)],
'tutors': tuteeFullSearch(user),
'approved':TutorappointmentSearch(user),
'avatar': avatar,
'tab1': tab1,
'tab1url': tab1url,
'tab2': tab2,
'tab2url': tab2url,
'tab3': tab3,
'tab3url': tab3url,
'tab4': tab4,
'tab4url': tab4url
}
template = JINJA_ENVIRONMENT.get_template('mytutees.html')
self.response.write(template.render(template_values))
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user.tutor):
self.redirect('/mytutors.html')
status = 'Tutor'
tab2 = 'My Tutees'
years = ['Freshman','Sophomore','Junior','Senior']
tab2url = '/'+tab2.lower().replace(' ','')+'.html'
tab1 = 'My Calendar'
tab1url = 'calendar.html'
tab3 = 'My Resources'
tab3url = 'myresources.html'
tab4 = 'My Profile'
tab4url = 'myprofile.html'
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'firstname': user.firstname,
'lastname': user.lastname,
'fullname': user.fullname,
'status': status,
'majors': getMajors(user),
'minors': getMinors(user),
'year': years[min(user.year-1,3)],
'tutors': tutorQuery(self.request.get("mysearch")),
'tab1': tab1,
'tab1url': tab1url,
'tab2': tab2,
'tab2url': tab2url,
'tab3': tab3,
'tab3url': tab3url,
'tab4': tab4,
'tab4url': tab4url
}
template = JINJA_ENVIRONMENT.get_template('mytutees.html')
self.response.write(template.render(template_values))
def tuteeFullSearch(user):
string = ''
tutorclasses = ndb.gql("select * from TutorClasses where tutor = "+str(user.key.integer_id()))
query = ''
count = 0
for tutorclass in tutorclasses:
if query != '':
query += ','
query += str(tutorclass.key.integer_id())
count+=1
if count == 0:
return "No Requests Found."
query = 'select * from Requests where tutorclassid in ('+query+')'
requests = ndb.gql(query)
for request in requests:
string += au.getRequestInfo(request)
return string
def tutorQuery(search):
string = ''
tutors = []
tutorlist = ndb.gql("select * from Users where firstname = '"+search+"'")
for tutor in tutorlist:
tutors.append(tutor.key.integer_id())
tutorlist = ndb.gql("select * from Users where lastname = '"+search+"'")
for tutor in tutorlist:
tutors.append(tutor.key.integer_id())
tutorlist = ndb.gql("select * from Users where fullname = '"+search+"'")
for tutor in tutorlist:
tutors.append(tutor.key.integer_id())
obj = getClassCode(search)
if obj:
myclass = ndb.gql("select * from Classes where coursenumber = "+obj[1].__str__()+" and department = '"+obj[0].upper()+"'").get()
if myclass:
tutorclasslist = ndb.gql("select * from TutorClasses where class_id = "+str(myclass.key.integer_id()))
for tutorclass in tutorclasslist:
tutors.append(tutorclass.tutor)
for tutorid in tutors:
#tutors.append(tutor.key.integer_id())
tutor = model.Users.get_by_id(tutorid)
string += au.getTutorSearchInfo(tutor)
return string
def getClassCode(string):
try:
num = 0
if len(string) == 7:
num = int(string[4:])
elif string[4] != ' ':
return false
else:
num = int(string[5:])
if num >= 100 and num < 1000:
return [string[:4],num]
except:
return False
class myresources(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
status = 'Student'
if (user.tutor):
status = 'Tutor'
tab2 = 'My Tutors'
if user.tutor:
tab2 = 'My Tutees'
avatar = 'statics/images/profilepic.png'
if user.avatar:
avatar = '/viewPhoto/'+user.avatar.__str__()
years = ['Freshman','Sophomore','Junior','Senior']
tab2url = '/'+tab2.lower().replace(' ','')+'.html'
tab1 = 'My Calendar'
tab1url = 'calendar.html'
tab3 = 'My Resources'
tab3url = 'myresources.html'
tab4 = 'My Profile'
tab4url = 'myprofile.html'
resources = ''
resourcelist = ndb.gql("SELECT * FROM Resources ORDER BY order")
for resource in resourcelist:
resources += resource.html
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'firstname': user.firstname,
'lastname': user.lastname,
'fullname': user.fullname,
'status': status,
'majors': getMajors(user),
'minors': getMinors(user),
'year': years[min(user.year-1,3)],
'avatar': avatar,
'tab1': tab1,
'tab1url': tab1url,
'tab2': tab2,
'tab2url': tab2url,
'tab3': tab3,
'tab3url': tab3url,
'tab4': tab4,
'tab4url': tab4url,
'resources': resources
}
template = JINJA_ENVIRONMENT.get_template('myresources.html')
self.response.write(template.render(template_values))
class Register(webapp2.RequestHandler):
def get(self):
template_values = {
}
template = JINJA_ENVIRONMENT.get_template('reg1.html')
self.response.write(template.render(template_values))
class RegisterReplace(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
avatar = 'statics/images/profilepic.png'
if user.tos == True:
self.redirect('/calendar.html')
if user.avatar:
avatar = '/viewPhoto/'+user.avatar.__str__()
template_values = {
'name':user.fullname,
'email':user.email,
'password':hashlib.sha224(user.password).hexdigest(),
'year':user.year,
'major':user.major1,
'avatar':avatar,
'minor':user.minor1}
template = JINJA_ENVIRONMENT.get_template('reg.html')
self.response.write(template.render(template_values))
class TOS(webapp2.RequestHandler):
def get(self):
template_values = {
}
template = JINJA_ENVIRONMENT.get_template('tos.html')
self.response.write(template.render(template_values))
class LogIn(webapp2.RequestHandler):
def post(self):
redirecte ='/calendar.html'
username = self.request.get('username')
user = model.getUserNoPassword(username)
if not(user):
self.redirect('/')
return
password = hashlib.sha224(self.request.get('password')).hexdigest()
if not(password==user.password):
self.redirect('/')
return
if not(user.admin):
if (user.tos == False):
redirecte ='/reg.html'
self.response.headers.add_header( "Set-Cookie","user=%s; path=/" % username.__str__())
self.response.headers.add_header( "Set-Cookie","pass=%s; path=/" % self.request.get('password').__str__())
self.redirect(redirecte)
class LogOut(webapp2.RequestHandler):
def get(self):
self.response.headers.add_header( "Set-Cookie","user=%s; path=/" % '')
self.response.headers.add_header( "Set-Cookie","pass=%s; path=/" % '')
self.redirect('/')
class ModifyUser(webapp2.RequestHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
checkbox = self.request.get('tos')
if(checkbox == 'on'):
user.tos = True;
#CODE TO MODIFY USER
#self.request.get('tos')
user.put()
self.redirect('/calendar.html')
class CreateUser(webapp2.RequestHandler):
def post(self):
tosvar = False;
if(self.request.get('tos')== 'on'):
tosvar = True;
user = model.Users(username = self.request.get('username'),
password = hashlib.sha224(self.request.get('password')).hexdigest(),
firstname = self.request.get('firstname'),
lastname = self.request.get('lastname'),
fullname = self.request.get('firstname')+' '+self.request.get('lastname'),
phone = self.request.get('phonenumber'),
email = self.request.get('email'),
aboutme = self.request.get('aboutme'),
# major1 = self.request.get('major1'),
# major2 = self.request.get('major2'),
# major3 = self.request.get('major3'),
# minor1 = self.request.get('minor1'),
# minor2 = self.request.get('minor2'),
# minor3 = self.request.get('minor3'),
wepayuid= self.request.get('wepayuid'),
wepayat = self.request.get('wepayat'),
tutor = False,
avatar = None,
year = 4,
appliedastutor = False,
tos = tosvar,
admin = False,
available = True,
studentgroupname = '')
user.put()
username = self.request.get('username')
password = hashlib.sha224(self.request.get('password')).hexdigest()
self.response.headers.add_header( "Set-Cookie","user=%s; path=/" % username.__str__())
self.response.headers.add_header( "Set-Cookie","pass=%s; path=/" % self.request.get('password').__str__())
self.response.write('User Created!')
avatarfile = ''
avatarfile = self.request.get('avatarfile','')
if(avatarfile != ''):
form_fields = {
"file": self.request.get('avatarfile2')
}
form_data = urllib.urlencode(form_fields)
result = urlfetch.fetch(url=avatarfile,
payload=form_data,
method=urlfetch.POST,
headers={'Content-Type': 'application/x-www-form-urlencoded'})
self.redirect('/')
class StudentGroupPay(webapp2.RequestHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
appointment = model.Appointments.get_by_id(int(self.request.get('apptid')))
if user.key.integer_id() != appointment.student:
self.redirect('/')
return
appointment.studentgroupname = self.request.get('studentgroup')
appointment.put()
time.sleep(.5)
self.redirect('/mytutors.html')
class SendRequest(webapp2.RequestHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
tci = int(self.request.get('tutorclassid'))
tutorclass = model.TutorClasses.get_by_id(tci)
request = model.Requests(
tutorclassid = tci,
student = user.key.integer_id(),
notes = self.request.get('notes'))
request.put()
rid = request.key.integer_id()
line = 1
linestr = str(line)
while self.request.get('start'+linestr,"") != '' and self.request.get('date'+linestr,""):
date = self.request.get('date'+linestr).split('-')
year = int(date[0])
month = int(date[1])
day = int(date[2])
start = int(self.request.get('start'+linestr))
end = int(self.request.get('end'+linestr))
totalcost = (end - start) * tutorclass.rate
starttime = datetime(year,month,day,start)
endtime = datetime(year,month,day,end)
requestLine = model.RequestLines(
requestid = rid,
start = starttime,
end = endtime,
cost = totalcost)
requestLine.put()
line = line + 1
linestr = str(line)
self.redirect('/mytutors.html')
class AddAdminData(webapp2.RequestHandler):
def get(self):
settings = model.Settings(
logo = None,
emblem = None,
allowratings = True,
tutorsetrate = False,
tutorrate = 0)
settings.put()
class AddTestData(webapp2.RequestHandler):
def get(self):
#studentclass = model.StudentClasses(
# class_id = 5838406743490560,
# student = 5733953138851840,
# status = 0
# )
#studentclass.put()
#availability = model.Availabilities(
# user = 5681726336532480,
# dayofweek = 0,
# start = datetime.now().time(),
# end = datetime.now().time())
#availability.put()
# x = 0
# tutorlist = ndb.gql("select * from Users")
# for Users in tutorlist:
# Users.key.delete()
# while x < 500:
# tutor = model.Users(
# username = "email" + str(x) + "@email.com",
# password = "d63dc919e201d7bc4c825630d2cf25fdc93d4b2f0d46706d29038d01",
# firstname = "tutor1",
# lastname = "wann",
# fullname = "tutor1 Wann",
# phone = "302545893",
# email = "email" + str(x) + "@email.com",
# tutor = True,
# aboutme = "I love Cartoons",
# major1 = "MATH",
# minor1 = "MATH",
# year = 4
# )
# tutor.put()
# x+=1
# y = 0
# while y < 500:
# student = model.Users(
# username = "email" + str(x) + "@email.com",
# password = "d63dc919e201d7bc4c825630d2cf25fdc93d4b2f0d46706d29038d01",
# firstname = "Hey",
# lastname = "Arnold",
# fullname = "Hey Arnold",
# phone = "302545893",
# email = "email" + str(x) + "@email.com",
# tutor = False,
# aboutme = "I love Cartoons",
# major1 = "MATH",
# minor1 = "MATH",
# year = 4
# )
# student.put()
# y+=1
myclass = model.Classes(
department = "MATH",
coursenumber = 001,
classname = "Discrete Mathematics",
active = True,
semester = 1,
year = 2015
)
myclass.put()
myclass1 = model.Classes(
department = "MATH",
coursenumber = 002,
classname = "Calculus 1",
active = True,
semester = 1,
year = 2015
)
myclass1.put()
myclass2 = model.Classes(
department = "MATH",
coursenumber = 003,
classname = "Calculus 2",
active = True,
semester = 1,
year = 2015
)
myclass2.put()
myclass3 = model.Classes(
department = "MATH",
coursenumber = 004,
classname = "Calculus 3",
active = True,
semester = 1,
year = 2015
)
myclass3.put()
myclass800 = model.Classes(
department = "MATH",
coursenumber = 005,
classname = "Algebra 1",
active = True,
semester = 1,
year = 2015
)
myclass800.put()
myclass900 = model.Classes(
department = "MATH",
coursenumber = 006,
classname = "Algebra 2",
active = True,
semester = 1,
year = 2015
)
myclass900.put()
myclass900 = model.Classes(
department = "MATH",
coursenumber = 007,
classname = "Elementary Math",
active = True,
semester = 1,
year = 2015
)
myclass900.put()
myclass5 = model.Classes(
department = "HIST",
coursenumber = 001,
classname = "US History 1",
active = True,
semester = 1,
year = 2015
)
myclass5.put()
myclass6 = model.Classes(
department = "HIST",
coursenumber = 002,
classname = "US History 2",
active = True,
semester = 1,
year = 2015
)
myclass6.put()
myclass100 = model.Classes(
department = "HIST",
coursenumber = 003,
classname = "World History ",
active = True,
semester = 1,
year = 2015
)
myclass100.put()
myclass200 = model.Classes(
department = "HIST",
coursenumber = 004,
classname = "AP US History ",
active = True,
semester = 1,
year = 2015
)
myclass200.put()
myclass300 = model.Classes(
department = "HIST",
coursenumber = 005,
classname = "AP Government",
active = True,
semester = 1,
year = 2015
)
myclass300.put()
myclass500 = model.Classes(
department = "ENGL",
coursenumber = 001,
classname = "High School English ",
active = True,
semester = 1,
year = 2015
)
myclass500.put()
myclass600 = model.Classes(
department = "ENGL",
coursenumber = 002,
classname = "Middle School English ",
active = True,
semester = 1,
year = 2015
)
myclass600.put()
myclass700 = model.Classes(
department = "ENGL",
coursenumber = 003,
classname = "Elementary School English ",
active = True,
semester = 1,
year = 2015
)
myclass1200 = model.Classes(
department = "ENGL",
coursenumber = 004,
classname = "AP English ",
active = True,
semester = 1,
year = 2015
)
myclass1200.put()
myclass7 = model.Classes(
department = "CHEM",
coursenumber = 001,
classname = "General Chemistry",
active = True,
semester = 1,
year = 2015
)
myclass7.put()
myclass101 = model.Classes(
department = "CHEM",
coursenumber = 002,
classname = "College Chemistry",
active = True,
semester = 1,
year = 2015
)
myclass101.put()
myclass102 = model.Classes(
department = "CHEM",
coursenumber = 003,
classname = "College Chemistry 2",
active = True,
semester = 1,
year = 2015
)
myclass102.put()
myclass103 = model.Classes(
department = "PHYS",
coursenumber = 001,
classname = "General Physics",
active = True,
semester = 1,
year = 2015
)
myclass103.put()
myclass104 = model.Classes(
department = "PHYS",
coursenumber = 002,
classname = "AP Physics",
active = True,
semester = 1,
year = 2015
)
myclass104.put()
myclass105 = model.Classes(
department = "PHYS",
coursenumber = 003,
classname = "College Physics 1",
active = True,
semester = 1,
year = 2015
)
myclass105.put()
myclass107 = model.Classes(
department = "PHYS",
coursenumber = 004,
classname = "College Physics 2",
active = True,
semester = 1,
year = 2015
)
myclass107.put()
mydepartement = model.Departments(
code ='MATH',
name= 'Mathematics'
)
mydepartement.put()
mydepartement1 = model.Departments(
code ='HIST',
name= 'History'
)
mydepartement1.put()
mydepartement2 = model.Departments(
code ='CHEM',
name= 'chemistry'
)
mydepartement2.put()
mydepartement3 = model.Departments(
code ='PHYS',
name= 'Physics'
)
mydepartement3.put()
mydepartement4 = model.Departments(
code ='ENGL',
name= 'English'
)
mydepartement4.put()
class AcceptRequest(webapp2.RequestHandler):
def post(self):
requestid = int(self.request.get('requestid'))
request = model.Requests.get_by_id(requestid)
lines = ndb.gql("select * from RequestLines where requestid = "+str(requestid))
for line in lines:
appointment = model.Appointments(
tutorclassid = request.tutorclassid,
student = request.student,
start = line.start,
end = line.end,
cost = line.cost,
paid = False,
studentgroupname = '')
appointment.put()
student = model.Users.get_by_id(appointment.student)
tutorclasses = model.TutorClasses.get_by_id(appointment.tutorclassid)
tutor = model.Users.get_by_id(tutorclasses.tutor)
line.key.delete()
au.sendmessage(tutor.email,'Appointment Accepted','The appointment has been accepted')
au.sendmessage(student.email, 'Appointment Accepted', 'The appointment has been accepted')
request.key.delete()
self.redirect('/mycalendar.html')
class ChangeSetRate(webapp2.RequestHandler):
def get(self):
checkbox = self.request.get('checkbox')
settings = ndb.gql("select * from Settings")
for item in settings:
settings = item
if (checkbox == 'true'):
settings.tutorsetrate = True
else:
settings.tutorsetrate = False
settings.put()
class SubmitRate(webapp2.RequestHandler):
def get(self):
price = float(self.request.get('price'))
settings = ndb.gql("select * from Settings")
for item in settings:
settings = item
settings.tutorrate = price
settings.put()
self.redirect("/gensettings.html")
class AcceptClass(webapp2.RequestHandler):
def post(self):
requestid = int(self.request.get('requestid'))
# request = model.Requesttotutor.requestid(requestid)
query = ndb.gql("select * from TutorClasses")
for TutorClasses in query:
if(requestid == TutorClasses.tutor):
mytutorclasses= model.TutorClasses(
class_id =TutorClasses.class_id,
tutor = TutorClasses.tutor,
rate = TutorClasses.rate,
approved = True
)
mytutorclasses.put()
TutorClasses.key.delete()
# request.key.delete()
tutor = model.Users.get_by_id(mytutorclasses.tutor)
au.sendmessage(tutor.email,'Class Approved','The subject you have requested has been approved')
self.redirect('/managetutors.html')
class DenyClass(webapp2.RequestHandler):
def post(self):
requestid = int(self.request.get('requestid'))
# request = model.Requesttotutor.requestid(requestid)
query = ndb.gql("select * from TutorClasses")
for TutorClasses in query:
if(requestid == TutorClasses.tutor):
TutorClasses.key.delete()
else:
self.redirect('/gensettings.html')
# request.key.delete()
tutor = model.Users.get_by_id(TutorClasses.tutor)
au.sendmessage(tutor.email,'Class Denied','The subject you have requested has been denied')
self.redirect('/managetutors.html')
##################################################################################
class AcceptTutor(webapp2.RequestHandler):
def post(self):
requestid = int(self.request.get('requestid'))
# request = model.Requesttotutor.requestid(requestid)
query = ndb.gql("select * from Users")
for Users in query:
if(requestid == Users.key.integer_id()):
myuser = model.Users(
username = Users.username,
password =Users.password,
firstname = Users.firstname,
lastname =Users.lastname,
fullname =Users.fullname,
phone = Users.phone,
email = Users.email,
aboutme =Users.aboutme,
major1 =Users.major1,
minor1 = Users.minor1,
wepayuid=Users.wepayuid,
wepayat =Users.wepayat,
tutor = True,
avatar = Users.avatar,
year = Users.year,
appliedastutor = False,
admin = False,
available = True,
studentgroupname = '',
tos = True)
myuser.put()
Users.key.delete()
# request.key.delete()
user = model.Users.get_by_id(myuser.email)
au.sendmessage(myuser.email,'Tutor Approved','You have been approved a tutor')
self.redirect('/managetutors.html')
class Denytutor(webapp2.RequestHandler):
def post(self):
requestid = int(self.request.get('requestid'))
# request = model.Requesttotutor.requestid(requestid)
query = ndb.gql("select * from Users")
for Users in query:
if(requestid == Users.key.integer_id()):
myuser = model.Users(
username = Users.username,
password =Users.password,
firstname = Users.firstname,
lastname =Users.lastname,
fullname =Users.fullname,
phone = Users.phone,
email = Users.email,
aboutme =Users.aboutme,
major1 =Users.major1,
minor1 = Users.minor1,
wepayuid=Users.wepayuid,
wepayat =Users.wepayat,
tutor = False,
avatar = Users.avatar,
year = Users.year,
appliedastutor = False,
admin = False,
available = True,
studentgroupname = '')
myuser.put()
Users.key.delete()
# request.key.delete()
user = model.Users.get_by_id(myuser.email)
au.sendmessage(myuser.email, 'Tutor Rejected', 'You have not been approved to become a tutor')
self.redirect('/managetutors.html')
class GoonVacation(webapp2.RequestHandler):
def get(self):
curUser = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
curUser.available = False
curUser.put()
time.sleep(1)
self.redirect('/myprofile.html')
class comebackfromvacation(webapp2.RequestHandler):
def get(self):
curUser = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
curUser.available = True
curUser.put()
time.sleep(1)
self.redirect('/myprofile.html')
class ApplyAsTutor(webapp2.RequestHandler):
def get(self):
curUser = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
curUser.appliedastutor = True
curUser.put()
time.sleep(1)
self.redirect('/myprofile.html')
#Put delete appointment
class AddAvailability(webapp2.RequestHandler):
def post(self):
curUser = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
starthour = int(self.request.get('starttime'))
endhour = int(self.request.get('endtime'))
addAvailability(curUser.key.integer_id(),int(self.request.get('dayofweek')),starthour,endhour)
time.sleep(.5)
self.redirect('/myprofile.html')
class CancelAppointment(webapp2.RequestHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
apptid = int(self.request.get('cancelapptid'))
#TODO add validation
appt = model.Appointments.get_by_id(apptid)
student = model.Users.get_by_id(appt.student)
tutorclass = model.TutorClasses.get_by_id(appt.tutorclassid)
tutor = model.Users.get_by_id(tutorclass.tutor)
if appt.paid and appt.checkoutid != 0:
resp = au.refund(appt.checkoutid, tutor.wepayat)
if(str(resp) == "Error"):
self.redirect('error.html')
else:
au.sendmessage(student,'Appointment has been cancelled','Appointment has been cancelled.')
au.sendmessage(tutor,'Appointment has been cancelled','Appointment has been cancelled.')
appt.key.delete()
time.sleep(.5)
self.redirect('/mytutees.html')
class Error(webapp2.RequestHandler):
def get(self):
template_values = {
'error':'It seems the payment has yet to be completely authorized. </ br>Please wait a moment and try again.'
}
template = JINJA_ENVIRONMENT.get_template('error.html')
self.response.write(template.render(template_values))
class Error2(webapp2.RequestHandler):
def get(self):
template_values = {
'error':'This tutor does not have a wepay account.</ br>Please contact them of this or pay them in person.'
}
template = JINJA_ENVIRONMENT.get_template('error.html')
self.response.write(template.render(template_values))
class RequestStudentGroup(webapp2.RequestHandler):
def post(self):
curUser = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
request = model.StudentGroupRequests(
student = curUser.key.integer_id(),
groupname = self.request.get('studentgroup'))
request.put()
self.redirect('/myprofile.html')
def addAvailability(userkey, dayofweek, starthour, endhour):
overlapped = ndb.gql("select * from Availabilities "+
"where user = "+str(userkey)+" "+
"and dayofweek = "+str(dayofweek)+" "
"and start >= TIME("+str(starthour)+",0,0) "+
"and start <= TIME("+str(endhour)+",0,0)")
for avail in overlapped:
avail.key.delete()
addAvailability(userkey,dayofweek,starthour,max(endhour,avail.end.hour))
return
overlapped = ndb.gql("select * from Availabilities "+
"where user = "+str(userkey)+" "+
"and dayofweek = "+str(dayofweek)+" "
"and end >= TIME("+str(starthour)+",0,0) "+
"and end <= TIME("+str(endhour)+",0,0)")
for avail in overlapped:
avail.key.delete()
addAvailability(userkey,dayofweek,max(endhour,avail.end.hour),endhour)
return
avail = model.Availabilities(
user = userkey,
dayofweek = dayofweek,
start = datetime(1970,1,1,starthour,0,0).time(),
end = datetime(1970,1,1,endhour,0,0).time()
)
avail.put()
class UploadPhoto(blobstore_handlers.BlobstoreUploadHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
redirect = '/myprofile.html'
redirect = self.request.get('redirect','/myprofile.html')
if not(user):
self.redirect('/')
return
old_key = user.avatar
if old_key:
blobstore.delete(old_key)
upload_files = self.get_uploads('file') # 'file' is file upload field in the form
upload = upload_files[0]
user.avatar = upload.key()
user.put()
time.sleep(1)
self.redirect(redirect)
class UploadLogo(blobstore_handlers.BlobstoreUploadHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if not(user.admin):
self.redirect('/')
return
settings = None
query = ndb.gql("select * from Settings")
for item in query:
settings = item
old_key = settings.logo
if old_key:
blobstore.delete(old_key)
upload_files = self.get_uploads('file') # 'file' is file upload field in the form
upload = upload_files[0]
settings.logo = upload.key()
settings.put()
time.sleep(1)
self.redirect('/gensettings.html')
class UploadEmblem(blobstore_handlers.BlobstoreUploadHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if not(user.admin):
self.redirect('/')
return
settings = None
query = ndb.gql("select * from Settings")
for item in query:
settings = item
old_key = settings.emblem
if old_key:
blobstore.delete(old_key)
upload_files = self.get_uploads('file') # 'file' is file upload field in the form
upload = upload_files[0]
settings.emblem = upload.key()
settings.put()
time.sleep(1)
self.redirect('/gensettings.html')
class UploadResource(blobstore_handlers.BlobstoreUploadHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if not(user.admin):
self.redirect('/')
return
upload_files = self.get_uploads('file') # 'file' is file upload field in the form
upload = upload_files[0]
max_order = 0
query = ndb.gql("select * from Resources")
for res in query:
if res.order > max_order:
max_order = res.order
resource = model.Resources(
html = '<br/> <br/> <img style="margin-right:20px"src="/viewPhoto/'+str(upload.key())+'" /> </i>',
order = max_order+1)
resource.put()
time.sleep(1)
self.redirect('/adminresources.html')
class UploadStringResource(webapp2.RequestHandler):
def post(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
if not(user):
self.redirect('/')
return
if not(user.admin):
self.redirect('/')
return
max_order = 0
query = ndb.gql("select * from Resources")
for res in query:
if res.order > max_order:
max_order = res.order
resource = model.Resources(
html = self.request.get('html'),
order = max_order+1)
resource.put()
time.sleep(1)
self.redirect('/adminresources.html')
class Database(webapp2.RequestHandler):
def __init__(self, *args, **kwargs):
webapp2.RequestHandler.__init__(self, *args, **kwargs)
self.methods = RPCMethods()
def get(self):
func = None
action = self.request.get('action')
redirect = ''
redirect = self.request.get('redirect','')
if action:
if action[0]=='_':
self.error(403)
return
else:
func = getattr(self.methods,action, None)
if not(func):
self.error(403)
return
args = ()
while True:
key = 'arg%d' % len(args)
val = self.request.get(key)
if val:
args += (val,)
else:
break
result = func(*args)
self.response.out.write(result)
class RPCMethods():
def photoURL(self, *args):
return blobstore.create_upload_url('/uploadPhoto')
def logoURL(self, *args):
return blobstore.create_upload_url('/uploadLogo')
def emblemURL(self, *args):
return blobstore.create_upload_url('/uploadEmblem')
def resourceURL(self, *args):
return blobstore.create_upload_url('/uploadResource')
def updateClassStatus(self, *args):
mystudentclass = model.StudentClasses.get_by_id(int(args[0]))
mystudentclass.status = int(args[1])
mystudentclass.put()
return "Success"
def removeClass(self, *args):
if args[1]=='Student':
myclass = model.StudentClasses.get_by_id(int(args[0]))
myclass.key.delete()
time.sleep(1)
return "Done"
myclass = model.TutorClasses.get_by_id(int(args[0]))
myclass.key.delete()
time.sleep(1)
return "Done"
def removeAvail(self, *args):
avail = model.Availabilities.get_by_id(int(args[0]))
avail.key.delete()
time.sleep(1)
return "Done"
def addTutorClass(self, *args):
myrate = 0
try:
myrate = float(args[2])
except:
return "Please select a valid hourly rate."
try:
query = "SELECT * FROM TutorClasses WHERE class_id = "+args[0]+" AND tutor = "+args[1]
iterator = ndb.gql(query)
for item in iterator:
return "You are already offering this class."
myclass = model.TutorClasses(
class_id = int(args[0]),
tutor = int(args[1]),
rate = myrate)
myclass.put()
time.sleep(1)
return "Success"
except:
return "We were unable to add your class. Please reload the page and try again."
def addStudentClass(self, *args):
try:
query = "SELECT * FROM StudentClasses WHERE class_id = "+args[0]+" AND student = "+args[1]
iterator = ndb.gql(query)
for item in iterator:
return "You are already taking this class."
myclass = model.StudentClasses(
class_id = int(args[0]),
student = int(args[1]),
status = 0)
myclass.put()
time.sleep(1)
return "Success"
except:
return "We were unable to add your class. Please reload the page and try again."
def searchClasses(self, *args):
string = '<tr><td align="center">Course:</td><td align="center">Name:</td>'
if args[2] == 'Tutor':
string += '<td align="center">Rate:</td>'
string +='<td align="center">Add:</td></tr>'
department = args[0]
courseno = -1
args1 = ''
try:
args1 = args[1]
except:
args1 = ''
if args1 == 'NULL':
courseno = 0
else:
try:
courseno = int(args1)
except:
return '<tr><td align="center">Please enter a valid course number.</td></tr>'
query = ndb.gql("Select * from Settings")
settings = None
for item in query:
settings = item
query = "SELECT * FROM Classes"
where = False
if args[0] != "---":
query += " WHERE department = '"+department+"'"
where = True
if courseno > 0 and courseno < 1000:
query2 = " WHERE "
if where:
query2 = " AND "
query += query2
if courseno < 10:
query += "coursenumber >= "+str(courseno)+"00 "
query += "AND coursenumber <= "+str(courseno)+"99"
elif courseno < 100:
query += "coursenumber >= "+str(courseno)+"0 "
query += "AND coursenumber <= "+str(courseno)+"9"
else:
query += "coursenumber = "+str(courseno)
iterator = ndb.gql(query)
string2 = ""
disabled = ''
if not(settings.tutorsetrate):
disabled = ' disabled'
for myclass in iterator:
string2+='<tr><td align="center">'+myclass.department+" "+str(myclass.coursenumber)+"</td>"
string2+='<td align="center">'+myclass.classname+"</td>"
onclick = 'addStudentClass('+str(myclass.key.integer_id())+')'
if args[2] == 'Tutor':
string2+='<td align="center"><input id="rate'+str(myclass.key.integer_id())+'" value="'+au.moneyFormat(settings.tutorrate)+'" '+disabled+'/></td>'
onclick = 'addTutorClass('+str(myclass.key.integer_id())+')'
string2+='<td align="center"><button onclick="'+onclick+'">Add Course</button></td></tr>'
if string2 == "":
return '<tr><td align="center">Your query returned no results</td></tr>'
return string+string2
class Checkout(webapp2.RequestHandler):
def post(self):
at = ""
at = self.request.get('accesstoken',"")
#raise Exception(len(at))
if len(at) < 6:
self.redirect('/error2.html')
else:
amount = ""
amount = self.request.get('amount',1)
resp = ""
resp = self.request.get('comment',"")
myuid = ""
myuid = self.request.get('userid',"")
url = ""
url = self.request.host+'/payappointment.html?appt='+self.request.get('appointment')+'&at='+at
if(amount == ""):
amount = 1
if(resp == ""):
resp = "TEST DESCRIPTION"
mywepay = au.WePay(production, at)
respon = au.makeCheckout(mywepay, myuid, amount, resp,'http://'+url)
myson = json.loads(respon)
#raise Exception('RESPONSE: '+respon)
resp = myson['checkout_uri']
template_values = {
'logo': au.getLogo(),
'emblem': au.getEmblem(),
'checkout' : resp
}
template = JINJA_ENVIRONMENT.get_template('checkout.html')
self.response.write(template.render(template_values))
class PayAppointment(webapp2.RequestHandler):
def get(self):
apptid = self.request.get('appt')
appt = model.Appointments.get_by_id(int(apptid))
if appt == None:
self.redirect('/')
return
checkoutid = self.request.get('checkout_id')
accesstoken = self.request.get('at')
json = au.getpaymentinfo(checkoutid,accesstoken)
if appt.cost == json['amount']:
appt.paid = True
appt.checkoutid = int(checkoutid)
appt.put()
template_values = {
}
template = JINJA_ENVIRONMENT.get_template('thankyou.html')
self.response.write(template.render(template_values))
else:
self.redirect('/')
class UpdateHandler(webapp2.RequestHandler):
def get(self):
deferred.defer(model.UpdateSchema)
self.response.out.write('Schema migration successfully initiated.')
class WepayAccountCreate(webapp2.RequestHandler):
def get(self):
template_values = {
}
template = JINJA_ENVIRONMENT.get_template('createaccount.html')
self.response.write(template.render(template_values))
class CreateWepayAccount(webapp2.RequestHandler):
def get(self):
user = model.getUser(self.request.cookies.get('user',''),self.request.cookies.get('pass',''))
# oauth2 parameters
code = ''
code = self.request.get('code',''); # the code parameter from step 2
redirect_uri = "http://delaware.tutorsbin.com/"; # this is the redirect_uri you used in step 1
if(code != ''):
# create an account for a user
resp = wepay.get_token(redirect_uri, client_id, client_secret, code)
self.response.write(resp);
mywepay = au.WePay(production, resp['access_token'])
# create an account for a user
response2 = mywepay.call('/account/create', {
'name': user.fullname,
'description': 'A payment account for Tutorsbin'
})
response2 = json.loads(response2)
#raise Exception('RESPONSE: '+str(response2['account_id']))
#self.response.out.write(str(response2))
else:
self.response.out.write('Error')
user.wepayuid=str(response2['account_id'])
user.wepayat =str(resp['access_token'])
user.put()
app = webapp2.WSGIApplication([
('/', Landing),
('/calendar.html', Calendar),
('/myprofile.html', myprofile),
('/mytutors.html', mytutors),
('/mytutees.html', mytutees),
('/myresources.html', myresources),
('/tutorsearch.html',tutorsearch),
('/register.html', Register),
('/gensettings.html', GeneralSettings),
('/adminresources.html', AdminResources),
('/payappointment.html',PayAppointment),
('/groupcalendar.html', GroupCalendar),
('/mystudents.html', MyStudents),
('/error.html', Error),
('/error2.html', Error2),
('/paymentrequest.html',PaymentRequest),
('/createuser', CreateUser),
('/modifyuser', ModifyUser),
('/studentgrouppay',StudentGroupPay),
('/addTestData',AddTestData),
('/addAdminData',AddAdminData),
('/requestStudentGroup',RequestStudentGroup),
('/acceptStudent',AcceptStudent),
('/acceptTutor',AcceptTutor),
('/Denytutor',Denytutor),
('/cancelappointment',CancelAppointment),
('/login',LogIn),
('/logout',LogOut),
('/applyAsTutor',ApplyAsTutor),
('/addAvailability',AddAvailability),
('/sendRequest',SendRequest),
('/uploadPhoto',UploadPhoto),
('/uploadLogo',UploadLogo),
('/uploadEmblem',UploadEmblem),
('/uploadResource',UploadResource),
('/uploadStringResource',UploadStringResource),
('/viewPhoto/([^/]+)?',ViewPhotoHandler),
('/acceptRequest',AcceptRequest),
('/checkout.html',Checkout),
('/database',Database),
('/createaccount.html',WepayAccountCreate),
('/useraccountcreate',CreateWepayAccount),
('/managetutors.html',managetutors),
('/contactus.html',contactus),
('/GoonVacation',GoonVacation),
('/AcceptClass', AcceptClass),
('/DenyClass',DenyClass),
('/ChangeSetRate',ChangeSetRate),
('/SubmitRate',SubmitRate),
('/comebackfromvacation',comebackfromvacation),
('/tos.html',TOS),
('/reg.html',RegisterReplace),
('/sendm',sendmail),
('/update_schema', UpdateHandler)
], debug=True)
| [
"mwann@kognos.co"
] | mwann@kognos.co |
261911e7285a2cf4b2af737b3088a3fd77239733 | 2766aad9e5fc28665071b5555cb5beef5ddcf065 | /test.py | 9df2b8dc2145ec627b37dbcd8546aa1231458d55 | [] | no_license | ymewangyc/awesome-python3-webapp | edd0d3ec648b0e2357d53366718f195468f4d656 | 926f2f745972dbecb6a71848adeb4df46d14ef9c | refs/heads/master | 2021-08-30T18:53:25.824964 | 2017-12-19T02:22:16 | 2017-12-19T02:22:16 | 114,316,952 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 491 | py | # -*- coding: utf-8 -*-
from MysqlHelper import MysqlHelper
mh = MysqlHelper('10.100.1.142', 'test', '123456', 'testDB', 'utf8')
sql = "select * from user where name=%s"
print(mh.find(sql, '小明'))
print(mh.find(sql, '张三'))
# mh = MysqlHelper('localhost', 'root', '123456', 'test', 'utf8')
# mh = MysqlHelper('10.100.1.142', 'test', '123456', 'testDB', 'utf8')
sql = "insert into user(name,password) values(%s,%s)"
mh.cud(sql, ('李四', '123456'))
mh.cud(sql, ('赵六', '123456')) | [
"wyc@example.com"
] | wyc@example.com |
75cc871b353ebfbbb4ced6e65e8a1a38d99c10a6 | 1a217021d8ff5f7530107f1576b538953e154124 | /AlllMyCats2.py | 828a412d04b229b8b1f38062d955ad5d3ea7f4f7 | [] | no_license | BinThani/PYTHON | a25c802ba174aab2d39ffaa6aa9a9607d632e649 | e3682d85c73290538a1faf4b69507436a3f3e8df | refs/heads/master | 2021-04-23T18:58:35.307144 | 2020-08-15T07:58:49 | 2020-08-15T07:58:49 | 249,974,487 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 366 | py | catNames = []
while True:
print('Enter the name of cat ' + str(len(catNames) + 1) + # Everytime it loops increases by 1 and so on.
' (Or enter nothing to stop.):')
name = input()
if name == "":
break
catNames = catNames + [name] # List Concatenation
# Names.
print("The cat names are:")
for name in catNames:
print("" + name)
| [
"noreply@github.com"
] | noreply@github.com |
eddb4a31230e835db275730ea4226babb1806507 | 2fd64c8d1710f6a482a3227cbc9553b883768167 | /15/getdata.py | aca792a3885db98773a8a0614f43fd819170c242 | [] | no_license | wangweihao/Python | e0938cbf67afcbc031a855ee439937ced53c8446 | 9de7f0479edbe2cef4c1db868bdaa15239d121b3 | refs/heads/master | 2021-01-10T20:26:38.825549 | 2015-05-03T17:27:20 | 2015-05-03T17:27:20 | 33,823,741 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 789 | py | #!/usr/bin/env python
#coding:UTF-8
from random import randint, choice
from string import lowercase
from sys import maxint
from time import ctime
from sys import argv
f = open(argv[1], 'w+')
doms = ('com', 'edu', 'net', 'org', 'gov')
for i in range(randint(5,10)):
dtint = randint(0, 2^32-1)
dtstr = ctime(dtint)
#dtstr = ctime()
shorter = randint(4, 7)
em = ''
for j in range(shorter):
em += choice(lowercase)
longer = randint(shorter, 12)
dn = ''
for j in range(longer):
dn += choice(lowercase)
f.write('%s::%s@%s.%s::%d-%d-%d' % (dtstr, em,
dn, choice(doms), dtint, shorter, longer))
f.write('\n')
#return '%s::%s@%s.%s::%d-%d-%d' % (dtstr, em,
# dn, choice(doms), dtint, shorter, longer)
| [
"578867817@qq.com"
] | 578867817@qq.com |
bdea108d4883de1a13f2c75224bc83113c52d55d | 20ff9c30344ba1c43ba6b3fe862a5894737fcd9f | /mongopy/mongoconn.py | e403317346411855b92ae6220fd496449f38fd73 | [] | no_license | jzorrof/mongopy | c36bc4b2592cc983438e4dcbddedfca694b7815e | 4b1270c6936c664e4b1e670756f43ab0ca0e38e0 | refs/heads/master | 2021-01-18T13:59:25.101572 | 2015-05-15T10:35:59 | 2015-05-15T10:35:59 | 34,641,570 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 330 | py | __author__ = 'joe_fan'
# -*- coding: utf-8 -*-
from pymongo import MongoClient
"""
connection demo
"""
client = MongoClient()
client = MongoClient("mongodb://mongodb0.example.net:27019")
"""
access db objects
"""
db = client.primer
db = client['primer']
"""
access collection Objects
"""
coll = db.dataset
coll = db['dataset']
| [
"jzorrof@gmail.com"
] | jzorrof@gmail.com |
0c9daec6558d9de88338199192f4e5d1dad32639 | 8dca64dd11b23a7d59413ac8e28e92a0ab80c49c | /832. Flipping an Image/solution.py | 25b69e5fc050245a58bc16a757d3ea451316a7c4 | [] | no_license | huangruihaocst/leetcode-python | f854498c0a1d257698e10889531c526299d47e39 | 8f88cae7cc982ab8495e185914b1baeceb294060 | refs/heads/master | 2020-03-21T20:52:17.668477 | 2018-10-08T20:29:35 | 2018-10-08T20:29:35 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 689 | py | class Solution:
def flipAndInvertImage(self, A):
"""
:type A: List[List[int]]
:rtype: List[List[int]]
"""
return self.invert(self.reverse(A))
@staticmethod
def reverse(A):
"""
:type A: List[List[int]]
:rtype: List[List[int]]
"""
return list(map(lambda l: list(reversed(l)), A))
@staticmethod
def invert(A):
"""
:type A: List[List[int]]
:rtype: List[List[int]]
"""
return list(map(lambda l: list(map(lambda i: 1 - i, l)), A))
if __name__ == '__main__':
s = Solution()
print(s.flipAndInvertImage([[1,1,0,0],[1,0,0,1],[0,1,1,1],[1,0,1,0]]))
| [
"huangruihaocst@126.com"
] | huangruihaocst@126.com |
3a3a2bda3c9dc71fb5f213b93e4215892c92b19a | 354e2dede869a32e57c66c2e1b71bbd61d2cc36d | /widgets/pad.py | 17d2e11a21f2a29919b0d0f7268c7bfef1cc107d | [
"MIT"
] | permissive | hiddenvs/micropython_ra8875 | 5abf906d95c0c997ceb9638b8785c4cc2f803cf6 | a61314d62d6add831f6618c857b01d1a5b7ce388 | refs/heads/master | 2023-04-09T22:37:23.097440 | 2021-04-16T13:13:34 | 2021-04-16T13:13:34 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,750 | py | # pad.py Extension to lcd160gui providing the invisible touchpad class
# Released under the MIT License (MIT). See LICENSE.
# Copyright (c) 2020 Peter Hinch
# Usage: import classes as required:
# from gui.widgets.pad import Pad
import uasyncio as asyncio
from micropython_ra8875.py.ugui import Touchable
from micropython_ra8875.primitives.delay_ms import Delay_ms
# Pad coordinates relate to bounding box (BB). x, y are of BB top left corner.
# likewise width and height refer to BB
class Pad(Touchable):
long_press_time = 1000
def __init__(self, location, *, height=20, width=50, onrelease=True,
callback=None, args=[], lp_callback=None, lp_args=[]):
super().__init__(location, None, height, width, None, None, None, None, False, '', None)
self.callback = (lambda *_: None) if callback is None else callback
self.callback_args = args
self.onrelease = onrelease
self.lp_callback = lp_callback
self.lp_args = lp_args
self.lp_task = None # Long press not in progress
def show(self):
pass
def _touched(self, x, y): # Process touch
if self.lp_callback is not None:
self.lp_task = asyncio.create_task(self.longpress())
if not self.onrelease:
self.callback(self, *self.callback_args) # Callback not a bound method so pass self
def _untouched(self):
if self.lp_task is not None:
self.lp_task.cancel()
self.lp_task = None
if self.onrelease:
self.callback(self, *self.callback_args) # Callback not a bound method so pass self
async def longpress(self):
await asyncio.sleep_ms(Pad.long_press_time)
self.lp_callback(self, *self.lp_args)
| [
"peter@hinch.me.uk"
] | peter@hinch.me.uk |
9bb10c134aacfcf074a9013fd4da4be138bf44d6 | 6fda9b7b57523caa5f3791b707765593fa30cde0 | /project/product/migrations/0001_initial.py | 9b5256e2031c8495af53f54fbfbb0852e84b4f2c | [] | no_license | orhan1616/Django-Eticaret | 4ccce641baf5cd5b310f9573f99a2881c120040d | 253d991cb7eab922afee9e6d5b23592ed28fa337 | refs/heads/main | 2023-03-22T12:50:31.632204 | 2021-03-05T19:42:08 | 2021-03-05T19:42:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,190 | py | # Generated by Django 3.0.5 on 2020-04-12 18:25
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Question',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=30)),
('keywords', models.CharField(max_length=255)),
('description', models.CharField(max_length=255)),
('image', models.ImageField(blank=True, upload_to='images/')),
('status', models.CharField(choices=[('True', 'Evet'), ('False', 'Hayır')], max_length=10)),
('slug', models.SlugField()),
('create_at', models.DateTimeField(auto_now_add=True)),
('update_at', models.DateTimeField(auto_now=True)),
('parent', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='children', to='product.Question')),
],
),
]
| [
"yigitcakmak.dev@gmail.com"
] | yigitcakmak.dev@gmail.com |
6462cb96855447582dafdd84cfd45d9866a3fa4c | 6169f8975558e0b25f1d32cced3d8ca9d9871c9f | /algorithms/kmp_matching.py | 8db41ffb7288d0d4d7dcba70e6a1fc26b77d3b02 | [] | no_license | jm2242/interviewPrep | adba832c4bbde60e55af619389e15e8f139b41c1 | b9fac886894746fd0f7432cfac449fe62cfdefbb | refs/heads/master | 2021-03-27T09:16:35.975042 | 2017-02-05T18:41:14 | 2017-02-05T18:41:14 | 77,166,687 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,990 | py |
'''
Implementation of Knuth-Morris-Pratt string matching.
Run time: O(n+m)
'''
class KmpMatching(object):
"""
Build the prefix table for pattern pat
:type pat: str
:rtype: List[int]
"""
def prefix_table(self, pat):
# table to store the longest prefix
table = [0] * len(pat)
j = 0
for i in range(1, len(pat)):
# if characters don't match, increment i
if pat[i] != pat[j]:
# j reset while loop
while j > 0 and pat[i] != pat[j]:
# we now set j to be the value stored in table[j-1]
# this part is kind of odd
j = table[j-1]
# if no match, continue to reassign j
if pat[i] != pat[j]:
continue
# if match, increment j and break j reset loop
else:
table[i] = j + 1
j += 1
break
else:
# if characters match
table[i] = j + 1
j += 1
return table
'''
find the index of the first occurence of pat in text
return -1 if pat not in text
:type pat: str
:type text: str
:rtype: int
'''
def search(self, pat, text):
# prefix table
table = self.prefix_table(pat)
j = 0
for i in range(len(text)):
# if the pattern and text match
if text[i] == pat[j]:
# if we've reached the end of the pattern
if j == len(pat) - 1:
return i + 1 - len(pat)
j += 1
continue
# if no match, then check value in j-1 index of table
else:
# change j only if it was somewhere in pattern
while j > 0 and text[i] != pat[j]:
j = table[j-1]
if text[i] != pat[j]:
continue
else:
j += 1
break
# no match
return -1
def main():
kmp = KmpMatching()
text = "abxabcabcaby"
pat = "abcaby"
# tests = [{"text":"yoloyoloyoloyolo", "pat":"bob"}]
# for test in tests:
# text = test['text']
# pat = test['pat']
# assert kmp.search(pat, text) == -1
# assert kmp.search("yolo", text) == 0
print kmp.prefix_table("aacecaaa#aaacecaa")
if __name__ == "__main__":
main()
| [
"jmares93@gmail.com"
] | jmares93@gmail.com |
c70dceb58a15043f0b75dc4cc2d0bd5bb1d898ed | 96b1ab3013dd770501171369b1a7d6dd2c52511d | /week6/coffeehouse/coffeehouse/urls.py | af1976f7fd9788278faec48b7fe258eb19e5affc | [] | no_license | dfreshreed/class_notes | 851e4e2b484ab827abc2f57336427f50922e2524 | 730ebada646a57818304bd0a0bf5837175e82d09 | refs/heads/master | 2021-06-17T20:35:43.519063 | 2016-11-17T15:40:56 | 2016-11-17T15:40:56 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,258 | py | """coffeehouse URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url, include
from django.contrib import admin
from menu.views import IndexView, ProfileUpdateView, SpecialUpdateView, SpecialDeleteView
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^', include('django.contrib.auth.urls')),
url(r'^$', IndexView.as_view(), name="index_view"),
url(r'^special/(?P<pk>\d+)/update/$', SpecialUpdateView.as_view(), name="special_update_view"),
url(r'^special/(?P<pk>\d+)/delete/$', SpecialDeleteView.as_view(), name="special_delete_view"),
url(r'^accounts/profile/$', ProfileUpdateView.as_view(), name="profile_view"),
]
| [
"delisestanton@gmail.com"
] | delisestanton@gmail.com |
b938e7c532cfe60e760b8a0b87abee91e54e4942 | 549f6aeee12b66189d6e30b448a656fe55bc3eca | /tensorflow/python/distribute/strategy_combinations.py | d66c7acba7776ce0478f7d6e7076a2bf5ddace66 | [
"Apache-2.0"
] | permissive | thomkallor/tensorflow | 2f6973b95a3fbaa41b6a4d03edb7ae665398865b | fb03bc60fe90d332e357aafa8359a44369ff8caf | refs/heads/master | 2022-11-10T20:25:03.224037 | 2020-06-26T23:39:44 | 2020-06-26T23:49:42 | 275,148,424 | 0 | 0 | Apache-2.0 | 2020-06-26T12:15:14 | 2020-06-26T12:15:13 | null | UTF-8 | Python | false | false | 16,830 | py | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Strategy and optimizer combinations for combinations.combine()."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import atexit
from tensorflow.python import tf2
from tensorflow.python.distribute import central_storage_strategy
from tensorflow.python.distribute import cluster_resolver
from tensorflow.python.distribute import collective_all_reduce_strategy
from tensorflow.python.distribute import combinations
from tensorflow.python.distribute import distribution_strategy_context
from tensorflow.python.distribute import mirrored_strategy as mirrored_lib
from tensorflow.python.distribute import multi_process_runner
from tensorflow.python.distribute import one_device_strategy as one_device_lib
from tensorflow.python.distribute import tpu_strategy as tpu_lib
from tensorflow.python.distribute.cluster_resolver import tpu_cluster_resolver
from tensorflow.python.eager import context
from tensorflow.python.eager import remote
from tensorflow.python.framework import config
from tensorflow.python.keras.optimizer_v2 import adadelta as adadelta_keras_v2
from tensorflow.python.keras.optimizer_v2 import adagrad as adagrad_keras_v2
from tensorflow.python.keras.optimizer_v2 import adam as adam_keras_v2
from tensorflow.python.keras.optimizer_v2 import adamax as adamax_keras_v2
from tensorflow.python.keras.optimizer_v2 import ftrl as ftrl_keras_v2
from tensorflow.python.keras.optimizer_v2 import gradient_descent as gradient_descent_keras_v2
from tensorflow.python.keras.optimizer_v2 import nadam as nadam_keras_v2
from tensorflow.python.keras.optimizer_v2 import rmsprop as rmsprop_keras_v2
from tensorflow.python.platform import flags
from tensorflow.python.tpu import device_assignment as device_assignment_lib
from tensorflow.python.tpu import tpu_strategy_util
from tensorflow.python.training import adagrad
from tensorflow.python.training import adam
from tensorflow.python.training import ftrl
from tensorflow.python.training import gradient_descent
from tensorflow.python.training import rmsprop
FLAGS = flags.FLAGS
_did_connect_to_cluster = False
# pylint: disable=missing-docstring
def _get_tpu_strategy_creator(steps_per_run,
use_single_core=False,
enable_packed_variable=False,
**kwargs):
def _create_tpu_strategy():
global _did_connect_to_cluster
try:
# Attempt to locally discover the TPU. This will fail for Cloud TPU, in
# which case we fall back to the values passed as flags.
resolver = tpu_cluster_resolver.TPUClusterResolver()
did_automatically_resolve = True
except ValueError:
did_automatically_resolve = False
# These flags will be defined by tpu_test_wrapper.py.
resolver = tpu_cluster_resolver.TPUClusterResolver(
tpu=hasattr(FLAGS, "tpu") and FLAGS.tpu or "",
zone=hasattr(FLAGS, "zone") and FLAGS.zone or None,
project=hasattr(FLAGS, "project") and FLAGS.project or None,
)
# Only connect once per process, rather than per test method.
if getattr(FLAGS, "tpu", "") or did_automatically_resolve:
if not _did_connect_to_cluster:
remote.connect_to_cluster(resolver)
_did_connect_to_cluster = True
topology = tpu_strategy_util.initialize_tpu_system(resolver)
device_assignment = None
if use_single_core:
device_assignment = device_assignment_lib.DeviceAssignment(
topology, core_assignment=device_assignment_lib.
SINGLE_CORE_ASSIGNMENT)
# Steps per run is only supported in TF 1.x
if tf2.enabled():
strategy = tpu_lib.TPUStrategy(resolver, device_assignment, **kwargs)
else:
strategy = tpu_lib.TPUStrategyV1(resolver, steps_per_run,
device_assignment, **kwargs)
strategy._enable_packed_variable_in_eager_mode = enable_packed_variable # pylint: disable=protected-access
return strategy
return _create_tpu_strategy
def _get_multi_worker_mirrored_creator(required_gpus):
def _create_multi_worker_mirrored():
tf_config = cluster_resolver.TFConfigClusterResolver()
resolver = cluster_resolver.SimpleClusterResolver(
cluster_spec=tf_config.cluster_spec(),
task_type=tf_config.task_type,
task_id=tf_config.task_id,
master=tf_config.master(),
environment=tf_config.environment,
num_accelerators={"GPU": required_gpus},
rpc_layer=tf_config.rpc_layer or "grpc",
)
# Always create the strategy in eager mode so that it starts the server and
# configures the eager context. The eager context can no longer be
# configured after initialization.
with context.eager_mode():
strategy = collective_all_reduce_strategy.CollectiveAllReduceStrategy(
cluster_resolver=resolver)
# TODO(b/152320929): Wait for the cluster before proceeding, otherwise
# collectives may hang if any worker launches collectives before the chief
# creates the strategy.
try:
multi_process_runner.barrier().wait()
except ValueError:
# If the creator is called in the main process,
# multi_process_runner.barrier() raises ValueError, which is safe to
# ignore.
pass
return strategy
return _create_multi_worker_mirrored
# pylint: disable=g-long-lambda
default_strategy = combinations.NamedDistribution(
"Default",
distribution_strategy_context._get_default_strategy, # pylint: disable=protected-access
required_gpus=None)
one_device_strategy = combinations.NamedDistribution(
"OneDeviceCPU",
lambda: one_device_lib.OneDeviceStrategy("/cpu:0"),
required_gpus=None)
one_device_strategy_gpu = combinations.NamedDistribution(
"OneDeviceGPU",
lambda: one_device_lib.OneDeviceStrategy("/gpu:0"),
required_gpus=1)
one_device_strategy_on_worker_1 = combinations.NamedDistribution(
"OneDeviceOnWorker1CPU",
lambda: one_device_lib.OneDeviceStrategy("/job:worker/replica:0/task:1/cpu:0"), # pylint: disable=line-too-long
required_gpus=None)
one_device_strategy_gpu_on_worker_1 = combinations.NamedDistribution(
"OneDeviceOnWorker1GPU",
lambda: one_device_lib.OneDeviceStrategy("/job:worker/replica:0/task:1/gpu:0"), # pylint: disable=line-too-long
required_gpus=1)
tpu_strategy = combinations.NamedDistribution(
"TPU", _get_tpu_strategy_creator(steps_per_run=2), required_tpu=True)
tpu_strategy_packed_var = combinations.NamedDistribution(
"TPUPackedVar",
_get_tpu_strategy_creator(steps_per_run=2, enable_packed_variable=True),
required_tpu=True)
tpu_strategy_one_step = combinations.NamedDistribution(
"TPUOneStep", _get_tpu_strategy_creator(steps_per_run=1), required_tpu=True)
tpu_strategy_one_core = combinations.NamedDistribution(
"TPUOneCore",
_get_tpu_strategy_creator(steps_per_run=2, use_single_core=True),
required_tpu=True)
tpu_strategy_one_step_one_core = combinations.NamedDistribution(
"TPUOneStepOneCore",
_get_tpu_strategy_creator(steps_per_run=1, use_single_core=True),
required_tpu=True)
cloud_tpu_strategy = combinations.NamedDistribution(
"CloudTPU",
_get_tpu_strategy_creator(steps_per_run=2),
required_tpu=True,
use_cloud_tpu=True)
mirrored_strategy_with_one_cpu = combinations.NamedDistribution(
"Mirrored1CPU", lambda: mirrored_lib.MirroredStrategy(["/cpu:0"]))
mirrored_strategy_with_one_gpu = combinations.NamedDistribution(
"Mirrored1GPU",
lambda: mirrored_lib.MirroredStrategy(["/gpu:0"]),
required_gpus=1)
mirrored_strategy_with_gpu_and_cpu = combinations.NamedDistribution(
"MirroredCPUAndGPU",
lambda: mirrored_lib.MirroredStrategy(["/gpu:0", "/cpu:0"]),
required_gpus=1)
mirrored_strategy_with_two_gpus = combinations.NamedDistribution(
"Mirrored2GPUs",
lambda: mirrored_lib.MirroredStrategy(["/gpu:0", "/gpu:1"]),
required_gpus=2)
# Should call set_virtual_cpus_to_at_least(3) in your test's setUp methods.
mirrored_strategy_with_cpu_1_and_2 = combinations.NamedDistribution(
"Mirrored2CPU", lambda: mirrored_lib.MirroredStrategy(["/cpu:1", "/cpu:2"]))
central_storage_strategy_with_two_gpus = combinations.NamedDistribution(
"CentralStorage2GPUs",
lambda: central_storage_strategy.CentralStorageStrategy._from_num_gpus(2), # pylint: disable=protected-access
required_gpus=2)
central_storage_strategy_with_gpu_and_cpu = combinations.NamedDistribution(
"CentralStorageCPUAndGPU",
lambda: central_storage_strategy.CentralStorageStrategy(
["/gpu:0", "/cpu:0"]),
required_gpus=1)
# chief + 1 worker, with CPU.
multi_worker_mirrored_2x1_cpu = combinations.NamedDistribution(
"MultiWorkerMirrored2x1CPU",
_get_multi_worker_mirrored_creator(required_gpus=0),
has_chief=True,
num_workers=1,
)
# chief + 1 worker, with 1 GPU each.
multi_worker_mirrored_2x1_gpu = combinations.NamedDistribution(
"MultiWorkerMirrored2x1GPU",
_get_multi_worker_mirrored_creator(required_gpus=1),
has_chief=True,
num_workers=1,
required_gpus=1,
)
# chief + 1 worker, with 2 GPU each.
multi_worker_mirrored_2x2_gpu = combinations.NamedDistribution(
"MultiWorkerMirrored2x2GPU",
_get_multi_worker_mirrored_creator(required_gpus=2),
has_chief=True,
num_workers=1,
required_gpus=2,
)
# chief + 3 workers, with CPU.
multi_worker_mirrored_4x1_cpu = combinations.NamedDistribution(
"MultiWorkerMirrored4x1CPU",
_get_multi_worker_mirrored_creator(required_gpus=0),
has_chief=True,
num_workers=3,
)
# Shutdown the runners gracefully to avoid the processes getting SIGTERM.
def _shutdown_at_exit():
for strategy in [
multi_worker_mirrored_2x1_cpu,
multi_worker_mirrored_2x1_gpu,
multi_worker_mirrored_2x2_gpu,
multi_worker_mirrored_4x1_cpu,
]:
if strategy.runner:
strategy.runner.shutdown()
atexit.register(_shutdown_at_exit)
gradient_descent_optimizer_v1_fn = combinations.NamedObject(
"GradientDescentV1",
lambda: gradient_descent.GradientDescentOptimizer(0.001))
adagrad_optimizer_v1_fn = combinations.NamedObject(
"AdagradV1", lambda: adagrad.AdagradOptimizer(0.001))
adam_optimizer_v1_fn = combinations.NamedObject(
"AdamV1", lambda: adam.AdamOptimizer(0.001, epsilon=1))
ftrl_optimizer_v1_fn = combinations.NamedObject(
"FtrlV1", lambda: ftrl.FtrlOptimizer(0.001))
rmsprop_optimizer_v1_fn = combinations.NamedObject(
"RmsPropV1", lambda: rmsprop.RMSPropOptimizer(0.001))
# TODO(shiningsun): consider adding the other v1 optimizers
optimizers_v1 = [
gradient_descent_optimizer_v1_fn, adagrad_optimizer_v1_fn,
ftrl_optimizer_v1_fn, rmsprop_optimizer_v1_fn
]
adadelta_optimizer_keras_v2_fn = combinations.NamedObject(
"AdadeltaKerasV2", lambda: adadelta_keras_v2.Adadelta(0.001))
adagrad_optimizer_keras_v2_fn = combinations.NamedObject(
"AdagradKerasV2", lambda: adagrad_keras_v2.Adagrad(0.001))
adam_optimizer_keras_v2_fn = combinations.NamedObject(
"AdamKerasV2", lambda: adam_keras_v2.Adam(0.001, epsilon=1.0))
adamax_optimizer_keras_v2_fn = combinations.NamedObject(
"AdamaxKerasV2", lambda: adamax_keras_v2.Adamax(0.001, epsilon=1.0))
nadam_optimizer_keras_v2_fn = combinations.NamedObject(
"NadamKerasV2", lambda: nadam_keras_v2.Nadam(0.001, epsilon=1.0))
ftrl_optimizer_keras_v2_fn = combinations.NamedObject(
"FtrlKerasV2", lambda: ftrl_keras_v2.Ftrl(0.001))
gradient_descent_optimizer_keras_v2_fn = combinations.NamedObject(
"GradientDescentKerasV2", lambda: gradient_descent_keras_v2.SGD(0.001))
rmsprop_optimizer_keras_v2_fn = combinations.NamedObject(
"RmsPropKerasV2", lambda: rmsprop_keras_v2.RMSprop(0.001))
# TODO(shiningsun): consider adding the other v2 optimizers
optimizers_v2 = [
gradient_descent_optimizer_keras_v2_fn, adagrad_optimizer_keras_v2_fn
]
optimizers_v1_and_v2 = optimizers_v1 + optimizers_v2
graph_and_eager_modes = ["graph", "eager"]
# This function should be called in a test's `setUp` method with the
# maximum value needed in any test.
def set_virtual_cpus_to_at_least(num_virtual_cpus):
"""Create virtual CPU devices if they haven't yet been created."""
if num_virtual_cpus < 1:
raise ValueError("`num_virtual_cpus` must be at least 1 not %r" %
(num_virtual_cpus,))
physical_devices = config.list_physical_devices("CPU")
if not physical_devices:
raise RuntimeError("No CPUs found")
configs = config.get_logical_device_configuration(physical_devices[0])
if configs is None:
logical_devices = [
context.LogicalDeviceConfiguration() for _ in range(num_virtual_cpus)
]
config.set_logical_device_configuration(physical_devices[0],
logical_devices)
else:
if len(configs) < num_virtual_cpus:
raise RuntimeError("Already configured with %d < %d virtual CPUs" %
(len(configs), num_virtual_cpus))
def distributions_and_v1_optimizers():
"""A common set of combination with DistributionStrategies and Optimizers."""
return combinations.combine(
distribution=[
one_device_strategy,
mirrored_strategy_with_gpu_and_cpu,
mirrored_strategy_with_two_gpus,
],
optimizer_fn=optimizers_v1)
def distributions_and_v2_optimizers():
"""A common set of combination with DistributionStrategies and Optimizers."""
return combinations.combine(
distribution=[
one_device_strategy,
mirrored_strategy_with_gpu_and_cpu,
mirrored_strategy_with_two_gpus,
],
optimizer_fn=optimizers_v2)
def distributions_and_v1_and_v2_optimizers():
"""A common set of combination with DistributionStrategies and Optimizers."""
return combinations.combine(
distribution=[
one_device_strategy,
mirrored_strategy_with_gpu_and_cpu,
mirrored_strategy_with_two_gpus,
],
optimizer_fn=optimizers_v1_and_v2)
strategies_minus_tpu = [
default_strategy,
one_device_strategy,
one_device_strategy_gpu,
mirrored_strategy_with_gpu_and_cpu,
mirrored_strategy_with_two_gpus,
central_storage_strategy_with_gpu_and_cpu,
]
strategies_minus_default_and_tpu = [
one_device_strategy,
one_device_strategy_gpu,
mirrored_strategy_with_gpu_and_cpu,
mirrored_strategy_with_two_gpus,
]
tpu_strategies = [
tpu_strategy, # steps_per_run=2
tpu_strategy_one_step,
tpu_strategy_packed_var,
cloud_tpu_strategy,
]
all_strategies_minus_default = strategies_minus_default_and_tpu + tpu_strategies
all_strategies = strategies_minus_tpu + tpu_strategies
two_replica_strategies = [
mirrored_strategy_with_gpu_and_cpu,
mirrored_strategy_with_two_gpus,
multi_worker_mirrored_2x1_cpu,
multi_worker_mirrored_2x1_gpu,
tpu_strategy, # steps_per_run=2
tpu_strategy_one_step,
central_storage_strategy_with_gpu_and_cpu,
]
four_replica_strategies = [
multi_worker_mirrored_2x2_gpu,
multi_worker_mirrored_4x1_cpu,
]
# TODO(b/159831907): replace with two_replica_strategies after the tests using
# it work with MWMS.
multidevice_strategies = [
mirrored_strategy_with_gpu_and_cpu,
mirrored_strategy_with_two_gpus,
tpu_strategy, # steps_per_run=2
tpu_strategy_one_step
]
def strategy_minus_tpu_combinations():
return combinations.combine(
distribution=strategies_minus_tpu, mode=["graph", "eager"])
def tpu_strategy_combinations():
return combinations.combine(distribution=tpu_strategies, mode=["graph"])
def all_strategy_combinations():
return strategy_minus_tpu_combinations() + tpu_strategy_combinations()
def all_strategy_minus_default_and_tpu_combinations():
return combinations.combine(
distribution=[
one_device_strategy, one_device_strategy_gpu,
mirrored_strategy_with_gpu_and_cpu, mirrored_strategy_with_two_gpus
],
mode=["graph", "eager"])
def all_strategy_combinations_minus_default():
return (all_strategy_minus_default_and_tpu_combinations() +
tpu_strategy_combinations())
| [
"gardener@tensorflow.org"
] | gardener@tensorflow.org |
5c1aa0b9ba494cfce668a282f75867e3a9ea4d0d | f48a10a555b69ea078113a6227a76fd25839468d | /indexGenerator.py | 510ed818e920c328221d59bfd90686ec07276c13 | [] | no_license | ShakoHo/Hasal-Core | 88afea63a37bb3cc1d71a0d0e43af75b6dc900d7 | 0b9f5001af4435c90cf68e3d704f455efd7ad9e6 | refs/heads/master | 2021-01-11T17:27:55.520478 | 2017-01-26T09:38:43 | 2017-01-26T09:38:43 | 79,775,552 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,734 | py | import pp
import os
import time
import subprocess
import shutil
import cv2
import numpy
import argparse
from argparse import ArgumentDefaultsHelpFormatter
class IndexGenerator(object):
def __init__(self, input_video_fp, input_sample_dp, output_img_dp='tmp'):
self.result_data = {}
self.input_video_fp = input_video_fp
self.output_img_dp = os.path.join(os.getcwd(), output_img_dp)
self.generate_result_data(input_sample_dp)
self.sleep_time = 3
def generate_result_data(self, input_sample_dp):
for sample_fn in os.listdir(input_sample_dp):
sample_fp = os.path.join(input_sample_dp, sample_fn)
self.result_data[sample_fn] = {'dct': self.convert_to_dct(sample_fp),
'fp': sample_fp,
'result': []}
def convert_to_image(self, converting_fmt='bmp'):
if os.path.exists(self.output_img_dp):
shutil.rmtree(self.output_img_dp)
os.mkdir(self.output_img_dp)
output_img_fmt = self.output_img_dp + os.sep + 'img%05d.' + converting_fmt.lower()
cmd_list = ['ffmpeg', '-i', self.input_video_fp, output_img_fmt]
ffmpeg_process = subprocess.Popen(cmd_list)
return ffmpeg_process
def run(self):
image_list = []
background_process = self.convert_to_image()
start_counter = 0
time.sleep(self.sleep_time)
while True:
print "start_counter = [%s]" % str(start_counter)
print "background information!!!!"
print background_process.pid
print "background information!!!!"
f_list = os.listdir(self.output_img_dp)
if start_counter == len(f_list):
break
else:
start_counter = len(f_list)
tmp_list = list(set(f_list) - set(image_list))
self.compare_with_sample_img(tmp_list, self.result_data)
def convert_to_dct(self, image_fp, skip_status_bar_fraction=1.0):
dct_obj = None
try:
img_obj = cv2.imread(image_fp)
height, width, channel = img_obj.shape
height = int(height * skip_status_bar_fraction) - int(height * skip_status_bar_fraction) % 2
img_obj = img_obj[:height][:][:]
img_gray = numpy.zeros((height, width))
for channel in range(channel):
img_gray += img_obj[:, :, channel]
img_gray /= channel
img_dct = img_gray / 255.0
dct_obj = cv2.dct(img_dct)
except Exception as e:
print e
return dct_obj
def compare_dct(self, dct_obj_1, dct_obj_2, threshold=0.0003):
match = False
try:
row1, cols1 = dct_obj_1.shape
row2, cols2 = dct_obj_2.shape
if (row1 == row2) and (cols1 == cols2):
mismatch_rate = numpy.sum(numpy.absolute(numpy.subtract(dct_obj_1, dct_obj_2))) / (row1 * cols1)
if mismatch_rate <= threshold:
match = True
except Exception as e:
print e
return match
def compare_with_sample_img(self, current_fn_list, result_data, threads=4):
current_fn_number = len(current_fn_list)
index_list = []
jobs = []
ppservers = ()
job_server = pp.Server(threads, ppservers=ppservers)
if current_fn_number % threads == 0:
r_index = threads
else:
r_index = threads + 2
for i in range(r_index):
i_index = i * (current_fn_number / threads)
if i_index < len(current_fn_list):
index_list.append(i_index)
else:
index_list.append(current_fn_number - 1)
for index in range(len(index_list) - 1):
jobs.append(job_server.submit(self.paralle_compare_image, (current_fn_list[index_list[index]:index_list[index + 1]], result_data),
(self.convert_to_dct, self.compare_dct,), ("cv2", "numpy",)))
for job in jobs:
if job():
self.result_data[job()[0]]['result'].append(job()[1])
print "%s = %s" % (job()[0], self.result_data[job()[0]]['result'])
def paralle_compare_image(self, input_fn_list, input_sample_data):
for image_fn in input_fn_list:
image_fp = os.path.join(self.output_img_dp, image_fn)
dct_obj = self.convert_to_dct(image_fp)
for sample_fn in input_sample_data.keys():
if self.compare_dct(dct_obj, input_sample_data[sample_fn]['dct']):
return sample_fn, image_fn
return None
def main():
arg_parser = argparse.ArgumentParser(description='Image tool',
formatter_class=ArgumentDefaultsHelpFormatter)
arg_parser.add_argument('-i', '--input', action='store', dest='input_video_fp', default=None,
help='Specify the file path.', required=False)
arg_parser.add_argument('-o', '--outputdir', action='store', dest='output_img_dp', default=None,
help='Specify output image dir path.', required=False)
arg_parser.add_argument('-s', '--sample', action='store', dest='sample_img_dp', default=None,
help='Specify sample image dir path.', required=False)
args = arg_parser.parse_args()
input_video_fp = args.input_video_fp
output_img_dp = args.output_img_dp
sample_img_dp = args.sample_img_dp
run_obj = IndexGenerator(input_video_fp, sample_img_dp, output_img_dp)
run_obj.run()
if __name__ == '__main__':
main() | [
"!amshakoinmozillatw"
] | !amshakoinmozillatw |
c64bab437cf48005a6c77d4c91f47a8da2e52e5a | 6f8acb1d8aa9758cc5663725bdf230995e362e6f | /station_manager.py | 44b02f0fa7916fbfbf5b0dff5afd37eb72b584c7 | [] | no_license | liadkeller/PickupScheduler | 1a09f156f4820628b6c201d707806c2e5090d4fd | 73a5872bc88919858bd20d65a8463cd00f6f6dab | refs/heads/master | 2022-11-30T05:43:26.200062 | 2020-08-17T20:20:07 | 2020-08-17T20:20:07 | 287,816,824 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,383 | py | from csv_loadable import CSVLoadable
class StationManager:
def __init__(self):
self._stations_to_manager = SingleDirectionStationsManager('data/stations_to.csv')
self._stations_back_manager = SingleDirectionStationsManager('data/stations_back.csv')
def get_stations(self, is_sunday):
return self._stations_to_manager.get_stations() if is_sunday else self._stations_back_manager.get_stations()
def get_to(self, city):
return self._stations_to_manager._get(city)
def get_back(self, city):
return self._stations_back_manager._get(city)
class SingleDirectionStationsManager(CSVLoadable):
def __init__(self, csv_path):
self._csv_path = csv_path
self._cities = {}
self.load()
def get_stations(self):
return [station for city in self._cities.values() for station in city._stations]
def add(self, station_city, station_name, station_num):
if station_city in self._cities:
self._cities[station_city].add(station_name, station_num)
else:
self._cities[station_city] = City(station_city, station_name, station_num)
def _get(self, station_city):
try:
if ': ' in station_city:
station_city, station_name = station_city.split(': ')
return self._cities[station_city].get(station_name)
else:
return self._cities[station_city].get()
except KeyError:
return None # might be unnecessary, will lead to exception only if necessary
class City:
def __init__(self, city, name, num):
self.city = city
self._stations = []
self.add(name, num)
def add(self, name, num):
self._stations.append(Station(self.city, name, num))
def get(self, *args):
if len(args) > 0:
station_name, *_ = args
return self._get_station(station_name)
else:
return self._stations[0]
def _get_station(self, station_name):
for station in self._stations:
if station.name == station_name:
return station
raise KeyError(f"{station_name} not found.")
class Station:
def __init__(self, city, name, num):
self.city = city
self.name = name
self.num = num
| [
"liadkeller@gmail.com"
] | liadkeller@gmail.com |
3fc2dbb3e16a2861f18b339ee15022a85d67550f | 6b7bdeff133c37245698e450a01f75bb6a97c2a4 | /28.colordialog.py | 4a47f37fbfa2e3611337d803621896fb32a462cf | [] | no_license | HardPlant/PyQT | 9bb4f8e53417ede638bddd63571ade506785d2fb | fae0315620b3005ead451a170214c1334bac7fab | refs/heads/master | 2020-03-24T23:49:47.248763 | 2018-08-05T07:59:17 | 2018-08-05T07:59:17 | 143,155,998 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,104 | py | from PyQt5.QtWidgets import (QWidget, QPushButton, QFrame,
QColorDialog, QApplication)
from PyQt5.QtGui import QColor
import sys
class Example(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
col = QColor(0, 0, 0)
self.btn = QPushButton('Dialog', self)
self.btn.move(20, 20)
self.btn.clicked.connect(self.showDialog)
self.frm = QFrame(self)
self.frm.setStyleSheet("QWidget { background-color: %s }"
% col.name())
self.frm.setGeometry(130, 22, 100, 100)
self.setGeometry(300, 300, 250, 180)
self.setWindowTitle('Color dialog')
self.show()
def showDialog(self):
col = QColorDialog.getColor()
if col.isValid():
self.frm.setStyleSheet("QWidget { background-color: %s }"
% col.name())
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_()) | [
"abc7988se@naver.com"
] | abc7988se@naver.com |
097ab129b8cef6b5640d70cae31e5546bfea0ef8 | 74970207d288128116f524e391f3289751bd7c2c | /nodetype/ntcompiler.py | c582073448ea2c15b171120552211251a90b4920 | [] | no_license | locha0305/NodeType | c8b50ad093fe4da2c2ecb49816fcb73b888b23dd | f0f13d0a7f65671b006bec22bdeb59d7267591cb | refs/heads/main | 2023-02-03T19:02:37.829436 | 2020-12-19T08:30:08 | 2020-12-19T08:30:08 | 314,925,207 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,214 | py | import sys
import os
import essentials.datatype as dt
def init():
version = 0.1
author = "locha"
github = "https://github.com/locha0305"
email = "locha0305@naver.com"
print("NodeType v{}".format(version))
print("developed by {}".format(author))
print("github : {}".format(github))
print("email : {}".format(email))
class ntobject():
def __init__(self, route):
try:
self.desc = ""
self.node = {}
with open(route, 'r', encoding='UTF-8') as file:
file = file.readlines()
for lines in file:
self.desc += lines
self.readdesc()
except:
print("[nt] ERROR OCCURED WHILE LOADING NODETYPE FILE")
def readdesc(self):
try:
word = ""
paren = " {}:"
self.mother = None
is_paren_open = False
cursor = 0
is_string = False
while cursor < len(self.desc):
letter = self.desc[cursor]
if not(is_string):
if letter in paren:
if letter == " ":
word = ""
elif letter == "\n":
word = ""
elif letter == ":":
left_indi = word.strip('\n')
word = ""
jump = 1
right_indi_is_string = False
string_indi = False
while self.desc[cursor + jump] != "\n" and not(is_string):
if self.desc[cursor + jump] == '"' and not(is_string):
is_string = True
string_indi = True
if self.desc[cursor + jump] == '"' and is_string:
is_string = False
right_indi_is_string = True
string_indi = True
else:
pass
if not(string_indi):
word += self.desc[cursor + jump]
else:
string_indi = False
jump += 1
cursor += jump
right_indi = word.strip("\n")
if right_indi_is_string:
self.node[self.mother][left_indi] = right_indi
else:
if dt.is_int(right_indi):
self.node[self.mother][left_indi] = int(right_indi)
elif dt.is_float(right_indi):
self.node[self.mother][left_indi] = float(right_indi)
elif dt.is_bool(right_indi):
if right_indi == "True":
self.node[self.mother][left_indi] = True
else:
self.node[self.mother][left_indi] = False
word = ""
elif letter == "{":
self.mother = word.strip('\n')
self.node[self.mother] = {}
word = ""
is_paren_open = True
elif letter == "}":
self.mother = None
word = ""
is_paren_open = False
else:
word += letter
cursor += 1
except:
print("[nt] ERROR OCCURED WHILE COMPILING")
def attr(self, attr_name):
return self.node[attr_name]
| [
"noreply@github.com"
] | noreply@github.com |
8957329d0d8ff6f99b71d40d998f8fc8936ea358 | f355c9bec04b07072416a8c8a225f4bca9710eda | /Simple_Calculator.py | 4a0e986e090ff4a91314bc7921925cd28168db7f | [] | no_license | ramyadevisetty/St07 | 37be0ef873104f13621b914d75569aed44d815c4 | 981df79e572c15b4f4448a797800eeb1ac135bd7 | refs/heads/main | 2023-03-20T21:53:01.658961 | 2021-03-13T21:27:53 | 2021-03-13T21:27:53 | 347,480,105 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 434 | py | class Calculator:
def add(self,x, y):
"""This function adds two numbers"""
return x + y
def subtract(self,x, y):
"""This function subtracts two numbers"""
return x - y
def multiply(self,x, y):
"""This function multiplies two numbers"""
return x * y
def divide(self,x, y):
"""This function divides two numbers"""
return x / y
| [
"noreply@github.com"
] | noreply@github.com |
1883005efee9489e335963b21d681e133037c0b4 | f2da6fa4b7bc6bc7f1287bc3627aaab03a32d75b | /mindaffectBCI/decoder/scoreOutput.py | c2a5ab80b98341d7195c97f97a6cb5e60ede8e5a | [
"MIT"
] | permissive | wozu-dichter/pymindaffectBCI | 0591f6100bc5091a03934ffa4a0d4924f7fd324e | f9ba5391c70bf08a5f3c198471c91ebaaa1e51e9 | refs/heads/master | 2023-02-05T16:32:37.668123 | 2020-08-18T10:40:00 | 2020-08-18T10:40:00 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 12,783 | py | import numpy as np
from .utils import window_axis
#@function
def scoreOutput(Fe, Ye, dedup0=None, R=None, offset=0, outputscore='ip'):
'''
score each output given information on which stim-sequences corrospend to which inputs
Inputs:
Fe = (nM,nTrl,nSamp,nE) similarity score for each event type for each stimulus
Ye = (nTrl,nSamp,nY,nE) Indicator for which events occured for which outputs
nE=#event-types nY=#possible-outputs nEpoch=#stimulus events to process
R = (nM,nfilt,nE,tau) FWD-model (impulse response) for each of the event types, used to correct the
scores for correlated responses.
dedup0 = bool - remove duplicate copies of output O (used when cross validating calibration data)
Outputs:
Fy = (nM,nTrl,nEp,nY) similarity score for each input epoch for each output
Copyright (c) MindAffect B.V. 2018
'''
if Fe.size == 0:
Fy = np.zeros((Fe.shape[0], Fe.shape[1], Fe.shape[2], Ye.shape[-2]),dtype=np.float32)
return Fy
if Ye.ndim < 4: # ensure 4-d
Ye = Ye.reshape((1,)*(4-Ye.ndim)+Ye.shape)
if dedup0 is not None: # remove duplicate copies output=0
Ye = dedupY0(Ye)
# inner-product score
Fy = np.einsum("mTEe,TEYe->mTEY", Fe, Ye, dtype=Fe.dtype)
Fy = Fy.astype(Fe.dtype)
# add correction for other measures
if outputscore == 'sse':
YR = convYR(Ye,R,offset) # (nM,nTrl,nSamp,nY,nFilt)
# Apply the correction:
# SSE = (wX-Yr).^2
# = wX**2 - 2 wXYr + Yr**2
# = wX**2 = wX**2 - 2 Fe*Y + Yr**2
# = wX**2 - 2Fy + Yr**2
# take negative, so big (i.e. 0) is good, and drop constant over Y and divide by 2 ->
# -SSE = fY = Fe*Y - .5* Yr**2
Fy = Fy - np.sum(YR**2,-1) / 2
elif outputscore == 'corr': # correlation based scoring...
raise NotImplementedError()
elif not outputscore == 'ip':
raise NotImplementedError("output scoring with {} isn't supported".format(outputscore))
return Fy
def dedupY0(Y, zerodup=True, yfeatdim=True):
''' remove outputs which are duplicates of the first (objID==0) output
Inputs:
Y=(tr,ep,Y,e)
zerodup : bool
if True, then if mi is the duplicate, then zero the duplicate, i.e. of Y[...,mi,:]=0
else, we zero out ojID==0, i.e. Y[...,0,:]=0
Outputs:
Y=(tr,ep,Y,e) version of Y with duplicates of 1st row of Y set to 0'''
# record input shape so can get it back later
Yshape = Y.shape
#print("Y={}".format(Yshape))
Y = np.copy(Y) # copy so can modify, w/o killing orginal
# make the shape we want
if not yfeatdim: # hack in feature dim
Y = Y[..., np.newaxis]
if Y.shape[-2] == 1:
return Y.reshape(Yshape)
if Y.ndim == 3: # add trial dim if not there
Y = Y[np.newaxis, :, :, :]
for ti in range(Y.shape[0]):
# Note: horrible numpy hacks to make work & only for idx==0
sim = np.sum(np.equal(Y[ti, :, 0:1, :], Y[ti, :, 1:, :]), axis=(0, 2))/(Y.shape[1]*Y.shape[3])
#print("sim={}".format(sim))
mi = np.argmax(sim)
if sim[mi] > .95:
#print("{}) dup {}={}".format(ti,0,mi+1))
if zerodup: # zero out the duplicate of objId=0
Y[ti, :, mi+1, :] = 0
else: # zero out the objId==0 line
Y[ti, :, 0, :] = 0
# reshape back to input shape
Y = np.reshape(Y, Yshape)
return Y
def convWX(X,W):
''' apply spatial filter W to X '''
if W.ndim < 3:
W=W.reshape((1,)*(3-W.ndim)+W.shape)
if X.ndim < 3:
X=X.reshape((1,)*(3-X.ndim)+X.shape)
WX = np.einsum("TSd,mfd->mTSf",X,W)
return WX #(nM,nTrl,nSamp,nfilt)
def convYR(Y,R,offset=0):
''' compute the convolution of Y with R '''
if R is None:
return Y
if R.ndim < 4: # ensure 4-d
R = R.reshape((1,)*(4-R.ndim)+R.shape) # (nM,nfilt,nE,tau)
if Y.ndim < 4: # ensure 4-d
Y = Y.reshape((1,)*(4-Y.ndim)+Y.shape) # (nTr,nSamp,nY,nE)
#print("R={}".format(R.shape))
#print("Y={}".format(Y.shape))
# Compute the 'template' response
Yt = window_axis(Y, winsz=R.shape[-1], axis=-3) # (nTr,nSamp,tau,nY,nE)
#print("Yt={} (TStYe)".format(Yt.shape))
#print("R={} (mfet)".format(R[...,::-1].shape))
# TODO []: check treating filters correctly
# TODO []: check if need to time-reverse IRF - A: YES!
YtR = np.einsum("TStYe,mfet->mTSYf", Yt, R[...,::-1]) # (nM,nTr,nSamp,nY,nE)
#print("YtR={} (mTSYf)".format(YtR.shape))
# TODO []: correct edge effect correction, rather than zero-padding....
# zero pad to keep the output size
# TODO[]: pad with the *actual* values...
tmp = np.zeros(YtR.shape[:-3]+(Y.shape[-3],)+YtR.shape[-2:])
#print("tmp={}".format(tmp.shape))
tmp[..., R.shape[-1]-1-offset:YtR.shape[-3]+R.shape[-1]-1-offset, :, :] = YtR
YtR = tmp
#print("YtR={}".format(YtR.shape))
return YtR #(nM,nTrl,nSamp,nY,nfilt)
def convXYR(X,Y,W,R,offset):
WX=convWX(X,W) # (nTr,nSamp,nfilt)
YR=convYR(Y,R,offset) # (nTr,nSamp,nY,nfilt)
# Sum out filt dimesion
WXYR = np.sum(WX[...,np.newaxis,:]*YR,-1) #(nTr,nSamp,nY)
return WXYR,WX,YR
#@function
def testcases():
# Fe = [nE x nEpoch x nTrl x nM ] similarity score for each event type for each stimulus
# Ye = [nE x nY x nEpoch x nTrl ] Indicator for which events occured for which outputs
nE=2
nEpoch=100
nTrl=30
nY=20
nM=20
Fe=np.random.standard_normal((nM,nTrl,nEpoch,nE))
Ye=np.random.standard_normal((nTrl,nEpoch,nY,nE))
print("Fe={}".format(Fe.shape))
print("Ye={}".format(Ye.shape))
Fy=scoreOutput(Fe,Ye) # (nM,nTrl,nEp,nY)
print("Fy={}".format(Fy.shape))
import matplotlib.pyplot as plt
sFy=np.cumsum(Fy,axis=-2)
plt.clf();plt.plot(sFy[0,0,:,:]);plt.xlabel('epoch');plt.ylabel('output');plt.show()
# more complex example with actual signal/noise
from utils import testSignal
from scoreOutput import scoreOutput, plot_outputscore, convWX, convYR, convXYR
from scoreStimulus import scoreStimulus
from decodingSupervised import decodingSupervised
from normalizeOutputScores import normalizeOutputScores
import numpy as np
irf=(1,1,-1,-1,0,0,0,0,0,0)
X,Y,st,W,R = testSignal(nTrl=1,nSamp=1000,d=1,nE=1,nY=10,isi=2,irf=irf,noise2signal=0)
plot_outputscore(X[0,...],Y[0,:,0:3,:],W,R)
# add a correlated output
Y[:,:,1,:]=Y[:,:,0,:]*.5
plot_outputscore(X[0,...],Y[0,:,0:3,:],W,R)
def datasettest():
# N.B. imports in function to avoid import loop..
from datasets import get_dataset
from model_fitting import MultiCCA, BwdLinearRegression, FwdLinearRegression
from analyse_datasets import debug_test_dataset
from scoreOutput import plot_outputscore
from decodingCurveSupervised import decodingCurveSupervised
if True:
tau_ms=300
offset_ms=0
rank=1
evtlabs=None
l,f,_=get_dataset('twofinger')
oX,oY,coords=l(f[0])
else:
tau_ms=20
offset_ms=0
rank=8
evtlabs=None
l,f,_=get_dataset('mark_EMG')
X,Y,coords=l(f[1],whiten=True,stopband=((0,10),(45,55),(200,-1)),filterbank=((10,20),(20,45),(55,95),(105,200)))
oX=X.copy(); oY=Y.copy()
X=oX.copy()
Y=oY.copy()
# test with reduced number classes?
nY=8
Y=Y[:,:,:nY+1,:nY]
plt.close('all')
debug_test_dataset(X, Y, coords, tau_ms=tau_ms, offset_ms=offset_ms, evtlabs=evtlabs, rank=8, outputscore='ip', model='cca')
fs = coords[1]['fs']
tau = min(X.shape[-2],int(tau_ms*fs/1000))
offset=int(offset_ms*fs/1000)
cca = MultiCCA(tau=tau, offset=offset, rank=rank, evtlabs=evtlabs)
cca.fit(X,Y)
Fy = cca.predict(X, Y, dedup0=True)
(_) = decodingCurveSupervised(Fy)
W=cca.W_
R=cca.R_
b=cca.b_
plot_outputscore(X[0,...],Y[0,...],W,R)
def plot_Fy(Fy,cumsum=True):
import matplotlib.pyplot as plt
import numpy as np
'''plot the output score function'''
if cumsum:
Fy = np.cumsum(Fy.copy(),-2)
plt.clf()
nPlts=min(25,Fy.shape[0])
if Fy.shape[0]/2 > nPlts:
tis = np.linspace(0,Fy.shape[0]/2-1,nPlts,dtype=int)
else:
tis = np.arange(0,nPlts,dtype=int)
ncols = int(np.ceil(np.sqrt(nPlts)))
nrows = int(np.ceil(nPlts/ncols))
#fig, plts = plt.subplots(nrows, ncols, sharex='all', sharey='all', squeeze=False)
axploti = ncols*(nrows-1)
ax = plt.subplot(nrows,ncols,axploti+1)
for ci,ti in enumerate(tis):
# make the axis
if ci==axploti: # common axis plot
pl = ax
else: # normal plot
pl = plt.subplot(nrows,ncols,ci+1,sharex=ax, sharey=ax) # share limits
plt.tick_params(labelbottom=False,labelleft=False) # no labels
#pl = plts[ci//ncols, ci%ncols]
pl.plot(Fy[ti,:,1:])
pl.plot(Fy[ti,:,0:1],'k',linewidth=5)
pl.set_title("{}".format(ti))
pl.grid(True)
pl.legend(range(Fy.shape[-1]-1))
plt.suptitle('cumsum Fy')
def plot_Fycomparsion(Fy,Fys,ti=0):
import matplotlib.pyplot as plt
import numpy as np
from normalizeOutputScores import normalizeOutputScores
from decodingSupervised import decodingSupervised
plt.subplot(231); plt.cla(); plt.plot(Fy[ti,:,:],label='Fy'); plt.title('Inner-product');
# Ptgt every sample, accum from trial start
ssFy,varsFy,N,nEp,nY=normalizeOutputScores(Fy, minDecisLen=-1, filtLen=5)
plt.subplot(232); plt.cla(); plt.plot(np.cumsum(Fy[ti,:,:],-2),label='sFy');plt.plot(varsFy[ti,1:],'k-',linewidth=5,label='scale'); plt.title('cumsum(Inner-product)'); plt.title('cumsum(Inner-product)');
Yest,Perr,Ptgt,decismdl,decisEp=decodingSupervised(Fy, minDecisLen=-1, marginalizemodels=False, filtLen=5)
plt.subplot(233); plt.cla(); plt.plot(Ptgt[ti,:,:],label='Ptgt'); plt.title('Ptgt');
#X2 = np.sum(X**2,axis=-1,keepdims=True); Fys=Fys-X2/2 # include norm of X to be sure.
plt.subplot(234); plt.cla(); plt.plot(Fys[ti,:,:],label='sFy'); plt.title('-SSE');
# Ptgt every sample, accum from trial start
ssFy,varsFy,N,nEp,nY=normalizeOutputScores(Fys, minDecisLen=-1, filtLen=5)
Yest,Perr,Ptgt,decismdl,decisEp=decodingSupervised(Fys, minDecisLen=-1, marginalizemodels=False, filtLen=5)
plt.subplot(235); plt.cla(); plt.plot(np.cumsum(Fys[ti,:,:],-2),label='sFy'); plt.plot(varsFy[ti,1:],'k-',linewidth=5,label='scale'); plt.title('cumsum(-SSE)');
plt.subplot(236); plt.cla(); plt.plot(Ptgt[ti,:,:],label='Ptgt'); plt.title('Ptgt(-SSE)');
def plot_outputscore(X,Y,W=None,R=None,offset=0):
import matplotlib.pyplot as plt
import numpy as np
from model_fitting import MultiCCA
from scoreStimulus import scoreStimulus
from scoreOutput import scoreOutput
from decodingCurveSupervised import decodingCurveSupervised
''' plot the factored model to visualize the fit and the scoring functions'''
if W is None:
cca = MultiCCA(tau=R,evtlabs=None)
if X.ndim < 3: # BODGE:
cca.fit(X[np.newaxis,...], Y[np.newaxis,...])
else:
cca.fit(X, Y)
W=cca.W_
R=cca.R_
Fy=cca.predict(X,Y)
(_) = decodingCurveSupervised(Fy)
WXYR,WX,YR=convXYR(X,Y,W,R,offset)
plt.clf();
plt.subplot(511);plt.plot(np.squeeze(WX));plt.grid();plt.title("WX");
plt.subplot(512);plt.plot(Y[...,0],label='Y');plt.plot(np.squeeze(R).T,'k',label='R',linewidth=5);plt.grid();plt.title("Y");
plt.subplot(513);plt.plot(np.squeeze(YR[...,0]));plt.grid();plt.title("YR");
plt.subplot(5,3,10);plt.plot(np.squeeze(WXYR));plt.grid();plt.title("WXYR")
plt.subplot(5,3,11);plt.plot(np.squeeze(np.cumsum(WXYR,-2)));plt.grid();plt.title("cumsum(WXYR)")
err = WX[...,np.newaxis,:]-YR
sse = np.sum(err**2,-1)
plt.subplot(5,3,12);plt.plot(np.squeeze(np.cumsum(-sse,-2)));plt.grid();plt.title("cumsum(sse)")
cor = np.cumsum(WXYR,-2)/np.sqrt(np.cumsum(np.sum(YR**2,-1),-2))
plt.subplot(5,3,12);plt.cla();plt.plot(np.squeeze(np.cumsum(-sse,-2)));plt.grid();plt.title("corr")
Fe = scoreStimulus(X,W,R)
Fy = scoreOutput(Fe,Y,R=R,outputscore='ip')
Fys = scoreOutput(Fe,Y,R=R,outputscore='sse')
plt.subplot(5,3,13);plt.plot(np.squeeze(Fy));plt.grid();plt.title("Fy")
plt.subplot(5,3,14);plt.plot(np.squeeze(np.cumsum(Fy,-2)));plt.grid();plt.title("cumsum(Fy)")
plt.subplot(5,3,15);plt.plot(np.squeeze(np.cumsum(2*Fys,-2)));plt.grid();plt.title("cumsum(Fy(sse))")
if __name__=="__main__":
testcases()
| [
"jadref@gmail.com"
] | jadref@gmail.com |
fbbd0793087768772b3bee556b1c90f6bc381436 | f54f92b59fec69618d5dac838a11c70d1f8b1dc3 | /shiboqi_main.py | c236d2e9178091d00bff85d3b6744ef144bac00e | [] | no_license | TobinZuo/Accelerometer-Show | 80e52eff9d89992278e2f0d6654371abae16a8b8 | cd9b9b829e8912f7cb7b24d0eb3c2035dedfb466 | refs/heads/main | 2023-02-26T03:21:21.347404 | 2021-01-25T09:27:48 | 2021-01-25T09:27:48 | 332,691,345 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 6,315 | py | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'shiboqi_main.ui'
#
# Created by: PyQt5 UI code generator 5.13.0
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(1151, 656)
MainWindow.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(940, 10, 61, 20))
self.label.setObjectName("label")
self.comboBox = QtWidgets.QComboBox(self.centralwidget)
self.comboBox.setGeometry(QtCore.QRect(1010, 10, 71, 20))
self.comboBox.setObjectName("comboBox")
self.comboBox_2 = QtWidgets.QComboBox(self.centralwidget)
self.comboBox_2.setGeometry(QtCore.QRect(1010, 40, 71, 20))
self.comboBox_2.setObjectName("comboBox_2")
self.label_3 = QtWidgets.QLabel(self.centralwidget)
self.label_3.setGeometry(QtCore.QRect(940, 70, 51, 20))
self.label_3.setObjectName("label_3")
self.comboBox_3 = QtWidgets.QComboBox(self.centralwidget)
self.comboBox_3.setGeometry(QtCore.QRect(1010, 70, 71, 21))
self.comboBox_3.setObjectName("comboBox_3")
self.label_4 = QtWidgets.QLabel(self.centralwidget)
self.label_4.setGeometry(QtCore.QRect(940, 100, 54, 16))
self.label_4.setObjectName("label_4")
self.comboBox_4 = QtWidgets.QComboBox(self.centralwidget)
self.comboBox_4.setGeometry(QtCore.QRect(1010, 100, 71, 21))
self.comboBox_4.setObjectName("comboBox_4")
self.label_5 = QtWidgets.QLabel(self.centralwidget)
self.label_5.setGeometry(QtCore.QRect(940, 130, 54, 16))
self.label_5.setObjectName("label_5")
self.comboBox_5 = QtWidgets.QComboBox(self.centralwidget)
self.comboBox_5.setGeometry(QtCore.QRect(1010, 130, 71, 21))
self.comboBox_5.setObjectName("comboBox_5")
self.label_2 = QtWidgets.QLabel(self.centralwidget)
self.label_2.setGeometry(QtCore.QRect(940, 40, 54, 16))
self.label_2.setObjectName("label_2")
self.pushButton = QtWidgets.QPushButton(self.centralwidget)
self.pushButton.setGeometry(QtCore.QRect(940, 170, 81, 23))
self.pushButton.setObjectName("pushButton")
self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_2.setGeometry(QtCore.QRect(1050, 170, 81, 23))
self.pushButton_2.setObjectName("pushButton_2")
self.graphicsView = QtWidgets.QGraphicsView(self.centralwidget)
self.graphicsView.setGeometry(QtCore.QRect(0, 1, 921, 541))
self.graphicsView.setObjectName("graphicsView")
self.pushButton_6 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_6.setGeometry(QtCore.QRect(1000, 410, 81, 23))
self.pushButton_6.setObjectName("pushButton_6")
self.pushButton_7 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_7.setGeometry(QtCore.QRect(1000, 460, 81, 23))
self.pushButton_7.setObjectName("pushButton_7")
self.lineEdit_3 = QtWidgets.QLineEdit(self.centralwidget)
self.lineEdit_3.setGeometry(QtCore.QRect(990, 240, 71, 21))
self.lineEdit_3.setObjectName("lineEdit_3")
self.label_14 = QtWidgets.QLabel(self.centralwidget)
self.label_14.setGeometry(QtCore.QRect(930, 240, 61, 21))
self.label_14.setObjectName("label_14")
self.pushButton_8 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_8.setGeometry(QtCore.QRect(1070, 240, 81, 23))
self.pushButton_8.setObjectName("pushButton_8")
self.pushButton_9 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_9.setGeometry(QtCore.QRect(1000, 360, 81, 23))
self.pushButton_9.setObjectName("pushButton_9")
self.lineEdit_4 = QtWidgets.QLineEdit(self.centralwidget)
self.lineEdit_4.setGeometry(QtCore.QRect(990, 300, 121, 21))
self.lineEdit_4.setObjectName("lineEdit_4")
self.label_15 = QtWidgets.QLabel(self.centralwidget)
self.label_15.setGeometry(QtCore.QRect(930, 300, 61, 21))
self.label_15.setObjectName("label_15")
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 1151, 18))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.label.setBuddy(self.comboBox)
self.label_3.setBuddy(self.comboBox_3)
self.label_4.setBuddy(self.comboBox_4)
self.label_5.setBuddy(self.comboBox_5)
self.label_2.setBuddy(self.comboBox_2)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.label.setText(_translate("MainWindow", "串口号:"))
self.label_3.setText(_translate("MainWindow", "数据位:"))
self.label_4.setText(_translate("MainWindow", "停止位:"))
self.label_5.setText(_translate("MainWindow", "校验位:"))
self.label_2.setText(_translate("MainWindow", "波特率:"))
self.pushButton.setText(_translate("MainWindow", "打开串口连接"))
self.pushButton_2.setText(_translate("MainWindow", "刷新串口设备"))
self.pushButton_6.setText(_translate("MainWindow", "开始记录"))
self.pushButton_7.setText(_translate("MainWindow", "保存"))
self.label_14.setText(_translate("MainWindow", "窗口大小"))
self.pushButton_8.setText(_translate("MainWindow", "减均值"))
self.pushButton_9.setText(_translate("MainWindow", "固定边界"))
self.label_15.setText(_translate("MainWindow", "保存文件名"))
| [
"zuotongbin@gmail.com"
] | zuotongbin@gmail.com |
122930506c28e46e8f3824cd3c587a495af5f7d0 | c52a0cc3980a677c9ee800b0b6be1e2b3cc34a8c | /food/migrations/0002_food_image.py | a87ae57e521015e4b5a5951ccd3ab8e5906e26a1 | [] | no_license | ilonajulczuk/pierogi | 91669d0b1f2c4f2f7866dbb93c1925f97a1142f9 | c1aa7042b6747186cde09341da36cebba4dcc059 | refs/heads/master | 2021-01-10T10:17:53.071708 | 2015-11-28T16:54:28 | 2015-11-28T16:54:31 | 47,017,910 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 416 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('food', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='food',
name='image',
field=models.ImageField(max_length=254, upload_to='photos', null=True),
),
]
| [
"justyna.janczyszyn@10clouds.com"
] | justyna.janczyszyn@10clouds.com |
a2fc957ea31628458121317a6f5f24ed91d611ed | dfe4e50a8b30a992df1183d82ffc4c16484f70fb | /deep_learning/deep-learning-from-scratch-8ca57dea36b05ffe6bc4486069e5ffb9bd444285/ch01/image_practice.py | e5a2a2f9dcdb3b2f75d3706d9869f11dd15ab78a | [
"MIT"
] | permissive | TaeSanKIM/python | 9ce68cbfebc14afd9c33ccfc4ce2c96635eb607a | e90e811787332a6b2157c2387ec59070ddaa4b38 | refs/heads/master | 2021-04-27T00:11:37.910593 | 2018-03-04T07:43:35 | 2018-03-04T07:43:35 | 123,766,897 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 500 | py | import numpy as np
import sys, os
sys.path.append(os.pardir)
from dataset.mnist import load_mnist
from PIL import image
def img_show(img):
pil_img = Image.fromarray(np.unit8(img))
pil_img.show()
(x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False)
img = x_train[0]
label = t_train[0]
print(label)
print(img.shape)
img = img.reshape(28, 28)
print(img,shape)
img_show(img)
#print(x_train.shape)
#print(t_train.shape)
#print(x_test.shape)
#print(t_test.shape)
| [
"ktmountain@naver.com"
] | ktmountain@naver.com |
99acaf4d8d8b122af0d5303d0ea2bdd629a4d952 | 56c02cf652971b366c05b7a77db0292467cf331e | /src/kegg.py | fdb2c12f5549e5da1c3bc2d8f9a252f3c98f4749 | [] | no_license | mlorthiois/gene-annotation-summary | d3d44898c86b214ab15e245689c54cfe0f15d6bf | 47ea082dbdc8af27fa4d8c67016f3344d9316954 | refs/heads/master | 2022-04-11T12:19:38.082993 | 2020-03-04T19:42:54 | 2020-03-04T19:42:54 | 236,598,278 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,641 | py | import requests
import re
def kegg(ncbi, output_file):
"""Find pathway in the Go database from NCBI Gene ID.
Find KEGG pathways of a protein (from NCBI Gene ID),
and write them in a column of a HTML table.
Args:
ncbi (str) : NCBI Gene ID. For example : '5888'.
output_file (open file) : file to write in.
"""
print('\tKegg ...')
r_url = "http://rest.kegg.jp/conv/genes/ncbi-geneid:{}".format(ncbi)
r = requests.get(r_url)
if len(r.text) != 1:
# Extract KEGG ID from request
kegg_id = r.text.rstrip().split('\t')[1]
organism_id = kegg_id[:3]
url = "https://www.genome.jp/dbget-bin/www_bget?{}".format(kegg_id)
# Write KEGG ID in the HTML table
output_file.write(f"<td><a href={url}>{kegg_id}</a></td>")
# Make request and extract KEGG pathways in a list
r_url = "http://rest.kegg.jp/get/{}".format(kegg_id)
r = requests.get(r_url)
pathway_list = re.findall("("+organism_id+"\d+\s{2}.*)", r.text)
# Write KEGG Pathways in the HTML table
output_file.write('<td><div class="scroll">')
if pathway_list != []:
for path in pathway_list:
path_id = path.split(' ')[0]
path_name = path.split(' ')[1]
link = f"https://www.genome.jp/dbget-bin/www_bget?{path_id}"
output_file.write(f"<a href={link}>{path_id}</a>: {path_name}<br>")
else:
output_file.write('<i>No data found</i>')
output_file.write('</div></td>')
else:
output_file.write('<td><i>No data found</i></td>'*2)
| [
"matthias.lts@gmail.com"
] | matthias.lts@gmail.com |
9e131733e27c3f1393787522057b146c61ee6e8e | 152cd8c6a11243f47287cc9e19feacc0e803778f | /meiduo_mall/apps/areas/migrations/0001_initial.py | 0f0e3632eb0eacc2980daebdbda86bc384f88f0e | [
"MIT"
] | permissive | MarioKarting/Django_meiduo_project | dfc21ab75996f4f58d5050d9e3bc173eaa625d13 | ef06e70b1ddb6709983ebb644452c980afc29000 | refs/heads/master | 2020-07-03T23:06:01.877057 | 2019-08-13T07:13:04 | 2019-08-13T07:13:04 | 202,080,964 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 968 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11.11 on 2019-06-07 02:18
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Area',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=20, verbose_name='名称')),
('parent', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='subs', to='areas.Area', verbose_name='上级行政区划')),
],
options={
'verbose_name': '省市区',
'verbose_name_plural': '省市区',
'db_table': 'tb_areas',
},
),
]
| [
"429325776@qq.com"
] | 429325776@qq.com |
f3fcc8e1a040d4f9f545a1d0c68f148c8eb6bebc | 202ce29f510e451d407e9aef34daf8e21eefcd3d | /data-mining-assignments/hw5/hw5.py | 6e5561f5244ac2876bea097f339f6d466c035c36 | [] | no_license | DragonZ/college-work | 754e9a6d077794cc7858bdac668c9b2264b79ab6 | d4100eef4f0079cf6340e14952ace6b558ffe595 | refs/heads/master | 2021-01-10T17:07:42.786312 | 2016-03-09T00:58:59 | 2016-03-09T00:58:59 | 53,442,560 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 10,633 | py | import collections, csv, random, sys, re, time, ast, pickle, operator
from scipy import stats
def extract_top_2000_words(file_table):
allwords = []
# here traversal the first level
for row in file_table:
allwords += row
top2000_words_withcount = collections.Counter(allwords).most_common()[100:2100]
top2000words_list = []
for row in top2000_words_withcount:
top2000words_list.append(row[0])
return top2000words_list
def csv_to_words_array():
# each row in file table will be one record in the original csv
file_table = []
with open('stars_data.csv', 'rb') as file_dp:
file_reader = csv.DictReader(file_dp)
for row in file_reader:
strip_reviews = re.sub('[^\w ]', '', row['text'].lower())
file_table.append(strip_reviews.split())
return file_table
def generate_feature_matrix(top2000words_features):
feature_matrix = []
with open('stars_data.csv', 'rb') as file_dp:
file_reader = csv.DictReader(file_dp)
for row in file_reader:
strip_reviews = re.sub('[^\w ]', '', row['text'].lower())
current_review_word_list = strip_reviews.split()
current_attributes = []
for x in range(0, 2000):
if top2000words_features[x] in current_review_word_list:
current_attributes.append(1)
else:
current_attributes.append(0)
# label order = ispositive, isnegative
if row['stars'] == '5':
current_attributes.append(1)
current_attributes.append(0)
if row['stars'] == '1':
current_attributes.append(0)
current_attributes.append(1)
feature_matrix.append(current_attributes)
return feature_matrix
def help_feature_present_table(word_feature_list, target_matrix):
feature_present_table = {}
for i in range(0, 2002):
current_feature_present_in = []
for j in range(0, 5000):
if target_matrix[j][i] == 1:
current_feature_present_in.append(j)
if i < 2000:
feature_present_table[word_feature_list[i]] = current_feature_present_in
elif i == 2000:
feature_present_table['isPositive'] = current_feature_present_in
else:
feature_present_table['isNegative'] = current_feature_present_in
return feature_present_table
def gen_size1_itemset(feature_present_table, sthreadhold):
itemset = {}
for key in feature_present_table:
if len(feature_present_table[key])/5000.0 >= sthreadhold:
itemset[key] = len(feature_present_table[key])
return itemset
def gen_size2_itemset(size1_itemset, feature_present_table, sthreadhold):
itemset = {}
ordered_key_pairs = []
num_size1_itemset = len(size1_itemset)
keys_size1_itemset = size1_itemset.keys()
for i in range(num_size1_itemset):
for j in range(i+1, num_size1_itemset):
self_join_count = len(collections.Counter(feature_present_table[keys_size1_itemset[i]]) & \
collections.Counter(feature_present_table[keys_size1_itemset[j]]))
if self_join_count/5000.0 >= sthreadhold:
key_pair = []
itemset[keys_size1_itemset[i] + ' ' + keys_size1_itemset[j]] = self_join_count
key_pair.append(keys_size1_itemset[i])
key_pair.append(keys_size1_itemset[j])
ordered_key_pairs.append(key_pair)
return itemset, ordered_key_pairs
def qualified_size3_key_combination(key_pairs_from_size2):
num_pairs_from_size2 = len(key_pairs_from_size2)
qualified_pairs = []
for i in range(num_pairs_from_size2):
for j in range(i+1, num_pairs_from_size2):
if key_pairs_from_size2[i][0] == key_pairs_from_size2[j][0] and \
[key_pairs_from_size2[i][1], key_pairs_from_size2[j][1]] in key_pairs_from_size2:
key_pair = []
key_pair.append(key_pairs_from_size2[i][0])
key_pair.append(key_pairs_from_size2[i][1])
key_pair.append(key_pairs_from_size2[j][1])
qualified_pairs.append(key_pair)
if key_pairs_from_size2[i][0] != key_pairs_from_size2[j][0]:
break
return qualified_pairs
def gen_size3_itemset(key_combinaton_candidate, feature_present_table, sthreadhold):
itemset = {}
for key_combinaton in key_combinaton_candidate:
candidate_count = len(collections.Counter(feature_present_table[key_combinaton[0]]) & \
collections.Counter(feature_present_table[key_combinaton[1]]) & \
collections.Counter(feature_present_table[key_combinaton[2]]))
if candidate_count/5000.0 >= sthreadhold:
itemset[key_combinaton[0] + ' ' + key_combinaton[1] + ' ' + key_combinaton[2]] = candidate_count
return itemset
def gen_size2_rules(size1_itemset, size2_itemset, cthreadhold):
rule_set = {}
for key in size2_itemset:
key_list = key.split()
# compute first -> second
first_to_second = float(size2_itemset[key])/float(size1_itemset[key_list[0]])
if first_to_second >= cthreadhold:
current_rule_detail = key
rule_set[current_rule_detail] = first_to_second
# compute second -> first
second_to_first = float(size2_itemset[key])/float(size1_itemset[key_list[1]])
if second_to_first >= cthreadhold:
current_rule_detail = key_list[1] + ' ' + key_list[0]
rule_set[current_rule_detail] = second_to_first
return rule_set
def gen_size3_rules(size2_itemset, size3_itemset, cthreadhold):
rule_set = {}
for key in size3_itemset:
key_list = key.split()
# 0, 1, 2
# compute 0, 1 -> 2
p01_to_2 = float(size3_itemset[key])/float(size2_itemset[key_list[0]+' '+key_list[1]])
if p01_to_2 >= cthreadhold:
rule_set[key_list[0]+' '+key_list[1]+' '+key_list[2]] = p01_to_2
# compute 0, 2 -> 1
p02_to_1 = float(size3_itemset[key])/float(size2_itemset[key_list[0]+' '+key_list[2]])
if p02_to_1 >= cthreadhold:
rule_set[key_list[0]+' '+key_list[2]+' '+key_list[1]] = p02_to_1
# compute 1, 2 -> 0
p12_to_0 = float(size3_itemset[key])/float(size2_itemset[key_list[1]+' '+key_list[2]])
if p12_to_0 >= cthreadhold:
rule_set[key_list[1]+' '+key_list[2]+' '+key_list[0]] = p12_to_0
return rule_set
def gen_size2_rules_chi(size1_itemset, size2_itemset, chithreadhold):
rule_set = {}
for item in size2_itemset:
A = item.split()[0]
B = item.split()[1]
a = size2_itemset[item]
b = size1_itemset[A]-size2_itemset[item]
c = size1_itemset[B]-size2_itemset[item]
d = 5000 - a - b - c
target_matrix = [[a, b], [c ,d]]
chi_score, p, trash1, trash2 = stats.chi2_contingency(target_matrix)
if p < chithreadhold:
rule_set[item] = [chi_score, p]
# rule_set[B + ' ' + A] = [chi_score, p]
return rule_set
def gen_size3_rules_chi(size1_itemset, size2_itemset, size3_itemset, chithreadhold):
rule_set = {}
for item in size3_itemset:
A = item.split()[0]
B = item.split()[1]
C = item.split()[2]
AB = A + ' ' + B
AC = A + ' ' + C
BC = B + ' ' + C
# AB -> C
a = size3_itemset[item]
b = size2_itemset[AB] - a
c = size1_itemset[C] - a
d = 5000 - a - b - c
target_matrix = [[a, b], [c ,d]]
chi_score, p, trash1, trash2 = stats.chi2_contingency(target_matrix)
if p < chithreadhold:
rule_set[item] = [chi_score, p]
# AC -> B
a = size3_itemset[item]
b = size2_itemset[AC] - a
c = size1_itemset[B] - a
d = 5000 - a - b - c
target_matrix = [[a, b], [c ,d]]
chi_score, p, trash1, trash2 = stats.chi2_contingency(target_matrix)
if p < chithreadhold:
rule_set[AC + ' ' + B] = [chi_score, p]
# BC -> A
a = size3_itemset[item]
b = size2_itemset[BC] - a
c = size1_itemset[A] - a
d = 5000 - a - b - c
target_matrix = [[a, b], [c ,d]]
chi_score, p, trash1, trash2 = stats.chi2_contingency(target_matrix)
if p < chithreadhold:
rule_set[BC + ' ' + A] = [chi_score, p]
return rule_set
def find_size2_support(keys, size2_itemset):
cand1 = keys[0] + ' ' + keys[1]
cand2 = keys[1] + ' ' + keys[0]
if cand1 in size2_itemset:
return size2_itemset[cand1]
else:
return size2_itemset[cand2]
def find_size3_support(keys, size3_itemset):
cand1 = keys[0] + ' ' + keys[1] + ' ' + keys[2]
cand2 = keys[0] + ' ' + keys[2] + ' ' + keys[1]
cand3 = keys[2] + ' ' + keys[0] + ' ' + keys[1]
if cand1 in size3_itemset:
return size3_itemset[cand1]
elif cand2 in size3_itemset:
return size3_itemset[cand2]
else:
return size3_itemset[cand3]
support_threadhold = 0.03
confidence_threadhold = 0.25
chi_threadhold = 0.05
ordered_2000_feature_list = extract_top_2000_words(csv_to_words_array())
target_feature_matrix = generate_feature_matrix(ordered_2000_feature_list)
feature_present_table = help_feature_present_table(ordered_2000_feature_list, target_feature_matrix)
size1_qualified_itemset = gen_size1_itemset(feature_present_table, support_threadhold)
size2_qualified_itemset, key_pairs_for_size3 = gen_size2_itemset(size1_qualified_itemset, feature_present_table, support_threadhold)
possible_size3_key_combination = qualified_size3_key_combination(key_pairs_for_size3)
size3_qualified_itemset = gen_size3_itemset(possible_size3_key_combination, feature_present_table, support_threadhold)
size2_rules = gen_size2_rules(size1_qualified_itemset, size2_qualified_itemset, confidence_threadhold)
size3_rules = gen_size3_rules(size2_qualified_itemset, size3_qualified_itemset, confidence_threadhold)
# chi_threadhold = chi_threadhold/(len(size2_qualified_itemset)*2 + len(size3_qualified_itemset)*3)
# size2_rules = gen_size2_rules_chi(size1_qualified_itemset, size2_qualified_itemset, chi_threadhold)
# size3_rules = gen_size3_rules_chi(size1_qualified_itemset, size2_qualified_itemset, size3_qualified_itemset, chi_threadhold)
all_rules = size2_rules.items() + size3_rules.items()
sorted_rules = sorted(all_rules, key=operator.itemgetter(1), reverse=True)
# print len(size2_rules)
# print len(size3_rules)
# all_rules = []
# for rule in size2_rules:
# tr = [rule]
# tr += size2_rules[rule]
# all_rules.append(tr)
# for rule in size3_rules:
# tr = [rule]
# tr += size3_rules[rule]
# all_rules.append(tr)
# sorted_rules = sorted(all_rules, key=operator.itemgetter(2))
top30 = 1
for line in sorted_rules:
if top30 > 30:
break
print line[0]
# keys = line[0].split()
# key_size = len(keys)
# rule_string = 't'
# support = 0.0
# if key_size == 2:
# rule_string = 'IF ' + keys[0] + ' THEN ' + keys[1]
# support = find_size2_support(keys, size2_qualified_itemset)/5000.0
# if key_size == 3:
# rule_string = 'IF ' + keys[0] + ' AND ' + keys[1] + ' THEN ' + keys[2]
# support = find_size3_support(keys, size3_qualified_itemset)/5000.0
# print top30, ' & ', rule_string, ' & $', line[2], '$ & $', line[1], '$ & $', support, '$ \\\\'
top30 += 1
# pickle.dump(target_feature_matrix, open("target_matrix.db", "wb"))
# target_feature_matrix = pickle.load(open("target_matrix.db", "rb"))
| [
"lzhen@Longs-MBP.attlocal.net"
] | lzhen@Longs-MBP.attlocal.net |
8655a86f4d99e4dc3d21d0a740e200eda1cd4269 | 66643f48950453dd1cc408a763360db2be9942f6 | /tests/validation/test_executable_definitions.py | 4a21c63b65e4397cb38e995fa01b69400852109f | [
"MIT"
] | permissive | khasbilegt/graphql-core | ac958b5a68c27acd0c7f96429deeca7f7f8736b3 | fc76d01a2a134ba2cebd863bf48773fd44c2645b | refs/heads/main | 2023-08-05T06:03:56.299244 | 2021-09-19T10:31:30 | 2021-09-19T10:31:30 | 408,735,141 | 1 | 0 | MIT | 2021-09-21T08:00:36 | 2021-09-21T08:00:35 | null | UTF-8 | Python | false | false | 2,234 | py | from functools import partial
from graphql.validation import ExecutableDefinitionsRule
from .harness import assert_validation_errors
assert_errors = partial(assert_validation_errors, ExecutableDefinitionsRule)
assert_valid = partial(assert_errors, errors=[])
def describe_validate_executable_definitions():
def with_only_operation():
assert_valid(
"""
query Foo {
dog {
name
}
}
"""
)
def with_operation_and_fragment():
assert_valid(
"""
query Foo {
dog {
name
...Frag
}
}
fragment Frag on Dog {
name
}
"""
)
def with_type_definition():
assert_errors(
"""
query Foo {
dog {
name
}
}
type Cow {
name: String
}
extend type Dog {
color: String
}
""",
[
{
"message": "The 'Cow' definition is not executable.",
"locations": [(8, 13)],
},
{
"message": "The 'Dog' definition is not executable.",
"locations": [(12, 13)],
},
],
)
def with_schema_definition():
assert_errors(
"""
schema {
query: Query
}
type Query {
test: String
}
extend schema @directive
""",
[
{
"message": "The schema definition is not executable.",
"locations": [(2, 13)],
},
{
"message": "The 'Query' definition is not executable.",
"locations": [(6, 13)],
},
{
"message": "The schema definition is not executable.",
"locations": [(10, 13)],
},
],
)
| [
"cito@online.de"
] | cito@online.de |
a35fc58e8d48e6b0306595b3c3cdc5d79273a1aa | dba8d5e1e290192221e7b297e1303253a10fb24d | /wandb/sdk/internal/handler.py | e84b8c046944ced72437961a2fdfb76c81edcf81 | [
"MIT"
] | permissive | haenara-shin/wandb_client | 9c916f84e253cf68e4f37ef54ff30b006a8852f3 | 94c226afc4925535e6301c9bc9b9ee36061d99d4 | refs/heads/master | 2023-04-18T13:51:43.411681 | 2021-05-11T02:16:33 | 2021-05-11T02:16:33 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 23,507 | py | #
# -*- coding: utf-8 -*-
"""Handle Manager."""
from __future__ import print_function
import json
import logging
import math
import numbers
import os
import six
import wandb
from wandb.proto import wandb_internal_pb2
from . import meta, sample, stats
from . import tb_watcher
from ..lib import handler_util, proto_util
if wandb.TYPE_CHECKING:
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
List,
Tuple,
Sequence,
Iterable,
Optional,
cast,
)
from .settings_static import SettingsStatic
from six.moves.queue import Queue
from threading import Event
from ..interface.interface import BackendSender
from wandb.proto.wandb_internal_pb2 import Record, Result
SummaryDict = Dict[str, Any]
logger = logging.getLogger(__name__)
def _dict_nested_set(target: Dict[str, Any], key_list: Sequence[str], v: Any) -> None:
# recurse down the dictionary structure:
for k in key_list[:-1]:
target.setdefault(k, {})
new_target = target.get(k)
if wandb.TYPE_CHECKING and TYPE_CHECKING:
new_target = cast(Dict[str, Any], new_target)
target = new_target
# use the last element of the key to write the leaf:
target[key_list[-1]] = v
class HandleManager(object):
_consolidated_summary: SummaryDict
_sampled_history: Dict[str, sample.UniformSampleAccumulator]
_settings: SettingsStatic
_record_q: "Queue[Record]"
_result_q: "Queue[Result]"
_stopped: Event
_sender_q: "Queue[Record]"
_writer_q: "Queue[Record]"
_interface: BackendSender
_system_stats: Optional[stats.SystemStats]
_tb_watcher: Optional[tb_watcher.TBWatcher]
_metric_defines: Dict[str, wandb_internal_pb2.MetricRecord]
_metric_globs: Dict[str, wandb_internal_pb2.MetricRecord]
_metric_track: Dict[Tuple[str, ...], float]
_metric_copy: Dict[Tuple[str, ...], Any]
def __init__(
self,
settings: SettingsStatic,
record_q: "Queue[Record]",
result_q: "Queue[Result]",
stopped: Event,
sender_q: "Queue[Record]",
writer_q: "Queue[Record]",
interface: BackendSender,
) -> None:
self._settings = settings
self._record_q = record_q
self._result_q = result_q
self._stopped = stopped
self._sender_q = sender_q
self._writer_q = writer_q
self._interface = interface
self._tb_watcher = None
self._system_stats = None
self._step = 0
# keep track of summary from key/val updates
self._consolidated_summary = dict()
self._sampled_history = dict()
self._metric_defines = dict()
self._metric_globs = dict()
self._metric_track = dict()
self._metric_copy = dict()
def handle(self, record: Record) -> None:
record_type = record.WhichOneof("record_type")
assert record_type
handler_str = "handle_" + record_type
handler: Callable[[Record], None] = getattr(self, handler_str, None)
assert handler, "unknown handle: {}".format(handler_str)
handler(record)
def handle_request(self, record: Record) -> None:
request_type = record.request.WhichOneof("request_type")
assert request_type
handler_str = "handle_request_" + request_type
handler: Callable[[Record], None] = getattr(self, handler_str, None)
if request_type != "network_status":
logger.debug("handle_request: {}".format(request_type))
assert handler, "unknown handle: {}".format(handler_str)
handler(record)
def _dispatch_record(self, record: Record, always_send: bool = False) -> None:
if not self._settings._offline or always_send:
self._sender_q.put(record)
if not record.control.local:
self._writer_q.put(record)
def debounce(self) -> None:
pass
def handle_request_defer(self, record: Record) -> None:
defer = record.request.defer
state = defer.state
logger.info("handle defer: {}".format(state))
# only handle flush tb (sender handles the rest)
if state == defer.FLUSH_STATS:
if self._system_stats:
# TODO(jhr): this could block so we dont really want to call shutdown
# from handler thread
self._system_stats.shutdown()
elif state == defer.FLUSH_TB:
if self._tb_watcher:
# shutdown tensorboard workers so we get all metrics flushed
self._tb_watcher.finish()
self._tb_watcher = None
elif state == defer.FLUSH_SUM:
self._save_summary(self._consolidated_summary, flush=True)
# defer is used to drive the sender finish state machine
self._dispatch_record(record, always_send=True)
def handle_request_login(self, record: Record) -> None:
self._dispatch_record(record)
def handle_run(self, record: Record) -> None:
self._dispatch_record(record)
def handle_stats(self, record: Record) -> None:
self._dispatch_record(record)
def handle_config(self, record: Record) -> None:
self._dispatch_record(record)
def handle_output(self, record: Record) -> None:
self._dispatch_record(record)
def handle_files(self, record: Record) -> None:
self._dispatch_record(record)
def handle_artifact(self, record: Record) -> None:
self._dispatch_record(record)
def handle_alert(self, record: Record) -> None:
self._dispatch_record(record)
def _save_summary(self, summary_dict: SummaryDict, flush: bool = False) -> None:
summary = wandb_internal_pb2.SummaryRecord()
for k, v in six.iteritems(summary_dict):
update = summary.update.add()
update.key = k
update.value_json = json.dumps(v)
record = wandb_internal_pb2.Record(summary=summary)
if flush:
self._dispatch_record(record)
elif not self._settings._offline:
self._sender_q.put(record)
def _save_history(self, record: Record) -> None:
for item in record.history.item:
# TODO(jhr) save nested keys?
k = item.key
v = json.loads(item.value_json)
if isinstance(v, numbers.Real):
self._sampled_history.setdefault(k, sample.UniformSampleAccumulator())
self._sampled_history[k].add(v)
def _update_summary_metrics(
self,
s: wandb_internal_pb2.MetricSummary,
kl: List[str],
v: "numbers.Real",
float_v: float,
goal_max: Optional[bool],
) -> bool:
updated = False
best_key: Optional[Tuple[str, ...]] = None
if s.none:
return False
if s.copy:
# non key list copy already done in _update_summary
if len(kl) > 1:
_dict_nested_set(self._consolidated_summary, kl, v)
return True
if s.last:
last_key = tuple(kl + ["last"])
old_last = self._metric_track.get(last_key)
if old_last is None or float_v != old_last:
self._metric_track[last_key] = float_v
_dict_nested_set(self._consolidated_summary, last_key, v)
updated = True
if s.best:
best_key = tuple(kl + ["best"])
if s.max or best_key and goal_max:
max_key = tuple(kl + ["max"])
old_max = self._metric_track.get(max_key)
if old_max is None or float_v > old_max:
self._metric_track[max_key] = float_v
if s.max:
_dict_nested_set(self._consolidated_summary, max_key, v)
updated = True
if best_key:
_dict_nested_set(self._consolidated_summary, best_key, v)
updated = True
# defaulting to minimize if goal is not supecified
if s.min or best_key and not goal_max:
min_key = tuple(kl + ["min"])
old_min = self._metric_track.get(min_key)
if old_min is None or float_v < old_min:
self._metric_track[min_key] = float_v
if s.min:
_dict_nested_set(self._consolidated_summary, min_key, v)
updated = True
if best_key:
_dict_nested_set(self._consolidated_summary, best_key, v)
updated = True
if s.mean:
tot_key = tuple(kl + ["tot"])
num_key = tuple(kl + ["num"])
avg_key = tuple(kl + ["mean"])
tot = self._metric_track.get(tot_key, 0.0)
num = self._metric_track.get(num_key, 0)
tot += float_v
num += 1
self._metric_track[tot_key] = tot
self._metric_track[num_key] = num
_dict_nested_set(self._consolidated_summary, avg_key, tot / num)
updated = True
return updated
def _update_summary_leaf(
self,
kl: List[str],
v: Any,
d: Optional[wandb_internal_pb2.MetricRecord] = None,
) -> bool:
has_summary = d and d.HasField("summary")
if len(kl) == 1:
copy_key = tuple(kl)
old_copy = self._metric_copy.get(copy_key)
if old_copy is None or v != old_copy:
self._metric_copy[copy_key] = v
# Store copy metric if not specified, or copy behavior
if not has_summary or (d and d.summary.copy):
self._consolidated_summary[kl[0]] = v
return True
if not d:
return False
if not has_summary:
return False
if not isinstance(v, numbers.Real):
return False
if math.isnan(v):
return False
float_v = float(v)
goal_max = None
if d.goal:
goal_max = d.goal == d.GOAL_MAXIMIZE
if self._update_summary_metrics(
d.summary, kl=kl, v=v, float_v=float_v, goal_max=goal_max
):
return True
return False
def _update_summary_list(
self,
kl: List[str],
v: Any,
d: Optional[wandb_internal_pb2.MetricRecord] = None,
) -> bool:
metric_key = ".".join([k.replace(".", "\\.") for k in kl])
d = self._metric_defines.get(metric_key, d)
# if the dict has _type key, its a wandb table object
if isinstance(v, dict) and not handler_util.metric_is_wandb_dict(v):
updated = False
for nk, nv in six.iteritems(v):
if self._update_summary_list(kl=kl[:] + [nk], v=nv, d=d):
updated = True
return updated
updated = self._update_summary_leaf(kl=kl, v=v, d=d)
return updated
def _update_summary(self, history_dict: Dict[str, Any]) -> bool:
# keep old behavior fast path if no define metrics have been used
if not self._metric_defines:
self._consolidated_summary.update(history_dict)
return True
updated = False
for k, v in six.iteritems(history_dict):
if self._update_summary_list(kl=[k], v=v):
updated = True
return updated
def _history_assign_step(self, record: Record, history_dict: Dict) -> None:
has_step = record.history.HasField("step")
item = record.history.item.add()
item.key = "_step"
if has_step:
step = record.history.step.num
history_dict["_step"] = step
item.value_json = json.dumps(step)
self._step = step + 1
else:
history_dict["_step"] = self._step
item.value_json = json.dumps(self._step)
self._step += 1
def _history_define_metric(
self, hkey: str
) -> Optional[wandb_internal_pb2.MetricRecord]:
"""check for hkey match in glob metrics, return defined metric."""
# Dont define metric for internal metrics
if hkey.startswith("_"):
return None
for k, mglob in six.iteritems(self._metric_globs):
if k.endswith("*"):
if hkey.startswith(k[:-1]):
m = wandb_internal_pb2.MetricRecord()
m.CopyFrom(mglob)
m.ClearField("glob_name")
m.options.defined = False
m.name = hkey
return m
return None
def _history_update_leaf(
self, kl: List[str], v: Any, history_dict: Dict, update_history: Dict[str, Any]
) -> None:
hkey = ".".join([k.replace(".", "\\.") for k in kl])
m = self._metric_defines.get(hkey)
if not m:
m = self._history_define_metric(hkey)
if not m:
return
mr = wandb_internal_pb2.Record()
mr.metric.CopyFrom(m)
mr.control.local = True # Dont store this, just send it
self._handle_defined_metric(mr)
if m.options.step_sync and m.step_metric:
if m.step_metric not in history_dict:
copy_key = tuple([m.step_metric])
step = self._metric_copy.get(copy_key)
if step is not None:
update_history[m.step_metric] = step
def _history_update_list(
self, kl: List[str], v: Any, history_dict: Dict, update_history: Dict[str, Any]
) -> None:
if isinstance(v, dict):
for nk, nv in six.iteritems(v):
self._history_update_list(
kl=kl[:] + [nk],
v=nv,
history_dict=history_dict,
update_history=update_history,
)
return
self._history_update_leaf(
kl=kl, v=v, history_dict=history_dict, update_history=update_history
)
def _history_update(self, record: Record, history_dict: Dict) -> None:
# if syncing an old run, we can skip this logic
if history_dict.get("_step") is None:
self._history_assign_step(record, history_dict)
update_history: Dict[str, Any] = {}
# Look for metric matches
if self._metric_defines or self._metric_globs:
for hkey, hval in six.iteritems(history_dict):
self._history_update_list([hkey], hval, history_dict, update_history)
if update_history:
history_dict.update(update_history)
for k, v in six.iteritems(update_history):
item = record.history.item.add()
item.key = k
item.value_json = json.dumps(v)
def handle_history(self, record: Record) -> None:
history_dict = proto_util.dict_from_proto_list(record.history.item)
self._history_update(record, history_dict)
self._dispatch_record(record)
self._save_history(record)
updated = self._update_summary(history_dict)
if updated:
self._save_summary(self._consolidated_summary)
def handle_summary(self, record: Record) -> None:
summary = record.summary
for item in summary.update:
if len(item.nested_key) > 0:
# we use either key or nested_key -- not both
assert item.key == ""
key = tuple(item.nested_key)
else:
# no counter-assertion here, because technically
# summary[""] is valid
key = (item.key,)
target = self._consolidated_summary
# recurse down the dictionary structure:
for prop in key[:-1]:
target = target[prop]
# use the last element of the key to write the leaf:
target[key[-1]] = json.loads(item.value_json)
for item in summary.remove:
if len(item.nested_key) > 0:
# we use either key or nested_key -- not both
assert item.key == ""
key = tuple(item.nested_key)
else:
# no counter-assertion here, because technically
# summary[""] is valid
key = (item.key,)
target = self._consolidated_summary
# recurse down the dictionary structure:
for prop in key[:-1]:
target = target[prop]
# use the last element of the key to erase the leaf:
del target[key[-1]]
self._save_summary(self._consolidated_summary)
def handle_exit(self, record: Record) -> None:
self._dispatch_record(record, always_send=True)
def handle_final(self, record: Record) -> None:
self._dispatch_record(record, always_send=True)
def handle_header(self, record: Record) -> None:
self._dispatch_record(record)
def handle_footer(self, record: Record) -> None:
self._dispatch_record(record)
def handle_request_check_version(self, record: Record) -> None:
self._dispatch_record(record)
def handle_request_log_artifact(self, record: Record) -> None:
self._dispatch_record(record)
def handle_telemetry(self, record: Record) -> None:
self._dispatch_record(record)
def handle_request_run_start(self, record: Record) -> None:
run_start = record.request.run_start
assert run_start
assert run_start.run
if not self._settings._disable_stats:
pid = os.getpid()
self._system_stats = stats.SystemStats(pid=pid, interface=self._interface)
self._system_stats.start()
if not self._settings._disable_meta:
run_meta = meta.Meta(settings=self._settings, interface=self._interface)
run_meta.probe()
run_meta.write()
self._tb_watcher = tb_watcher.TBWatcher(
self._settings, interface=self._interface, run_proto=run_start.run
)
if run_start.run.resumed:
self._step = run_start.run.starting_step
result = wandb_internal_pb2.Result(uuid=record.uuid)
self._result_q.put(result)
def handle_request_resume(self, record: Record) -> None:
if self._system_stats is not None:
logger.info("starting system metrics thread")
self._system_stats.start()
def handle_request_pause(self, record: Record) -> None:
if self._system_stats is not None:
logger.info("stopping system metrics thread")
self._system_stats.shutdown()
def handle_request_poll_exit(self, record: Record) -> None:
self._dispatch_record(record, always_send=True)
def handle_request_stop_status(self, record: Record) -> None:
self._dispatch_record(record)
def handle_request_network_status(self, record: Record) -> None:
self._dispatch_record(record)
def handle_request_get_summary(self, record: Record) -> None:
result = wandb_internal_pb2.Result(uuid=record.uuid)
for key, value in six.iteritems(self._consolidated_summary):
item = wandb_internal_pb2.SummaryItem()
item.key = key
item.value_json = json.dumps(value)
result.response.get_summary_response.item.append(item)
self._result_q.put(result)
def handle_tbrecord(self, record: Record) -> None:
logger.info("handling tbrecord: %s", record)
if self._tb_watcher:
tbrecord = record.tbrecord
self._tb_watcher.add(tbrecord.log_dir, tbrecord.save, tbrecord.root_dir)
self._dispatch_record(record)
def _handle_defined_metric(self, record: wandb_internal_pb2.Record) -> None:
metric = record.metric
if metric._control.overwrite:
self._metric_defines.setdefault(
metric.name, wandb_internal_pb2.MetricRecord()
).CopyFrom(metric)
else:
self._metric_defines.setdefault(
metric.name, wandb_internal_pb2.MetricRecord()
).MergeFrom(metric)
# before dispatching, make sure step_metric is defined, if not define it and
# dispatch it locally first
metric = self._metric_defines[metric.name]
if metric.step_metric and metric.step_metric not in self._metric_defines:
m = wandb_internal_pb2.MetricRecord(name=metric.step_metric)
self._metric_defines[metric.step_metric] = m
mr = wandb_internal_pb2.Record()
mr.metric.CopyFrom(m)
mr.control.local = True # Dont store this, just send it
self._dispatch_record(mr)
self._dispatch_record(record)
def _handle_glob_metric(self, record: wandb_internal_pb2.Record) -> None:
metric = record.metric
if metric._control.overwrite:
self._metric_globs.setdefault(
metric.glob_name, wandb_internal_pb2.MetricRecord()
).CopyFrom(metric)
else:
self._metric_globs.setdefault(
metric.glob_name, wandb_internal_pb2.MetricRecord()
).MergeFrom(metric)
self._dispatch_record(record)
def handle_metric(self, record: Record) -> None:
"""Handle MetricRecord.
Walkthrough of the life of a MetricRecord:
Metric defined:
- run.define_metric() parses arguments create wandb_metric.Metric
- build MetricRecord publish to interface
- handler (this function) keeps list of metrics published:
- self._metric_defines: Fully defined metrics
- self._metric_globs: metrics that have a wildcard
- dispatch writer and sender thread
- writer: records are saved to persistent store
- sender: fully defined metrics get mapped into metadata for UI
History logged:
- handle_history
- check if metric matches _metric_defines
- if not, check if metric matches _metric_globs
- if _metric globs match, generate defined metric and call _handle_metric
Args:
record (Record): Metric record to process
"""
if record.metric.name:
self._handle_defined_metric(record)
elif record.metric.glob_name:
self._handle_glob_metric(record)
def handle_request_sampled_history(self, record: Record) -> None:
result = wandb_internal_pb2.Result(uuid=record.uuid)
for key, sampled in six.iteritems(self._sampled_history):
item = wandb_internal_pb2.SampledHistoryItem()
item.key = key
values: Iterable[Any] = sampled.get()
if all(isinstance(i, numbers.Integral) for i in values):
item.values_int.extend(values)
elif all(isinstance(i, numbers.Real) for i in values):
item.values_float.extend(values)
result.response.sampled_history_response.item.append(item)
self._result_q.put(result)
def handle_request_shutdown(self, record: Record) -> None:
# TODO(jhr): should we drain things and stop new requests from coming in?
result = wandb_internal_pb2.Result(uuid=record.uuid)
self._result_q.put(result)
self._stopped.set()
def finish(self) -> None:
logger.info("shutting down handler")
if self._tb_watcher:
self._tb_watcher.finish()
| [
"noreply@github.com"
] | noreply@github.com |
74566c72ad1bf535d7b33226ee11c151e31b808f | 3bee14adad765f2193915326a474e6594ff91ec7 | /old/lr_finder.py | ae306cd594f9285219ebdeac7a6245c01a0afb14 | [] | no_license | Cirets0h/atml19 | 77bbb71cb11a64a24746987242e09aeee0ec2758 | 3d2740b1018a1adc648699b203133f04b5bffbe1 | refs/heads/master | 2020-04-28T09:57:46.993136 | 2019-05-21T05:45:55 | 2019-05-21T05:45:55 | 175,186,055 | 4 | 1 | null | null | null | null | UTF-8 | Python | false | false | 12,809 | py | from __future__ import print_function, with_statement, division
import numpy as np
import copy
import os
import torch
#from tqdm.autonotebook import tqdm
from torch.optim.lr_scheduler import _LRScheduler
import matplotlib.pyplot as plt
class LRFinder(object):
"""Learning rate range test.
The learning rate range test increases the learning rate in a pre-training run
between two boundaries in a linear or exponential manner. It provides valuable
information on how well the network can be trained over a range of learning rates
and what is the optimal learning rate.
Arguments:
model (torch.nn.Module): wrapped model.
optimizer (torch.optim.Optimizer): wrapped optimizer where the defined learning
is assumed to be the lower boundary of the range test.
criterion (torch.nn.Module): wrapped loss function.
device (str or torch.device, optional): a string ("cpu" or "cuda") with an
optional ordinal for the device type (e.g. "cuda:X", where is the ordinal).
Alternatively, can be an object representing the device on which the
computation will take place. Default: None, uses the same device as `model`.
memory_cache (boolean): if this flag is set to True, `state_dict` of model and
optimizer will be cached in memory. Otherwise, they will be saved to files
under the `cache_dir`.
cache_dir (string): path for storing temporary files. If no path is specified,
system-wide temporary directory is used.
Notice that this parameter will be ignored if `memory_cache` is True.
Example:
>>> lr_finder = LRFinder(net, optimizer, criterion, device="cuda")
>>> lr_finder.range_test(dataloader, end_lr=100, num_iter=100)
Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186
fastai/lr_find: https://github.com/fastai/fastai
"""
def __init__(self, model, optimizer, criterion, device=None, memory_cache=True, cache_dir=None):
self.model = model
self.optimizer = optimizer
self.criterion = criterion
self.history = {"lr": [], "loss": []}
self.best_loss = None
self.memory_cache = memory_cache
self.cache_dir = cache_dir
# Save the original state of the model and optimizer so they can be restored if
# needed
self.model_device = next(self.model.parameters()).device
self.state_cacher = StateCacher(memory_cache, cache_dir=cache_dir)
self.state_cacher.store('model', self.model.state_dict())
self.state_cacher.store('optimizer', self.optimizer.state_dict())
# If device is None, use the same as the model
if device:
self.device = device
else:
self.device = self.model_device
def reset(self):
"""Restores the model and optimizer to their initial states."""
self.model.load_state_dict(self.state_cacher.retrieve('model'))
self.optimizer.load_state_dict(self.state_cacher.retrieve('optimizer'))
self.model.to(self.model_device)
def range_test(
self,
train_loader,
val_loader=None,
end_lr=10,
num_iter=100,
step_mode="exp",
smooth_f=0.05,
diverge_th=5,
):
"""Performs the learning rate range test.
Arguments:
train_loader (torch.utils.data.DataLoader): the training set data laoder.
val_loader (torch.utils.data.DataLoader, optional): if `None` the range test
will only use the training loss. When given a data loader, the model is
evaluated after each iteration on that dataset and the evaluation loss
is used. Note that in this mode the test takes significantly longer but
generally produces more precise results. Default: None.
end_lr (float, optional): the maximum learning rate to test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
step_mode (str, optional): one of the available learning rate policies,
linear or exponential ("linear", "exp"). Default: "exp".
smooth_f (float, optional): the loss smoothing factor within the [0, 1[
interval. Disabled if set to 0, otherwise the loss is smoothed using
exponential smoothing. Default: 0.05.
diverge_th (int, optional): the test is stopped when the loss surpasses the
threshold: diverge_th * best_loss. Default: 5.
"""
# Reset test results
self.history = {"lr": [], "loss": []}
self.best_loss = None
# Move the model to the proper device
self.model.to(self.device)
# Initialize the proper learning rate policy
if step_mode.lower() == "exp":
lr_schedule = ExponentialLR(self.optimizer, end_lr, num_iter)
elif step_mode.lower() == "linear":
lr_schedule = LinearLR(self.optimizer, end_lr, num_iter)
else:
raise ValueError("expected one of (exp, linear), got {}".format(step_mode))
if smooth_f < 0 or smooth_f >= 1:
raise ValueError("smooth_f is outside the range [0, 1[")
# Create an iterator to get data batch by batch
iterator = iter(train_loader)
#for iteration in tqdm(range(num_iter)):
for iteration in range(num_iter):
# Get a new set of inputs and labels
try:
inputs, labels = next(iterator)
except StopIteration:
iterator = iter(train_loader)
inputs, labels = next(iterator)
# Train on batch and retrieve loss
loss = self._train_batch(inputs, labels)
if val_loader:
loss = self._validate(val_loader)
# Update the learning rate
lr_schedule.step()
self.history["lr"].append(lr_schedule.get_lr()[0])
# Track the best loss and smooth it if smooth_f is specified
if iteration == 0:
self.best_loss = loss
else:
if smooth_f > 0:
loss = smooth_f * loss + (1 - smooth_f) * self.history["loss"][-1]
if loss < self.best_loss:
self.best_loss = loss
# Check if the loss has diverged; if it has, stop the test
self.history["loss"].append(loss)
if loss > diverge_th * self.best_loss:
print("Stopping early, the loss has diverged")
break
print("Learning rate search finished. See the graph with {finder_name}.plot()")
def _train_batch(self, inputs, labels):
# Set model to training mode
self.model.train()
# Move data to the correct device
inputs = inputs.to(self.device, dtype=torch.float)
labels = labels.to(self.device)
# Forward pass
self.optimizer.zero_grad()
outputs, _ = self.model(inputs, None)
loss = self.criterion(outputs, labels)
# Backward pass
loss.backward()
self.optimizer.step()
return loss.item()
def _validate(self, dataloader):
# Set model to evaluation mode and disable gradient computation
running_loss = 0
self.model.eval()
with torch.no_grad():
for inputs, labels in dataloader:
# Move data to the correct device
inputs = inputs.to(self.device)
labels = labels.to(self.device)
# Forward pass and loss computation
outputs = self.model(inputs)
loss = self.criterion(outputs, labels)
running_loss += loss.item() * inputs.size(0)
return running_loss / len(dataloader.dataset)
def optimal_lr(self):
idx = np.argmin(self.history["loss"])
return self.history["lr"][idx]
def plot(self, skip_start=10, skip_end=5, log_lr=True):
"""Plots the learning rate range test.
Arguments:
skip_start (int, optional): number of batches to trim from the start.
Default: 10.
skip_end (int, optional): number of batches to trim from the start.
Default: 5.
log_lr (bool, optional): True to plot the learning rate in a logarithmic
scale; otherwise, plotted in a linear scale. Default: True.
"""
if skip_start < 0:
raise ValueError("skip_start cannot be negative")
if skip_end < 0:
raise ValueError("skip_end cannot be negative")
# Get the data to plot from the history dictionary. Also, handle skip_end=0
# properly so the behaviour is the expected
lrs = self.history["lr"]
losses = self.history["loss"]
if skip_end == 0:
lrs = lrs[skip_start:]
losses = losses[skip_start:]
else:
lrs = lrs[skip_start:-skip_end]
losses = losses[skip_start:-skip_end]
# Plot loss as a function of the learning rate
plt.plot(lrs, losses)
if log_lr:
plt.xscale("log")
plt.xlabel("Learning rate")
plt.ylabel("Loss")
plt.show()
class LinearLR(_LRScheduler):
"""Linearly increases the learning rate between two boundaries over a number of
iterations.
Arguments:
optimizer (torch.optim.Optimizer): wrapped optimizer.
end_lr (float, optional): the initial learning rate which is the lower
boundary of the test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
last_epoch (int): the index of last epoch. Default: -1.
"""
def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
self.end_lr = end_lr
self.num_iter = num_iter
super(LinearLR, self).__init__(optimizer, last_epoch)
def get_lr(self):
curr_iter = self.last_epoch + 1
r = curr_iter / self.num_iter
return [base_lr + r * (self.end_lr - base_lr) for base_lr in self.base_lrs]
class ExponentialLR(_LRScheduler):
"""Exponentially increases the learning rate between two boundaries over a number of
iterations.
Arguments:
optimizer (torch.optim.Optimizer): wrapped optimizer.
end_lr (float, optional): the initial learning rate which is the lower
boundary of the test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
last_epoch (int): the index of last epoch. Default: -1.
"""
def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
self.end_lr = end_lr
self.num_iter = num_iter
super(ExponentialLR, self).__init__(optimizer, last_epoch)
def get_lr(self):
curr_iter = self.last_epoch + 1
r = curr_iter / self.num_iter
return [base_lr * (self.end_lr / base_lr) ** r for base_lr in self.base_lrs]
class StateCacher(object):
def __init__(self, in_memory, cache_dir=None):
self.in_memory = in_memory
self.cache_dir = cache_dir
if self.cache_dir is None:
import tempfile
self.cache_dir = tempfile.gettempdir()
else:
if not os.path.isdir(self.cache_dir):
raise ValueError('Given `cache_dir` is not a valid directory.')
self.cached = {}
def store(self, key, state_dict):
if self.in_memory:
self.cached.update({key: copy.deepcopy(state_dict)})
else:
fn = os.path.join(self.cache_dir, 'state_{}_{}.pt'.format(key, id(self)))
self.cached.update({key: fn})
torch.save(state_dict, fn)
def retrieve(self, key):
if key not in self.cached:
raise KeyError('Target {} was not cached.'.format(key))
if self.in_memory:
return self.cached.get(key)
else:
fn = self.cached.get(key)
if not os.path.exists(fn):
raise RuntimeError('Failed to load state in {}. File does not exist anymore.'.format(fn))
state_dict = torch.load(fn, map_location=lambda storage, location: storage)
return state_dict
def __del__(self):
"""Check whether there are unused cached files existing in `cache_dir` before
this instance being destroyed."""
if self.in_memory:
return
for k in self.cached:
if os.path.exists(self.cached[k]):
os.remove(self.cached[k])
| [
"andre.ryser@gmail.com"
] | andre.ryser@gmail.com |
d95fb552556fe37b61378dbfef21dcd4b15356dd | 3d7039903da398ae128e43c7d8c9662fda77fbdf | /database/Vue.js/juejin_2910.py | 433d99a184537f0048d01e9b65145675a9d3cc65 | [] | no_license | ChenYongChang1/spider_study | a9aa22e6ed986193bf546bb567712876c7be5e15 | fe5fbc1a5562ff19c70351303997d3df3af690db | refs/heads/master | 2023-08-05T10:43:11.019178 | 2021-09-18T01:30:22 | 2021-09-18T01:30:22 | 406,727,214 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 66,851 | py | {"err_no": 0, "err_msg": "success", "data": [{"article_id": "6919726191666528263", "article_info": {"article_id": "6919726191666528263", "user_id": "2400989127905352", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "", "cover_image": "", "is_gfw": 0, "title": "vue 中使用柱状图 并自修改配置", "brief_content": "1.在html文件导入echart2.在main.js上挂载echarts对象3.页面结构4.data中的数据mounted钩子函数调用更改柱形图配置1.在index.html引入主题配置文件2.在需", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1611124318", "mtime": "1611139838", "rtime": "1611128360", "draft_id": "6919725601553121287", "view_count": 153, "collect_count": 0, "digg_count": 3, "comment_count": 0, "hot_index": 10, "is_hot": 0, "rank_index": 0.0001309, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "2400989127905352", "user_name": "小江和他自己", "company": "", "job_title": "前端", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/91365cb95beff6eb7dfdcae5c627cc2e~300x300.image", "level": 1, "description": "", "followee_count": 13, "follower_count": 1, "post_article_count": 12, "digg_article_count": 54, "got_digg_count": 11, "got_view_count": 743, "post_shortmsg_count": 0, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 18, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6919726191666528263, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844904117924691982", "article_info": {"article_id": "6844904117924691982", "user_id": "1873223544998494", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "https://juejin.im/post/6844904117924691982", "cover_image": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/gold-user-assets/2020/4/8/171587b960059db9~tplv-t2oaga2asx-image.image", "is_gfw": 0, "title": "60行代码 | Proxy实现一个数据驱动简易DEMO", "brief_content": "运行截图本文能做的带你了解一下Proxy的使用了解通过插值表达式怎么实现简易版的数据驱动代码仅是简单Demo,参考了50行代码的MVVM,感受闭包的艺术,一篇好文章,因为时间,沉底了,掏出来再学习一下", "is_english": 0, "is_original": 1, "user_index": 2.9826908456339, "original_type": 0, "original_author": "", "content": "", "ctime": "1586327094", "mtime": "1606695649", "rtime": "1586328125", "draft_id": "6845076717090897928", "view_count": 345, "collect_count": 9, "digg_count": 7, "comment_count": 1, "hot_index": 25, "is_hot": 0, "rank_index": 0.00013083, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "1873223544998494", "user_name": "独立开发者丨磊子", "company": "知乎", "job_title": "前端", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/03b00ce0554959d981f62f186824c323~300x300.image", "level": 2, "description": "", "followee_count": 21, "follower_count": 248, "post_article_count": 22, "digg_article_count": 181, "got_digg_count": 210, "got_view_count": 25205, "post_shortmsg_count": 15, "digg_shortmsg_count": 15, "isfollowed": false, "favorable_author": 0, "power": 462, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6844904117924691982, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844904100811767822", "article_info": {"article_id": "6844904100811767822", "user_id": "3227821870160286", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "https://juejin.im/post/6844904100811767822", "cover_image": "", "is_gfw": 0, "title": "剖析Vue实现原理 - 如何实现双向绑定mvvm", "brief_content": "目前几种主流的mvc(vm)框架都实现了单向数据绑定,而我所理解的双向数据绑定无非就是在单向绑定的基础上给可输入元素(input、textare等)添加了change(input)事件,来动态修改model和 view,并没有多高深。所以无需太过介怀是实现的单向或双向绑定。 D…", "is_english": 0, "is_original": 1, "user_index": 0.038273012061283, "original_type": 0, "original_author": "", "content": "", "ctime": "1584971505", "mtime": "1598861532", "rtime": "1585018357", "draft_id": "6845076693619572743", "view_count": 494, "collect_count": 13, "digg_count": 5, "comment_count": 0, "hot_index": 29, "is_hot": 0, "rank_index": 0.00013083, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "3227821870160286", "user_name": "clouding", "company": "", "job_title": "前端", "avatar_large": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/mirror-assets/168e09bae71b41c7c75~tplv-t2oaga2asx-image.image", "level": 2, "description": "", "followee_count": 7, "follower_count": 9, "post_article_count": 5, "digg_article_count": 75, "got_digg_count": 45, "got_view_count": 27455, "post_shortmsg_count": 1, "digg_shortmsg_count": 4, "isfollowed": false, "favorable_author": 0, "power": 319, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6844904100811767822, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844903773878353928", "article_info": {"article_id": "6844903773878353928", "user_id": "3896324938537373", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "", "cover_image": "", "is_gfw": 0, "title": "Vue知识点", "brief_content": "1. vue的生命周期 beforeCreate : el 和 data 并未初始化。是获取不到props或者data中的数据的 created :已经可以访问到之前不能访问的数据,但是这时候组件还没被挂载,所以是看不到的。 mounted:将虚拟DOM渲染为真实DOM,并且渲…", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1549099983", "mtime": "1623573281", "rtime": "1549102633", "draft_id": "6845076151270899725", "view_count": 979, "collect_count": 16, "digg_count": 13, "comment_count": 0, "hot_index": 61, "is_hot": 0, "rank_index": 0.00013083, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "3896324938537373", "user_name": "kakayuii", "company": "上海", "job_title": "前端工程师", "avatar_large": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/mirror-assets/168e08d4506c090dc35~tplv-t2oaga2asx-image.image", "level": 1, "description": "", "followee_count": 56, "follower_count": 12, "post_article_count": 13, "digg_article_count": 22, "got_digg_count": 23, "got_view_count": 4373, "post_shortmsg_count": 1, "digg_shortmsg_count": 3, "isfollowed": false, "favorable_author": 0, "power": 66, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6844903773878353928, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844903624137506824", "article_info": {"article_id": "6844903624137506824", "user_id": "1468603263615694", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215, 6809640407484334093, 6809641072508010503], "visible_level": 0, "link_url": "https://juejin.im/post/6844903624137506824", "cover_image": "", "is_gfw": 0, "title": "谈一谈对vue-router的简单理解", "brief_content": "这就是比较简单和比较全的结构了,除去(参数和钩子,后面讲)。 vue-router默认使用的是hash模式,就是在路由跳转的时候,会有很多 #,看起来不太美观,因此我们可以使用 history模式 ,这种模式充分利用了 history.pushStateAPI来完成URL跳转而…", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1529722072", "mtime": "1598457080", "rtime": "1529976625", "draft_id": "6845075545818923015", "view_count": 1330, "collect_count": 7, "digg_count": 11, "comment_count": 3, "hot_index": 80, "is_hot": 0, "rank_index": 0.00013082, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "1468603263615694", "user_name": "尤小左", "company": "作坊", "job_title": "低级前端开发工程师", "avatar_large": "https://sf1-ttcdn-tos.pstatp.com/img/user-avatar/b5afc3187387129280c604d06c645c4b~300x300.image", "level": 2, "description": "一个前端", "followee_count": 28, "follower_count": 10, "post_article_count": 8, "digg_article_count": 642, "got_digg_count": 46, "got_view_count": 21498, "post_shortmsg_count": 2, "digg_shortmsg_count": 4, "isfollowed": false, "favorable_author": 0, "power": 260, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}, {"id": 2546526, "tag_id": "6809640407484334093", "tag_name": "前端", "color": "#60ADFF", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/bac28828a49181c34110.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 1, "ctime": 1435971546, "mtime": 1631693180, "id_type": 9, "tag_alias": "", "post_article_count": 88830, "concern_user_count": 527705}, {"id": 2547007, "tag_id": "6809641072508010503", "tag_name": "vue-router", "color": "", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/gold-ad-assets/14988107830572beb053b0c5bb4c59fb974b1ac7ce8bd.jpg~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1498782020, "mtime": 1631683218, "id_type": 9, "tag_alias": "", "post_article_count": 1074, "concern_user_count": 28589}], "user_interact": {"id": 6844903624137506824, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6906331373406322696", "article_info": {"article_id": "6906331373406322696", "user_id": "2444950350861495", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "", "cover_image": "https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/7364060d7cb54c1f90a2c7f2f5535109~tplv-k3u1fbpfcp-watermark.image", "is_gfw": 0, "title": "桌面与Web的信使:Notification API(附Vue3中使用)", "brief_content": "引言这是在初次进入Facebook的时候Chrome在地址栏弹出的提示👇当你点击了“允许”后,即使你最小化了标签或者切到其他标签去了,朋友发给你的聊天信息也会通过桌面级的通知告诉你,FB还使用了FCM", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1608005775", "mtime": "1608014531", "rtime": "1608014531", "draft_id": "6906330533001854989", "view_count": 147, "collect_count": 2, "digg_count": 1, "comment_count": 4, "hot_index": 12, "is_hot": 0, "rank_index": 0.00013074, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "2444950350861495", "user_name": "someCat", "company": "翼果(深圳)科技有限公司", "job_title": "产品&前端开发工程师", "avatar_large": "https://sf6-ttcdn-tos.pstatp.com/img/user-avatar/4a220ec3dee3924c48cf10b71123b42f~300x300.image", "level": 1, "description": "这个人很懒", "followee_count": 0, "follower_count": 0, "post_article_count": 1, "digg_article_count": 0, "got_digg_count": 1, "got_view_count": 147, "post_shortmsg_count": 0, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 2, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6906331373406322696, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6933236452612374541", "article_info": {"article_id": "6933236452612374541", "user_id": "1802854802139998", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "", "cover_image": "", "is_gfw": 0, "title": "Router路由-VUE-CLI脚手架-VUE..JS组件", "brief_content": "1. 父传子有哪些方式 2. 子传父有哪些方式 商品为子组件,购物车为父组件,父组件需要统计商品个数,就需要在子组件个数变化时传值给父组件。 3. 如何让 CSS 只在当前组件中起作用 4. keep-alive 的作用是什么 主要用于保留组件状态或避免组件重新渲染。 5. v…", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1614269971", "mtime": "1614312640", "rtime": "1614312640", "draft_id": "6933235835810938888", "view_count": 110, "collect_count": 0, "digg_count": 1, "comment_count": 2, "hot_index": 8, "is_hot": 0, "rank_index": 0.00013036, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "1802854802139998", "user_name": "往底层挖矿", "company": "贱人发展有限公司", "job_title": "贱黄湿", "avatar_large": "https://sf6-ttcdn-tos.pstatp.com/img/user-avatar/5ddd4bc95d08ba89c24ddf182251ec02~300x300.image", "level": 1, "description": "搞怪,幽默博主", "followee_count": 1, "follower_count": 1, "post_article_count": 15, "digg_article_count": 0, "got_digg_count": 5, "got_view_count": 844, "post_shortmsg_count": 0, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 13, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6933236452612374541, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844903551446024205", "article_info": {"article_id": "6844903551446024205", "user_id": "4336129589657245", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215, 6809640407484334093, 6809640406058270733, 6809640403516522504], "visible_level": 0, "link_url": "https://juejin.im/post/6844903551446024205", "cover_image": "", "is_gfw": 0, "title": "自己动手写一个 SimpleVue", "brief_content": "双向绑定是 MVVM 框架最核心之处,那么双向绑定的核心是什么呢?核心就是 Object.defineProperty 这个 API,关于这个 API 的具体内容,请移步 MDN - Object.defineProperty ,里面有更详细的说明。 监听者(Observer)…", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1516160165", "mtime": "1599393208", "rtime": "1516160165", "draft_id": "6845075367447773191", "view_count": 1040, "collect_count": 25, "digg_count": 42, "comment_count": 0, "hot_index": 94, "is_hot": 0, "rank_index": 0.0001303, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "4336129589657245", "user_name": "小烜同学", "company": "知道创宇", "job_title": "搬砖工程师", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/35270877735321ea14d176693c2c99af~300x300.image", "level": 3, "description": "天衣无缝的秘密是: 做技术,你开心吗?", "followee_count": 20, "follower_count": 487, "post_article_count": 23, "digg_article_count": 244, "got_digg_count": 1462, "got_view_count": 79358, "post_shortmsg_count": 0, "digg_shortmsg_count": 3, "isfollowed": false, "favorable_author": 0, "power": 2204, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}, {"id": 2546526, "tag_id": "6809640407484334093", "tag_name": "前端", "color": "#60ADFF", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/bac28828a49181c34110.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 1, "ctime": 1435971546, "mtime": 1631693180, "id_type": 9, "tag_alias": "", "post_article_count": 88830, "concern_user_count": 527705}, {"id": 2546525, "tag_id": "6809640406058270733", "tag_name": "设计", "color": "#F56868", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/f2e3a6fceb1a4f1ce6b6.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 1, "ctime": 1435971510, "mtime": 1631689661, "id_type": 9, "tag_alias": "", "post_article_count": 6064, "concern_user_count": 218547}, {"id": 2546523, "tag_id": "6809640403516522504", "tag_name": "API", "color": "#616161", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/a7c8e10a9c742fc175f9.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1435971240, "mtime": 1631683137, "id_type": 9, "tag_alias": "", "post_article_count": 7154, "concern_user_count": 88261}], "user_interact": {"id": 6844903551446024205, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844903981861306376", "article_info": {"article_id": "6844903981861306376", "user_id": "360295544138669", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "https://juejin.im/post/6844903981861306376", "cover_image": "", "is_gfw": 0, "title": "Vue3.0数据响应系统分析(主要针对于reactive)", "brief_content": "Vue3.0与Vue2.x的数据响应机制Vue3.0采用了ES6的Proxy来进行数据监听优点:缺点:源码导读在分析源码之前,我们需要得知几个变量先:首先我们来看一下reactive函数首先我们检测了", "is_english": 0, "is_original": 1, "user_index": 3.5350891055048, "original_type": 0, "original_author": "", "content": "", "ctime": "1572225338", "mtime": "1598531145", "rtime": "1572251543", "draft_id": "6845076510903107591", "view_count": 686, "collect_count": 2, "digg_count": 2, "comment_count": 0, "hot_index": 36, "is_hot": 0, "rank_index": 0.00013006, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "360295544138669", "user_name": "我在曾经眺望彼岸", "company": "网易", "job_title": "前端", "avatar_large": "https://sf1-ttcdn-tos.pstatp.com/img/user-avatar/02103eca95c16c56d25565ae3755e008~300x300.image", "level": 2, "description": "喜欢扶老奶奶过马路", "followee_count": 4, "follower_count": 53, "post_article_count": 26, "digg_article_count": 156, "got_digg_count": 250, "got_view_count": 22491, "post_shortmsg_count": 10, "digg_shortmsg_count": 2, "isfollowed": false, "favorable_author": 0, "power": 477, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6844903981861306376, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844903577224216590", "article_info": {"article_id": "6844903577224216590", "user_id": "1046390798826654", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215, 6809640407484334093, 6809640398105870343], "visible_level": 0, "link_url": "https://juejin.im/post/6844903577224216590", "cover_image": "", "is_gfw": 0, "title": "【翻译】Vue.js中的computed是如何工作的", "brief_content": "这篇文章,我们通过实现一个简单版的和Vue中computed具有相同功能的函数来了解computed是如何工作的。 尽管person.age看起来像是访问了对象的一个属性,但其实在内部我们是运行了一个函数。 有趣的是,25和‘Brazil’还是一个闭包内部的变量,只有当赋给它们…", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1521361901", "mtime": "1598449201", "rtime": "1521543036", "draft_id": "6845075395876749326", "view_count": 1182, "collect_count": 11, "digg_count": 28, "comment_count": 1, "hot_index": 88, "is_hot": 0, "rank_index": 0.00012979, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "1046390798826654", "user_name": "陈杰夫", "company": "", "job_title": "前端工程师", "avatar_large": "https://sf6-ttcdn-tos.pstatp.com/img/user-avatar/9fe7e7130263b378326a81fe97f181bc~300x300.image", "level": 3, "description": "rua", "followee_count": 14, "follower_count": 55, "post_article_count": 18, "digg_article_count": 182, "got_digg_count": 801, "got_view_count": 33910, "post_shortmsg_count": 1, "digg_shortmsg_count": 5, "isfollowed": false, "favorable_author": 0, "power": 1139, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}, {"id": 2546526, "tag_id": "6809640407484334093", "tag_name": "前端", "color": "#60ADFF", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/bac28828a49181c34110.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 1, "ctime": 1435971546, "mtime": 1631693180, "id_type": 9, "tag_alias": "", "post_article_count": 88830, "concern_user_count": 527705}, {"id": 2546519, "tag_id": "6809640398105870343", "tag_name": "JavaScript", "color": "#616161", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/5d70fd6af940df373834.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1435884803, "mtime": 1631693185, "id_type": 9, "tag_alias": "", "post_article_count": 67405, "concern_user_count": 398957}], "user_interact": {"id": 6844903577224216590, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844904131451158536", "article_info": {"article_id": "6844904131451158536", "user_id": "817692384193470", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "https://juejin.im/post/6844904131451158536", "cover_image": "", "is_gfw": 0, "title": "40.vue全解(起手式1)", "brief_content": "详细:组件不仅仅是要把模板的内容进行复用,更重要的是组件之间的通信,通常父组件的模板中包含子组件,父组件要向子组件传递数据或者参数,子组件接收到后根据参数的不同来进行对应的渲染。数据传递在vue组件可以通过props来实现", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1587272824", "mtime": "1598930979", "rtime": "1587273314", "draft_id": "6845076736011567118", "view_count": 550, "collect_count": 0, "digg_count": 0, "comment_count": 0, "hot_index": 27, "is_hot": 0, "rank_index": 0.00012974, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "817692384193470", "user_name": "y_cj", "company": "", "job_title": "前端菜鸡", "avatar_large": "https://sf1-ttcdn-tos.pstatp.com/img/user-avatar/770876b6b8b0d86c9191b222b1127241~300x300.image", "level": 2, "description": "", "followee_count": 2, "follower_count": 7, "post_article_count": 100, "digg_article_count": 3, "got_digg_count": 23, "got_view_count": 18104, "post_shortmsg_count": 1, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 204, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6844904131451158536, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6972548822287417352", "article_info": {"article_id": "6972548822287417352", "user_id": "4212984289165853", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215, 6809640407484334093], "visible_level": 0, "link_url": "", "cover_image": "https://p1-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/94a7bc5c76ef414eacd9cf8229b49984~tplv-k3u1fbpfcp-watermark.image", "is_gfw": 0, "title": "【Vue2.x 源码学习】第九篇 - 对象数据变化的观测情况", "brief_content": "【Vue2.x 源码学习】第九篇-对象数据变化的观测情况;实现了对象老属性值变更为对象、数组时的深层观测处理;结合实现原理,说明了对象新增属性不能被观测的原因,及如何实现数据观测;", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1623423288", "mtime": "1623473748", "rtime": "1623473748", "draft_id": "6972546541575733261", "view_count": 71, "collect_count": 0, "digg_count": 0, "comment_count": 0, "hot_index": 3, "is_hot": 0, "rank_index": 0.00012972, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "4212984289165853", "user_name": "BraveWang", "company": "", "job_title": "", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/e9463d73674c4d6bc05a763a63cb0dd3~300x300.image", "level": 2, "description": "源码学习中...组件库建设中...", "followee_count": 96, "follower_count": 22, "post_article_count": 81, "digg_article_count": 31, "got_digg_count": 44, "got_view_count": 5822, "post_shortmsg_count": 11, "digg_shortmsg_count": 10, "isfollowed": false, "favorable_author": 0, "power": 102, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}, {"id": 2546526, "tag_id": "6809640407484334093", "tag_name": "前端", "color": "#60ADFF", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/bac28828a49181c34110.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 1, "ctime": 1435971546, "mtime": 1631693180, "id_type": 9, "tag_alias": "", "post_article_count": 88830, "concern_user_count": 527705}], "user_interact": {"id": 6972548822287417352, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6936708299786190856", "article_info": {"article_id": "6936708299786190856", "user_id": "4037062427423959", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "", "cover_image": "", "is_gfw": 0, "title": "SpringBoot+Vue豆宝社区前后端分离项目手把手实战系列教程14---帖子列表分页功能实现", "brief_content": "本项目实战教程配有免费视频教程,配套代码完全开源。手把手从零开始搭建一个目前应用最广泛的Springboot+Vue前后端分离多用户社区项目。本项目难度适中,为便于大家学习,每一集视频教程对应在Github上的每一次提交。", "is_english": 0, "is_original": 1, "user_index": 2.49491483101579, "original_type": 0, "original_author": "", "content": "", "ctime": "1615079777", "mtime": "1615097715", "rtime": "1615097715", "draft_id": "6929764264882208782", "view_count": 88, "collect_count": 2, "digg_count": 1, "comment_count": 0, "hot_index": 5, "is_hot": 0, "rank_index": 0.00012971, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "4037062427423959", "user_name": "豆约翰", "company": "home", "job_title": "teacher", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/b811e305c83f965db8afd6481284725e~300x300.image", "level": 2, "description": "宠辱不惊,看庭前花开花落;去留无意,望天上云卷云舒", "followee_count": 20, "follower_count": 39, "post_article_count": 64, "digg_article_count": 85, "got_digg_count": 76, "got_view_count": 12715, "post_shortmsg_count": 4, "digg_shortmsg_count": 1, "isfollowed": false, "favorable_author": 0, "power": 201, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6936708299786190856, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6947969424921985031", "article_info": {"article_id": "6947969424921985031", "user_id": "3641226811678344", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "", "cover_image": "", "is_gfw": 0, "title": "el-tree数据回显,解决子级部分选中,父级全选效果", "brief_content": "在el-tree中回显数据有一个很明显的坑,只要回显的数据里有父级的id,不管当前父级下的子级是部分选中还是全选,父级的check效果都是全选。 我看了很多博主说用setChecked循环便利来勾选叶子节点,但我试了半天也没用,setTimeout,nextTick都试了,无效…", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1617700214", "mtime": "1617701858", "rtime": "1617701858", "draft_id": "6947969148676898824", "view_count": 92, "collect_count": 0, "digg_count": 2, "comment_count": 0, "hot_index": 6, "is_hot": 0, "rank_index": 0.00012969, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "3641226811678344", "user_name": "阿狸要吃吃的", "company": "", "job_title": "前端工程师", "avatar_large": "https://sf1-ttcdn-tos.pstatp.com/img/user-avatar/a61c139e97180974032d075c5316cfc2~300x300.image", "level": 1, "description": "vue 小程序 js,react", "followee_count": 5, "follower_count": 6, "post_article_count": 6, "digg_article_count": 12, "got_digg_count": 18, "got_view_count": 1454, "post_shortmsg_count": 0, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 32, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6947969424921985031, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6972757240411324447", "article_info": {"article_id": "6972757240411324447", "user_id": "2023559627027486", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215, 6809640407484334093], "visible_level": 0, "link_url": "", "cover_image": "", "is_gfw": 0, "title": "Vue CLI", "brief_content": "初识Vue CLI Vue CLI的使用 Vue CLI 2 创建项目 项目目录结构 runtimecompiler和runtimeonly的区别 同时新建两个项目,左为配置了runtimecompi", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1623471877", "mtime": "1623476554", "rtime": "1623475855", "draft_id": "6972052080076783629", "view_count": 74, "collect_count": 0, "digg_count": 0, "comment_count": 0, "hot_index": 3, "is_hot": 0, "rank_index": 0.00012955, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "2023559627027486", "user_name": "Mengxixi", "company": "", "job_title": "", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/6d18b638e352dd76965224698ccc5f61~300x300.image", "level": 1, "description": "天天摸鱼学前端的烟酒生", "followee_count": 3, "follower_count": 2, "post_article_count": 5, "digg_article_count": 8, "got_digg_count": 8, "got_view_count": 735, "post_shortmsg_count": 0, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 15, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}, {"id": 2546526, "tag_id": "6809640407484334093", "tag_name": "前端", "color": "#60ADFF", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/bac28828a49181c34110.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 1, "ctime": 1435971546, "mtime": 1631693180, "id_type": 9, "tag_alias": "", "post_article_count": 88830, "concern_user_count": 527705}], "user_interact": {"id": 6972757240411324447, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6892582663765377038", "article_info": {"article_id": "6892582663765377038", "user_id": "2348212570302903", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "", "cover_image": "", "is_gfw": 0, "title": "Vue数据响应式、模板解析的实现原理(实现一个简易的Vue)", "brief_content": "在上方,我们把模板解析和数据响应分成2个方法去执行。但是他们相互之间没有关联的话怎么去实现页面和数据同步呢?", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1604804695", "mtime": "1604825041", "rtime": "1604805933", "draft_id": "6892581897105473543", "view_count": 204, "collect_count": 0, "digg_count": 2, "comment_count": 2, "hot_index": 14, "is_hot": 0, "rank_index": 0.00012928, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "2348212570302903", "user_name": "Rawsdom", "company": "", "job_title": "前端", "avatar_large": "https://sf6-ttcdn-tos.pstatp.com/img/user-avatar/e32461f34ee41a879c2cb35ba30afbe0~300x300.image", "level": 1, "description": "", "followee_count": 6, "follower_count": 3, "post_article_count": 5, "digg_article_count": 2, "got_digg_count": 12, "got_view_count": 2254, "post_shortmsg_count": 0, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 34, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6892582663765377038, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844903640432377869", "article_info": {"article_id": "6844903640432377869", "user_id": "1239904844807000", "category_id": "6809637767543259144", "tag_ids": [6809640357354012685, 6809640369764958215, 6809640407484334093, 6809640782329282567], "visible_level": 0, "link_url": "https://juejin.im/post/6844903640432377869", "cover_image": "", "is_gfw": 0, "title": "vue、react隐式实例化", "brief_content": "这是一篇几个月前写的文章,那时候理解也不是特别透彻,但还是搬运到掘金发一遍。 可以看的出来,我们的需求是想有一个组件能像html原生的alert一样,在需要的地方能够直接去调用,而不是需要把message组件写进节点中。 react相当明显地创建了一个class,vue表面上好…", "is_english": 0, "is_original": 1, "user_index": 0, "original_type": 0, "original_author": "", "content": "", "ctime": "1531928938", "mtime": "1598460085", "rtime": "1531991639", "draft_id": "6845075577213288461", "view_count": 1006, "collect_count": 17, "digg_count": 27, "comment_count": 0, "hot_index": 77, "is_hot": 0, "rank_index": 0.00012923, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "1239904844807000", "user_name": "limengke123", "company": "", "job_title": "", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/df4206fcdb9ed6905637d98f855b1633~300x300.image", "level": 2, "description": "", "followee_count": 40, "follower_count": 23, "post_article_count": 6, "digg_article_count": 177, "got_digg_count": 250, "got_view_count": 20614, "post_shortmsg_count": 0, "digg_shortmsg_count": 14, "isfollowed": false, "favorable_author": 0, "power": 456, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546490, "tag_id": "6809640357354012685", "tag_name": "React.js", "color": "#61DAFB", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/f655215074250f10f8d4.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234367, "mtime": 1631692935, "id_type": 9, "tag_alias": "", "post_article_count": 16999, "concern_user_count": 226420}, {"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}, {"id": 2546526, "tag_id": "6809640407484334093", "tag_name": "前端", "color": "#60ADFF", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/bac28828a49181c34110.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 1, "ctime": 1435971546, "mtime": 1631693180, "id_type": 9, "tag_alias": "", "post_article_count": 88830, "concern_user_count": 527705}, {"id": 2546798, "tag_id": "6809640782329282567", "tag_name": "iView", "color": "#000000", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/dfe71e5ba82cc7796831.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1488860237, "mtime": 1631432548, "id_type": 9, "tag_alias": "", "post_article_count": 222, "concern_user_count": 6788}], "user_interact": {"id": 6844903640432377869, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6854573213632954375", "article_info": {"article_id": "6854573213632954375", "user_id": "1679709498776280", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "https://juejin.im/post/6854573213632954375", "cover_image": "", "is_gfw": 0, "title": "Vue.js源码学习(完结)-虚拟DOM的patch算法", "brief_content": "虚拟DOM将虚拟节点vnode和旧虚拟节点oldVnode进行对比,得出真正需要更新的节点进行DOM操作。对两个虚拟节点进行对比是虚拟DOM中最核心的算法(即patch)。 由于Vue.js的变化侦测粒度很细,一定程度上可以知道哪些状态发生了变化,所以可以通过细粒度的绑定来更新…", "is_english": 0, "is_original": 1, "user_index": 2.7095112913515, "original_type": 0, "original_author": "", "content": "", "ctime": "1595317609", "mtime": "1599099496", "rtime": "1595336975", "draft_id": "6854812664749621256", "view_count": 330, "collect_count": 3, "digg_count": 2, "comment_count": 0, "hot_index": 18, "is_hot": 0, "rank_index": 0.00012914, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "1679709498776280", "user_name": "小宝的挖机", "company": "", "job_title": "前端工程师", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/638ceb6a3b77b2eef933e1c2996c07d2~300x300.image", "level": 2, "description": "", "followee_count": 7, "follower_count": 44, "post_article_count": 85, "digg_article_count": 151, "got_digg_count": 301, "got_view_count": 28172, "post_shortmsg_count": 0, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 585, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6854573213632954375, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844903968791855111", "article_info": {"article_id": "6844903968791855111", "user_id": "3333374985120024", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215], "visible_level": 0, "link_url": "https://juejin.im/post/6844903968791855111", "cover_image": "", "is_gfw": 0, "title": "Electron + Vue + Vscode构建跨平台应用(四)利用Electron-Vue构建Vue应用详解", "brief_content": "主进程是指: 运行 package.json 里面 main 脚本的进程成为主进程。 渲染进程是指: 每个 electron 的页面都运行着自己的进程,称为渲染进程。 通俗的来讲主进程也就是 npm run start 出来的窗口,而窗口里面的内容,即是渲染进程。", "is_english": 0, "is_original": 1, "user_index": 0.003086090311724, "original_type": 0, "original_author": "", "content": "", "ctime": "1571301659", "mtime": "1600155867", "rtime": "1571324971", "draft_id": "6845076499368771597", "view_count": 804, "collect_count": 3, "digg_count": 0, "comment_count": 0, "hot_index": 40, "is_hot": 0, "rank_index": 0.00012898, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "3333374985120024", "user_name": "Yi_Master", "company": "", "job_title": "", "avatar_large": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/gold-user-assets/2018/1/15/160f773f2b9fda91~tplv-t2oaga2asx-image.image", "level": 2, "description": "移动端App,Framework,JS等", "followee_count": 0, "follower_count": 24, "post_article_count": 14, "digg_article_count": 0, "got_digg_count": 12, "got_view_count": 14458, "post_shortmsg_count": 0, "digg_shortmsg_count": 0, "isfollowed": false, "favorable_author": 0, "power": 156, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}], "user_interact": {"id": 6844903968791855111, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}, {"article_id": "6844903518344577037", "article_info": {"article_id": "6844903518344577037", "user_id": "3474112473989448", "category_id": "6809637767543259144", "tag_ids": [6809640369764958215, 6809640407484334093, 6809640398105870343, 6809640403516522504], "visible_level": 0, "link_url": "https://github.com/berwin/Blog/issues/17", "cover_image": "", "is_gfw": 0, "title": "深入浅出 - vue变化侦测原理", "brief_content": "一年前我写过一篇 响应式原理的文章,但是最近我发现讲的内容和我现在心里想的有些不太一样,所以我重写一篇文章。 我的目标是能让读者读完我写的文章能学到知识,有一部分文章标题都以深入浅出开头,目的是把一个复杂的东西排除掉干扰学习的因素后剩下的核心原理通过很简单的描述来让读者学习到知识", "is_english": 0, "is_original": 0, "user_index": 0, "original_type": 1, "original_author": "", "content": "", "ctime": "1512356773", "mtime": "1598438903", "rtime": "1512356773", "draft_id": "0", "view_count": 719, "collect_count": 24, "digg_count": 51, "comment_count": 11, "hot_index": 97, "is_hot": 0, "rank_index": 0.00012888, "status": 2, "verify_status": 1, "audit_status": 2, "mark_content": ""}, "author_user_info": {"user_id": "3474112473989448", "user_name": "玖五", "company": "阿里巴巴 - 淘系技术部", "job_title": "前端技术专家", "avatar_large": "https://sf3-ttcdn-tos.pstatp.com/img/user-avatar/6e896367e48d45745bc6362bf2ec6cdb~300x300.image", "level": 3, "description": "畅销书《深入浅出Vue.js》作者,负责阿里淘系大促(天猫双11,618等)主会场终端渲染架构,双11大促消防员👨🚒、中专毕业、95后。\n\n爱好:写作、金融,飙各种交通工具(🚴♀️🛴🏍️🚗),滑雪🏂\n\nGitHub:github.com/berwin", "followee_count": 30, "follower_count": 2934, "post_article_count": 27, "digg_article_count": 41, "got_digg_count": 1624, "got_view_count": 71863, "post_shortmsg_count": 10, "digg_shortmsg_count": 23, "isfollowed": false, "favorable_author": 0, "power": 2099, "study_point": 0, "university": {"university_id": "0", "name": "", "logo": ""}, "major": {"major_id": "0", "parent_id": "0", "name": ""}, "student_status": 0, "select_event_count": 0, "select_online_course_count": 0, "identity": 0, "is_select_annual": false, "select_annual_rank": 0, "annual_list_type": 0, "extraMap": {}, "is_logout": 0}, "category": {"category_id": "6809637767543259144", "category_name": "前端", "category_url": "frontend", "rank": 2, "back_ground": "https://lc-mhke0kuv.cn-n1.lcfile.com/8c95587526f346c0.png", "icon": "https://lc-mhke0kuv.cn-n1.lcfile.com/1c40f5eaba561e32.png", "ctime": 1457483942, "mtime": 1432503190, "show_type": 3, "item_type": 2, "promote_tag_cap": 4, "promote_priority": 2}, "tags": [{"id": 2546498, "tag_id": "6809640369764958215", "tag_name": "Vue.js", "color": "#41B883", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/7b5c3eb591b671749fee.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1432234520, "mtime": 1631693194, "id_type": 9, "tag_alias": "", "post_article_count": 31257, "concern_user_count": 313520}, {"id": 2546526, "tag_id": "6809640407484334093", "tag_name": "前端", "color": "#60ADFF", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/bac28828a49181c34110.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 1, "ctime": 1435971546, "mtime": 1631693180, "id_type": 9, "tag_alias": "", "post_article_count": 88830, "concern_user_count": 527705}, {"id": 2546519, "tag_id": "6809640398105870343", "tag_name": "JavaScript", "color": "#616161", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/5d70fd6af940df373834.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1435884803, "mtime": 1631693185, "id_type": 9, "tag_alias": "", "post_article_count": 67405, "concern_user_count": 398957}, {"id": 2546523, "tag_id": "6809640403516522504", "tag_name": "API", "color": "#616161", "icon": "https://p1-jj.byteimg.com/tos-cn-i-t2oaga2asx/leancloud-assets/a7c8e10a9c742fc175f9.png~tplv-t2oaga2asx-image.image", "back_ground": "", "show_navi": 0, "ctime": 1435971240, "mtime": 1631683137, "id_type": 9, "tag_alias": "", "post_article_count": 7154, "concern_user_count": 88261}], "user_interact": {"id": 6844903518344577037, "omitempty": 2, "user_id": 0, "is_digg": false, "is_follow": false, "is_collect": false}, "org": {"org_info": null, "org_user": null, "is_followed": false}, "req_id": "20210915160833010212150040390020BC"}], "cursor": "eyJ2IjoiNzAwNzI1MjQ2NDcyNjQ1ODM5OSIsImkiOjEwMjYwfQ==", "count": 11060, "has_more": true} | [
"www.1759633997@qq.com"
] | www.1759633997@qq.com |
021e27b295aa1e58939d5d8842368e01465b03e1 | 746cd1d11eb24e7a987f2dd4db72b68a8e0a5b99 | /android/external/adt-infra/build/scripts/slave/recipes/skia/skia.py | c27eafe8ca462660a9cd4d803724a7fb4193b04d | [] | no_license | binsys/BPI-A64-Android7 | c49b0d3a785811b7f0376c92100c6ac102bb858c | 751fadaacec8493a10461661d80fce4b775dd5de | refs/heads/master | 2023-05-11T18:31:41.505222 | 2020-07-13T04:27:40 | 2020-07-13T12:08:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 11,098 | py | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Recipe module for Skia builders.
DEPS = [
'json',
'path',
'platform',
'properties',
'raw_io',
'skia',
]
TEST_BUILDERS = {
'client.skia': {
'skiabot-ipad4-000': [
'Test-iOS-Clang-iPad4-GPU-SGX554-Arm7-Debug',
],
'skiabot-linux-tester-000': [
'Test-Ubuntu-GCC-GCE-CPU-AVX2-x86_64-Debug',
'Test-Ubuntu-GCC-GCE-CPU-AVX2-x86_64-Release-TSAN',
'Test-Ubuntu-Clang-GCE-CPU-AVX2-x86_64-Coverage-Trybot',
],
'skiabot-shuttle-ubuntu12-003': [
'Test-ChromeOS-GCC-Link-CPU-AVX-x86_64-Debug',
],
'skiabot-shuttle-ubuntu12-gtx550ti-001': [
'Test-Ubuntu-GCC-ShuttleA-GPU-GTX550Ti-x86_64-Release-Valgrind',
'Test-Ubuntu-GCC-ShuttleA-GPU-GTX550Ti-x86_64-Debug-ZeroGPUCache',
],
'skiabot-shuttle-win8-i7-4790k-001': [
'Perf-Win8-MSVC-ShuttleB-GPU-HD4600-x86_64-Release-Trybot',
],
},
'client.skia.android': {
'skiabot-shuttle-ubuntu12-nexus7-001': [
'Perf-Android-GCC-Nexus7-GPU-Tegra3-Arm7-Release',
'Test-Android-GCC-Nexus7-GPU-Tegra3-Arm7-Debug',
],
},
'client.skia.compile': {
'skiabot-mac-10_8-compile-001': [
'Build-Mac10.8-Clang-Arm7-Debug-Android',
],
'skiabot-linux-compile-000': [
'Build-Ubuntu-GCC-Arm7-Debug-Android',
'Build-Ubuntu-GCC-x86_64-Release-CMake',
],
},
'client.skia.fyi': {
'skiabot-linux-housekeeper-003': [
'Housekeeper-PerCommit',
'Housekeeper-PerCommit-Trybot',
'Perf-Android-GCC-Nexus5-CPU-NEON-Arm7-Release-Appurify',
'Perf-Android-GCC-Nexus5-GPU-Adreno330-Arm7-Release-Appurify',
],
},
}
def RunSteps(api):
api.skia.gen_steps()
def GenTests(api):
def AndroidTestData(builder):
test_data = (
api.step_data(
'get EXTERNAL_STORAGE dir',
stdout=api.raw_io.output('/storage/emulated/legacy')) +
api.step_data(
'read SKP_VERSION',
stdout=api.raw_io.output('42')) +
api.step_data(
'read SKIMAGE_VERSION',
stdout=api.raw_io.output('42'))
)
if 'Test' in builder:
test_data += (
api.step_data(
'exists skia_dm',
stdout=api.raw_io.output(''))
)
if 'Perf' in builder:
test_data += api.step_data(
'exists skia_perf',
stdout=api.raw_io.output(''))
return test_data
for mastername, slaves in TEST_BUILDERS.iteritems():
for slavename, builders_by_slave in slaves.iteritems():
for builder in builders_by_slave:
test = (
api.test(builder) +
api.properties(buildername=builder,
mastername=mastername,
slavename=slavename,
buildnumber=5,
revision='abc123') +
api.path.exists(
api.path['slave_build'].join('skia'),
api.path['slave_build'].join('tmp', 'uninteresting_hashes.txt')
)
)
if 'Test' in builder or 'Perf' in builder:
test += api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42'))
if 'Android' in builder:
ccache = '/usr/bin/ccache' if 'Appurify' in builder else None
test += api.step_data('has ccache?',
stdout=api.json.output({'ccache':ccache}))
if ('Android' in builder and
('Test' in builder or 'Perf' in builder) and
not 'Appurify' in builder):
test += AndroidTestData(builder)
if 'ChromeOS' in builder:
test += api.step_data('read SKP_VERSION',
stdout=api.raw_io.output('42'))
test += api.step_data('read SKIMAGE_VERSION',
stdout=api.raw_io.output('42'))
if 'Trybot' in builder:
test += api.properties(issue=500,
patchset=1,
rietveld='https://codereview.chromium.org')
if 'Win' in builder:
test += api.platform('win', 64)
yield test
builder = 'Test-Ubuntu-GCC-GCE-CPU-AVX2-x86_64-Debug'
yield (
api.test('failed_dm') +
api.properties(buildername=builder,
mastername='client.skia',
slavename='skiabot-linux-tester-000',
buildnumber=6) +
api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42')) +
api.step_data('dm', retcode=1)
)
yield (
api.test('has_ccache_android') +
api.properties(buildername='Build-Ubuntu-GCC-Arm7-Debug-Android',
mastername='client.skia.compile',
slavename='skiabot-linux-compile-000') +
api.step_data(
'has ccache?',
stdout=api.json.output({'ccache':'/usr/bin/ccache'}))
)
builder = 'Test-Android-GCC-Nexus7-GPU-Tegra3-Arm7-Debug'
master = 'client.skia.android'
slave = 'skiabot-shuttle-ubuntu12-nexus7-001'
yield (
api.test('failed_get_hashes') +
api.properties(buildername=builder,
mastername=master,
slavename=slave,
buildnumber=6,
revision='abc123') +
api.step_data(
'has ccache?',
stdout=api.json.output({'ccache':None})) +
AndroidTestData(builder) +
api.step_data('read SKP_VERSION',
stdout=api.raw_io.output('42')) +
api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42')) +
api.step_data('read SKIMAGE_VERSION',
stdout=api.raw_io.output('42')) +
api.step_data('get uninteresting hashes', retcode=1) +
api.path.exists(
api.path['slave_build'].join('skia'),
api.path['slave_build'].join('tmp', 'uninteresting_hashes.txt')
)
)
yield (
api.test('download_and_push_skps') +
api.properties(buildername=builder,
mastername=master,
slavename=slave,
buildnumber=6,
revision='abc123',
test_downloaded_skp_version='2') +
api.step_data(
'has ccache?',
stdout=api.json.output({'ccache':None})) +
AndroidTestData(builder) +
api.step_data('read SKP_VERSION',
stdout=api.raw_io.output('2')) +
api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42')) +
api.step_data('read SKIMAGE_VERSION',
stdout=api.raw_io.output('42')) +
api.step_data(
'exists skps',
stdout=api.raw_io.output('')) +
api.path.exists(
api.path['slave_build'].join('skia'),
api.path['slave_build'].join('tmp', 'uninteresting_hashes.txt')
)
)
yield (
api.test('missing_SKP_VERSION_device') +
api.properties(buildername=builder,
mastername=master,
slavename=slave,
buildnumber=6,
revision='abc123') +
api.step_data(
'has ccache?',
stdout=api.json.output({'ccache':None})) +
AndroidTestData(builder) +
api.step_data('read SKP_VERSION',
retcode=1) +
api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42')) +
api.step_data('read SKIMAGE_VERSION',
stdout=api.raw_io.output('42')) +
api.step_data(
'exists skps',
stdout=api.raw_io.output('')) +
api.path.exists(
api.path['slave_build'].join('skia'),
api.path['slave_build'].join('tmp', 'uninteresting_hashes.txt')
)
)
yield (
api.test('download_and_push_skimage') +
api.properties(buildername=builder,
mastername=master,
slavename=slave,
buildnumber=6,
revision='abc123',
test_downloaded_skimage_version='2') +
api.step_data(
'has ccache?',
stdout=api.json.output({'ccache':None})) +
AndroidTestData(builder) +
api.step_data('read SKP_VERSION',
stdout=api.raw_io.output('42')) +
api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42')) +
api.step_data('read SKIMAGE_VERSION',
stdout=api.raw_io.output('2')) +
api.step_data(
'exists skia_images',
stdout=api.raw_io.output('')) +
api.path.exists(
api.path['slave_build'].join('skia'),
api.path['slave_build'].join('tmp', 'uninteresting_hashes.txt')
)
)
yield (
api.test('missing_SKIMAGE_VERSION_device') +
api.properties(buildername=builder,
mastername=master,
slavename=slave,
buildnumber=6,
revision='abc123') +
api.step_data(
'has ccache?',
stdout=api.json.output({'ccache':None})) +
AndroidTestData(builder) +
api.step_data('read SKP_VERSION',
stdout=api.raw_io.output('42')) +
api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42')) +
api.step_data('read SKIMAGE_VERSION',
retcode=1) +
api.step_data(
'exists skia_images',
stdout=api.raw_io.output('')) +
api.path.exists(
api.path['slave_build'].join('skia'),
api.path['slave_build'].join('tmp', 'uninteresting_hashes.txt')
)
)
builder = 'Test-Ubuntu-GCC-GCE-CPU-AVX2-x86_64-Debug'
master = 'client.skia'
slave = 'skiabot-linux-test-000'
yield (
api.test('missing_SKP_VERSION_host') +
api.properties(buildername=builder,
mastername=master,
slavename=slave,
buildnumber=6,
revision='abc123') +
api.step_data('Get downloaded SKP_VERSION', retcode=1) +
api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42')) +
api.path.exists(
api.path['slave_build'].join('skia'),
api.path['slave_build'].join('tmp', 'uninteresting_hashes.txt')
)
)
yield (
api.test('missing_SKIMAGE_VERSION_host') +
api.properties(buildername=builder,
mastername=master,
slavename=slave,
buildnumber=6,
revision='abc123') +
api.step_data('gsutil cat TIMESTAMP_LAST_UPLOAD_COMPLETED',
stdout=api.raw_io.output('42')) +
api.step_data('Get downloaded SKIMAGE_VERSION', retcode=1) +
api.path.exists(
api.path['slave_build'].join('skia'),
api.path['slave_build'].join('tmp', 'uninteresting_hashes.txt')
)
)
| [
"mingxin.android@gmail.com"
] | mingxin.android@gmail.com |
6d8b0b5732d62a9cf3767687b8e8763bebd74f0c | 0c154919a427bad71598bae69a8b5b6b94bc2627 | /Posts/migrations/0008_post_vote.py | 1f9c7557cad3250dbbcd9af07789b7d0075280aa | [] | no_license | MichalDrosio/blog | b7b56ffd35ef82d2a5c3c791eff8d326644f5627 | ebb7c07f29ea9de1f872bfeea79ae6d0c4822766 | refs/heads/master | 2020-12-22T13:02:54.573249 | 2020-07-27T11:25:16 | 2020-07-27T11:25:16 | 236,775,255 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 385 | py | # Generated by Django 3.0.8 on 2020-07-24 10:07
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('Posts', '0007_auto_20200723_1357'),
]
operations = [
migrations.AddField(
model_name='post',
name='vote',
field=models.PositiveIntegerField(default=0),
),
]
| [
"drosio.michal@gmail.com"
] | drosio.michal@gmail.com |
0f2b6251b864a2798282730b3fa7e054c2bdbc3b | 795d7860d6c58b06cd935ce858594af14f8b3d79 | /django/analysis/views/__init__.py | aa24512d5af029858e00e8c12f98f88d8a096031 | [] | no_license | sbhr/narou | d18e81af74bb4c83a99f8deb2f318750824c50f2 | 2601b6fb3bc49aebd5455d7c85ebf58a853829b9 | refs/heads/master | 2021-03-27T12:31:11.022019 | 2017-10-01T07:01:21 | 2017-10-01T07:01:21 | 54,224,432 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 142 | py | import index
import about
import ranking
import RankingList
import search_letter
import search_title
import detail_letter
import detail_title
| [
"sbhr.123@gmail.com"
] | sbhr.123@gmail.com |
629bb2ace85054f0e0e1bf16e25f739810db624e | 0e834094f5e4274b279939b81caedec7d8ef2c73 | /m3/django_/mysite3/book/migrations/0003_auto_20200115_1745.py | e90a76a3af2a4b4753b0bf2a493db5ffb36cf7a9 | [] | no_license | SpringSnowB/All-file | b74eaebe1d54e1410945eaca62c70277a01ef0bf | 03485c60e7c07352aee621df94455da3d466b872 | refs/heads/master | 2020-11-27T23:54:36.984555 | 2020-01-21T08:42:21 | 2020-01-21T08:42:21 | 229,651,737 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,176 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11.8 on 2020-01-15 09:45
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('book', '0002_auto_20200115_1603'),
]
operations = [
migrations.CreateModel(
name='Author',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=20, verbose_name='姓名')),
('age', models.IntegerField(default=1, verbose_name='年龄')),
('email', models.EmailField(max_length=254, null=True, verbose_name='邮箱')),
],
),
migrations.AddField(
model_name='book',
name='market_price',
field=models.DecimalField(decimal_places=2, default=0, max_digits=7, verbose_name='图书零售价'),
),
migrations.AddField(
model_name='book',
name='pub',
field=models.CharField(default='', max_length=30, verbose_name='出版社'),
),
]
| [
"tszxwsb@163.com"
] | tszxwsb@163.com |
37b2b85da6f95fe407e802ee89f5b1e8d5f1697d | 4e068621a6103ff66b74c60a66086877f26b8021 | /personal_portfolio/settings.py | 4ffd76d84e947e36cbade636d018841f79fe90e6 | [] | no_license | rohit-1305/Dajngo-Personal_Portfolio | 955cc9803c6330e2253a55bf8bce3c6d86b4a004 | 2ad60ba2eab45b8c09d48a2f75c242b27fd7dfe1 | refs/heads/master | 2022-12-31T02:04:57.313260 | 2020-10-20T06:02:29 | 2020-10-20T06:02:29 | 283,790,669 | 1 | 2 | null | 2020-10-20T06:02:30 | 2020-07-30T14:00:15 | Python | UTF-8 | Python | false | false | 3,274 | py | """
Django settings for personal_portfolio project.
Generated by 'django-admin startproject' using Django 3.0.8.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'h30w(0w*3$sevjy5-%p@23m#b)(s@ctt87q-^94&5dek2jvul='
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'blog',
'portfolio',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'personal_portfolio.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'personal_portfolio.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
# this is the place where the images will be stored
MEDIA_ROOT = os.path.join(BASE_DIR, 'media') | [
"rohitsav1305@gmail.com"
] | rohitsav1305@gmail.com |
23b3a98920da0a1ec1e67bcbc3edf4adf756c662 | e36b3e56576c398aabf7dfe5a82b9f4950eb707a | /UI/Graduation/inputNewResult.py | 18d6e731c1bbf8bf19696edb807a96fb244c604f | [
"Apache-2.0"
] | permissive | Biggig/xlnet | 7e69f94c3de9a7d50b8b019d99528fac18b440b4 | 8f69d95f92c5b0bbc26920d8d6f62f2362a5738c | refs/heads/master | 2021-02-07T05:07:20.422637 | 2020-04-08T17:43:44 | 2020-04-08T17:43:44 | 243,985,965 | 0 | 0 | Apache-2.0 | 2020-02-29T14:45:06 | 2020-02-29T14:45:06 | null | UTF-8 | Python | false | false | 895 | py | import json
import django
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Graduation.settings")
django.setup()
def main():
from ReadingTest.models import Reading, Q_A
data_path = os.getcwd() + '\predict_result_new\\'
all_path = os.walk(data_path)
for path, dir_list, file_list in all_path:
for file_name in file_list:
cur_path = os.path.join(path, file_name)
with open(cur_path, 'r') as f:
data = json.load(f)
answer = data['answer']
question = data['question']
id_ = data['id']
r = Reading.objects.get(id=id_)
q = Q_A.objects.filter(reading=r, question=question)
for ques in q:
ques.new_prediction = answer
ques.save()
if __name__ == "__main__":
main()
print('Done!')
| [
"756983026@qq.com"
] | 756983026@qq.com |
0f2d47b13afc8cda18a4ff436ac94aa01541396e | 111bb49dd6b335940b557d7ca77f4776baf719c1 | /bin/python-config | 4405b857a5d58dede91b54d337bd769e5acf5e1e | [] | no_license | venigreat/Tests | f8e942b95c1a575c5ac1ff8d18d2ef9a425dd5d5 | aadb8f855f1b0533a681060615510bbf3a53f5ae | refs/heads/master | 2020-03-19T11:32:56.396082 | 2018-06-09T13:27:51 | 2018-06-09T13:27:51 | 136,461,122 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,366 | #!/Users/andrey.belyaev/PycharmProjects/Apium/venv/bin/python
import sys
import getopt
import sysconfig
valid_opts = ['prefix', 'exec-prefix', 'includes', 'libs', 'cflags',
'ldflags', 'help']
if sys.version_info >= (3, 2):
valid_opts.insert(-1, 'extension-suffix')
valid_opts.append('abiflags')
if sys.version_info >= (3, 3):
valid_opts.append('configdir')
def exit_with_usage(code=1):
sys.stderr.write("Usage: {0} [{1}]\n".format(
sys.argv[0], '|'.join('--'+opt for opt in valid_opts)))
sys.exit(code)
try:
opts, args = getopt.getopt(sys.argv[1:], '', valid_opts)
except getopt.error:
exit_with_usage()
if not opts:
exit_with_usage()
pyver = sysconfig.get_config_var('VERSION')
getvar = sysconfig.get_config_var
opt_flags = [flag for (flag, val) in opts]
if '--help' in opt_flags:
exit_with_usage(code=0)
for opt in opt_flags:
if opt == '--prefix':
print(sysconfig.get_config_var('prefix'))
elif opt == '--exec-prefix':
print(sysconfig.get_config_var('exec_prefix'))
elif opt in ('--includes', '--cflags'):
flags = ['-I' + sysconfig.get_path('include'),
'-I' + sysconfig.get_path('platinclude')]
if opt == '--cflags':
flags.extend(getvar('CFLAGS').split())
print(' '.join(flags))
elif opt in ('--libs', '--ldflags'):
abiflags = getattr(sys, 'abiflags', '')
libs = ['-lpython' + pyver + abiflags]
libs += getvar('LIBS').split()
libs += getvar('SYSLIBS').split()
# add the prefix/lib/pythonX.Y/config dir, but only if there is no
# shared library in prefix/lib/.
if opt == '--ldflags':
if not getvar('Py_ENABLE_SHARED'):
libs.insert(0, '-L' + getvar('LIBPL'))
if not getvar('PYTHONFRAMEWORK'):
libs.extend(getvar('LINKFORSHARED').split())
print(' '.join(libs))
elif opt == '--extension-suffix':
ext_suffix = sysconfig.get_config_var('EXT_SUFFIX')
if ext_suffix is None:
ext_suffix = sysconfig.get_config_var('SO')
print(ext_suffix)
elif opt == '--abiflags':
if not getattr(sys, 'abiflags', None):
exit_with_usage()
print(sys.abiflags)
elif opt == '--configdir':
print(sysconfig.get_config_var('LIBPL'))
| [
"asbelyaew@gmail.com"
] | asbelyaew@gmail.com | |
231fa35f3981bdd6928277558e138bcd4a06c5ea | 9e4e134f0a7ed5896c72ed68a4f8ce1de6853616 | /pages/account_page.py | 41778ad449d5fabfafd6486fe0645598da5a5686 | [] | no_license | G4beXYZ/ProjetoMantisAuto | b066098802e2a6f87b8c5cea1217248d4afc93de | 1a66295a4f3e7e9c79c5b8c2572902f86a9a9a03 | refs/heads/main | 2023-05-30T09:21:38.522654 | 2021-06-18T00:04:16 | 2021-06-18T00:04:16 | 377,928,931 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,947 | py | from browser import Browser,Keys
# Classe especial para armazenar algumas variáveis com um valor fixo de uma pagina (facilita o acesso)
class AccountPageElement(object):
WARN = '//*[@id="main-container"]/div/div/div/div/div[4]/p/text()'
WARN_PHRASE = "Você está visitando uma página segura, e sua sessão expirou. Por favor autentique-se novamente."
BUTTON_ATUALIZAR_COLUNAS = '//*[@id="manage-columns-form"]/div/div[2]/input'
BUTTON_COPIAR_COLUNAS_PARA = '//*[@id="manage-columns-copy-form"]/fieldset/input[5]'
BUTTON_REINCIALIZAR_COLUNAS = '//*[@id="main-container"]/div[2]/div[2]/div/div/div[6]/form/fieldset/input[2]'
BUTTON_ADICIONAR_PERFIL = '//*[@id="account-profile-form"]/fieldset/div/div[2]/div[2]/input'
BUTTON_ATUALIZAR_PERFIL = '//*[@id="main-container"]/div[2]/div[2]/div/div/form/div/div[2]/div[2]/input'
BUTTON_CONFIRMAR_ACAO = '//*[@id="account-profile-update-form"]/div/div[2]/div[2]/input'
BUTTON_CRIAR_TOKEN = '//*[@id="account-create-api-token-form"]/div/div[2]/div[2]/input'
PERFIL_PLATAFORMA = 'platform'
PERFIL_SO = 'os'
PERFIL_SO_VERSION = 'os-version'
# --------------------------------------------------------------------------------------------------------
# [Classe que herda as condições do script Browser.py, aqui é onde criamos as funções de pesquisa para uma
# página em específico no site do Mantis, no caso desta classe, ela é específica para a
# página MINHA CONTA]
class AccountPage(Browser):
#[Esta função pega o elemento h4 da pagina MINHA CONTA que no caso seria o título de uma aba, serve para verificar
#se o programa está seguindo o fluxo certo]
def get_widgetTitle(self):
return self.driver.find_element_by_tag_name('h4').text
# [Esta função é serve para verificar se no campo onde é armazenada todas as colunas na aba "Gerenciar Colunas"
# possui alguma coluna escrita no campo ou não, a função acaba retornando os valores do campo escolhido
# podendo ter dentro dele um valor x de n tamanho ou um valor nulo]
def get_allColumns(self):
return self.driver.find_element_by_id('all-columns').text
# Esta função é serve para verificar se na aba "Perfis" existe algum projeto com que possa ser trabalhado
def get_projeto(self):
return self.driver.find_element_by_xpath('//*[@id="manage-columns-copy-form"]/fieldset/select/option[2]').text
# Esta função é serve para procurar quantos tokens existem e retorna um valor com o total de tokens na tabela
def get_tokenList(self):
elements = self.driver.find_elements_by_xpath('//*[@id="api-token-list-div"]/div/div[2]/div/div/table/tbody/tr')
print("QUANTIDADE DE TOKENS > "+ str(len(elements)))
return len(elements)
#Esta função executa a ação de atualizar as prefências
def atualizar_Prefencias(self):
self.driver.find_element_by_xpath('//*[@id="account-prefs-update-form"]/fieldset/div/div[2]/div[2]/input[1]').click()
#Esta função procura pela aba de PREFERÊNCIAS em MINHA CONTA e acessa ela para executar outras funções depois
def Preferencias(self):
self.driver.find_element_by_xpath('//*[@id="main-container"]/div[2]/div[2]/div/ul/li[2]/a').click()
# Esta função procura pela aba de GERENCIAR COLUNAS em MINHA CONTA e acessa ela para executar outras funções depois
def Gerenciar_Colunas(self):
self.driver.find_element_by_xpath('//*[@id="main-container"]/div[2]/div[2]/div/ul/li[3]/a').click()
# Esta função procura pela aba de PERFIS em MINHA CONTA e acessa ela para executar outras funções depois
def Perfis(self):
self.driver.find_element_by_xpath('//*[@id="main-container"]/div[2]/div[2]/div/ul/li[4]/a').click()
# Esta função procura pela aba de TOKENS API em MINHA CONTA e acessa ela para executar outras funções depois
def Tokens(self,userPass):
self.driver.find_element_by_xpath('//*[@id="main-container"]/div[2]/div[2]/div/ul/li[5]/a').click()
if AccountPageElement.WARN[0] == AccountPageElement.WARN_PHRASE:
self.driver.find_element_by_id("password").send_keys(userPass)
self.driver.find_element_by_id("password").send_keys(Keys.RETURN)
else:
pass
# [Esta função é serve para criar um Token com um nome escolhido por input para facilitar o teste, e depois ele
# retorna no console o token criado]
def criar_token(self,nomeToken):
self.driver.find_element_by_id('token_name').send_keys(nomeToken)
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_CRIAR_TOKEN).click()
print("TOKEN CRIADO ==>["+self.driver.find_element_by_class_name('well').text+"]")
print("ATENÇÃO: Guarde ele em um lugar seguro!")
# [Esta função é serve para revogar um token criado, onde a função pega o primeiro da lista dos tokens criados
# e revoga ele]
def revogar_token(self):
self.driver.find_element_by_xpath('//*[@id="revoke-api-token-form"]/fieldset/input[3]').click()
# Esta função é serve para atualizar as colunas na aba "Gerenciar Colunas"
def atualizar_coluna(self):
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_ATUALIZAR_COLUNAS).click()
# Esta função é serve para reinicializar as colunas na aba "Gerenciar Colunas"
def ReinicializarColunas(self):
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_REINCIALIZAR_COLUNAS).click()
# Esta função é serve para copiar as colunas na aba "Gerenciar Colunas" para um outro projeto
def copiar_coluna(self):
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_COPIAR_COLUNAS_PARA).click()
# Esta função é serve para preencher um perfil com valores de teste na aba "Perfis"
def preencher_perfil(self,testData):
self.driver.find_element_by_id(AccountPageElement.PERFIL_PLATAFORMA).send_keys(testData)
self.driver.find_element_by_id(AccountPageElement.PERFIL_SO).send_keys(testData)
self.driver.find_element_by_id(AccountPageElement.PERFIL_SO_VERSION).send_keys(testData)
# Esta função é serve confirmar o preenchimento do perfil na aba "Perfis"
def adicionar_perfil(self):
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_ADICIONAR_PERFIL).click()
# Esta função é serve para editar um perfil criado na aba "Perfis"
def editar_perfil(self):
#fazendo a ação de clicar no botão radial 'editar' e depois confirmando a ação com um outro botão
self.driver.execute_script("document.getElementById('action-edit').click()")
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_CONFIRMAR_ACAO).click()
# Escrevendo valores editados nos campos já predefinidos
self.driver.find_element_by_name('platform').send_keys(" [EDITADO]")
self.driver.find_element_by_name('os').send_keys(" [EDITADO]")
self.driver.find_element_by_name('os_build').send_keys("[EDITADO]")
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_ATUALIZAR_PERFIL).click()
# Esta função é serve para tornar um perfil criado como padrão de uso na aba "Perfis"
def tornar_padrao(self):
self.driver.execute_script("document.getElementById('action-default').click()")
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_CONFIRMAR_ACAO).click()
# Esta função é serve para excluir um perfil criado na aba "Perfis"
def excluir_perfil(self):
self.driver.execute_script("document.getElementById('action-delete').click()")
self.driver.find_element_by_xpath(AccountPageElement.BUTTON_CONFIRMAR_ACAO).click()
# Esta função é serve para selecionar o primeiro perfil da lista dos perfis criados na aba "Perfis"
def selecionar_perfil(self):
self.driver.find_element_by_xpath('//*[@id="select-profile"]/option[2]').click() | [
"gabricanu@terra.com.br"
] | gabricanu@terra.com.br |
49e1451854a1eb87e9c642d56cd742a849dfe9e2 | 36934ac28f41c8c64c78bcea2fb3e496ec39675f | /day_12/p2.py | 1ec9098e9d1e651a802bb826dc2b8401edd562bc | [] | no_license | drew-ellingson/advent_of_code_2017 | c30b641523ee515e717e3cae1771312b110cd575 | ceae49dc173813e681cfb410015d3d17927457a0 | refs/heads/master | 2020-07-10T05:54:31.756676 | 2019-09-22T15:04:07 | 2019-09-22T15:04:07 | 204,184,014 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,093 | py | with open('input.txt') as my_file:
lines = [line.strip('\n').split(' <-> ') for line in my_file.readlines()]
graph_dict = {int(line[0]):[int(pipe) for pipe in line[1].split(', ')] for line in lines}
in_a_group = []
""" thinking of this as a walk on a graph, adds the nodes one step further"""
def add_layer(_graph_dict, _connected_nodes):
for node in _connected_nodes:
_connected_nodes = _connected_nodes + _graph_dict[node]
return list(set(_connected_nodes))
# building a dictionary to track orbit info
orbits = {}
for program in graph_dict.keys():
if program in in_a_group:
continue
in_current_group = [program]
stable = False # iterate until orbit remains unchanged under add_layer
while not stable:
next_step_nodes = add_layer(graph_dict, in_current_group)
if set(next_step_nodes) == set(in_current_group):
stable = True
in_current_group = next_step_nodes
in_a_group = in_a_group + in_current_group
orbits[program] = in_current_group
print(len(orbits.keys()))
| [
"noreply@github.com"
] | noreply@github.com |
6267d925f4ba5879b734a6dcba1640ccedeb8b48 | a2eaa3decc385dea227da8a99203f767f32cf941 | /electronic_station/ascending_list.py | 98a5261f1acf37986d291eb1cd62bca1a61480d4 | [] | no_license | vlad-bezden/py.checkio | 94db32111eeeb2cd90c7b3c4606ea72cf2bb6678 | 6cd870ca3056cc9dcdce0ad520c27e92311719b3 | refs/heads/master | 2021-07-01T18:39:35.955671 | 2020-10-05T00:56:38 | 2020-10-05T00:56:38 | 93,111,389 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,075 | py | """Ascending List
https://py.checkio.org/en/mission/ascending-list/
Determine whether the sequence of elements items is ascending so that its
each element is strictly larger than (and not merely equal to) the
element that precedes it.
Input: Iterable with ints.
Output: bool
Output:
is_ascending_all([-5, 10, 99, 123456]). Exec time = 0.000124
is_ascending_set([-5, 10, 99, 123456]). Exec time = 0.000084
is_ascending_all([99]). Exec time = 0.000082
is_ascending_set([99]). Exec time = 0.000057
is_ascending_all([4, 5, 6, 7, 3, 7, 9]). Exec time = 0.000142
is_ascending_set([4, 5, 6, 7, 3, 7, 9]). Exec time = 0.000081
is_ascending_all([]). Exec time = 0.000090
is_ascending_set([]). Exec time = 0.000047
is_ascending_all([1, 1, 1, 1]). Exec time = 0.000103
is_ascending_set([1, 1, 1, 1]). Exec time = 0.000061
Conclusion: using set works faster than iterating using zip
"""
from typing import Callable, Sequence
from timeit import repeat
from dataclasses import dataclass
@dataclass
class Test:
data: Sequence[int]
expected: bool
TESTS = [
Test([-5, 10, 99, 123456], True),
Test([99], True),
Test([4, 5, 6, 7, 3, 7, 9], False),
Test([], True),
Test([1, 1, 1, 1], False),
]
def is_ascending_set(items: Sequence[int]) -> bool:
return sorted(set(items)) == items
def is_ascending_zip(items: Sequence[int]) -> bool:
return all(x < y for x, y in zip(items, items[1:]))
def validate(funcs: Sequence[Callable[[Sequence[int]], bool]]) -> None:
for test in TESTS:
for f in funcs:
result = f(test.data)
assert result == test.expected, f"{f.__name__}({test}), {result = }"
print("PASSED!!!\n")
if __name__ == "__main__":
funcs = [is_ascending_zip, is_ascending_set]
validate(funcs)
for test in TESTS:
for f in funcs:
t = repeat(stmt=f"f({test.data})", repeat=5, number=100, globals=globals())
print(f"{f.__name__}({test.data}). Exec time = {min(t):.6f}")
print()
| [
"vlad.bezden@gmail.com"
] | vlad.bezden@gmail.com |
9d79a11f4f78e82aa7f30a57486d44d4a7f7df4c | 20bcae34d9010ee67e8c77d64f8afec1ec6b7465 | /Code/Models.py | c4225bed7a5afb97b42253df087fd083de9c1c4a | [] | no_license | siyukenny/ExpO | ead6e810458bc443468ee2b75f0bd36a454e6504 | fdc80bdd09d02c3345a17365105d2fda804eb40b | refs/heads/master | 2022-12-30T16:29:37.170818 | 2020-10-16T20:23:43 | 2020-10-16T20:23:43 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,363 | py |
import tensorflow as tf
class MLP():
# shape[0] = input dimension
# shape[end] = output dimension
def __init__(self, shape):
self.shape = shape
self.weight_init = tf.contrib.layers.xavier_initializer()
self.bias_init = tf.constant_initializer(0.0)
def layer(self, input, input_size, output_size):
weights = tf.get_variable("weights", [input_size, output_size], initializer = self.weight_init)
biases = tf.get_variable("biases", output_size, initializer = self.bias_init)
return tf.nn.leaky_relu(tf.matmul(input, weights) + biases)
def model(self, input):
shape = self.shape
n = len(shape)
x = input
for i in range(n - 2):
with tf.variable_scope("hidden_" + str(i + 1)):
out = self.layer(x, shape[i], shape[i + 1])
x = out
with tf.variable_scope("output"):
weights = tf.get_variable("weights", [shape[n - 2], shape[n - 1]], initializer = self.weight_init)
biases = tf.get_variable("biases", shape[n - 1], initializer = self.bias_init)
#return tf.squeeze(tf.matmul(x, weights) + biases)
return tf.matmul(x, weights) + biases
class CNN():
# shape = [1, 6, 12, 24, 200, 10]
# filters = [6, 5, 4]
# strides = [1, 2, 2]
# final_im_size = 7
# shape[0] = number of input channels
def __init__(self, shape, filters, strides, final_im_size):
self.shape = shape
self.filters = filters
self.strides = strides
self.final_im_size = final_im_size
self.weight_init = tf.initializers.truncated_normal(stddev = 0.1)
self.bias_init = tf.constant_initializer(0.1)
def conv_layer(self, input, filter_size, channels_in, channels_out, stride):
weights = tf.get_variable("weights", [filter_size, filter_size, channels_in, channels_out], initializer = self.weight_init)
biases = tf.get_variable("biases", [channels_out], initializer = self.bias_init)
return tf.nn.relu(tf.nn.conv2d(input, weights, strides = [1, stride, stride, 1], padding = "SAME") + biases)
def model(self, input):
shape = self.shape
n = len(shape)
filters = self.filters
strides = self.strides
final_im_size = self.final_im_size
x = input
for i in range(n - 3):
with tf.variable_scope("conv_" + str(i + 1)):
x = self.conv_layer(x, filters[i], shape[i], shape[i + 1], strides[i])
x = tf.reshape(x, shape = [-1, final_im_size * final_im_size * shape[n - 3]])
with tf.variable_scope("fully_connected"):
weights = tf.get_variable("weights", [final_im_size * final_im_size * shape[n - 3], shape[n - 2]], initializer = self.weight_init)
biases = tf.get_variable("biases", [shape[n - 2]], initializer = self.bias_init)
x = tf.nn.relu(tf.matmul(x, weights) + biases)
#with tf.variable_scope("dropout"):
# x = tf.nn.dropout(x, 0.5)
with tf.variable_scope("output"):
weights = tf.get_variable("weights", [shape[n - 2], shape[n - 1]], initializer = self.weight_init)
biases = tf.get_variable("biases", [shape[n - 1]], initializer = self.bias_init)
return tf.matmul(x, weights) + biases
| [
"plumbdg@gmail.com"
] | plumbdg@gmail.com |
11734eca3d5b6ed18f796550d760ff9fcc2c83e7 | 7ec29c87b2220aa8c20d5be3465eda00cf072f4d | /lib/command/remove_command.py | 27e0cf701f5ea26d4fd5246dbdbd03ce9efa1c89 | [] | no_license | mdlugajczyk/computing-infrastructure-assignment | 63146f264e14d34a47e5c9dd333c00d93d8550cd | ca9198ea01d991e803947043604d9c267ed5b7e5 | refs/heads/master | 2020-04-01T14:45:21.991556 | 2014-04-07T14:36:46 | 2014-04-07T14:36:46 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 582 | py | #!/usr/bin/env python
import argparse
from lib.saga_service.filesystem_service import FilesystemService
from lib.command.command_template import CommandTemplate
class RemoveCommand(CommandTemplate):
"""
A UNIX like rm command for removing files.
Directories are not supported.
"""
def parse_args(self, args):
parser = argparse.ArgumentParser()
parser.add_argument("files", help="Files to remove.", nargs="+")
self.args = parser.parse_args(args)
def execute_command(self):
self._filesystem.remove(self.args.files)
| [
"dlugajczykmarcin@gmail.com"
] | dlugajczykmarcin@gmail.com |
c92452059858097f6e7e9190c803bb425c49d0b9 | f0c6d2bd80d29da8a273fde37c3932438275b57c | /ihealthdata/utils/SwitchCase.py | 9cc5f34b5748049a99e7dd79fe49c38a1489e70b | [] | no_license | ofsinnovation/hls-sparkojfapp-shield | fd1acb9ab37037a1bac066e9dbc752a0ce7160d3 | 4421efef45f32693bf1d40a68c10c16267292c54 | refs/heads/master | 2021-01-20T06:42:45.546791 | 2017-05-16T20:58:28 | 2017-05-16T20:58:28 | 89,911,980 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 541 | py | class SwitchCase(object):
def __init__(self, value):
self.value = value
self.fall = False
def __iter__(self):
"""Return the match method once, then stop"""
yield self.match
raise StopIteration
def match(self, *args):
"""Indicate whether or not to enter a case suite"""
if self.fall or not args:
return True
elif self.value in args: # changed for v1.5, see below
self.fall = True
return True
else:
return False | [
"ganapathy.subramanian@object-frontier.com"
] | ganapathy.subramanian@object-frontier.com |
b03c20c06b007718fdc3396885410fb6db0e19f2 | 6fa7f99d3d3d9b177ef01ebf9a9da4982813b7d4 | /hQRuQguN4bKyM2gik_21.py | d59885f89bac86c7d6d3eae679f0d6dbfb33395c | [] | no_license | daniel-reich/ubiquitous-fiesta | 26e80f0082f8589e51d359ce7953117a3da7d38c | 9af2700dbe59284f5697e612491499841a6c126f | refs/heads/master | 2023-04-05T06:40:37.328213 | 2021-04-06T20:17:44 | 2021-04-06T20:17:44 | 355,318,759 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 195 | py |
def simple_check(a, b):
lo, hi = min(a,b), max(a,b)
cnt = 0
for i in range(lo):
if hi % lo == 0:
cnt += 1
hi = hi - 1
lo = lo - 1
return cnt
| [
"daniel.reich@danielreichs-MacBook-Pro.local"
] | daniel.reich@danielreichs-MacBook-Pro.local |
45c599cf13145968e7cb58b46e7dfb9b36b32daa | 289d212e958384d86af0f98996ce73fde882bd85 | /bloch.py | 1859f51272d67730946a0b832cda34f27658d575 | [] | no_license | mouehab/MRI-Task-1 | 2912a77dd1e795fc08ec28e873695ec4dc8bb031 | c6bb380890a43d78fa77f694d9340a2d59448dd0 | refs/heads/master | 2021-05-21T08:43:09.165432 | 2020-04-06T16:59:02 | 2020-04-06T16:59:02 | 252,623,260 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,396 | py | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import math
import enum
import sys
class Modes(enum.Enum):
recovery = 1
decay = 2
precession = 3
magVector=[1,1,1]
magVector1=[None]*2
xVector=[]
yVector=[]
fig, ax = plt.subplots()
line, = ax.plot([], [], lw=2)
ax.grid()
# magVectorZ=[0.01,0.01,1]
magVectorZ=[1,1,1]
xVector=[]
yVector=[]
zVector=[]
tVector=[]
t=0
def data_gen(mode, angle):
magVector1=[None]*3
magVectorXY=[None]*3
t=0
df=10
magVectorXY[0]=magVector[0]*math.cos(angle)
magVectorXY[1]=magVector[1]*math.cos(angle)
magVectorXY[2]=magVector[2]*math.cos(angle)
while(magVector1[0]!=0 or magVector1[1]!=0):
E1 = math.exp(-t/100)
E2 = math.exp(-t/600)
A= np.array([[E2,0,0],[0,E2,0],[0,0,E1]])
B= np.array([0, 0, 1-E1])
phi =2*math.pi*df*t/1000
Rz=np.array([[math.cos(phi),-math.sin(phi),0],[math.sin(phi),math.cos(phi),0],[0, 0, 1]])
Afb = np.matmul(A,Rz)
magVector1 =np.matmul(Afb,magVectorXY)+B
if(mode ==Modes.recovery):
yield t, magVector1[2]
t = t+10
elif(mode == Modes.decay):
yield t, np.sqrt(np.square(magVector1[0])+np.square(magVector1[1]))
t = t+10
elif(mode == Modes.precession):
yield magVector1[0], magVector1[1]
t = t+10
else: print("error")
def init():
ax.set_ylim(-2.0,2.0)
ax.set_xlim(-2.0, 2.0)
del xVector[:]
del yVector[:]
line.set_data(xVector, yVector)
return line,
def run(data):
# update the data
t, y = data
xVector.append(t)
yVector.append(y)
xmin, xmax = ax.get_xlim()
ymin, ymax = ax.get_ylim()
ax.figure.canvas.draw()
line.set_data(xVector, yVector)
if t >= xmax:
ax.set_xlim(xmin, 2*xmax)
ax.figure.canvas.draw()
if y >= ymax:
ax.set_ylim(ymin, 2*ymax)
ax.figure.canvas.draw()
line.set_data(xVector, yVector)
return line,
def main(*args, **kwargs):
Index = Modes[sys.argv[1]]
theta = float(sys.argv[2])
print(Index)
print(theta)
ani = animation.FuncAnimation(fig, run, data_gen(Index,theta), blit=False, interval=10,
repeat=True, init_func=init)
plt.show()
if __name__ == '__main__':
main() | [
"mooehab988@gmail.com"
] | mooehab988@gmail.com |
960758d8355d39ceceed1951181ddf3b7bd0da22 | e569c41ec81382630693d3bc0a163c06a6a23d52 | /PythonProgramming/ICP7/exercise.py | 4b88a65c0f4c3a600d0e725387a35a2241175a01 | [] | no_license | Sravanthi-Gogadi/PythonDeeplearningCourse | 20a074763283c3bdbcbc3846576509c5e7a733e9 | 037e94f19362635dd6911cdbd70f60830ec51f5c | refs/heads/master | 2020-03-19T08:21:18.901163 | 2018-07-28T04:39:00 | 2018-07-28T04:39:00 | 136,197,623 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,981 | py | # import beautifulsoap package to get the data from url
from bs4 import BeautifulSoup
import urllib.request
#import nltk package
import nltk
# import stemmer and lemmatizer
from nltk.stem import PorterStemmer
from nltk.stem import WordNetLemmatizer
from nltk.util import trigrams
from nltk import ne_chunk
'''Get the text from the python programming language url'''
# Read the web url into a variable
url = "https://en.wikipedia.org/wiki/Python_(programming_language)"
# use urllib to open the url
res = urllib.request.urlopen(url)
plain_text = res
# Use beautiful soup to get the content of webpage
soup = BeautifulSoup(plain_text, "html.parser")
# get the text from the website
data = soup.get_text("\n")
#print(data)
# open the out.txt in write mode and write the website data to out file
with open("out.txt",'w') as f:
f.write(str(data.encode("utf-8")))
# tokenize words
tokenize_words = nltk.word_tokenize(data)
print("Tokenized words")
print("----------------------------------------------------------------------------------------------------------------")
print(tokenize_words)
print("-----------------------------------------------------------------------------------------------------------------")
print("POS Tagging")
print("----------------------------------------------------------------------------------------------------------------")
# apply parts of speech to words
parts_of_speech=nltk.pos_tag(tokenize_words)
print(parts_of_speech)
print("-----------------------------------------------------------------------------------------------------------------")
print("Trigrams")
print("------------------------------------------------------------------------------------------------------------------")
# import trigrams to divide the data into triplets
print(list(trigrams(tokenize_words)))
print("------------------------------------------------------------------------------------------------------------------")
print("Named entity recognition")
print("------------------------------------------------------------------------------------------------------------------")
# apply named entity recognition to the tagged words
print(ne_chunk(parts_of_speech))
print("-------------------------------------------------------------------------------------------------------------------")
# open the out.txt file
with open("out.txt",'r') as f:
# split into lines
lines=f.read().splitlines()
for l in lines:
# split words in a line
words=l.split()
# apply the stemmer to the words
pstem=PorterStemmer()
# apply the lemmatization to the words
word_lemmatizer=WordNetLemmatizer()
for i in words:
print("Stemmed words")
print("-------------")
print(pstem.stem(i))
print("-------------")
print("Lemmatized words")
print("--------------")
print(word_lemmatizer.lemmatize(i))
print("--------------")
| [
"sravanthigogadi@gmail.com"
] | sravanthigogadi@gmail.com |
614225b8073e6d76f809b652caf54b6e2dc2d2fc | 1e249b27001dafb2ac73f34c9f271c444c618252 | /node_modules/glob-watcher/node_modules/fsevents/build/config.gypi | a3fd4588eb69535868aa65a6968067cbd48da0a0 | [
"MIT"
] | permissive | hsin-yunn/hex_school_week6 | e78c593060b9e7fb0578c2fe8a440d057f573df9 | 916af5e8412be0dd364cc62cf4d30ab8317bd139 | refs/heads/master | 2023-07-07T23:34:02.420434 | 2021-08-24T14:36:01 | 2021-08-24T14:36:01 | 393,963,111 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,709 | gypi | # Do not edit. File was generated by node-gyp's "configure" step
{
"target_defaults": {
"cflags": [],
"default_configuration": "Release",
"defines": [],
"include_dirs": [],
"libraries": []
},
"variables": {
"asan": 0,
"build_v8_with_gn": "false",
"coverage": "false",
"dcheck_always_on": 0,
"debug_nghttp2": "false",
"debug_node": "false",
"enable_lto": "false",
"enable_pgo_generate": "false",
"enable_pgo_use": "false",
"error_on_warn": "false",
"force_dynamic_crt": 0,
"host_arch": "x64",
"icu_data_in": "../../deps/icu-tmp/icudt67l.dat",
"icu_endianness": "l",
"icu_gyp_path": "tools/icu/icu-generic.gyp",
"icu_path": "deps/icu-small",
"icu_small": "false",
"icu_ver_major": "67",
"is_debug": 0,
"llvm_version": "11.0",
"napi_build_version": "7",
"node_byteorder": "little",
"node_debug_lib": "false",
"node_enable_d8": "false",
"node_install_npm": "true",
"node_module_version": 83,
"node_no_browser_globals": "false",
"node_prefix": "/",
"node_release_urlbase": "https://nodejs.org/download/release/",
"node_shared": "false",
"node_shared_brotli": "false",
"node_shared_cares": "false",
"node_shared_http_parser": "false",
"node_shared_libuv": "false",
"node_shared_nghttp2": "false",
"node_shared_openssl": "false",
"node_shared_zlib": "false",
"node_tag": "",
"node_target_type": "executable",
"node_use_bundled_v8": "true",
"node_use_dtrace": "true",
"node_use_etw": "false",
"node_use_node_code_cache": "true",
"node_use_node_snapshot": "true",
"node_use_openssl": "true",
"node_use_v8_platform": "true",
"node_with_ltcg": "false",
"node_without_node_options": "false",
"openssl_fips": "",
"openssl_is_fips": "false",
"ossfuzz": "false",
"shlib_suffix": "83.dylib",
"target_arch": "x64",
"v8_enable_31bit_smis_on_64bit_arch": 0,
"v8_enable_gdbjit": 0,
"v8_enable_i18n_support": 1,
"v8_enable_inspector": 1,
"v8_enable_lite_mode": 0,
"v8_enable_object_print": 1,
"v8_enable_pointer_compression": 0,
"v8_no_strict_aliasing": 1,
"v8_optimized_debug": 1,
"v8_promise_internal_field_count": 1,
"v8_random_seed": 0,
"v8_trace_maps": 0,
"v8_use_siphash": 1,
"want_separate_host_toolset": 0,
"xcode_version": "11.0",
"nodedir": "/Users/hsinyun/Library/Caches/node-gyp/14.13.0",
"standalone_static_library": 1,
"dry_run": "",
"legacy_bundling": "",
"save_dev": "",
"browser": "",
"commit_hooks": "true",
"only": "",
"viewer": "man",
"also": "",
"rollback": "true",
"sign_git_commit": "",
"audit": "true",
"usage": "",
"globalignorefile": "/usr/local/etc/npmignore",
"init_author_url": "",
"maxsockets": "50",
"shell": "/bin/bash",
"metrics_registry": "https://registry.npmjs.org/",
"parseable": "",
"shrinkwrap": "true",
"init_license": "ISC",
"timing": "",
"if_present": "",
"cache_max": "Infinity",
"init_author_email": "",
"sign_git_tag": "",
"cert": "",
"git_tag_version": "true",
"local_address": "",
"long": "",
"preid": "",
"fetch_retries": "2",
"registry": "https://registry.npmjs.org/",
"key": "",
"message": "%s",
"versions": "",
"globalconfig": "/usr/local/etc/npmrc",
"always_auth": "",
"logs_max": "10",
"prefer_online": "",
"cache_lock_retries": "10",
"global_style": "",
"update_notifier": "true",
"audit_level": "low",
"heading": "npm",
"fetch_retry_mintimeout": "10000",
"offline": "",
"read_only": "",
"searchlimit": "20",
"access": "",
"json": "",
"allow_same_version": "",
"description": "true",
"engine_strict": "",
"https_proxy": "",
"init_module": "/Users/hsinyun/.npm-init.js",
"userconfig": "/Users/hsinyun/.npmrc",
"cidr": "",
"node_version": "14.13.0",
"user": "",
"auth_type": "legacy",
"editor": "vi",
"ignore_prepublish": "",
"save": "true",
"script_shell": "",
"tag": "latest",
"before": "",
"global": "",
"progress": "true",
"ham_it_up": "",
"optional": "true",
"searchstaleness": "900",
"bin_links": "true",
"force": "",
"save_prod": "",
"searchopts": "",
"depth": "Infinity",
"node_gyp": "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js",
"rebuild_bundle": "true",
"sso_poll_frequency": "500",
"unicode": "true",
"fetch_retry_maxtimeout": "60000",
"ca": "",
"save_prefix": "^",
"scripts_prepend_node_path": "warn-only",
"sso_type": "oauth",
"strict_ssl": "true",
"tag_version_prefix": "v",
"dev": "",
"fetch_retry_factor": "10",
"group": "20",
"save_exact": "",
"cache_lock_stale": "60000",
"prefer_offline": "",
"version": "",
"cache_min": "10",
"otp": "",
"cache": "/Users/hsinyun/.npm",
"searchexclude": "",
"color": "true",
"package_lock": "true",
"fund": "true",
"package_lock_only": "",
"save_optional": "",
"user_agent": "npm/6.14.8 node/v14.13.0 darwin x64",
"ignore_scripts": "",
"cache_lock_wait": "10000",
"production": "",
"save_bundle": "",
"send_metrics": "",
"init_version": "1.0.0",
"node_options": "",
"umask": "0022",
"scope": "",
"git": "git",
"init_author_name": "",
"onload_script": "",
"tmp": "/var/folders/z9/xwkp86bx57sdspf2121511m00000gn/T",
"unsafe_perm": "true",
"format_package_lock": "true",
"link": "",
"prefix": "/usr/local"
}
}
| [
"ruby8941@gmail.com"
] | ruby8941@gmail.com |
850a3418ed0f61a65a90ea5281ae98f2f44ada75 | f278ad99e0df2c4d6832f20fa8a6df54cc65e2a0 | /src/python/greenhaus-cli/src/greenhaus-cli | 9ec1041da9ba7a406242a1e2d2e40e5c5d337482 | [
"MIT"
] | permissive | stefanadelbert/greenhaus | 5fcb31a520e978717d5f19c9bb8c1b98e155a66b | 776a6b78addb09ac7d59ab6dace077f52e7aa8dc | refs/heads/master | 2020-12-29T18:46:30.154126 | 2016-04-17T11:47:57 | 2016-04-17T11:47:57 | 56,431,349 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 919 | #!/usr/bin/env python
import logging
import logging.handlers
import cmd2
import redis
LOG = logging.getLogger(__name__)
class App(cmd2.Cmd):
use_rawinput = True
doc_header = "Shell commands (type help <topic>):"
app_cmd_header = "Application commands (type help <topic>):"
def __init__(self):
cmd2.Cmd.__init__(self, 'tab', stdin=None, stdout=None)
self.r = redis.Redis(host='localhost', port=6379, db=0)
def default(self, line):
pass
def do_help(self, arg):
print('This is the help')
def do_status(self, arg):
res = self.r.get('status')
if res:
print('Status: {}'.format(res))
else:
print('Unable to retieve status')
def precmd(self, statement):
return statement
def cmdloop(self):
self._cmdloop()
if __name__ == "__main__":
app = App()
app.cmdloop()
| [
"stef@vivcourt.com"
] | stef@vivcourt.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.