Dataset schema: blob_id: string (length 40) | directory_id: string (length 40) | path: string (length 3-281) | content_id: string (length 40) | detected_licenses: list (length 0-57) | license_type: string (2 classes) | repo_name: string (length 6-116) | snapshot_id: string (length 40) | revision_id: string (length 40) | branch_name: string (313 classes) | visit_date: timestamp[us] | revision_date: timestamp[us] | committer_date: timestamp[us] | github_id: int64 (18.2k-668M, nullable) | star_events_count: int64 (0-102k) | fork_events_count: int64 (0-38.2k) | gha_license_id: string (17 classes) | gha_event_created_at: timestamp[us] | gha_created_at: timestamp[us] | gha_language: string (107 classes) | src_encoding: string (20 classes) | language: string (1 class) | is_vendor: bool | is_generated: bool | length_bytes: int64 (4-6.02M) | extension: string (78 classes) | content: string (length 2-6.02M) | authors: list (length 1) | author: string (length 0-175)
928f58c34772c2b4138712fdfcb79215f149fc96 | 7b4cbaa1e7bab897e34acba06f73ac17760d394a | /sdks/python/client/argo_workflows/model/persistent_volume_claim.py | ed5b44656127c9955f253ef33fcc1405c9aaf6b6 | [
"Apache-2.0"
] | permissive | nHurD/argo | 0fab7f56179c848ad8a77a9f8981cb62b4a71d09 | f4a65b11a184f7429d0615a6fa65bc2cea4cc425 | refs/heads/master | 2023-01-13T04:39:54.793473 | 2022-12-18T04:48:37 | 2022-12-18T04:48:37 | 227,931,854 | 0 | 2 | Apache-2.0 | 2019-12-13T22:24:19 | 2019-12-13T22:24:18 | null | UTF-8 | Python | false | false | 13,755 | py | """
Argo Workflows API
Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. For more information, please see https://argoproj.github.io/argo-workflows/ # noqa: E501
The version of the OpenAPI document: VERSION
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from argo_workflows.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from argo_workflows.exceptions import ApiAttributeError
def lazy_import():
from argo_workflows.model.object_meta import ObjectMeta
from argo_workflows.model.persistent_volume_claim_spec import PersistentVolumeClaimSpec
from argo_workflows.model.persistent_volume_claim_status import PersistentVolumeClaimStatus
globals()['ObjectMeta'] = ObjectMeta
globals()['PersistentVolumeClaimSpec'] = PersistentVolumeClaimSpec
globals()['PersistentVolumeClaimStatus'] = PersistentVolumeClaimStatus
class PersistentVolumeClaim(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'api_version': (str,), # noqa: E501
'kind': (str,), # noqa: E501
'metadata': (ObjectMeta,), # noqa: E501
'spec': (PersistentVolumeClaimSpec,), # noqa: E501
'status': (PersistentVolumeClaimStatus,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'api_version': 'apiVersion', # noqa: E501
'kind': 'kind', # noqa: E501
'metadata': 'metadata', # noqa: E501
'spec': 'spec', # noqa: E501
'status': 'status', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""PersistentVolumeClaim - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
api_version (str): APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources. [optional] # noqa: E501
kind (str): Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds. [optional] # noqa: E501
metadata (ObjectMeta): [optional] # noqa: E501
spec (PersistentVolumeClaimSpec): [optional] # noqa: E501
status (PersistentVolumeClaimStatus): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""PersistentVolumeClaim - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
api_version (str): APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources. [optional] # noqa: E501
kind (str): Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds. [optional] # noqa: E501
metadata (ObjectMeta): [optional] # noqa: E501
spec (PersistentVolumeClaimSpec): [optional] # noqa: E501
status (PersistentVolumeClaimStatus): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.")
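The attribute_map in the generated model above is what maps pythonic attribute names to the serialized JSON keys. A self-contained sketch of that renaming (the `to_serialized` helper is hypothetical, not part of the generated client):

```python
# Mapping copied from the generated model above: pythonic name -> JSON key.
attribute_map = {
    'api_version': 'apiVersion',
    'kind': 'kind',
    'metadata': 'metadata',
    'spec': 'spec',
    'status': 'status',
}

def to_serialized(data):
    """Rename pythonic keys to the JSON keys used on the wire."""
    return {attribute_map.get(k, k): v for k, v in data.items()}

print(to_serialized({'api_version': 'v1', 'kind': 'PersistentVolumeClaim'}))
# {'apiVersion': 'v1', 'kind': 'PersistentVolumeClaim'}
```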
| [
"noreply@github.com"
] | noreply@github.com |
ef3a5db952481818bb6e7c6d4a2fe09a5b89a6c9 | f7f2a9ed0d56a2d8a626f42444b60d28212c827a | /maml_rl/envs/laser/Laser/envs/reacher.py | 28ddd1d8808c71175a5a17019d85e6b6544cac84 | [
"MIT"
] | permissive | shawnmanuel000/pytorch-maml-rl | 568b641cbf8dea94fa163230e28590ed77cb8b42 | e040f011ce5551cac9f12c80572f126f88ff188d | refs/heads/master | 2022-11-23T11:19:17.279928 | 2020-07-26T17:27:59 | 2020-07-26T17:27:59 | 282,566,143 | 0 | 0 | null | 2020-07-26T03:13:24 | 2020-07-26T03:13:23 | null | UTF-8 | Python | false | false | 3,211 | py | import re
import copy
import random
import numpy as np
from .base import MujocoEnv
import xml.etree.ElementTree as ET
class Reacher(MujocoEnv):
def __init__(self):
super().__init__('Reacher.xml', frame_skip=2)
def init_task(self, root):
option = root.find("option")
option.set("gravity", "0 0 0")
worldbody = root.find("worldbody")
self.path_names = []
path = ET.Element("body")
path.set("name", "path")
path.set("pos", "0 0 0")
for i in range(10):
name = f"path{i}"
point = ET.Element("body")
point.set("name", name)
point.set("pos", "0 0 0")
point.append(ET.fromstring("<geom conaffinity='0' contype='0' pos='0 0 0' rgba='0.8 0.2 0.4 0.8' size='.002' type='sphere'/>"))
path.append(point)
self.path_names.append(name)
worldbody.append(path)
self.range = 1.0
self.origin = np.array([0, 0, 0.3])
self.size = np.maximum([0.25, 0.25, 0], 0.001)
space = f"<geom conaffinity='0' contype='0' name='space' pos='{' '.join([f'{p}' for p in self.origin])}' rgba='0.2 0.2 0.2 0.1' size='{self.size[0]}' type='sphere'/>"
el = ET.fromstring(space)
worldbody.append(el)
return root
def reset_task(self, task):
rand = np.random.uniform(-1, 1, size=self.size.shape)
while np.linalg.norm(rand) > 1 or np.linalg.norm(rand) < 0.1 or rand[1]>0:
rand = np.random.uniform(-1, 1, size=self.size.shape)
target_pos = self.origin + self.range*self.size*rand
self.model.body_pos[self.model.body_names.index("target")] = target_pos
qpos = 0.1*np.random.uniform(low=-1, high=1, size=self.model.nq) + self.init_qpos
qpos[0] = 0.5*np.random.uniform(-3.14, 3.14)
qvel = self.init_qvel + np.random.uniform(low=-.005, high=.005, size=self.model.nv)
self.set_state(qpos, qvel)
ef_pos = self.get_body_pos("fingertip")
target_pos = self.get_body_pos("target")
points = np.linspace(ef_pos, target_pos, len(self.path_names))
path_indices = [self.model.body_names.index(name) for name in self.path_names]
for i,point in zip(path_indices, points):
self.model.body_pos[i] = point
def task_reward(self):
ef_pos = self.get_body_pos("fingertip")
target_pos = self.get_body_pos("target")
path = [self.get_body_pos(name) for name in self.path_names]
target_dist = ef_pos-target_pos
path_dists = [ef_pos-path_pos for path_pos in path]
reward_goal = -np.linalg.norm(target_dist)*2
reward_path = -np.min(np.linalg.norm(path_dists, axis=-1))
reward = reward_goal + reward_path
return reward
def task_state(self):
path = [self.get_body_pos(name) for name in self.path_names]
return np.concatenate([*path])
def task_done(self):
return False
def observation(self):
pos = self.get_body_pos("fingertip")
return np.concatenate([super().observation(), pos])
def sample_tasks(self, num_tasks):
tasks = [{'id': i} for i in range(num_tasks)]
return tasks
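The task_reward above combines a goal-distance term with a nearest-waypoint term over the interpolated path. A standalone numpy sketch of that computation with illustrative positions (no MuJoCo needed):

```python
import numpy as np

ef_pos = np.array([0.1, 0.0, 0.3])        # fingertip position (illustrative)
target_pos = np.array([0.0, -0.2, 0.3])   # target position (illustrative)
path = np.linspace(ef_pos, target_pos, 10)  # straight-line waypoints, as in reset_task

reward_goal = -np.linalg.norm(ef_pos - target_pos) * 2
reward_path = -np.min(np.linalg.norm(ef_pos - path, axis=-1))
reward = reward_goal + reward_path
# reward_path is 0 here because the fingertip sits exactly on the first waypoint
print(reward)
```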
| [
"shawn@DN51s561.SUNet"
] | shawn@DN51s561.SUNet |
a7cd2315cf74a1e14c921de851749ad44885f092 | ccf7ca1e3eb8918426fd33b61855768a0a4e06ee | /app/apps/order_item/migrations/0004_auto_20201206_1747.py | 7afc250f269ac9e8e762b1a98ab21d57fa280dc2 | [
"MIT"
] | permissive | barisortac/mini-erp-docker | b4a77370d8cc10bc010756180a6c9a033276399e | f5c37c71384c76e029a26e89f4771a59ed02f925 | refs/heads/master | 2023-02-13T16:27:58.594952 | 2021-01-15T21:23:40 | 2021-01-15T21:23:40 | 322,933,618 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 660 | py | # Generated by Django 3.1.3 on 2020-12-06 14:47
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('order_item', '0003_auto_20201206_1707'),
]
operations = [
migrations.AddField(
model_name='orderitem',
name='total_amount',
field=models.FloatField(default=0, verbose_name='Toplam Tutar'),
),
migrations.AddField(
model_name='orderitem',
name='total_amount_with_vat',
field=models.FloatField(default=0,
verbose_name='Toplam Tutar (KDVli)'),
),
]
| [
"baris.ortac@hamurlabs.com"
] | baris.ortac@hamurlabs.com |
11c49749e86f0d534753ff193be9ff0791d68364 | 0b159f8a12c7d4e624511b12109e5d6e569b7be1 | /matrix_product.py | 0c531c7e7f1a2d8f1d5a69359a57d56b13662c24 | [] | no_license | Kuvaldis/StepicNeuralNetwork | 1d007bc2d394348d6ceabaeccf04f4fb3c51227e | 5fb4685486d35fe9196a19c34076f1a09854b2af | refs/heads/master | 2016-08-12T19:01:15.692020 | 2016-05-08T13:53:34 | 2016-05-08T13:53:34 | 55,699,717 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 544 | py | import numpy as np
x_shape = tuple(map(int, input().split()))  # read the first line and convert it to a tuple of ints: the matrix size (n, m)
X = np.fromiter(map(int, input().split()), int).reshape(x_shape)  # read the second line as a flat int array, then reshape into the matrix (np.int was removed in NumPy 1.24)
y_shape = tuple(map(int, input().split()))
Y = np.fromiter(map(int, input().split()), int).reshape(y_shape)
if x_shape[1] != y_shape[1]:
print("matrix shapes do not match")
else:
print(X.dot(Y.T))
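A concrete run of the same product, with hardcoded arrays instead of stdin (illustrative values):

```python
import numpy as np

X = np.array([[1, 2], [3, 4]])  # shape (2, 2)
Y = np.array([[5, 6], [7, 8]])  # shape (2, 2); X.dot(Y.T) needs X.shape[1] == Y.shape[1]
if X.shape[1] != Y.shape[1]:
    print("matrix shapes do not match")
else:
    print(X.dot(Y.T))
# [[17 23]
#  [39 53]]
```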
| [
"kuvaldis1988@gmail.com"
] | kuvaldis1988@gmail.com |
1cf3ae0a6ba180ced5b33a3c2ebba81e91fb0720 | 503c623c76a0ee1d3c72d47dd013a516f074ac8c | /App_Login/migrations/0001_initial.py | 2eb2f96d36fc070b4094b26e5d323d6754d78462 | [] | no_license | abbappii/My-Blog_Project | a0b73622d82b273cfd7d2b2630fb4602cc3f313f | 0fecb9b71ac141771cf6ebc947acd11c4902852b | refs/heads/main | 2023-03-16T00:26:33.855191 | 2021-03-03T09:27:02 | 2021-03-03T09:27:02 | 344,067,153 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 793 | py | # Generated by Django 3.1.6 on 2021-02-16 08:35
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='UserProfile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('profile_pic', models.ImageField(upload_to='profile_pics')),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='user_profile', to=settings.AUTH_USER_MODEL)),
],
),
]
| [
"bappi142434@gmail.com"
] | bappi142434@gmail.com |
3f217f46168d101b201473002d4ac69810cc1ce2 | 3df1c70b3deb923cc9089f53fd5be41cba1d9c4a | /test.py | 6aa9f53d8a8aba050d8ccaac5dabd2a471c85eab | [] | no_license | gzm1997/add_christmas_hat | a8d2c800e1e8bd509aec257e24383fbeb8ae2b3d | e0a6befdc8be943aa6d872d5a56ef044029f06fe | refs/heads/master | 2021-08-28T06:50:58.893334 | 2017-12-11T13:35:59 | 2017-12-11T13:35:59 | 113,862,956 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 411 | py | import numpy as np
import cv2
s_cascade = cv2.CascadeClassifier('data/haarcascade_profileface.xml')
img = cv2.imread('head5.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
results = s_cascade.detectMultiScale(gray, 1.3, 5)
print("r", results)
for (x, y, w, h) in results:
cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows() | [
"1617899539@qq.com"
] | 1617899539@qq.com |
5b9db0eb730220f5629ce6332d08817059389b44 | f1e34dd513f55e18c9a2e8565c39950918f4e8fc | /django_playground/models.py | 832a4ecee876708e914f5ae9dd5f8e1200809e33 | [] | no_license | HondoOhnaka/django_playground | d9ed720ea6172545f4da2a6013e807e9454e86c0 | caf0e3112935efb32a7524238b511a9908fd7027 | refs/heads/master | 2021-05-26T20:46:39.981724 | 2013-01-04T20:22:06 | 2013-01-04T20:22:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 708 | py | from django.db import models
from django.core.urlresolvers import reverse, reverse_lazy
MY_CHOICES = (
('FOO', 'foo'),
('BAR', 'bar'),
('BAZ', 'baz'),
)
class Playground(models.Model):
name = models.CharField(max_length=20)
price = models.DecimalField(null=True, max_digits=5, decimal_places=2)
description = models.TextField(null=True, blank=True)
my_choices = models.CharField(max_length=3, choices=MY_CHOICES)
def __unicode__(self):
return u"This is a %s" % (self.name,)
def get_absolute_url(self):
return reverse('playground_detail', kwargs={'pk': self.pk})
class Simple(models.Model):
foo = models.CharField(max_length=10)
def get_absolute_url(self):
return reverse('simple_list') | [
"jim.munro@sendgrid.com"
] | jim.munro@sendgrid.com |
256584a19b23c90cb6a68016a4812c2daa606738 | 60bb28bc36989b4332e35a32ff4c373fd3acd163 | /lessons/function.py | cbf1147f342a8d7bd04957cfe2c5bacc62fe9e30 | [] | no_license | shristi-unc/comp110-21f-workspace | 78939b0d8fefa8b9b6be61ac9a0ada392fe862bf | 7ec440cdad8a333bfec59ff9ae9a13bf7ddc2e2a | refs/heads/main | 2023-07-21T00:21:27.118765 | 2021-09-10T01:45:41 | 2021-09-10T01:45:41 | 404,922,994 | 0 | 0 | null | 2021-09-10T01:45:49 | 2021-09-10T01:45:48 | null | UTF-8 | Python | false | false | 182 | py | def my_max(a: int, b: int) -> int:
"""Returns the greatest argument."""
if a >= b:
return a
return b
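A few quick checks of my_max (restated so the snippet runs on its own):

```python
def my_max(a: int, b: int) -> int:
    """Returns the greatest argument."""
    if a >= b:
        return a
    return b

assert my_max(3, 7) == 7
assert my_max(7, 3) == 7
assert my_max(4, 4) == 4  # on a tie the first argument is returned
print("all checks passed")
```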
x: int = 6
y: int = 5 + 2
z: int = my_max(x, y)
print(z) | [
"ssharma@unc.edu"
] | ssharma@unc.edu |
99f1d92efabf6b36037742d64efe5971305ca901 | cfae8cc592c09c943efb171e2abdc7278c518216 | /bib/source/dblp-parse.py | 9abe850c8de8f40ecba84005fa13c899cf744df6 | [] | no_license | mattnmorgan/ECU-19-Redis | 3d3879d4d3fc076c9e39e3d80394e032447aa451 | 63cab256d644e91e61119ae4b093efd06ac313af | refs/heads/master | 2022-07-22T19:49:30.970750 | 2019-05-03T18:12:33 | 2019-05-03T18:12:33 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,107 | py | """
File: dblp-parse.py
Author: Matthew Morgan
Date: 2 May 2019
Description:
This file parses the DBLP file, filtering out groups of 10000 entries and parsing
them into a bibtex format.
"""
import sys, os, re
# records is a collection of all the records
# skip is the number of meta-data lines to skip
# rec is a list of all the lines of data for the current entry
# typ is the type of the current entry
records, skip = {}, 3
rec, typ = [], ''
PATH, file_count = '../dblp/', 0
def bibify(rec):
""" bibify(rec) bibtex-ifies a single record generated by process(rec). It
returns a string representing the record in bibtex form. """
res = ("@%s{%s,\n" % (rec['type'].title(), rec['key']))
for fld in rec:
if not fld in ['key', 'type']:
if fld == 'auth': res += ('%10s = "%s",\n' % (fld, ' and '.join(rec[fld])))
else: res += ('%10s = "%s",\n' % (fld, rec[fld]))
res = res[:len(res)-2]+'\n}' # Clean up an excess comma
return res
def process(rec):
""" process(rec) processes a single record - that is, a collection of lines from
the DBLP file that contain the XML fields and attributes for a single record
in the massive file. It then resets values for a new record's values to be
accrued for processing. """
print(rec[0].split(' ')[0][1:], rec[0])
global file_count, records
# Generate record structure and metadata
record = {}
meta = rec[0][1:len(rec[0])-1].split(' ')
date, key = meta[1].split('=')[1], meta[2].split('=')[1]
record['type'] = meta[0]
record['date'] = date[1:len(date)-1]
record['key'] = key[1:len(key)-1]
record['auth'] = []
# Process all fields found for the record; some fields may require special
# processing, such as authors or page numbers
for ln in rec[1:]:
field, val = re.findall(r'.*</(.*)>$', ln), re.findall(r'<.*>(.*)</.*>', ln)
if len(val) > 0:
field, val = field[0], val[0]
if field == 'author': record['auth'].append(val)
elif field == 'pages': record['pages'] = '--'.join(val.split('-'))
else: record[field] = val
# Add the record to the collection. If the collection has breached a certain size, then
# print those records to a file
records[record['key']] = record
if len(records) == 10000:
with open(PATH+'entries/'+str(file_count)+'.bib', 'w+') as fw:
fw.write(',\n\n'.join([bibify(records[r]) for r in records]))
file_count += 1
records = {}
# Reset for a new record
rec = []
typ = ''
with open(PATH+'dblp.xml', 'r', encoding='utf-8') as fr:
for ln in fr:
ln = ln.strip()
# Skip the first few lines, and also skip blank lines
if skip > 0:
skip -= 1
continue
if not ln: continue
        if typ == '':
            # First line of a new record: remember its element type and keep
            # the opening tag, since process() parses its attributes via rec[0]
            typ = ln.split(' ')[0][1:]
            rec.append(ln)
        else:
            # Check if the record terminates in the line or is terminated by the line
            end = '</{}>'.format(typ)
            if ln == end:
                process(rec)
                rec, typ = [], ''
            elif end in ln:
                process(rec)
                # The remainder of the line opens the next record
                rec = [ln[len(end):]]
                typ = rec[0].split(' ')[0][1:]
            else:
                rec.append(ln)
# Process the last record
process(rec)
print('done') | [
"morganmat16@d25kg0yjdnmp.intra.ecu.edu"
] | morganmat16@d25kg0yjdnmp.intra.ecu.edu |
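The bibify step in dblp-parse.py above can be exercised on a toy record (restated here without the module globals; the record values are made up):

```python
def bibify(rec):
    """Render one parsed record as a bibtex entry (same logic as above)."""
    res = "@%s{%s,\n" % (rec['type'].title(), rec['key'])
    for fld in rec:
        if fld not in ('key', 'type'):
            if fld == 'auth':
                res += '%10s = "%s",\n' % (fld, ' and '.join(rec[fld]))
            else:
                res += '%10s = "%s",\n' % (fld, rec[fld])
    return res[:len(res) - 2] + '\n}'  # drop the trailing comma

rec = {'type': 'article', 'key': 'toy/example/1', 'date': '2019-05-02',
       'auth': ['A. Author', 'B. Author'], 'title': 'A Toy Entry'}
print(bibify(rec))
```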
238d95c0bfbc0c23adaa9fc2f97336faf8c29914 | 2544c3ba081d8e16a6f1a483b76915e9e51cccbe | /blog/admin.py | 37d7a1eee9c154d891cfcfdcb57b9a035817477d | [] | no_license | kingleoric2010/Blog_project | beb7c4375d92c2166f130b3ce09b266cc90d2690 | 1d84856633da46c9b2a52437c080189c6941e57a | refs/heads/master | 2021-05-08T00:02:59.535840 | 2016-08-01T22:26:30 | 2016-08-01T22:26:30 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 983 | py | # -*- coding:utf-8 -*-
from django.contrib import admin
from models import *
# Register your models here.
class ArticleAdmin(admin.ModelAdmin):
list_display = ('title', 'desc', 'click_count',)
list_display_links = ('title', 'desc', )
list_editable = ('click_count',)
fieldsets = (
(None, {
'fields': ('title', 'desc', 'content', 'user', 'category', 'tag','date_publish', )
}),
        ('高级设置', {  # "Advanced settings"
'classes': ('collapse',),
'fields': ('click_count', 'is_recommend',)
}),
)
class Media:
js = (
'/static/js/kindeditor-4.1.10/kindeditor-min.js',
'/static/js/kindeditor-4.1.10/lang/zh_CN.js',
'/static/js/kindeditor-4.1.10/config.js',
)
admin.site.register(User)
admin.site.register(Tag)
admin.site.register(Article, ArticleAdmin)
admin.site.register(Category)
admin.site.register(Comment)
admin.site.register(Links)
admin.site.register(Ad) | [
"“luuuuqi@163.comgit config --global user.name “cenyu00"
] | “luuuuqi@163.comgit config --global user.name “cenyu00 |
8680b4918578c16766d9dc72be2fd9a01d9c7ee9 | cfbfdc61295322af13eb760ee4ebc60a3420eb7b | /InputAndOutput/shelveExample.py | f481ea64f0776d94ca9facf30591626471aba80a | [] | no_license | joshuagato/learning-python | eba8974d5cb41c1f3b641eae3a9770169a1a9839 | 19669dd09fd17ecfbb20059f4d3b5d2e62f5a8f2 | refs/heads/master | 2021-05-20T11:35:02.973848 | 2020-04-13T06:52:25 | 2020-04-13T06:52:25 | 252,277,879 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 431 | py | import shelve
with shelve.open('ShelfTest') as fruit:
fruit['orange'] = "a sweet, citrus fruit"
fruit['apple'] = "good for making soda"
fruit['lemon'] = "sour yellow citrus fruit"
fruit['grape'] = "a small, sweet fruit growing in bunches"
fruit['lime'] = "a sour, green citrus fruit"
print(fruit['lemon'])
print(fruit['grape'])
# It is your responsibility to manually close the shelf
# fruit.close()
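Reading the shelf back works the same way; a self-contained sketch that writes to a temporary directory so it does not touch the ShelfTest file above:

```python
import os
import shelve
import tempfile

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, 'ShelfTest')

with shelve.open(path) as fruit:
    fruit['orange'] = "a sweet, citrus fruit"

with shelve.open(path) as fruit:  # re-open: the data was persisted to disk
    stored = fruit['orange']

print(stored)  # a sweet, citrus fruit
```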
| [
"joshuagatogato37@gmail.com"
] | joshuagatogato37@gmail.com |
0c32284d1a23f3a1a9d753a4db5bfa2e17be9d3d | ed73ee4cddb06dc9a545abf5a1c8deea3ce2f58f | /5-face-recognition.py | 976c29fb6fd633f9a98f052433408cd4815f9aa3 | [] | no_license | gerafko/Face-Regonation-OpenCV-python36- | bfa98069cfe17947afe4696c26be37f15d2b2636 | 2b739d7817f3521cbf96d13b0567e5b22e7b1d9a | refs/heads/master | 2021-01-24T08:22:06.654721 | 2018-02-26T14:22:50 | 2018-02-26T14:22:50 | 122,978,939 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,832 | py | #!/usr/bin/env python
# Software License Agreement (BSD License)
#
# Copyright (c) 2012, Philipp Wagner <bytefish[at]gmx[dot]de>.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of the author nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
# ------------------------------------------------------------------------------------------------
# Note:
# When using the FaceRecognizer interface in combination with Python, please stick to Python 2.
# Some underlying scripts like create_csv will not work in other versions, like Python 3.
# ------------------------------------------------------------------------------------------------
import os
import sys
import cv2
import numpy as np
p1 = r'C:\projects\python\anton-goryunov-diploma\training-datasets-for-read-image'
def normalize(X, low, high, dtype=None):
"""Normalizes a given array in X to a value between low and high."""
X = np.asarray(X)
minX, maxX = np.min(X), np.max(X)
# normalize to [0...1].
X = X - float(minX)
X = X / float((maxX - minX))
# scale to [low...high].
X = X * (high-low)
X = X + low
if dtype is None:
return np.asarray(X)
return np.asarray(X, dtype=dtype)
def read_images(path, sz=None):
"""Reads the images in a given folder, resizes images on the fly if size is given.
Args:
path: Path to a folder with subfolders representing the subjects (persons).
sz: A tuple with the size Resizes
Returns:
A list [X,y]
X: The images, which is a Python list of numpy arrays.
y: The corresponding labels (the unique number of the subject, person) in a Python list.
"""
c = 0
X,y = [], []
for dirname, dirnames, filenames in os.walk(path):
for subdirname in dirnames:
subject_path = os.path.join(dirname, subdirname)
for filename in os.listdir(subject_path):
try:
if (filename == ".directory"):
continue
filepath = os.path.join(subject_path, filename)
                    im = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
if (im is None):
print( "image " + filepath + " is none" )
# resize to given size (if given)
if (sz is not None):
im = cv2.resize(im, sz)
X.append(np.asarray(im, dtype=np.uint8))
y.append(c)
except IOError as err:
                    print("I/O error({0}): {1}".format(err.errno, err.strerror))
except:
print( "Unexpected error:", sys.exc_info()[0])
raise
c = c+1
return [X,y]
if __name__ == "__main__":
# This is where we write the images, if an output_dir is given
# in command line:
out_dir = None
# You'll need at least a path to your image data, please see
# the tutorial coming with this source code on how to prepare
# your image data:
if len(sys.argv) < 2:
pass
#print( "USAGE: facerec_demo.py </path/to/images> [</path/to/store/images/at>]")
#sys.exit()
# Now read in the image data. This must be a valid path!
#[X,y] = read_images(sys.argv[1])
tsdirname = r'C:\projects\python\anton-goryunov-diploma\training-datasets-for-read-image'
[X,y] = read_images(tsdirname)
# Convert labels to 32bit integers. This is a workaround for 64bit machines,
    # because the labels would be truncated otherwise. This will be fixed in code as
# soon as possible, so Python users don't need to know about this.
# Thanks to Leo Dirac for reporting:
y = np.asarray(y, dtype=np.int32)
# If a out_dir is given, set it:
if len(sys.argv) == 3:
out_dir = sys.argv[2]
# Create the Eigenfaces model. We are going to use the default
# parameters for this simple example, please read the documentation
# for thresholding:
#model = cv2.face.createEigenFaceRecognizer()
model = cv2.face.EigenFaceRecognizer_create()
# Read
# Learn the model. Remember our function returns Python lists,
# so we use np.asarray to turn them into NumPy lists to make
# the OpenCV wrapper happy:
model.train(np.asarray(X), np.asarray(y))
# We now get a prediction from the model! In reality you
# should always use unseen images for testing your model.
# But so many people were confused, when I sliced an image
# off in the C++ version, so I am just using an image we
# have trained with.
#
# model.predict is going to return the predicted label and
# the associated confidence:
[p_label, p_confidence] = model.predict(np.asarray(X[0]))
# Print it:
print( "Predicted label = %d (confidence=%.2f)" % (p_label, p_confidence))
    # Cool! Finally we'll plot the Eigenfaces, because that's
    # what most people who read the papers are keen to see.
#
# Just like in C++ you have access to all model internal
# data, because the cv::FaceRecognizer is a cv::Algorithm.
#
# You can see the available parameters with getParams():
#print( model.getParams())
model.getEigenValues()
# Now let's get some data:
#mean = model.getMat("mean")
mean = model.getMean()
    #eigenvectors = model.getMat("eigenvectors")
    eigenvectors = model.getEigenVectors()
# We'll save the mean, by first normalizing it:
mean_norm = normalize(mean, 0, 255, dtype=np.uint8)
mean_resized = mean_norm.reshape(X[0].shape)
if out_dir is None:
cv2.imshow("mean", mean_resized)
else:
cv2.imwrite("%s/mean.png" % (out_dir), mean_resized)
# Turn the first (at most) 16 eigenvectors into grayscale
# images. You could also use cv::normalize here, but sticking
# to NumPy is much easier for now.
# Note: eigenvectors are stored by column:
    for i in range(min(len(X), 16)):
eigenvector_i = eigenvectors[:,i].reshape(X[0].shape)
eigenvector_i_norm = normalize(eigenvector_i, 0, 255, dtype=np.uint8)
# Show or save the images:
if out_dir is None:
cv2.imshow("%s/eigenface_%d" % (out_dir,i), eigenvector_i_norm)
else:
cv2.imwrite("%s/eigenface_%d.png" % (out_dir,i), eigenvector_i_norm)
# Show the images:
if out_dir is None:
cv2.waitKey(0)
#ri_res = read_images(p1)
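The EigenFaceRecognizer training step above is essentially PCA: it finds the leading eigenvectors of the covariance of the training images. A dependency-free sketch of that idea via power iteration on 2-D toy data (the data points, iteration count, and variable names here are illustrative, not from this script):

```python
# Toy PCA via power iteration: find the unit leading eigenvector of a
# 2x2 covariance matrix, the same decomposition EigenFaceRecognizer
# performs on (much larger) image vectors. Data here is illustrative.
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0)]
mx = sum(p[0] for p in data) / len(data)
my = sum(p[1] for p in data) / len(data)
cxx = sum((p[0] - mx) ** 2 for p in data) / len(data)
cyy = sum((p[1] - my) ** 2 for p in data) / len(data)
cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / len(data)

v = (1.0, 0.0)
for _ in range(100):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    n = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = (w[0] / n, w[1] / n)

print(round(v[0], 3), round(v[1], 3))  # unit-length first principal axis
```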
| [
"goryu-anton@ya.ru"
] | goryu-anton@ya.ru |
24cc400d27586e265618100bac0ae076245ebbd0 | d80d3adc3fd0fcaa2f818f2754f8bb68bc98650f | /fetus_to_mother.py | 15810e46fb0ee84cc8dcae033859519bff33f003 | [] | no_license | harel-coffee/fragilex-checker | 42fd78b90e7fa05d1c35756d42c3ebc88ab5ec7f | 2f7c9c505c4047f4f279e18a61900c530c1e7bad | refs/heads/main | 2023-07-28T09:36:13.300227 | 2021-09-10T11:07:12 | 2021-09-10T11:07:12 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,108 | py | # module load python/miniconda3-4.5.12-pytorch  (shell command, not Python; commented out so the module parses)
# /share/apps/python/miniconda3-4.5.12-pytorch/bin/python  (interpreter path note, commented out)
MIN_LENGTH = 140
MIN_AVERAGE_PHRED = 20 #(up to 37)
MIN_MAPPING_QUAL = 20 #(up to 60, pysam recommends 50)
EDGE_NUC_NUM = 6 #recommended 3-10
DIFF_NUC_ALLOWED = 1 #always less than EDGE_NUC_NUM
import pysam
samfile = pysam.AlignmentFile("/groups/nshomron/tomr/projects/cffdna/runs/fam03/S03.recal.sorted.hg38.bam", "rb")
path_mom = "/groups/nshomron/tomr/projects/cffdna/runs/fam03/M03.recal.sorted.hg38.bam"
path_plasma = "/groups/nshomron/tomr/projects/cffdna/runs/fam03/S03.recal.sorted.hg38.bam"
path_mom_simulation = "/groups/nshomron/hadasvol/projects/simulation/art/parents41/art_sim/FM41sim.srt.bam"
path_plasma_simulation = "/groups/nshomron/hadasvol/projects/simulation/art/S41/art_sim/Ssim.chrX.srt.bam"
def create_all_possibilities():
edge1 = "C"
edge2 = "G"
edge3 = "G"
for i in range(1,EDGE_NUC_NUM):
if (i%3 == 1):
edge1 += "G"
edge2 += "G"
edge3 += "C"
elif (i%3 == 2):
edge1 += "G"
edge2 += "C"
edge3 += "G"
else:
edge1 += "C"
edge2 += "G"
edge3 += "G"
return edge1, edge2, edge3
def diff_nuc(check,edge1, edge2, edge3):
a = sum( check[i] != edge1[i] for i in range(len(check)) )
b = sum( check[i] != edge2[i] for i in range(len(check)) )
c = sum( check[i] != edge3[i] for i in range(len(check)) )
#print(f"min diff of {min(a,b,c)}")
return min(a,b,c)
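diff_nuc above is a minimum Hamming distance against the three reading frames of the (CGG)* repeat that create_all_possibilities builds. The same check in isolation (the read-edge sequence is illustrative):

```python
# diff_nuc is a minimum Hamming distance against the three reading
# frames of a (CGG)* repeat; here with EDGE_NUC_NUM == 6.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

frames = ("CGGCGG", "GGCGGC", "GCGGCG")  # what create_all_possibilities builds
read_edge = "CGGCGA"                     # illustrative read edge, one mismatch
print(min(hamming(read_edge, f) for f in frames))  # 1
```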
def clean_to_span(samfile):
tot_cnt = 0
used_cnt = 0
clean_cnt = 0
clean_weight = 0
spanning_cnt = 0
spanning_weight = 0
partial_cnt = 0
partial_weight = 0
#used for self check
short_cnt = 0
bad_map_cnt = 0
bad_phred_edge = 0
edge1, edge2, edge3 = create_all_possibilities()
for read in samfile.fetch("chrX",147912050, 147912110):
tot_cnt += 1
#only use long reads (small ones can't be spanning so they dont add data)
if (len(read.seq) >= MIN_LENGTH and read.mapping_quality >= MIN_MAPPING_QUAL and sum(read.query_qualities[:EDGE_NUC_NUM]) / EDGE_NUC_NUM >= MIN_AVERAGE_PHRED and sum(read.query_qualities[-1 * EDGE_NUC_NUM:]) / EDGE_NUC_NUM >= MIN_AVERAGE_PHRED):
used_cnt +=1
clean = 0
#check if left part is clean
if (diff_nuc(read.query_sequence[:EDGE_NUC_NUM], edge1, edge2, edge3) <= DIFF_NUC_ALLOWED):
clean+=1
#check if right part is clean
if (diff_nuc(read.query_sequence[-1 * EDGE_NUC_NUM:], edge1, edge2, edge3) <= DIFF_NUC_ALLOWED):
clean+=1
if (clean == 2):
clean_cnt += 1
clean_weight += read.mapping_quality * (sum(read.query_qualities[:EDGE_NUC_NUM]) / EDGE_NUC_NUM + sum(read.query_qualities[-1 * EDGE_NUC_NUM:]) / EDGE_NUC_NUM)
print("clean: " + str(read.query_sequence))
elif (clean == 0):
spanning_cnt += 1
spanning_weight += read.mapping_quality * (sum(read.query_qualities[:EDGE_NUC_NUM]) / EDGE_NUC_NUM + sum(read.query_qualities[-1 * EDGE_NUC_NUM:]) / EDGE_NUC_NUM)
print("spanning: " + str(read.query_sequence))
else:
partial_cnt += 1
partial_weight += read.mapping_quality * (sum(read.query_qualities[:EDGE_NUC_NUM]) / EDGE_NUC_NUM + sum(read.query_qualities[-1 * EDGE_NUC_NUM:]) / EDGE_NUC_NUM)
print("partial: " + str(read.query_sequence))
else:
if (not len(read.seq) >= MIN_LENGTH ):
short_cnt +=1
if (not read.mapping_quality >= MIN_MAPPING_QUAL):
bad_map_cnt +=1
if (not (sum(read.query_qualities[:EDGE_NUC_NUM]) / EDGE_NUC_NUM >= MIN_AVERAGE_PHRED and sum(read.query_qualities[-1 * EDGE_NUC_NUM:]) / EDGE_NUC_NUM >= MIN_AVERAGE_PHRED)):
bad_phred_edge += 1
print(f"threw out: \n{short_cnt} for being too short \n{bad_map_cnt} for bad mapping\n{bad_phred_edge} for bad phred on edge")
print(f"found {clean_cnt} clean, {spanning_cnt} spanning and {partial_cnt} partial.")
if (partial_cnt != 0):
return (spanning_cnt / partial_cnt,spanning_weight / partial_weight, used_cnt, tot_cnt)
return (1,clean_cnt + spanning_cnt, used_cnt, tot_cnt)
def print_res(path_plasma, path_mom):
sam_mom = pysam.AlignmentFile(path_mom, "rb")
sam_plasma = pysam.AlignmentFile(path_plasma, "rb")
(a,b,c,d) = clean_to_span(sam_mom)
(e,f,g,h) = clean_to_span(sam_plasma)
print(f"mom's clean to spanning ratio is {b} ({a} when weighted) out of {c} good samples out of {d} samples).\n plasma's clean to spanning ratio is {f} ({e} when weighted) out of {g} good samples out of {h} samples).")
print(f"Parameter is {(a/e)} .The bigger it is (when bigger than 1), the bigger the chance of healthy child )")
return a/e
print_res(path_plasma, path_mom)
print_res(path_plasma_simulation, path_mom_simulation) | [
"59049322+Arik-coffee@users.noreply.github.com"
] | 59049322+Arik-coffee@users.noreply.github.com |
c6870f0f392d973959a4e8236a2aadf2507635ad | eebb210f13d452822d46432643215a4c8b656906 | /bsl_21716/settings.py | f2b659a27d63e57f748807a78ce2ab48d20a787c | [] | no_license | crowdbotics-apps/bsl-21716 | 5e817b94f0a692469a986f3955cbaf1e813102c9 | 3e4cb348cdd4f84c5b78c35bf5c4f4f37d58fdbb | refs/heads/master | 2023-01-03T18:12:09.360592 | 2020-10-19T19:11:41 | 2020-10-19T19:11:41 | 305,488,134 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,823 | py | """
Django settings for bsl_21716 project.
Generated by 'django-admin startproject' using Django 2.2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
import environ
import logging
env = environ.Env()
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool("DEBUG", default=False)
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env.str("SECRET_KEY")
ALLOWED_HOSTS = env.list("HOST", default=["*"])
SITE_ID = 1
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = env.bool("SECURE_REDIRECT", default=False)
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites'
]
LOCAL_APPS = [
'home',
'users.apps.UsersConfig',
]
THIRD_PARTY_APPS = [
'rest_framework',
'rest_framework.authtoken',
'rest_auth',
'rest_auth.registration',
'bootstrap4',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.google',
'django_extensions',
'drf_yasg',
]
INSTALLED_APPS += LOCAL_APPS + THIRD_PARTY_APPS
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'bsl_21716.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'bsl_21716.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
if env.str("DATABASE_URL", default=None):
DATABASES = {
'default': env.db()
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
MIDDLEWARE += ['whitenoise.middleware.WhiteNoiseMiddleware']
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend',
'allauth.account.auth_backends.AuthenticationBackend'
)
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static')
]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
# allauth / users
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_EMAIL_VERIFICATION = "optional"
ACCOUNT_CONFIRM_EMAIL_ON_GET = True
ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True
ACCOUNT_UNIQUE_EMAIL = True
LOGIN_REDIRECT_URL = "users:redirect"
ACCOUNT_ADAPTER = "users.adapters.AccountAdapter"
SOCIALACCOUNT_ADAPTER = "users.adapters.SocialAccountAdapter"
ACCOUNT_ALLOW_REGISTRATION = env.bool("ACCOUNT_ALLOW_REGISTRATION", True)
SOCIALACCOUNT_ALLOW_REGISTRATION = env.bool("SOCIALACCOUNT_ALLOW_REGISTRATION", True)
REST_AUTH_SERIALIZERS = {
# Replace password reset serializer to fix 500 error
"PASSWORD_RESET_SERIALIZER": "home.api.v1.serializers.PasswordSerializer",
}
REST_AUTH_REGISTER_SERIALIZERS = {
# Use custom serializer that has no username and matches web signup
"REGISTER_SERIALIZER": "home.api.v1.serializers.SignupSerializer",
}
# Custom user model
AUTH_USER_MODEL = "users.User"
EMAIL_HOST = env.str("EMAIL_HOST", "smtp.sendgrid.net")
EMAIL_HOST_USER = env.str("SENDGRID_USERNAME", "")
EMAIL_HOST_PASSWORD = env.str("SENDGRID_PASSWORD", "")
EMAIL_PORT = 587
EMAIL_USE_TLS = True
# Swagger settings for api docs
SWAGGER_SETTINGS = {
"DEFAULT_INFO": f"{ROOT_URLCONF}.api_info",
}
if DEBUG or not (EMAIL_HOST_USER and EMAIL_HOST_PASSWORD):
# output email to console instead of sending
if not DEBUG:
logging.warning("You should setup `SENDGRID_USERNAME` and `SENDGRID_PASSWORD` env vars to send emails.")
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
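The env.bool / env.str calls in this settings file come from django-environ; the underlying pattern is just an environment-variable lookup with a typed default. A stdlib-only sketch of that pattern (the function and variable names are illustrative, not part of django-environ):

```python
import os

# Stdlib-only sketch of the env-with-default pattern that env.bool /
# env.str provide above; `env_bool` and the variable names are
# illustrative, not part of django-environ.
def env_bool(name, default=False):
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

print(env_bool("DEMO_DEBUG", default=False))
```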
| [
"team@crowdbotics.com"
] | team@crowdbotics.com |
89bc553261509785779919691202bc8ff9d94257 | 5c69e63f3bb1286a79cb81ca70c969bccd65d740 | /bocadillo/exceptions.py | e8cfd93892c17f615e36715a36c0bba654f5a71f | [
"MIT"
] | permissive | stjordanis/bocadillo | 85dc5895966d3e2031df365db55e4def156e92aa | 658cce55b196d60489530aaefde80b066cb8054b | refs/heads/master | 2020-04-14T09:36:47.245246 | 2019-01-01T19:27:37 | 2019-01-01T19:27:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,196 | py | from http import HTTPStatus
from typing import Union, Any, List
from starlette.websockets import WebSocketDisconnect as _WebSocketDisconnect
WebSocketDisconnect = _WebSocketDisconnect
class HTTPError(Exception):
"""Raised when an HTTP error occurs.
You can raise this within a view or an error handler to interrupt
request processing.
# Parameters
status (int or HTTPStatus):
the status code of the error.
detail (any):
extra detail information about the error. The exact rendering is
determined by the configured error handler for `HTTPError`.
# See Also
- [HTTP response status codes (MDN web docs)](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)
"""
def __init__(self, status: Union[int, HTTPStatus], detail: Any = ""):
if isinstance(status, int):
status = HTTPStatus(status)
else:
assert isinstance(
status, HTTPStatus
), f"Expected int or HTTPStatus, got {type(status)}"
self._status = status
self.detail = detail
@property
def status_code(self) -> int:
"""Return the HTTP error's status code, e.g. `404`."""
return self._status.value
@property
def status_phrase(self) -> str:
"""Return the HTTP error's status phrase, e.g. `"Not Found"`."""
return self._status.phrase
@property
def title(self) -> str:
"""Return the HTTP error's title, e.g. `"404 Not Found"`."""
return f"{self.status_code} {self.status_phrase}"
def __str__(self):
return self.title
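Raising HTTPError from a view hands the error handler a status plus optional detail; the title property just joins the numeric code with http.HTTPStatus's phrase. A self-contained mirror of that behavior (DemoHTTPError is an illustrative name, not part of this library):

```python
from http import HTTPStatus

# Hypothetical mini version mirroring HTTPError's title logic;
# DemoHTTPError is an illustrative name, not part of the library above.
class DemoHTTPError(Exception):
    def __init__(self, status, detail=""):
        self._status = HTTPStatus(status)  # accepts an int status code
        self.detail = detail

    @property
    def title(self):
        return f"{self._status.value} {self._status.phrase}"

try:
    raise DemoHTTPError(404, detail="no such user")
except DemoHTTPError as err:
    print(err.title)  # 404 Not Found
```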
class UnsupportedMediaType(Exception):
"""Raised when trying to use an unsupported media type.
# Parameters
media_type (str):
the unsupported media type.
available (list of str):
a list of supported media types.
"""
def __init__(self, media_type: str, available: List[str]):
self._media_type = media_type
self._available = available
def __str__(self):
return f'{self._media_type} (available: {", ".join(self._available)})'
class RouteDeclarationError(Exception):
"""Raised when a route is ill-declared."""
| [
"florimond.manca@gmail.com"
] | florimond.manca@gmail.com |
967cc3984e379c358d4879323b95f39ca6f0b86c | 2416013b474fa2e34cb53eab35f8b43be9e97245 | /PySuffixTree.py | 8f77c1d58a54d4d2b07e42764a2a2c04cb025efd | [
"Apache-2.0"
] | permissive | bssrdf/PySuffixTree | 196ed1cb9ddb5443befee63d2a91d9111a188a92 | 432c6aa98c842f9bfe73826efde7e77462bccf1d | refs/heads/master | 2021-12-03T12:21:45.655196 | 2014-06-29T02:56:46 | 2014-06-29T02:56:46 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 10,657 | py | """
Copyright 2014 Sam Clarke
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
class SharedCounter(object):
"""
A shared counter object (Integer)
"""
def __init__(self, start_value):
self.val = start_value
def getVal(self):
return self.val
def nextVal(self):
self.val += 1
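SharedCounter exists for Ukkonen's "open" leaf edges: every leaf stores a reference to one mutable end position, so advancing the phase extends all leaves with a single increment instead of touching each leaf. Minimal illustration of that trick (class and variable names are illustrative):

```python
# Why the shared end matters: all "open" leaf edges reference ONE
# mutable end position, so each phase extends every leaf with a single
# O(1) update ("once a leaf, always a leaf"). Names are illustrative.
class End:
    def __init__(self, v):
        self.v = v

end = End(0)
leaf_edges = [(0, end), (2, end), (5, end)]  # (start, shared end) pairs
end.v = 8                                    # one update per phase
print([e.v - s for s, e in leaf_edges])      # [8, 6, 3]
```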
class Node(object):
"""
A generic node object for the suffix tree.
self.id = the id of the node
self.parent_edge = The edge leading back to root
self.child_edges = A dict of edges and their char ids
self.suffix_link = A suffix link obj (jump to common nodes)
"""
def __init__(self, Id):
self.id = Id
self.parent_edge = None
self.child_edges = {}
self.suffix_link = None
def __str__(self):
if self.hasSuffixLink():
return 'Node ' + str(self.id) + ' S-linked to ' + \
str(self.suffix_link.getDestination())
else:
return 'Node ' + str(self.id)
def setParent(self, edge):
"""
Set the parent edge
"""
self.parent_edge = edge
def getParent(self):
"""
Get the parent edge
"""
return self.parent_edge
def addChild(self, edge):
"""
Add a child edge
"""
self.child_edges[edge.getId()] = edge
def removeChild(self, edge):
"""
Remove a child edge
"""
del self.child_edges[edge.getId()]
def getChildren(self):
"""
Get child edges (dict)
"""
return self.child_edges
def addSuffixLink(self, suffix_link):
"""
Add a suffix link from this node
"""
self.suffix_link = suffix_link
def hasSuffixLink(self):
"""
Check this node for a suffix link
"""
return False if self.suffix_link is None else True
def getSuffixLink(self):
"""
Get the suffix link for this node
"""
return self.suffix_link
class Edge(object):
"""
A generic edge object for the suffix tree.
self.id = the starting char of the suffix. i.e 'a'
self.start = the index of the suffix start within the target
self.stop = the index of the suffix stop
self.dest_node = the node we connect to (if any)
"""
def __init__(self, Id, start, stop, destination_node = None):
self.id = Id
self.start = start
self.stop = stop
self.dest_node = destination_node
def __str__(self):
if type(self.stop) is SharedCounter:
return 'Edge '+str(self.id)+' '+str(self.getLength())+' suffix ['+\
str(self.start)+':'+str(self.stop.getVal())+'] connected to '\
+ str(self.dest_node)
else:
return 'Edge '+str(self.id)+' '+str(self.getLength())+' suffix ['+\
str(self.start)+':'+str(self.stop)+'] connected to ' + \
str(self.dest_node)
def getId(self):
"""
Get the edge id (the starting char of the suffix).
"""
return self.id
def setDestination(self, node):
"""
Set the destination node.
"""
self.dest_node = node
def getDestination(self):
"""
Get the destination node.
"""
return self.dest_node
def setBound(self, start = None, stop = None):
"""
Set the suffix indexes.
"""
if start is not None: self.start = start
elif stop is not None: self.stop = stop
def getLength(self):
"""
Get the length of the suffix
"""
stop = 0
if type(self.stop) is SharedCounter:
stop = self.stop.getVal()
else:
stop = self.stop
return stop - self.start
def getSuffix(self):
"""
Get the suffix indexes.
"""
stop = 0
if type(self.stop) is SharedCounter:
stop = self.stop.getVal()
else:
stop = self.stop
return (self.start, stop)
class SuffixLink(Edge):
"""
A sufffix link edge object.
A logical link between common nodes in the tree.
"""
def __init__(self, destination_node):
self.dest_node = destination_node
def __str__(self):
return str(self.dest_node)
def getDestination(self):
"""
Get the destination node
"""
return self.dest_node
class SuffixTree(object):
"""
A sufffix tree edge object.
Makes use of the Ukkonen algorithm and it's optimisations.
"""
def __init__(self):
self.pos = SharedCounter(-1)
self.edge_cnt = 0
self.edges = []
self.link = None
self.remainder = 0
self.active_len = 0
self.active_length = 0
self.active_edge = None
self.root = Node(0)
self.nodes = [self.root]
self.active_node = self.root
self.latest_node = self.root
self.target = ''
def __str__(self):
"""
Prints the nodes in the tree sequentially along with their edges
(in node:edge order).
"""
s = ''
for node in self.nodes:
s += '\n\n'+str(node)+'\n\t'
edges = node.getChildren()
keys = edges.keys()
keys.sort()
for key in keys:
bounds = edges[key].getSuffix()
s += str(edges[key])+' '
for i in xrange(bounds[0], bounds[1]):
s += self.target[i]
s += '\n\t'
return s
def buildTree(self, string, debug=False):
self.target = string
self.remainder = 1
for char in string:
attach_link = False
self.pos.nextVal()
# Remainder is one @ each step
node_edges = self.active_node.getChildren()
# If the active node does not have an edge for this
if char in node_edges:
if debug: print 'Edge exists'
self.remainder += 1
self.active_length += 1
if self.active_edge is None or self.active_length == 1:
self.active_edge = char
if self.active_length >= node_edges[char].getLength():
self.moveDown(node_edges[self.active_edge])
while self.active_length > node_edges[char].getLength():
# move to the edge dest
if not self.moveDown(node_edges[self.active_edge]):
break
else:
if debug: print 'Edge doesn\'t exist'
while self.remainder > 1:
#if char == '$': break
if debug: print 'Splitting edge', self.active_edge, self.active_node,'Remainder', self.remainder
node_edges = self.active_node.getChildren()
self.splitEdge(node_edges[self.active_edge],
self.active_length, attach_link)
attach_link = True
self.remainder -= 1
else:
if debug: print 'Adding edge', char
new_edge = Edge(char, self.pos.getVal(), self.pos)
self.active_node.addChild(new_edge)
if debug:
print 'Char', char
print 'Active node', self.active_node
print 'Active edge', self.active_edge
print 'Active length', self.active_length
print 'Remainder', self.remainder
print self
def moveDown(self, edge):
"""
Move down to the destination node of the supplied edge.
e.g
move from 'A' to 'B'...
[A]-------------------[B]
"""
# move to the edge destination node
dest = edge.getDestination()
if dest is not None:
self.active_node = dest
self.active_edge = None
self.active_length = 0
return True
else:
return False
def splitEdge(self, edge, index, link):
"""
Split an existing edge at index
e.g
n-----(i)---------- pos++
\
\--------- pos++
"""
#print 'Splitting edge', edge
node = Node(len(self.nodes))
if link:
suffix_link = SuffixLink(node)
self.latest_node.addSuffixLink(suffix_link)
# copy out existing destination and bounds
old_dest = edge.getDestination()
old_bounds = edge.getSuffix()
old_start = old_bounds[0]
# Adjust edge finish point to n + index, connect new node
edge.setBound(stop = old_start + index)
edge.setDestination(node)
# Create new edge representing the remains of the old edge
offcut = Edge(self.target[old_start + index], old_start + index,
old_bounds[1], old_dest)
# Add the offcut edge as a child of the new node
node.addChild(offcut)
# Create new edge for the current pos
n = self.pos.getVal()
new_edge = Edge(self.target[n], n, self.pos)
node.addChild(new_edge)
self.nodes.append(node)
self.latest_node = node
# rule 1 - a split from the root node
if self.active_node == self.root:
# active length decrements
self.active_length -= 1
pos = self.pos.getVal()
# active edge changes
self.active_edge = self.target[pos - self.active_length]
# active node remains root
else:
if self.active_node.hasSuffixLink():
# Set the active node to the link destination
link = self.active_node.getSuffixLink()
self.active_node = link.getDestination()
else:
self.active_node = self.root # set active node to root
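What the finished tree encodes: a pattern occurs in the text iff it is a prefix of some suffix, which the tree answers by walking at most len(pattern) characters from the root. A naive set-based check of the same property, for comparison (names are illustrative):

```python
# Naive baseline for the membership question a suffix tree answers:
# a pattern occurs in `s` iff it is a prefix of some suffix of `s`.
# The tree gets this in O(len(pattern)); the scan below is O(n^2).
def suffixes(s):
    return {s[i:] for i in range(len(s))}

def is_substring(s, pattern):
    return any(suffix.startswith(pattern) for suffix in suffixes(s))

print(is_substring("abcabxabcd", "abx"), is_substring("abcabxabcd", "xd"))  # True False
```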
| [
"clarke.sam1@gmail.com"
] | clarke.sam1@gmail.com |
aab761a34ab72f36dae08a23dc3656ce3ff5b2db | 69f5f0e2717489c605b8f60ee56fd76e67c6ca2d | /python/examples/SwapMultipleReturns.py | b1d10b131dea10fc2ad4f2477c9f4c5b6d23a518 | [
"Apache-2.0"
] | permissive | kmova/bootstrap | b95c774d558453a52c75d7f646f51fa3b8929357 | f162e100fe9398e1e344dd224fdd75071674c24b | refs/heads/master | 2021-12-08T02:57:01.850488 | 2021-12-03T13:45:21 | 2021-12-03T13:45:21 | 74,550,534 | 17 | 16 | Apache-2.0 | 2021-07-09T17:31:03 | 2016-11-23T07:15:35 | Shell | UTF-8 | Python | false | false | 191 | py | import sys
def swap ( a, b):
return b, a
if __name__ == "__main__":
if len(sys.argv) < 3:
print "Need atleast two arguements"
sys.exit(1)
print swap(sys.argv[1], sys.argv[2])
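Returning b, a builds a tuple, and Python's tuple unpacking makes the same swap possible without a helper at all (Python 3 syntax below):

```python
def swap(a, b):
    return b, a          # builds and returns the tuple (b, a)

print(swap(1, 2))        # (2, 1)

a, b = 1, 2
a, b = b, a              # idiomatic in-place swap via tuple unpacking
print(a, b)              # 2 1
```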
| [
"kiran.mova@cloudbyte.com"
] | kiran.mova@cloudbyte.com |
16df9dd074e96e634f0fd4a337b0b549b576ad0c | 9eaca6376c8867e50da0b70fd262dc633a4191f8 | /model_functions/DataPreprocess.py | 815750f66984f67bcda07a5dfc308cff8f6d6e0b | [] | no_license | wangqi1919/LPEP2_code | b29ef59ebe3e0c87a923e6193f0f9cd5f1816cec | d9346af6c1e37332d4254f92b6bb2475f4284454 | refs/heads/master | 2020-05-31T13:00:45.000835 | 2019-06-04T23:41:29 | 2019-06-04T23:41:29 | 190,294,148 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,903 | py | import numpy as np
from scipy import stats
def take_log(y):
y = y + 0.1
y = np.log(y)
return y
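take_log shifts by 0.1 before taking the log so zero responses stay finite. A scalar sketch of the same transform (the function name is illustrative):

```python
import math

# Scalar sketch of the same transform; the 0.1 offset keeps the
# logarithm finite when the response is exactly zero.
def take_log_scalar(y):
    return math.log(y + 0.1)

print(round(take_log_scalar(0.0), 4))  # -2.3026  (i.e. log 0.1)
print(round(take_log_scalar(0.9), 4))  # 0.0      (i.e. log 1.0)
```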
class Data:
def __init__(self, data_path, splitting_method, response, conditional):
self.data_path = data_path
self.splitting_method = splitting_method
self.response = response
self.X = None
self.y = None
self.splitting = None
self.conditional = conditional
self.zone = None
self.position = None
self.type = None
self.lakeid = None
def load_data(self):
# current version only fit shortdata
data = np.genfromtxt(self.data_path, dtype=float, delimiter=',',skip_header=1)
self.splitting = data[:, self.splitting_method + 3]
self.zone = data[:, 1]
self.position = data[:, 2:4]
self.type = np.isnan(np.expand_dims(data[:, 14], 1))
self.lakeid = data[:, 0]
if self.conditional == 2:
self.X = data[:, 11:]
self.y = self.X[:, self.response-1]
self.X = np.delete(self.X, self.response-1, 1)
if self.conditional == 0:
self.X = data[:, 15:]
self.y = data[:, 11 + self.response - 1]
if self.conditional == 1:
self.X = data[:, 15:]
self.y = data[:, 11 + self.response - 1]
# add secchi when the target is not secchi for conditional models
if self.response != 4:
secchi = np.expand_dims(data[:, 14], 1)
self.X = np.concatenate((self.X, secchi), axis=1)
def preprocess(self):
# remove data with response == NaN
ID_not_NaN = ~np.isnan(self.y)
self.y = self.y[ID_not_NaN]
self.X = self.X[ID_not_NaN, :]
self.zone = self.zone[ID_not_NaN]
self.lakeid = self.lakeid[ID_not_NaN]
self.type = self.type[ID_not_NaN]
self.position = self.position[ID_not_NaN]
# using mean to replace NaN in X
col_mean = np.nanmean(self.X, axis=0)
inds = np.where(np.isnan(self.X))
self.X[inds] = np.take(col_mean, inds[1])
# splittiing trainning, testing
splitting = self.splitting[ID_not_NaN]
tr_id = splitting == 0
te_id = splitting == 1
Xtest_nonzscore = self.X[te_id, :]
self.X = stats.zscore(self.X)
# take log to all y
self.y = take_log(self.y)
Xtrain = self.X[tr_id, :]
Xtest = self.X[te_id, :]
ytrain = self.y[tr_id]
ytest = self.y[te_id]
zonetrain = self.zone[tr_id]
zonetest = self.zone[te_id]
type_test = self.type[te_id]
position_test = self.position[te_id]
lakeid_test = self.lakeid[te_id]
lakeid_train = self.lakeid[tr_id]
return Xtrain, Xtest, Xtest_nonzscore, ytrain, ytest, zonetrain, zonetest, type_test, position_test, lakeid_test, lakeid_train
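preprocess imputes missing features with the column mean (the np.nanmean / np.take lines above). The same imputation in pure Python, assuming every column has at least one non-NaN value (names and sample data are illustrative):

```python
import math

# Pure-Python mirror of the np.nanmean / np.take imputation above,
# assuming every column has at least one non-NaN value.
def fill_nan_with_column_mean(rows):
    cols = list(zip(*rows))
    means = []
    for col in cols:
        vals = [v for v in col if not math.isnan(v)]
        means.append(sum(vals) / len(vals))
    return [[means[j] if math.isnan(v) else v for j, v in enumerate(row)]
            for row in rows]

nan = float("nan")
print(fill_nan_with_column_mean([[1.0, nan], [3.0, 4.0]]))  # [[1.0, 4.0], [3.0, 4.0]]
```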
| [
"wangqi19@msu.edu"
] | wangqi19@msu.edu |
69354a0bd822307b273f6b1b5fdfdcb3a5c10b88 | 16a5c9c9f0d7519a6808efc61b592b4b614102cf | /Python/16.py | 3672e6b7b0f7551a670966ce8b46ac170ca86da6 | [] | no_license | kevin851066/Leetcode | c1d86b2e028526231b80c6d4fb6d0be7ae8d39e5 | 885a9af8a7bee3c228c7ae4e295dca810bd91d01 | refs/heads/main | 2023-08-10T16:50:12.426440 | 2021-09-28T15:23:26 | 2021-09-28T15:23:26 | 336,277,469 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,012 | py | class Solution:
def threeSumClosest(self, nums, target):
'''
        :type nums: List[int]
        :type target: int
:rtype: int
'''
nums.sort()
dis = float('inf')
res = 0
for i in range(len(nums)-2):
if i == 0 or nums[i] != nums[i-1]:
l, r = i + 1, len(nums) - 1
while r > l:
s = nums[i] + nums[r] + nums[l]
diff = abs(target - s)
if diff < dis:
dis = diff
res = s
if target > s:
while r > l and nums[l] == nums[l+1]:
l += 1
l += 1
elif target < s:
while r > l and nums[r] == nums[r-1]:
r -= 1
r -= 1
else:
return res
return res
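Sorting plus a two-pointer sweep keeps the search O(n^2): for each anchor i the window (l, r) moves toward the target monotonically. A condensed standalone version with the classic example (duplicate-skipping omitted for brevity; the result is unchanged):

```python
# Condensed standalone version of the same two-pointer scheme
# (duplicate-skipping omitted for brevity; result is unchanged).
def three_sum_closest(nums, target):
    nums = sorted(nums)
    best = nums[0] + nums[1] + nums[2]
    for i in range(len(nums) - 2):
        l, r = i + 1, len(nums) - 1
        while l < r:
            s = nums[i] + nums[l] + nums[r]
            if abs(target - s) < abs(target - best):
                best = s
            if s < target:
                l += 1
            elif s > target:
                r -= 1
            else:
                return s
    return best

print(three_sum_closest([-1, 2, 1, -4], 1))  # 2
```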
| [
"kevin851066@gmail.com"
] | kevin851066@gmail.com |
95755e9a1e0f833b83378e6bb2c2e32f27beba58 | ae99d9bb5b51f7824f74b196e9b93748ac02a3cd | /ParserForRabotaCom/rabota_parser_loggin.py | e52a44fabaf343b98a76bf60192a9451c1e468ab | [] | no_license | Vika030718/LerningPython | 4e62579f7220acec5ff3e518402e1548ce25d9f4 | a93d1d4ef50f5ea20e47d32d0544655d0364bf39 | refs/heads/master | 2020-03-15T04:11:53.365706 | 2018-07-08T11:22:37 | 2018-07-08T11:22:37 | 131,959,708 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 652 | py | import csv
class Logger(object):
@classmethod
def log_reader(cls):
with open('rabota_data.csv', 'r') as csv_file:
csv_reader = csv.reader(csv_file)
for line in csv_reader:
print(line)
@classmethod
def log_cleaner(cls):
with open('rabota_data.csv', 'w') as csv_file:
csv_file.truncate()
@classmethod
def log_writer(cls, data):
log_row = [data['name'], data['city'], data['description']]
with open('rabota_data.csv', 'a') as csv_file:
csv_writer = csv.writer(csv_file, delimiter=',')
csv_writer.writerow(log_row)
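log_writer appends one CSV row per vacancy; the same writerow round-trip can be checked in memory with io.StringIO instead of a file on disk (row values are illustrative):

```python
import csv
import io

# Same writerow round-trip as log_writer, kept in memory with StringIO
# instead of a file on disk (row values are illustrative).
buf = io.StringIO()
csv.writer(buf, delimiter=",").writerow(["developer", "Kyiv", "remote ok"])
print(buf.getvalue().strip())  # developer,Kyiv,remote ok
```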
| [
"vika030718@gmail.com"
] | vika030718@gmail.com |
846723238f8d5247b99f655d2a26cfc3593ec6eb | 42d2dea48cba0dc6eb9ffcc40677783f18f97c01 | /core/__init__.py | 00b59d490afbf30ce6ae09cd31e2bdd0468cde26 | [] | no_license | BreadBomb/FameGrame | 0e32f6d919873c1453dc6c2efc9d6b239f2c5c84 | 2115f49fe44abeed60383825b3273c00252d5be3 | refs/heads/master | 2022-11-18T15:09:05.809544 | 2020-07-17T21:03:55 | 2020-07-17T21:03:55 | 280,434,336 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 125 | py | from core.renderer import Renderer
from core.application import Application
from core.bootstrap import bootstrap_application
| [
"jonah@wichtrup-li.de"
] | jonah@wichtrup-li.de |
45ceb9ec67132ca6081f9b283ae0ee535f79bf15 | 9d488348a134800ca1a4d9f3105e826fcb669087 | /scrape_mars.py | d426197a94a50b670ff04df14aed639d053a9a40 | [] | no_license | nfarooqi92/web-scraping-challenge | b51e85c066a4d6914b9979bdba8a0d170f7f76e3 | 2923c664e92717499588690e35a661f202250bb1 | refs/heads/main | 2023-03-18T21:20:57.964019 | 2021-03-18T18:02:31 | 2021-03-18T18:02:31 | 349,142,300 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,072 | py | #!/usr/bin/env python
# coding: utf-8
# In[5]:
import requests
import bs4
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
from splinter import Browser
import pandas as pd
import time
# ## NASA Mars News
# * Scrape the NASA Mars News Site and collect the **latest** News Title and Paragraph Text. Assign the text to variables that you can reference later.
def scrape_info():
# Setup splinter
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
time.sleep(3)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
news_title = soup.find_all("div", class_="content_title")[1].text
news_description = soup.find("div", class_='article_teaser_body').text
soup.find_all("div", class_="content_title")[1].find("a")["href"]
url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html'
browser.visit(url)
time.sleep(3)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
image = soup.find("a", class_="fancybox-thumbs")["href"]
featured_image_url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/'+image
url = 'https://space-facts.com/mars/'
all_tables = pd.read_html(url)
all_tables
mars_facts_table = all_tables[0]
mars_facts_table.columns = ["Description", "Value"]
mars_facts_table
table_html = mars_facts_table.to_html()
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
time.sleep(3)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
all_hemispheres = soup.find('div', class_='collapsible results')
hemispheres = all_hemispheres.find_all('div', class_='item')
starting_url = 'https://astrogeology.usgs.gov'
hemisphere_image_urls = []
for result in hemispheres:
hemisphere = result.find('div', class_="description")
title = hemisphere.h3.text
ending_url = hemisphere.a["href"]
browser.visit(starting_url + ending_url)
time.sleep(3)
image_html = browser.html
image_soup = BeautifulSoup(image_html, 'html.parser')
image_link = image_soup.find('div', class_='downloads')
image_url = image_link.find('li').a['href']
hemisphere_dict = {}
hemisphere_dict['title'] = title
hemisphere_dict['img_url'] = image_url
hemisphere_image_urls.append(hemisphere_dict)
hemisphere_image_urls
# Store data in a dictionary
mars_data = {
"news_title": news_title,
"news_description": news_description,
"featured_image_url": featured_image_url,
"table_html": table_html,
"hemisphere_image_urls" : hemisphere_image_urls
}
# Close the browser after scraping
browser.quit()
# Return results
return mars_data
| [
"noreply@github.com"
] | noreply@github.com |
63d0acc2591e05e070551f3ffbb2f7032f3f8bfa | dd0958767a5b16df3137288a1b67ba1db25c0a37 | /airflow/dags/training.py | 15e8c67f8b41812f8014c770a4d2d5d674e82b4c | [] | no_license | chinhang0104/-credit-risk-analysis-aks | c0adfe316f2fc3125782be3481417cb465cc6f48 | d9f8c3e3513988f3fb0722d0b43fbb3fb6dcf6b6 | refs/heads/master | 2023-03-27T19:23:25.499402 | 2020-12-19T18:59:06 | 2020-12-19T18:59:06 | 352,958,027 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 579 | py | from datetime import timedelta
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.operators.dummy_operator import DummyOperator
from airflow.utils.dates import days_ago
args = {
'owner': 'airflow',
}
dag = DAG(
dag_id='training',
default_args=args,
schedule_interval='@daily',
start_date=days_ago(0),
dagrun_timeout=timedelta(minutes=60),
)
command = "/opt/airflow/dags/spark-submit.sh "
BashOperator(
task_id='credit',
bash_command=command,
dag=dag,
)
if __name__ == "__main__":
dag.cli()
| [
"kingychiu@gmail.com"
] | kingychiu@gmail.com |
44c36be3d14151335716e257311f97e0760b11f5 | 3712a929d1124f514ea7af1ac0d4a1de03bb6773 | /开班笔记/个人项目/weather/venv/Scripts/pip3.6-script.py | d64c53761d2ebf358b440b0154196e579055f766 | [] | no_license | jiyabing/learning | abd82aa3fd37310b4a98b11ea802c5b0e37b7ad9 | 6059006b0f86aee9a74cfc116d2284eb44173f41 | refs/heads/master | 2020-04-02T20:47:33.025331 | 2018-10-26T05:46:10 | 2018-10-26T05:46:10 | 154,779,387 | 0 | 0 | null | null | null | null | GB18030 | Python | false | false | 450 | py | #!E:\学习文件\python学习资料\开班笔记\个人项目\weather\venv\Scripts\python.exe -x
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==10.0.1','console_scripts','pip3.6'
__requires__ = 'pip==10.0.1'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('pip==10.0.1', 'console_scripts', 'pip3.6')()
)
| [
"yabing_ji@163.com"
] | yabing_ji@163.com |
e8d41c9181c17d69226ce323936457aa4a6847e4 | 91bfbeacd0e5c0c6c807131dcce2c0424ec501b3 | /rasanahal/stars/user_get_characters.py | 84cd87015587245391fe75d82c8ac4c843969e4a | [] | no_license | Fermitech-Softworks/rasanahal-backend | a54bb709d5eaa259c5775a2b55cdf097401a78f5 | b5526b611014e332c033a17cf5c4cc362c4c4afb | refs/heads/master | 2022-10-01T22:46:22.217149 | 2020-06-08T18:25:30 | 2020-06-08T18:25:30 | 222,428,629 | 0 | 3 | null | 2020-05-10T00:31:15 | 2019-11-18T11:02:40 | Python | UTF-8 | Python | false | false | 791 | py | import royalnet.utils as ru
import royalnet.constellation.api as rca
import royalnet.constellation.api.apierrors as rcae
from royalnet.backpack.tables import User
from rasanahal.tables import Character
class UserGetCharStar(rca.ApiStar):
summary = "Method that returns all the characters of a certain user."
description = """This method returns all data concerning a user's character."""
methods = ["GET"]
path = "/api/user/get_characters"
requires_auth = True
tags = ["user"]
async def api(self, data: rca.ApiData) -> ru.JSON:
user = await data.user()
CharT = self.alchemy.get(Character)
chars = data.session.query(CharT).filter_by(user_id=user.uid).order_by(CharT.name).all()
        return {"characters": [c.json(True) for c in chars]}
| [
"lorenzo.balugani@gmail.com"
] | lorenzo.balugani@gmail.com |
92c7c1d781c48f2d0f2eb4a87f4712597bfac026 | 398aa75f1698fd86702f25837b76dfc9bdbe2418 | /examples/plain-win.py | e9183f6f1e22ceebfacb269dfffc51c5e07390e6 | [
"MIT",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | haddadabdelaziz/pysciter | b0e783f6946675680281526d5e2382e40781cab1 | 4567d8111c1583d8889e80720e50eee75ff85ff3 | refs/heads/master | 2020-03-22T18:08:41.296963 | 2018-07-04T19:35:22 | 2018-07-04T19:35:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,742 | py | """Sciter sample for Win32 API."""
# sciter import
from sciter import sapi
from sciter.scdef import *
# ctypes import
from ctypes import *
from ctypes.wintypes import *
# defs
WS_EX_APPWINDOW = 0x40000
WS_OVERLAPPEDWINDOW = 0xcf0000
WS_CAPTION = 0xc00000
SW_SHOWNORMAL = 1
SW_SHOW = 5
CS_HREDRAW = 2
CS_VREDRAW = 1
CW_USEDEFAULT = 0x80000000
WM_DESTROY = 2
WHITE_BRUSH = 0
IDC_ARROW = 31514
WNDPROCTYPE = WINFUNCTYPE(c_int, HWND, c_uint, WPARAM, LPARAM)
class WNDCLASSEX(Structure):
_fields_ = [
("cbSize", c_uint),
("style", c_uint),
("lpfnWndProc", WNDPROCTYPE),
("cbClsExtra", c_int),
("cbWndExtra", c_int),
("hInstance", HANDLE),
("hIcon", HANDLE),
("hCursor", HANDLE),
("hBrush", HANDLE),
("lpszMenuName", LPCWSTR),
("lpszClassName", LPCWSTR),
("hIconSm", HANDLE)]
def on_load_data(ld):
"""Custom documents loader, just for example."""
uri = ld.uri
uri = uri
return 0
def on_create_behavior(ld):
"""Custom behavior factory, just for example."""
name = ld.behaviorName
name = name
return 0
def on_sciter_callback(pld, param):
"""Sciter notifications callback."""
ld = pld.contents
if ld.code == SciterNotification.SC_LOAD_DATA:
return on_load_data(cast(pld, POINTER(SCN_LOAD_DATA)).contents)
elif ld.code == SciterNotification.SC_ATTACH_BEHAVIOR:
return on_create_behavior(cast(pld, POINTER(SCN_ATTACH_BEHAVIOR)).contents)
return 0
def on_wnd_message(hWnd, Msg, wParam, lParam):
"""WindowProc Function."""
handled = BOOL(0)
lr = sapi.SciterProcND(hWnd, Msg, wParam, lParam, byref(handled))
if handled:
return lr
if Msg == WM_DESTROY:
windll.user32.PostQuitMessage(0)
return 0
try:
return windll.user32.DefWindowProcW(hWnd, Msg, wParam, lParam)
except:
# etype, evalue, estack = sys.exc_info()
print("WndProc exception: %X, 0x%04X, 0x%X, 0x%X" % (hWnd, Msg, wParam, lParam))
# traceback.print_exception(etype, evalue, estack)
return 0
def main():
clsname = sapi.SciterClassName()
title = u"Win32 Sciter"
clsname = u"PySciter"
WndProc = WNDPROCTYPE(on_wnd_message)
wndClass = WNDCLASSEX()
wndClass.cbSize = sizeof(WNDCLASSEX)
wndClass.style = CS_HREDRAW | CS_VREDRAW
wndClass.lpfnWndProc = WndProc
wndClass.cbClsExtra = 0
wndClass.cbWndExtra = 0
wndClass.hInstance = windll.kernel32.GetModuleHandleW(0)
wndClass.hIcon = 0
wndClass.hCursor = windll.user32.LoadCursorW(0, IDC_ARROW)
wndClass.hBrush = windll.gdi32.GetStockObject(WHITE_BRUSH)
wndClass.lpszMenuName = 0
wndClass.lpszClassName = clsname
wndClass.hIconSm = 0
if not windll.user32.RegisterClassExW(byref(wndClass)):
err = windll.kernel32.GetLastError()
print('Failed to register window: ', err)
exit(0)
hWnd = windll.user32.CreateWindowExW(0, clsname, title, WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, 800, 600, 0, 0, 0, 0)
if not hWnd:
err = windll.kernel32.GetLastError()
print('Failed to create window: ', err)
exit(0)
scproc = SciterHostCallback(on_sciter_callback)
sapi.SciterSetCallback(hWnd, scproc, None)
url = u"examples/minimal.htm"
sapi.SciterLoadFile(hWnd, url)
windll.user32.ShowWindow(hWnd, SW_SHOW)
windll.user32.UpdateWindow(hWnd)
msg = MSG()
lpmsg = pointer(msg)
print('Entering message loop')
while windll.user32.GetMessageW(lpmsg, 0, 0, 0) != 0:
windll.user32.TranslateMessage(lpmsg)
windll.user32.DispatchMessageW(lpmsg)
print('Quit.')
if __name__ == '__main__':
main()
| [
"ehysta@gmail.com"
] | ehysta@gmail.com |
3b145a926f138b7e3c7654e62897c4923073adf6 | 6beecae61b6cf917ea4acd90f4064a13375d4457 | /cob_roboskin_exp/script/lwa_script_server.py | 734dd39996faa301f89d6339c874ffeabec4f1c8 | [] | no_license | ipa-rmb-mo/cob_bringup_sandbox | f1d0fd1f4d5fa239be27380efdfd12566eb99ecc | da256f1ef78d0e3e985685dd17d7930c56360414 | refs/heads/master | 2020-05-29T11:57:54.578091 | 2016-11-07T01:51:38 | 2016-11-07T01:51:38 | 56,847,368 | 0 | 1 | null | 2016-04-22T10:24:06 | 2016-04-22T10:24:06 | null | UTF-8 | Python | false | false | 2,454 | py | #!/usr/bin/python
import time
import roslib
roslib.load_manifest('cob_roboskin_test')
import rospy
from simple_script_server import script
import tf
from geometry_msgs.msg import *
from kinematics_msgs.srv import *
#this should be in manipulation_msgs
#from cob_mmcontroller.msg import *
class GraspScript(script):
def Initialize(self):
# initialize components (not needed for simulation)
self.listener = tf.TransformListener(True, rospy.Duration(10.0))
def callIKSolver(self, current_pose, goal_pose):
req = GetPositionIKRequest()
req.ik_request.ik_link_name = "arm_7_link"
req.ik_request.ik_seed_state.joint_state.position = current_pose
req.ik_request.pose_stamped = goal_pose
resp = self.iks(req)
result = []
for o in resp.solution.joint_state.position:
result.append(o)
return (result, resp.error_code)
def Run(self):
self.iks = rospy.ServiceProxy('/arm_kinematics/get_ik', GetPositionIK)
listener = tf.TransformListener(True, rospy.Duration(10.0))
rospy.sleep(2)
object_pose_bl = PoseStamped()
object_pose_bl.header.stamp = rospy.Time.now()
object_pose_bl.header.frame_id = "/arm_7_link"
object_pose_bl.pose.position.x = 0
object_pose_bl.pose.position.y = 0
object_pose_bl.pose.position.z = 0
rospy.sleep(2)
if not self.sss.parse:
object_pose_in = PoseStamped()
object_pose_in = object_pose_bl
object_pose_in.header.stamp = listener.getLatestCommonTime("/base_link",object_pose_in.header.frame_id)
object_pose_bl = listener.transformPose("/base_link", object_pose_in)
rospy.sleep(2)
[new_x, new_y, new_z, new_w] = tf.transformations.quaternion_from_euler(-1.552, -0.042, 2.481) # rpy
#[new_x, new_y, new_z, new_w] = tf.transformations.quaternion_from_euler(0,0,0) # rpy
object_pose_bl.pose.orientation.x = new_x
object_pose_bl.pose.orientation.y = new_y
object_pose_bl.pose.orientation.z = new_z
object_pose_bl.pose.orientation.w = new_w
#arm_pre_grasp = rospy.get_param("/script_server/arm/pregrasp")
arm_home = rospy.get_param("/script_server/arm/home")
# calculate ik solutions for grasp configuration
(grasp_conf, error_code) = self.callIKSolver(arm_home[0], object_pose_bl)
if(error_code.val != error_code.SUCCESS):
rospy.logerr("Ik grasp Failed")
#return 'retry'
handle_arm = self.sss.move("arm", [grasp_conf])
handle_arm.wait()
if __name__ == "__main__":
SCRIPT = GraspScript()
SCRIPT.Start()
| [
"nadia.hammoudeh-garcia@ipa.fraunhofer.de"
] | nadia.hammoudeh-garcia@ipa.fraunhofer.de |
d29c8376fe707012ec1dc91d756bf033412ffdd8 | 6053bb25f87a01b55660d07acef8a67de76bd179 | /src/py/flwr_example/quickstart_pytorch/server.py | da64cfaf683fa26ef166d746f57a7236707beeca | [
"Apache-2.0"
] | permissive | GaryYe/flower | 87c352a38698062a5f55aee62a2dc47bff0f89f7 | c47ad4532070f0ac2137a6191151fe39c59f103f | refs/heads/main | 2023-04-30T12:09:47.519081 | 2021-05-24T17:04:37 | 2021-05-24T17:04:37 | 370,049,658 | 1 | 0 | Apache-2.0 | 2021-05-23T12:39:58 | 2021-05-23T12:39:57 | null | UTF-8 | Python | false | false | 775 | py | # Copyright 2020 Adap GmbH. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import flwr as fl
if __name__ == "__main__":
fl.server.start_server(config={"num_rounds": 3})
| [
"noreply@github.com"
] | noreply@github.com |
f68bb0271e39fa7b40af4acddcc7e06351cc1a00 | a28e8305a18fb0b513b1cbb75a8897bf9cd26a49 | /catalog/migrations/0001_initial.py | 41e0c9e45131aced00921c670742750c49b4f852 | [] | no_license | mkarki1/Assignment2_deploying | 414d17e45046217f8679bfe4160e60e5f2b15d0a | 7cd38f5a53c1128c90b3582b4e87df3c3e28d740 | refs/heads/main | 2023-05-23T04:42:15.970670 | 2021-06-03T22:51:57 | 2021-06-03T22:51:57 | 373,661,053 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,949 | py | # Generated by Django 3.2.3 on 2021-05-28 23:37
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Author',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('first_name', models.CharField(max_length=100)),
('last_name', models.CharField(max_length=100)),
('date_of_birth', models.DateField(blank=True, null=True)),
('date_of_death', models.DateField(blank=True, null=True, verbose_name='Died')),
],
options={
'ordering': ['last_name', 'first_name'],
},
),
migrations.CreateModel(
name='Book',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=200)),
('summary', models.TextField(help_text='Enter a brief description of the book', max_length=1000)),
('isbn', models.CharField(help_text='13 Character <a href="https://www.isbn-international.org/content/what-isbn">ISBN number</a>', max_length=13, unique=True, verbose_name='ISBN')),
('author', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='catalog.author')),
],
),
migrations.CreateModel(
name='Genre',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(help_text='Enter a book genre (e.g. Science Fiction)', max_length=200)),
],
),
migrations.CreateModel(
name='BookInstance',
fields=[
('id', models.UUIDField(default=uuid.uuid4, help_text='Unique ID for this particular book across whole library', primary_key=True, serialize=False)),
('imprint', models.CharField(max_length=200)),
('due_back', models.DateField(blank=True, null=True)),
('status', models.CharField(blank=True, choices=[('m', 'Maintenance'), ('o', 'On loan'), ('a', 'Available'), ('r', 'Reserved')], default='m', help_text='Book availability', max_length=1)),
('book', models.ForeignKey(null=True, on_delete=django.db.models.deletion.RESTRICT, to='catalog.book')),
],
options={
'ordering': ['due_back'],
},
),
migrations.AddField(
model_name='book',
name='genre',
field=models.ManyToManyField(help_text='Select a genre for this book', to='catalog.Genre'),
),
]
| [
"noreply@github.com"
] | noreply@github.com |
78190f02bb4cf4d02a5972eb17fd30fd9c0d3bea | 87b1736c19cd79903aaf7df9e8a7f52b0bbe355c | /lab6_outlab/p7/p7.py | 2e8605962ae6378431a00764fa1be19a2423427a | [] | no_license | imagine5am/cs699-pm9 | 85029a0ab8e41f90038ab86caf0e8db0edb6bee1 | 0c28d479c688387f66575317bcadf667d8abb78a | refs/heads/main | 2023-04-06T15:37:19.828890 | 2021-04-29T18:27:31 | 2021-04-29T18:27:31 | 362,910,668 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 637 | py | if __name__ == '__main__':
num_match = int(input())
matches = dict()
players = dict()
for i in range(num_match):
match_line = input().split(':')
match_name, player_scores = match_line[0], match_line[1]
matches[match_name] = dict()
for player_score in player_scores.split(','):
player = player_score.split('-')
name, score = player[0], int(player[1])
matches[match_name][name] = score
if name in players:
players[name] += score
else:
players[name] = score
players = list(players.items())
players.sort(key = lambda player: (player[1], player[0]), reverse = True)
print(matches)
print(players) | [
"ssood@cse.iitb.ac.in"
] | ssood@cse.iitb.ac.in |
92e2ffb6e01ccf60d49d3427184289857918536d | 20f951bd927e4e5cde8ef7781813fcf0d51cc3ea | /fossir/web/forms/util.py | 1f03bb0a3347fb4b7c709e2715f01c827d7c8c71 | [] | no_license | HodardCodeclub/SoftwareDevelopment | 60a0fbab045cb1802925d4dd5012d5b030c272e0 | 6300f2fae830c0c2c73fe0afd9c684383bce63e5 | refs/heads/master | 2021-01-20T00:30:02.800383 | 2018-04-27T09:28:25 | 2018-04-27T09:28:25 | 101,277,325 | 0 | 2 | null | null | null | null | UTF-8 | Python | false | false | 2,463 | py |
from __future__ import unicode_literals
from collections import OrderedDict
from copy import deepcopy
from wtforms.fields.core import UnboundField
def get_form_field_names(form_class):
"""Returns the list of field names of a WTForm
:param form_class: A `Form` subclass
"""
unbound_fields = form_class._unbound_fields
if unbound_fields:
return [f[0] for f in unbound_fields]
field_names = []
# the following logic has been taken from FormMeta.__call__
for name in dir(form_class):
if not name.startswith('_'):
unbound_field = getattr(form_class, name)
if hasattr(unbound_field, '_formfield'):
field_names.append(name)
return field_names
def inject_validators(form, field_name, validators, early=False):
"""Add extra validators to a form field.
This function may be called from the ``__init__`` method of a
form before the ``super().__init__()`` call or on a form class.
When using a Form class note that this will modify the class, so
all new instances of it will be affected!
:param form: the `Form` instance or a `Form` subclass
:param field_name: the name of the field to change
:param validators: a list of validators to add
:param early: whether to inject the validator before any existing
validators. this is needed if a field has a validator
that stops validation such as DataRequired and the
injected one is e.g. HiddenUnless which needs to run
even if the field is invalid
"""
unbound = deepcopy(getattr(form, field_name))
assert isinstance(unbound, UnboundField)
if 'validators' in unbound.kwargs:
if early:
unbound.kwargs['validators'] = validators + unbound.kwargs['validators']
else:
unbound.kwargs['validators'] += validators
elif len(unbound.args) > 1:
if early:
validators_arg = validators + unbound.args[1]
else:
validators_arg = unbound.args[1] + validators
unbound.args = unbound.args[:1] + (validators_arg,) + unbound.args[2:]
else:
unbound.kwargs['validators'] = validators
setattr(form, field_name, unbound)
if form._unbound_fields is not None:
unbound_fields = OrderedDict(form._unbound_fields)
unbound_fields[field_name] = unbound
form._unbound_fields = unbound_fields.items()
| [
"hodardhazwinayo@gmail.com"
] | hodardhazwinayo@gmail.com |
39c7a34748c1b3e7fb4a8f0b57485acabb6c4b65 | e7efae2b83216d9621bd93390959d652de779c3d | /kyototycoon/tests/test_kyototycoon.py | b7e964c19cda690b27c0b54a0d45f7ae777a268f | [
"BSD-3-Clause",
"MIT",
"BSD-3-Clause-Modification",
"Unlicense",
"Apache-2.0",
"LGPL-3.0-only",
"LicenseRef-scancode-public-domain",
"BSD-2-Clause",
"CC0-1.0"
] | permissive | DataDog/integrations-core | ee1886cc7655972b2791e6ab8a1c62ab35afdb47 | 406072e4294edff5b46b513f0cdf7c2c00fac9d2 | refs/heads/master | 2023-08-31T04:08:06.243593 | 2023-08-30T18:22:10 | 2023-08-30T18:22:10 | 47,203,045 | 852 | 1,548 | BSD-3-Clause | 2023-09-14T16:39:54 | 2015-12-01T16:41:45 | Python | UTF-8 | Python | false | false | 1,657 | py | # (C) Datadog, Inc. 2018-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
from copy import deepcopy
import pytest
from datadog_checks.kyototycoon import KyotoTycoonCheck
from .common import DEFAULT_INSTANCE, TAGS
GAUGES = list(KyotoTycoonCheck.GAUGES.values())
DB_GAUGES = list(KyotoTycoonCheck.DB_GAUGES.values())
TOTALS = list(KyotoTycoonCheck.TOTALS.values())
RATES = list(KyotoTycoonCheck.RATES.values())
# all the RATE type metrics
ALL_RATES = TOTALS + RATES
def test_check(aggregator, dd_environment):
kt = KyotoTycoonCheck('kyototycoon', {}, {})
kt.check(deepcopy(DEFAULT_INSTANCE))
kt.check(deepcopy(DEFAULT_INSTANCE))
_assert_check(aggregator)
@pytest.mark.e2e
def test_e2e(dd_agent_check):
aggregator = dd_agent_check(DEFAULT_INSTANCE, rate=True)
_assert_check(aggregator, rate_metric_count=1)
def _assert_check(aggregator, rate_metric_count=2):
# prefix every metric with check name (kyototycoon.)
# no replications, so ignore kyototycoon.replication.delay
for mname in GAUGES:
if mname != 'replication.delay':
aggregator.assert_metric('kyototycoon.{}'.format(mname), tags=TAGS, count=2)
for mname in DB_GAUGES:
aggregator.assert_metric('kyototycoon.{}'.format(mname), tags=TAGS + ['db:0'], count=2)
for mname in ALL_RATES:
aggregator.assert_metric('kyototycoon.{}_per_s'.format(mname), tags=TAGS, count=rate_metric_count)
# service check
aggregator.assert_service_check(KyotoTycoonCheck.SERVICE_CHECK_NAME, status=KyotoTycoonCheck.OK, tags=TAGS, count=2)
aggregator.assert_all_metrics_covered()
| [
"noreply@github.com"
] | noreply@github.com |
c425b9cf9d1956cd084ab9cc72fc011bac13d073 | fd059414d6474f004d583b77d106196d0f13dfe2 | /analyser.py | e6cd4984c0a8562f4bd2f88e2320c529ed578957 | [] | no_license | parsekarnehal/pythonWorkshop | 743171e4f95b10836f510a7755c1e5ce091cce49 | 60bb396e7b650bfd6d6171f4a9906b106087fbb4 | refs/heads/master | 2020-04-29T08:41:56.176518 | 2019-03-16T16:21:16 | 2019-03-16T16:21:16 | 169,849,686 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 259 | py | from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()
def scoreAnalyser(sentence):
score = analyser.polarity_scores(sentence)
print(score['compound'])
scoreAnalyser("The movie was very good")
| [
"parsekarnehal@gmail.com"
] | parsekarnehal@gmail.com |
7174ff52ea33f34fd5f98c5cf397e907b859575a | 61f6bc565cedebec5424f0c765b12bb6ce7cb63b | /printing_tweets_inCSV.py | 667f4ee797d251a9f32d70113780598cfeb583c0 | [] | no_license | Vikashpro/tweets_sentiments | 4a86a5f1f150d893491238db6438d2d980981ad1 | 3c9b76ec8b386d5fba361ed0454c39d92d781c8e | refs/heads/master | 2018-09-20T12:38:53.678035 | 2018-06-06T14:03:56 | 2018-06-06T14:03:56 | 136,315,730 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,234 | py | import tweepy
from textblob import TextBlob
import numpy as np
import operator
#step 1 : get authentication
consumer_key = 'COMSUMER_KEY'
consumer_secret = 'CONSUMER_SECRET'
access_token = 'ACCESS_TOKEN'
access_token_secret = 'ACCESS_TOKEN_SECRET'
auth = tweepy.OAuthHandler(consumer_key,consumer_secret)
auth.set_access_token(access_token,access_token_secret)
api = tweepy.API(auth)
#step 2: prepare query features
list_of_hashTags = ['Startup','ArtificialIntelligence','MachineLearning','Entrepreneurship']
since_date = "2017-01-01"
until_date = "2017-06-05"
#step 3 - Function of labelisation of analysis
def get_label(analysis, threshold = 0.1):
if analysis.sentiment.polarity>threshold:
return 'Positive'
elif analysis.sentiment.polarity>-0.1:
return 'Neutral'
else:
return 'Negative'
#step 4: retrive tweets and save them
all_polarities = dict()
for hash_tag in list_of_hashTags:
    this_hashTag_tweets = api.search(hash_tag)
with open('%s_tweets.csv' % hash_tag,'w') as this_hashTag_file:
this_hashTag_file.write('tweet, sentiment_label\n')
for tweet in this_hashTag_tweets:
analysis = TextBlob(tweet.text)
inputt = tweet.text + ", " + get_label(analysis) + "\n"
this_hashTag_file.write(inputt)
| [
"vikprogrammer@gmail.com"
] | vikprogrammer@gmail.com |
aaa4d43eedd631f95373b5c1e619128ca3d84716 | 0211ae622c2adedb637bd7da1a9678d98d8ae14a | /views.py | f8c81622466fc7d5a530780f794b03244321c719 | [] | no_license | paramjit-tech/TextUtills | 83a0489dda8d3588b0ce6267bee09fd1c04aa9b0 | 64c1b163369d1a69addd37388d6fa38b9ae6228a | refs/heads/master | 2021-05-23T09:32:40.236845 | 2020-04-06T02:21:40 | 2020-04-06T02:21:40 | 253,222,846 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,993 | py | # I have created this file-Param
from typing import Dict
from django.http import HttpResponse
from django.shortcuts import render
def index(request):
params = {'name':'paramjit', 'place':'Canada'}
return render(request, 'index.html',params)
def analyze(request):
#get the text
djtext= request.POST.get('text','default')
# check checkbox value
removepunc= request.POST.get('removepunc','off')
fullcaps= request.POST.get('fullcaps','off')
newlinerem= request.POST.get('newlinerem','off')
spaceremover= request.POST.get('spaceremover','off')
print(removepunc)
print(djtext)
#check with checkbox is on
if removepunc == "on":
#analyzed = djtext
punctuations = '''! ()-[] {}:;' "\,<>./?@#$%^&*_~'''
analyzed=""
for char in djtext:
if char not in punctuations:
analyzed= analyzed+char
params = {'purpose': 'Removed punctuation', 'analyzed_text': analyzed}
djtext=analyzed
if(fullcaps=='on'):
analyzed=""
for char in djtext:
analyzed=analyzed+char.upper()
params = {'purpose': 'Changed to uppercase', 'analyzed_text': analyzed}
djtext=analyzed
if (newlinerem == 'on'):
analyzed = ""
for char in djtext:
if char != "\n" and char !="\r":
analyzed = analyzed + char
params = {'purpose': 'Remove new lines', 'analyzed_text': analyzed}
djtext = analyzed
if(spaceremover=='on'):
analyzed = ""
for index, char in enumerate(djtext):
if djtext[index]==" " and djtext[index+1]== " ":
pass
else:
analyzed = analyzed + char
params = {'purpose': 'Extra space remover', 'analyzed_text': analyzed}
# analyze the text
    if removepunc != "on" and fullcaps != 'on' and newlinerem != 'on' and spaceremover != 'on':
return HttpResponse(djtext)
return render(request, 'analyze.html', params)
| [
"ts2102102@gmail.com"
] | ts2102102@gmail.com |
5af066017354ddfe77b49febe3569943670c638a | 9faf3bd9b92f761ca5dc840a1b8fa237486e459b | /ros/src/robot_base/scripts/base/base_node.py | 9d2ea65f31d873cf03281ad07ac41e676663a251 | [] | no_license | lolwuz/IDP | abbc6686159eeef6a1ad139faf12bdc7cada7f4e | da276f8197f815aa398d65d2e9e516c4121f6d0b | refs/heads/master | 2020-05-16T10:37:35.801795 | 2019-04-23T10:20:17 | 2019-04-23T10:20:17 | 182,990,013 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,611 | py | #!/usr/bin/env python
import rospy
import math
from robot_controller.msg import MotorState, PwmState, SteerState, GoalPosition, Led, GripperState
from robot_base.msg import BaseMsg
from sensor_msgs.msg import JointState
class BaseNode(object):
def __init__(self, name, is_running, rate):
""" Base node with default functions """
rospy.init_node('robot_base_' + name, anonymous=True)
# Publishers for controller, motor direction, pwm, servo_steer and servo_suspension
self.motor_state_publisher = rospy.Publisher('motor_state', MotorState, queue_size=1)
self.motor_pwm_publisher = rospy.Publisher('motor_pwm', PwmState, queue_size=1)
self.motor_servo_publisher = rospy.Publisher('motor_servo', PwmState, queue_size=1)
self.servo_steer_publisher = rospy.Publisher('servo_steer', JointState, queue_size=1)
self.servo_suspension_publisher = rospy.Publisher('suspension', JointState, queue_size=1)
self.goal_position_publisher = rospy.Publisher('set_goal_position', GoalPosition, queue_size=1)
self.led_publisher = rospy.Publisher('led', Led, queue_size=1)
self.set_state_gripper = rospy.Publisher('change_gripper_state', GripperState, queue_size=1)
# Base subscriber
rospy.Subscriber('robot_base', BaseMsg, self.base_callback)
self.start_time = 0
self.move_array = [] # Array of moves that are already executed
self.led_array = []
self.motor_array = []
# Update loop
self.rate = rospy.Rate(rate)
self.is_running = is_running
self.name = name
def base_callback(self, data):
"""
Base callback from remote
:param data:
:return:
"""
rospy.logout(data)
if data.base_msg == self.name:
self.is_running = True
self.move_array = []
self.led_array = []
self.motor_array = []
self.start_time = rospy.get_time()
else:
self.is_running = False
def update(self):
""" Default update loop with sleep """
self.rate.sleep()
def set_goal_position(self, id_array, position_array, speed_array):
"""
Set servo goal position from dxl angle
:return:
"""
goal_message = GoalPosition()
goal_message.header.stamp = rospy.Time.now()
goal_message.id_array = id_array
goal_message.position_array = position_array
goal_message.speed_array = speed_array
self.goal_position_publisher.publish(goal_message)
def change_motor_state(self, name, direction):
"""
Send the command to change the direction of a motor
:param name: name of motor to change. "all" for all motors
:param direction: "left", "right" or "off"
"""
motor_message = MotorState()
motor_message.header.stamp = rospy.Time.now()
motor_message.name = name
motor_message.state = direction
self.motor_state_publisher.publish(motor_message)
def change_pwm_state(self, pwm):
"""
Send the command to change the pwm speed of all the motors.
:param pwm: pwm value in the range 0-4095
:return:
"""
pwm_message = PwmState()
pwm_message.header.stamp = rospy.Time.now()
pwm_message.pwm = pwm
self.motor_pwm_publisher.publish(pwm_message)
def set_led_function(self, rgb, section, part, function):
"""
:param section: led section of the robot
:param function: function off led section
:return:
"""
led_message = Led()
led_message.header.stamp = rospy.Time.now()
led_message.section = section
led_message.part = part
led_message.function = function
led_message.rgb = rgb
self.led_publisher.publish(led_message)
def set_gripper_state(self, pwm):
"""
Set the gripper state (pwm) in range (246, 491) open/close
:param pwm: pwm value to set
:return:
"""
servo_message = PwmState()
servo_message.header.stamp = rospy.Time.now()
servo_message.pwm = pwm
self.motor_servo_publisher.publish(servo_message)
def change_gripper_state(self, clamp=True):
"""
Changes the gripper state from clamp to unclamp
:return:
"""
gripper_state = GripperState()
gripper_state.header.stamp = rospy.Time.now()
gripper_state.clamp = clamp
self.set_state_gripper.publish(gripper_state)
| [
"martenhoekstra2@gmail.com"
] | martenhoekstra2@gmail.com |
912647855a1b4890e7b9513bccec2b3a0971cbf6 | 95b1bfb8b1208de406c32c213913c046967d04ba | /evaluation/IAM-evaluation-sample/take_evaluation_sample.py | a1be3d92a345ef95be37e17d32381f536529b305 | [] | no_license | Linguistics575/575_OCR | 7755f58bbadd5f01e6d399b8bdcd6f6ab997d3a9 | b3e8c3074e0bc8813189064cf7fca83b0b307e9c | refs/heads/master | 2020-03-09T02:11:40.584585 | 2018-06-07T16:03:34 | 2018-06-07T16:03:34 | 128,534,525 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,783 | py | '''
script to take a sample of 100 forms from the IAM database, stored locally.
Takes the sample and copies the files.
All paths are hard-coded.
'''
import numpy as np
import os
from shutil import copy
from sys import stderr
def get_all_form_ids(form_file):
'''
Return a list of all form_ids from the form_file
the form_ids are the first whitespace delimited element on the line.
"#" is a comment character, so skip those lines
'''
form_ids = []
with open(form_file) as f:
for line in f.readlines():
if line.startswith("#"):
continue
elements = line.split()
if elements:
form_ids.append(elements[0])
return form_ids
def take_sample(population, sample_size, seed_value=0):
'''
Take a sample from a population using a seed for a random state
Parameters:
-----------
population : list
population from which to sample
sample_size : int
seed_value : int (defaults to 0)
seed to use for random state (for reproducibility purposes)
Returns:
--------
sample : list
sample of size sample_size from population taken using seed_value
'''
random_state = np.random.RandomState(seed=seed_value)
sample = random_state.choice(population, sample_size, replace=False)
return sorted(sample)
def main():
# file that lists all the forms:
form_file = r'/media/jbruno/big_media/575_data/IAM/ascii/forms.txt'
# file that will list the members of our sample
sample_list_file = './sample_forms.ls'
# directory that holds all the images
png_dir = '/media/jbruno/big_media/575_data/IAM/forms'
# directory to hold the sample:
sample_dir = "./sample_png_files"
# if it doesn't exist, make it:
if not os.path.isdir(sample_dir):
os.mkdir(sample_dir)
# read in all the form ids
all_form_ids = get_all_form_ids(form_file)
sample_size = 100
seed_value = 9
sample = take_sample(all_form_ids, sample_size, seed_value)
# output the members of our sample to the list file:
with open(sample_list_file, "w") as f:
for form in sample:
print(form, file=f)
# and copy the files over
for form in sample:
source = os.path.join(png_dir, form + ".png")
# we're going to call these "not_done" because we're going to manually
# crop them. As we crop them, we'll take the "not_done" away from the
# filename
dest = os.path.join(sample_dir, form + "not_done.png")
if os.path.exists(dest):
print(dest, "exists. Skipping this one.", file=stderr)
else:
copy(source, dest)
if __name__ == '__main__':
main()
| [
"jbruno@uw.edu"
] | jbruno@uw.edu |
fcd1dd7f9a39674fd518e6db3c7dda9b4f19be65 | 78ce9fd8a5e91e7c375739d834cb2488d3b9b2d3 | /gravityinfraredco2sensor/__main__.py | d2acbb1b137bdd1fb77cc4222ddac38516ec6028 | [
"MIT"
] | permissive | osoken/py-gravity-infrared-co2-sensor | 2829c1dbd9b61cc23abfb01da0ba8555c40318c6 | 242885e16d2bb0c43d8abbb9807f9932c8209427 | refs/heads/master | 2020-06-04T13:24:01.665379 | 2019-06-15T05:54:20 | 2019-06-15T05:54:20 | 192,040,068 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 258 | py | # -*- coding: utf-8 -*-
from argparse import ArgumentParser
from . server import gen_app
parser = ArgumentParser(description='run CO2 sensor server')
args = parser.parse_args()
app = gen_app()
app.run(host=app.config['HOST'], port=app.config['PORT'])
| [
"osoken.devel@outlook.jp"
] | osoken.devel@outlook.jp |
09a83e46e47353b343109304cf6f67d42fd334f5 | aeef6b4493c804d09e872346213f80caa6227c7e | /tf2caffe/utils/util.py | a3a9c2a5054be4466df2435c243df374683d1dcc | [] | no_license | manogna-s/TF_to_Caffe_conversion | 906730b5c6d26d9c2cf63d06dd774a9dacd0c6ea | c801d133a9a0116e63c24feb4680ea202706bf7a | refs/heads/master | 2023-05-30T09:41:24.993957 | 2020-03-14T19:13:02 | 2020-03-14T19:13:02 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,481 | py | # Author: Bichen Wu (bichen@berkeley.edu) 08/25/2016
"""Utility functions."""
import numpy as np
import time
#import tensorflow as tf
def iou(box1, box2):
"""Compute the Intersection-Over-Union of two given boxes.
Args:
box1: array of 4 elements [cx, cy, width, height].
box2: same as above
Returns:
iou: a float number in range [0, 1]. iou of the two boxes.
"""
lr = min(box1[0]+0.5*box1[2], box2[0]+0.5*box2[2]) - \
max(box1[0]-0.5*box1[2], box2[0]-0.5*box2[2])
if lr > 0:
tb = min(box1[1]+0.5*box1[3], box2[1]+0.5*box2[3]) - \
max(box1[1]-0.5*box1[3], box2[1]-0.5*box2[3])
if tb > 0:
intersection = tb*lr
union = box1[2]*box1[3]+box2[2]*box2[3]-intersection
return intersection/union
return 0
def batch_iou(boxes, box):
"""Compute the Intersection-Over-Union of a batch of boxes with another
box.
Args:
    boxes: 2D array of [cx, cy, width, height].
    box: a single array of [cx, cy, width, height]
Returns:
ious: array of a float number in range [0, 1].
"""
lr = np.maximum(
np.minimum(boxes[:,0]+0.5*boxes[:,2], box[0]+0.5*box[2]) - \
np.maximum(boxes[:,0]-0.5*boxes[:,2], box[0]-0.5*box[2]),
0
)
tb = np.maximum(
np.minimum(boxes[:,1]+0.5*boxes[:,3], box[1]+0.5*box[3]) - \
np.maximum(boxes[:,1]-0.5*boxes[:,3], box[1]-0.5*box[3]),
0
)
inter = lr*tb
union = boxes[:,2]*boxes[:,3] + box[2]*box[3] - inter
return inter/union
def nms(boxes, probs, threshold):
"""Non-Maximum supression.
Args:
boxes: array of [cx, cy, w, h] (center format)
probs: array of probabilities
    threshold: two boxes are considered overlapping if their IOU is larger than
      this threshold
form: 'center' or 'diagonal'
Returns:
keep: array of True or False.
"""
order = probs.argsort()[::-1]
keep = [True]*len(order)
for i in range(len(order)-1):
ovps = batch_iou(boxes[order[i+1:]], boxes[order[i]])
for j, ov in enumerate(ovps):
if ov > threshold:
keep[order[j+i+1]] = False
return keep
# TODO(bichen): this is not equivalent with full NMS. Need to improve it.
def recursive_nms(boxes, probs, threshold, form='center'):
"""Recursive Non-Maximum supression.
Args:
boxes: array of [cx, cy, w, h] (center format) or [xmin, ymin, xmax, ymax]
probs: array of probabilities
    threshold: two boxes are considered overlapping if their IOU is larger than
      this threshold
form: 'center' or 'diagonal'
Returns:
keep: array of True or False.
"""
assert form == 'center' or form == 'diagonal', \
'bounding box format not accepted: {}.'.format(form)
if form == 'center':
# convert to diagonal format
boxes = np.array([bbox_transform(b) for b in boxes])
areas = (boxes[:, 2]-boxes[:, 0])*(boxes[:, 3]-boxes[:, 1])
hidx = boxes[:, 0].argsort()
keep = [True]*len(hidx)
def _nms(hidx):
order = probs[hidx].argsort()[::-1]
for idx in range(len(order)):
if not keep[hidx[order[idx]]]:
continue
xx2 = boxes[hidx[order[idx]], 2]
for jdx in range(idx+1, len(order)):
if not keep[hidx[order[jdx]]]:
continue
xx1 = boxes[hidx[order[jdx]], 0]
if xx2 < xx1:
break
w = xx2 - xx1
yy1 = max(boxes[hidx[order[idx]], 1], boxes[hidx[order[jdx]], 1])
yy2 = min(boxes[hidx[order[idx]], 3], boxes[hidx[order[jdx]], 3])
if yy2 <= yy1:
continue
h = yy2-yy1
inter = w*h
iou = inter/(areas[hidx[order[idx]]]+areas[hidx[order[jdx]]]-inter)
if iou > threshold:
keep[hidx[order[jdx]]] = False
def _recur(hidx):
if len(hidx) <= 20:
_nms(hidx)
else:
      mid = len(hidx) // 2
_recur(hidx[:mid])
_recur(hidx[mid:])
_nms([idx for idx in hidx if keep[idx]])
_recur(hidx)
return keep
def sparse_to_dense(sp_indices, output_shape, values, default_value=0):
"""Build a dense matrix from sparse representations.
Args:
sp_indices: A [0-2]-D array that contains the index to place values.
shape: shape of the dense matrix.
values: A {0,1}-D array where values corresponds to the index in each row of
sp_indices.
default_value: values to set for indices not specified in sp_indices.
Return:
A dense numpy N-D array with shape output_shape.
"""
assert len(sp_indices) == len(values), \
'Length of sp_indices is not equal to length of values'
array = np.ones(output_shape) * default_value
for idx, value in zip(sp_indices, values):
array[tuple(idx)] = value
return array
def bgr_to_rgb(ims):
"""Convert a list of images from BGR format to RGB format."""
out = []
for im in ims:
out.append(im[:,:,::-1])
return out
def bbox_transform(bbox):
"""convert a bbox of form [cx, cy, w, h] to [xmin, ymin, xmax, ymax]. Works
for numpy array or list of tensors.
"""
# with tf.variable_scope('bbox_transform') as scope:
cx, cy, w, h = bbox
out_box = [[]]*4
out_box[0] = cx-w/2
out_box[1] = cy-h/2
out_box[2] = cx+w/2
out_box[3] = cy+h/2
return out_box
def bbox_transform_inv(bbox):
"""convert a bbox of form [xmin, ymin, xmax, ymax] to [cx, cy, w, h]. Works
for numpy array or list of tensors.
"""
#with tf.variable_scope('bbox_transform_inv') as scope:
xmin, ymin, xmax, ymax = bbox
out_box = [[]]*4
width = xmax - xmin + 1.0
height = ymax - ymin + 1.0
out_box[0] = xmin + 0.5*width
out_box[1] = ymin + 0.5*height
out_box[2] = width
out_box[3] = height
return out_box
class Timer(object):
def __init__(self):
self.total_time = 0.0
self.calls = 0
self.start_time = 0.0
self.duration = 0.0
self.average_time = 0.0
def tic(self):
self.start_time = time.time()
def toc(self, average=True):
self.duration = time.time() - self.start_time
self.total_time += self.duration
self.calls += 1
self.average_time = self.total_time/self.calls
if average:
return self.average_time
else:
return self.duration
def safe_exp(w, thresh):
"""Safe exponential function for tensors."""
slope = np.exp(thresh)
#with tf.variable_scope('safe_exponential'):
lin_bool = w > thresh
lin_region = lin_bool.astype(np.float32)
lin_out = slope*(w - thresh + 1.)
exp_out = np.exp(np.where(lin_bool, np.zeros_like(w), w))
out = lin_region*lin_out + (1.-lin_region)*exp_out
return out
| [
"noreply@github.com"
] | noreply@github.com |
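The `iou` docstring in the `util.py` entry above describes the center-format overlap computation. As a quick standalone trace of the same arithmetic (pure Python, no NumPy; the box values below are made up for illustration):

```python
# Standalone re-derivation of the center-format IoU from the util.py entry
# above (hypothetical example boxes; pure Python, no NumPy required).

def iou(box1, box2):
    """Intersection-over-Union of two [cx, cy, w, h] boxes."""
    # Horizontal overlap: min of right edges minus max of left edges.
    lr = min(box1[0] + 0.5 * box1[2], box2[0] + 0.5 * box2[2]) - \
         max(box1[0] - 0.5 * box1[2], box2[0] - 0.5 * box2[2])
    if lr > 0:
        # Vertical overlap: min of bottom edges minus max of top edges.
        tb = min(box1[1] + 0.5 * box1[3], box2[1] + 0.5 * box2[3]) - \
             max(box1[1] - 0.5 * box1[3], box2[1] - 0.5 * box2[3])
        if tb > 0:
            intersection = tb * lr
            union = box1[2] * box1[3] + box2[2] * box2[3] - intersection
            return intersection / union
    return 0.0

# Two 2x2 boxes offset by (1, 1): the overlap is a 1x1 square,
# the union is 4 + 4 - 1 = 7, so IoU = 1/7.
print(iou([0, 0, 2, 2], [1, 1, 2, 2]))  # -> 0.14285714285714285
```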
1d71b7125d01aec9110239b478452b9f53a321c7 | 460f928904593ff770809cd977b7a20fb2c9c119 | /Remove Duplicates.py | c8c81c184e0c78190d164d20c4e1da842bb4090e | [] | no_license | srilekha-peace/Strings | 0d679d268d016bf6ea631840152a5d107bbad65a | d9c28b544dc7d85f844f32138e22c459a16bb2ff | refs/heads/master | 2020-12-01T17:26:21.288003 | 2020-02-09T17:08:00 | 2020-02-09T17:08:00 | 230,710,752 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 365 | py | def remove_duplicates(s, n):
res = 0
for i in range(0, n):
for j in range(0, i+1):
if s[i] == s[j]:
break
if(j == i):
s[res] = s[i]
res += 1
return "".join(s[:res])
if __name__ == '__main__':
str = "abacbdce"
s = list(str)
n = len(s)
print(remove_duplicates(s, n))
| [
"noreply@github.com"
] | noreply@github.com |
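The `remove_duplicates` entry above keeps the first occurrence of each character with an O(n²) scan. In modern Python the same first-occurrence semantics comes for free from insertion-ordered dict keys — a sketch of the equivalent one-liner, not part of the original file:

```python
# Order-preserving de-duplication equivalent to the O(n^2) loop in the
# entry above, using the insertion order of dict keys (Python 3.7+).

def remove_duplicates(s: str) -> str:
    # dict.fromkeys keeps the first occurrence of each character, in order.
    return "".join(dict.fromkeys(s))

print(remove_duplicates("abacbdce"))  # -> abcde
```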
e84b95bb39988f554a01b1e8c3b68a406c186fe0 | 76b5c27b67ac195abeb817032527c2c80911e4b7 | /peekalink/tests/models/test_link_preview.py | e112146bed078d6450edb5d90c4c373b7a6c7310 | [
"MIT"
] | permissive | jozsefsallai/peekalink.py | e371ed0117f7b4e5525c4db5c357f9d4a797aea1 | fcd3b5573557768ede9b0d96fe931572be7c468d | refs/heads/master | 2023-06-10T07:35:22.899682 | 2021-06-27T14:57:46 | 2021-06-27T14:57:46 | 380,608,158 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,136 | py | import unittest
from datetime import datetime
from dateutil.tz import tzutc
from peekalink.models.link_preview import LinkPreview
from peekalink.models.helpers.image_asset import ImageAsset
from peekalink.models.helpers.link_details import TwitterDetails, YouTubeDetails
from peekalink.models.helpers.content_type import ContentType
from peekalink.tests.support.fixtures import *
class TestLinkPreview(unittest.TestCase):
def test_from_json(self):
preview = LinkPreview.from_json(YOUTUBE_PREVIEW)
self.assertEqual(preview.url, YOUTUBE_PREVIEW['url'])
self.assertEqual(preview.domain, YOUTUBE_PREVIEW['domain'])
self.assertEqual(preview.last_updated, datetime(2021, 4, 9, 23, 28, 10, 364301, tzinfo=tzutc()))
self.assertEqual(preview.next_update, datetime(2021, 4, 10, 23, 28, 9, 16859, tzinfo=tzutc()))
self.assertEqual(preview.content_type, ContentType.HTML)
self.assertEqual(preview.mime_type, YOUTUBE_PREVIEW['mimeType'])
self.assertEqual(preview.size, YOUTUBE_PREVIEW['size'])
self.assertTrue(preview.redirected)
self.assertIsNotNone(preview.redirection_url)
self.assertEqual(preview.redirection_url, YOUTUBE_PREVIEW['redirectionUrl'])
self.assertIsNotNone(preview.redirection_count)
self.assertEqual(preview.redirection_count, YOUTUBE_PREVIEW['redirectionCount'])
self.assertIsNotNone(preview.redirection_trail)
self.assertListEqual(preview.redirection_trail, YOUTUBE_PREVIEW['redirectionTrail'])
self.assertIsNotNone(preview.title)
self.assertEqual(preview.title, YOUTUBE_PREVIEW['title'])
self.assertIsNotNone(preview.description)
self.assertEqual(preview.description, YOUTUBE_PREVIEW['description'])
self.assertEqual(preview.name, YOUTUBE_PREVIEW['name'])
self.assertTrue(preview.trackers_detected)
self.assertIsNotNone(preview.icon)
self.assertIsInstance(preview.icon, ImageAsset)
self.assertIsNotNone(preview.image)
self.assertIsInstance(preview.image, ImageAsset)
def test_is_youtube(self):
preview = LinkPreview.from_json(GENERIC_PREVIEW)
self.assertFalse(preview.is_youtube())
preview = LinkPreview.from_json(YOUTUBE_PREVIEW)
self.assertTrue(preview.is_youtube())
def test_is_twitter(self):
preview = LinkPreview.from_json(GENERIC_PREVIEW)
self.assertFalse(preview.is_twitter())
preview = LinkPreview.from_json(TWITTER_PREVIEW)
self.assertTrue(preview.is_twitter())
def test_youtube(self):
preview = LinkPreview.from_json(GENERIC_PREVIEW)
self.assertIsNone(preview.youtube())
preview = LinkPreview.from_json(YOUTUBE_PREVIEW)
self.assertIsNotNone(preview.youtube())
self.assertIsInstance(preview.youtube(), YouTubeDetails)
def test_twitter(self):
preview = LinkPreview.from_json(GENERIC_PREVIEW)
self.assertIsNone(preview.twitter())
preview = LinkPreview.from_json(TWITTER_PREVIEW)
self.assertIsNotNone(preview.twitter())
self.assertIsInstance(preview.twitter(), TwitterDetails)
def test_to_json_dict(self):
preview = LinkPreview.from_json(YOUTUBE_PREVIEW)
self.assertDictEqual(preview.to_json_dict(), YOUTUBE_PREVIEW)
| [
"jozsef@sallai.me"
] | jozsef@sallai.me |
c96e7a60b206a9a0ed2292d388e43a340c284cc5 | 5c2f520dde0cf8077facc0fcd9a92bc1a96d168b | /from_cpython/Lib/types.py | b8166edf94fcba78305f53822b17e69b61cb466e | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference",
"Python-2.0"
] | permissive | nagyist/pyston | b613337a030ef21a3f03708febebe76cedf34c61 | 14ba2e6e6fb5c7316f66ccca86e6c6a836d96cab | refs/heads/master | 2022-12-24T03:56:12.885732 | 2015-02-25T11:11:08 | 2015-02-25T11:28:13 | 31,314,596 | 0 | 0 | NOASSERTION | 2022-12-17T08:15:11 | 2015-02-25T13:24:41 | Python | UTF-8 | Python | false | false | 2,342 | py | """Define names for all type symbols known in the standard interpreter.
Types that are part of optional modules (e.g. array) are not listed.
"""
import sys
# Iterators in Python aren't a matter of type but of protocol. A large
# and changing number of builtin types implement *some* flavor of
# iterator. Don't check the type! Use hasattr to check for both
# "__iter__" and "next" attributes instead.
NoneType = type(None)
TypeType = type
ObjectType = object
IntType = int
LongType = long
FloatType = float
BooleanType = bool
try:
ComplexType = complex
except NameError:
pass
StringType = str
# StringTypes is already outdated. Instead of writing "type(x) in
# types.StringTypes", you should use "isinstance(x, basestring)". But
# we keep around for compatibility with Python 2.2.
try:
UnicodeType = unicode
StringTypes = (StringType, UnicodeType)
except NameError:
StringTypes = (StringType,)
# Pyston change: 'buffer' is not implemented yet
# BufferType = buffer
TupleType = tuple
ListType = list
DictType = DictionaryType = dict
def _f(): pass
FunctionType = type(_f)
LambdaType = type(lambda: None) # Same as FunctionType
# Pyston change: there is no concept of a "code object" yet:
# CodeType = type(_f.func_code)
def _g():
yield 1
GeneratorType = type(_g())
class _C:
def _m(self): pass
ClassType = type(_C)
UnboundMethodType = type(_C._m) # Same as MethodType
_x = _C()
InstanceType = type(_x)
MethodType = type(_x._m)
BuiltinFunctionType = type(len)
BuiltinMethodType = type([].append) # Same as BuiltinFunctionType
ModuleType = type(sys)
FileType = file
XRangeType = xrange
# Pyston change: we don't support sys.exc_info yet
"""
try:
raise TypeError
except TypeError:
tb = sys.exc_info()[2]
TracebackType = type(tb)
FrameType = type(tb.tb_frame)
del tb
"""
SliceType = slice
# Pyston change: don't support this yet
# EllipsisType = type(Ellipsis)
# Pyston change: don't support this yet
# DictProxyType = type(TypeType.__dict__)
NotImplementedType = type(NotImplemented)
# For Jython, the following two types are identical
# Pyston change: don't support these yet
# GetSetDescriptorType = type(FunctionType.func_code)
# MemberDescriptorType = type(FunctionType.func_globals)
del sys, _f, _g, _C, _x # Not for export
| [
"kmod@dropbox.com"
] | kmod@dropbox.com |
24da27cc4ddcbf764296ab7823036015e6244e5c | 97eb2aed00f42ada3df7ac63d240eb8073051fb8 | /Analysis.py | 9bfcccfc28bb7124f000211f7385454c35c83a41 | [] | no_license | Ty-Stinson/Opioid_Crisis_Project | dabbcad61df65bc27d5e384c696cc9c976fc0a06 | cff09970b95da096f88b2e723c31be6c306cccda | refs/heads/master | 2022-02-16T18:49:52.929451 | 2019-08-21T20:51:59 | 2019-08-21T20:51:59 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,981 | py | # Hussein, Ty, and Ongun
# Age Groups (15 - 64) vs. Years (01 - 15)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Import~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.interpolate as sc
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Data Table~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
filename = "Data.txt"
table = pd.read_table(filename)
print(table)
print(table.columns)
print()
print()
print()
print()
print()
#~~~~~~~~~~~~~~~~~~~Plots, Regression Models, and Error Bars~~~~~~~~~~~~~~~~~~~~
# Age Group: 15 - 24
deaths1 = table["Deaths"][0:15]
population1 = table["Population"][0:15]
y1 = (deaths1/population1) * 100
years1 = table["Year"][0:15]
fig,ax1 = plt.subplots(1)
ax1.plot(years1, y1, "o")
plt.title("Opioid Death Rate for Age Group: 15 - 24")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
p1 = np.polyfit(years1, y1, 3)
slope1 = p1[0]
intercept11 = p1[1]
intercept12 = p1[2]
intercept13 = p1[3]
yfit1 = (slope1 * (years1 ** 3)) + (intercept11 * (years1 ** 2)) + (intercept12 * years1) + intercept13
sd1 = np.std(y1, ddof = 1);
se1 = sd1 / (np.sqrt(15))
errors1 = [se1] * 15
plt.plot(years1, np.polyval(p1, years1), "r-")
plt.errorbar(years1, y1, yerr = errors1, fmt = "o")
plt.show()
results1 = {}
results1["polynomial_1"] = p1.tolist()
correlation1 = np.poly1d(p1)
y1hat = correlation1(years1)
y1bar = np.sum(y1)/len(y1)
ssreg1 = np.sum((y1hat - y1bar) ** 2)
sstot1 = np.sum((y1 - y1bar) ** 2)
results1["determination_1"] = ssreg1/sstot1
print(results1)
print()
# Age Group: 25 - 34
deaths2 = table["Deaths"][16:31]
population2 = table["Population"][16:31]
y2 = (deaths2/population2) * 100
years2 = table["Year"][16:31]
fig,ax2 = plt.subplots(1)
ax2.plot(years2, y2, "o")
plt.title("Opioid Death Rate for Age Group: 25 - 34")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
p2 = np.polyfit(years2, y2, 3)
slope2 = p2[0]
intercept21 = p2[1]
intercept22 = p2[2]
intercept23 = p2[3]
yfit2 = (slope2 * (years2 ** 3)) + (intercept21 * (years2 ** 2)) + (intercept22 * years2) + intercept23
sd2 = np.std(y2, ddof = 1);
se2 = sd2 / (np.sqrt(15))
errors2 = [se2] * 15
plt.plot(years2, np.polyval(p2, years2), "r-")
plt.errorbar(years2, y2, yerr = errors2, fmt = "o")
plt.show()
results2 = {}
results2["polynomial_2"] = p2.tolist()
correlation2 = np.poly1d(p2)
y2hat = correlation2(years2)
y2bar = np.sum(y2)/len(y2)
ssreg2 = np.sum((y2hat - y2bar) ** 2)
sstot2 = np.sum((y2 - y2bar) ** 2)
results2["determination_2"] = ssreg2/sstot2
print(results2)
print()
# Age Group: 35 - 44
deaths3 = table["Deaths"][32:47]
population3 = table["Population"][32:47]
y3 = (deaths3/population3) * 100
years3 = table["Year"][32:47]
fig,ax3 = plt.subplots(1)
ax3.plot(years3, y3, "o")
plt.title("Opioid Death Rate for Age Group: 35 - 44")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
p3 = np.polyfit(years3, y3, 4)
slope3 = p3[0]
intercept31 = p3[1]
intercept32 = p3[2]
intercept33 = p3[3]
intercept34 = p3[4]
yfit3 = (slope3 * (years3 ** 4)) + (intercept31 * (years3 ** 3)) + (intercept32 * (years3 ** 2)) + (intercept33 * years3) + intercept34
sd3 = np.std(y3, ddof = 1);
se3 = sd3 / (np.sqrt(15))
errors3 = [se3] * 15
plt.plot(years3, np.polyval(p3, years3), "r-")
plt.errorbar(years3, y3, yerr = errors3, fmt = "o")
plt.show()
results3 = {}
results3["polynomial_3"] = p3.tolist()
correlation3 = np.poly1d(p3)
y3hat = correlation3(years3)
y3bar = np.sum(y3)/len(y3)
ssreg3 = np.sum((y3hat - y3bar) ** 2)
sstot3 = np.sum((y3 - y3bar) ** 2)
results3["determination_3"] = ssreg3/sstot3
print(results3)
print()
# Age Group: 45 - 54
deaths4 = table["Deaths"][48:63]
population4 = table["Population"][48:63]
y4 = (deaths4/population4) * 100
years4 = table["Year"][48:63]
fig,ax4 = plt.subplots(1)
ax4.plot(years4, y4, "o")
plt.title("Opioid Death Rate for Age Group: 45 - 54")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
p4 = np.polyfit(years4, y4, 4)
slope4 = p4[0]
intercept41 = p4[1]
intercept42 = p4[2]
intercept43 = p4[3]
intercept44 = p4[4]
yfit4 = (slope4 * (years4 ** 4)) + (intercept41 * (years4 ** 3)) + (intercept42 * (years4 ** 2)) + (intercept43 * years4) + intercept44
sd4 = np.std(y4, ddof = 1);
se4 = sd4 / (np.sqrt(15))
errors4 = [se4] * 15
plt.plot(years4, np.polyval(p4, years4), "r-")
plt.errorbar(years4, y4, yerr = errors4, fmt = "o")
plt.show()
results4 = {}
results4["polynomial_4"] = p4.tolist()
correlation4 = np.poly1d(p4)
y4hat = correlation4(years4)
y4bar = np.sum(y4)/len(y4)
ssreg4 = np.sum((y4hat - y4bar) ** 2)
sstot4 = np.sum((y4 - y4bar) ** 2)
results4["determination_4"] = ssreg4/sstot4
print(results4)
print()
# Age Group: 55 - 64
deaths5 = table["Deaths"][64:79]
population5 = table["Population"][64:79]
y5 = (deaths5/population5) * 100
years5 = table["Year"][64:79]
fig,ax5 = plt.subplots(1)
ax5.plot(years5, y5, "o")
plt.title("Opioid Death Rate for Age Group: 55 - 64")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
p5 = np.polyfit(years5, y5, 1)
slope5 = p5[0]
intercept51 = p5[1]
yfit5 = (slope5 * years5) + intercept51
sd5 = np.std(y5, ddof = 1);
se5 = sd5 / (np.sqrt(15))
errors5 = [se5] * 15
plt.plot(years5, np.polyval(p5, years5), "r-")
plt.errorbar(years5, y5, yerr = errors5, fmt = "o")
plt.show()
results5 = {}
results5["polynomial_5"] = p5.tolist()
correlation5 = np.poly1d(p5)
y5hat = correlation5(years5)
y5bar = np.sum(y5)/len(y5)
ssreg5 = np.sum((y5hat - y5bar) ** 2)
sstot5 = np.sum((y5 - y5bar) ** 2)
results5["determination_5"] = ssreg5/sstot5
print(results5)
print()
#~~~~~~~~~~~~~~~~~~Predicted Rate of Death Plots for 2016 - 2026~~~~~~~~~~~~~~~~
x = np.array(range(2016, 2027))
plt.plot(x, np.polyval(p1, x), "r-")
plt.title("Opioid Death Rate for Age Group: 15 - 24")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
plt.show()
plt.plot(x, np.polyval(p2, x), "r-")
plt.title("Opioid Death Rate for Age Group: 25 - 34")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
plt.show()
plt.plot(x, np.polyval(p3, x), "r-")
plt.title("Opioid Death Rate for Age Group: 35 - 44")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
plt.show()
plt.plot(x, np.polyval(p4, x), "r-")
plt.title("Opioid Death Rate for Age Group: 45 - 54")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
plt.show()
plt.plot(x, np.polyval(p5, x), "r-")
plt.title("Opioid Death Rate for Age Group: 55 - 64")
plt.xlabel("Years")
plt.ylabel("Rate of Death")
plt.show()
#~~~~~~~~~~~~~~~~Algorithm: Predicted Rate of Death from Year Input~~~~~~~~~~~~~
quit = False
while (not quit):
year_str = input("Please enter a year or enter 'q' to quit: ")
print()
if (year_str != "q"):
year_int = int(year_str)
value1 = (slope1 * (year_int ** 3)) + (intercept11 * (year_int ** 2)) + (intercept12 * year_int) + intercept13
value2 = (slope2 * (year_int ** 3)) + (intercept21 * (year_int ** 2)) + (intercept22 * year_int) + intercept23
value3 = (slope3 * (year_int ** 4)) + (intercept31 * (year_int ** 3)) + (intercept32 * (year_int ** 2)) + (intercept33 * year_int) + intercept34
value4 = (slope4 * (year_int ** 4)) + (intercept41 * (year_int ** 3)) + (intercept42 * (year_int ** 2)) + (intercept43 * year_int) + intercept44
value5 = (slope5 * year_int) + intercept51
print("The predicated rate of death for ages 15 - 24: ", value1 * 100, "%")
print()
print("The predicated rate of death for ages 25 - 34: ", value2 * 100, "%")
print()
print("The predicated rate of death for ages 35 - 44: ", value3 * 100, "%")
print()
print("The predicated rate of death for ages 45 - 54: ", value4 * 100, "%")
print()
print("The predicated rate of death for ages 55 - 64: ", value5 * 100, "%")
print()
else:
quit = True | [
"noreply@github.com"
] | noreply@github.com |
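Each error bar in the `Analysis.py` entry above is the standard error of the mean — the sample standard deviation (`ddof = 1`) divided by the square root of the sample size. The same quantity can be computed with the standard library alone; the numbers below are made-up illustrative values, not the study's data:

```python
# Standard error of the mean, as used for the error bars in Analysis.py
# above: sample standard deviation (ddof=1) divided by sqrt(n).
# The data below are made-up illustrative values, not the study's data.
import math
import statistics

def standard_error(sample):
    # statistics.stdev uses the n-1 (sample) denominator, matching
    # np.std(..., ddof=1) in the script above.
    return statistics.stdev(sample) / math.sqrt(len(sample))

rates = [1.0, 2.0, 3.0, 4.0, 5.0]
print(standard_error(rates))  # ~0.707, i.e. sqrt(0.5)
```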
bf25c491d026c56c2680ee54c6c6da0ef243d622 | 1d96db84225301d972f07cad95c2a13f4fbafa84 | /python/my_PyFeyn/testing/pyfeyn-test2.py | 82ff7bcb08a3d9741dea8c1a1909d75c7af37668 | [] | no_license | mattbellis/matts-work-environment | 9eb9b25040dd8fb4a444819b01a80c2d5342b150 | 41988f3c310f497223445f16e2537e8d1a3f71bc | refs/heads/master | 2023-08-23T09:02:37.193619 | 2023-08-09T05:36:32 | 2023-08-09T05:36:32 | 32,194,439 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 826 | py | #! /usr/bin/env python
from pyfeyn.user import *
fd = FeynDiagram()
p1 = Point(2, -2)
p2 = Point(-2, 2)
p3 = Vertex(1.25, 1.25, mark=CIRCLE)
p4 = p1.midpoint(p2)
p5 = p4.midpoint(p1)
p6 = p4.midpoint(p2)
c1 = Circle(center=p1, radius=0.5, fill=[RED], points=[p1])
c2 = Circle(center=p2, radius=0.3, fill=[GREEN], points=[p2])
e1 = Ellipse(center=p4, xradius=0.5, yradius=1.0,
fill=[MIDNIGHTBLUE], points=[p4])
l0a = Fermion(p1, p4)
l0b = Fermion(p2, p4)
l1 = NamedLine["gluon"](p2, p1).arcThru(x=3, y=0)
l2 = NamedLine["photon"](p1, p2).arcThru(x=0, y=-3)
l3 = Gluon(p2, p3)
l4 = Photon(p1, p3)
l5 = Gluon(p5, p6).bend(-p5.distance(p6)/2.0)
loop1 = Line(p3, p3).arcThru(x=1.75, y=1.75).addArrow(0.55)
l1.addLabel(r"\Pgluon")
l2.addLabel(r"\Pphoton")
l5.addLabel(r"$\Pgluon_1$")
fd.draw("pyfeyn-test2.pdf")
| [
"matthew.bellis@gmail.com"
] | matthew.bellis@gmail.com |
08358000152a8399a848e04f23b01ebc3517344e | 744cd757c3a97894dbe1e86573c950dc7ee6f12e | /posts/tests.py | 96f8d159459f45caed875a4c1d3e96fe2f006eae | [] | no_license | Leopoldo-Flores/Message-Board | 2b4cbc5143ef6bcde318ad35c90314a4686e4450 | 4ea0ac69da9ae7d9855e253990f15f2695c34cfb | refs/heads/main | 2023-07-09T23:29:59.330359 | 2021-08-16T22:18:55 | 2021-08-16T22:18:55 | 396,993,453 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,240 | py |
from django.test import TestCase
from django.urls import reverse
from .models import Post
class PostModelTest(TestCase):
def setUp(self):
Post.objects.create(text="Just a test")
def test_text_content(self):
post = Post.objects.get(id=1)
expected_object_name = f"{post.text}"
self.assertEqual(expected_object_name, "Just a test")
class HomePageviewTest(TestCase):
def setUp(self):
Post.objects.create(text="The Next Test")
def test_view_url_exists_at_proper_location(self):
resp = self.client.get("/")
self.assertEqual(resp.status_code, 200)
def test_view_url_by_name(self):
resp = self.client.get(reverse("home"))
self.assertEqual(resp.status_code, 200)
    def test_view_uses_correct_template(self):
resp = self.client.get(reverse("home"))
self.assertTemplateUsed(resp, "home.html")
def test_view_contains_post(self):
resp = self.client.get(reverse("home"))
        self.assertEqual(resp.status_code, 200)
        self.assertContains(resp, "The Next Test")
def test_view_extends_base_template(self):
        resp = self.client.get(reverse("home"))
self.assertTemplateUsed(resp, "base.html") | [
"leopoldoflores2002@gmail.com"
] | leopoldoflores2002@gmail.com |
af984f8b92fa6d9f2f3f4f2529a36de9c3b048da | 3ac9cc9f54b1d6c6d5e05317bb0b977f4c1b363d | /profab/main.py | 811a78a9ea7a173bb25ab8d19309d19a26ddd8c8 | [
"Apache-2.0",
"BSL-1.0"
] | permissive | sittisak/profab | 5f5a92d8da7a07af80727eee337993929931ba2a | ff3967397b31986c9396f70a44a565d85178e6a6 | refs/heads/master | 2020-04-05T14:05:47.613997 | 2016-11-22T02:50:12 | 2016-11-22T02:50:12 | 94,763,557 | 0 | 1 | null | 2017-08-21T09:09:46 | 2017-06-19T10:09:29 | Python | UTF-8 | Python | false | false | 541 | py | """Helper functions for the entry point scripts.
"""
def process_arguments(*args):
"""Do the initial argument parse phase. This produces tuples of role
instructions
"""
args = list(args) # Convert tuple to list
args.reverse() # We really wanted head() here, but no matter...
instructions = []
while len(args):
head = args.pop()
if head.startswith('--'):
instructions.append((head[2:], args.pop()))
else:
instructions.append((head, None))
return instructions
| [
"k@kirit.com"
] | k@kirit.com |
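`process_arguments` above pairs each `--flag` with the value that follows it and leaves bare words unpaired. A quick trace with hypothetical role names (the function body is copied from the entry above so the example is self-contained):

```python
# Trace of profab's argument pairing (body copied from the entry above;
# the role names 'web', 'size', 'db' are hypothetical).
def process_arguments(*args):
    args = list(args)
    args.reverse()  # pop() then consumes items front-to-back
    instructions = []
    while len(args):
        head = args.pop()
        if head.startswith('--'):
            instructions.append((head[2:], args.pop()))
        else:
            instructions.append((head, None))
    return instructions

print(process_arguments('web', '--size', 'large', 'db'))
# -> [('web', None), ('size', 'large'), ('db', None)]
```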
7fc0c30c3be8ed6c6ac93d5c7a31ef363035a72f | 8caeffc71aa7ad6d06267de82458102205e6a62d | /Step_14/15649.py | f3fe19385ac08d555eac9eae5a21f5c625835734 | [] | no_license | eymin1259/Baekjoon_Python | c75b61cf6a83d8d3c4e6ab75e44dd8f8e431a99a | 09f3fb1a664c926aa8b589aebf88faf054878d22 | refs/heads/master | 2023-07-18T17:48:48.280179 | 2021-09-24T02:37:51 | 2021-09-24T02:37:51 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 384 | py | N, M = map(int, input().split())
selected = [False for _ in range(N+1)]
selection = []
def dfs(cnt):
if(cnt == M):
print(*selection)
return
for i in range(1, N+1):
if(selected[i]):
continue
selected[i] = True
selection.append(i)
dfs(cnt + 1)
selection.pop()
selected[i] = False
dfs(0) | [
"susan900000@gmail.com"
] | susan900000@gmail.com |
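The `15649.py` entry above enumerates all length-M sequences of distinct values from 1..N by DFS with a `selected` mask. Collecting the results instead of printing them makes the pattern easy to check against `itertools.permutations` — a sketch of the same idea, not the original submission:

```python
# The same visited-mask backtracking as the 15649.py entry above, but
# returning the sequences instead of printing them (illustrative sketch).
from itertools import permutations

def enumerate_sequences(n, m):
    selected = [False] * (n + 1)
    selection, out = [], []
    def dfs(cnt):
        if cnt == m:
            out.append(tuple(selection))
            return
        for i in range(1, n + 1):
            if selected[i]:
                continue
            selected[i] = True
            selection.append(i)
            dfs(cnt + 1)
            selection.pop()
            selected[i] = False
    dfs(0)
    return out

# Trying candidates in ascending order emits the sequences in
# lexicographic order, matching itertools.permutations.
print(enumerate_sequences(3, 2))
# -> [(1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2)]
```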
49c62fa3d24304217c0bc06286f9eb152d473d28 | 68b2ac84025100a9cd44622eab8f1afe2c05ca14 | /fleet_asset_project_mro_task_issue/fleet_asset_project_mro_task_issue.py | 6b3b8eda96f3976548059da0d9228fa3f2c8ac9a | [] | no_license | stellaf/fleet_asset_project_mro_task_issue | 863d0b28570a0db95fbe1d9fc3d870e0d16caaec | 8be381c231a29af50d72c101c10f1827aaebfadb | refs/heads/master | 2021-01-20T04:42:56.369229 | 2017-04-28T16:14:57 | 2017-04-28T16:14:57 | 89,719,229 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 10,252 | py | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Addon by CLEARCORP S.A. <http://clearcorp.co.cr> and AURIUM TECHNOLOGIES <http://auriumtechnologies.com>
#
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import string
from lxml import etree
from odoo import api, fields, models, tools, SUPERUSER_ID, _
from odoo.exceptions import UserError, ValidationError
from odoo.tools.safe_eval import safe_eval
class FleetVehicle(models.Model):
_inherit = 'fleet.vehicle'
@api.model
def create(self, vals):
acount_obj=self.env['account.analytic.account']
asset_obj=self.env['asset.asset']
fleet_id = super(FleetVehicle, self).create(vals)
account_id=acount_obj.create({'name':self._vehicle_name_get(fleet_id),'use_tasks':True,'use_issues':True})
        asset_id = asset_obj.create({'name': self._vehicle_name_get(fleet_id), 'model': fleet_id.model_id.name, 'asset_number': fleet_id.license_plate, 'criticality': '0', 'maintenance_state_id': 21})
fleet_id.write({'analytic_account_id':account_id.id,'asset_id':asset_id.id})
return fleet_id
@api.multi
def write(self, vals):
acount_obj=self.env['account.analytic.account']
asset_obj=self.env['asset.asset']
res = super(FleetVehicle, self).write(vals)
if not self.analytic_account_id:
account_id=acount_obj.create({'name':self._vehicle_name_get(self),'use_tasks':True,'use_issues':True})
self.write({'analytic_account_id':account_id.id})
if not self.asset_id:
            asset_id = asset_obj.create({'name': self._vehicle_name_get(self), 'model': self.model_id.name, 'asset_number': self.license_plate, 'criticality': '0', 'maintenance_state_id': 21})
self.write({'asset_id':asset_id.id })
        self.analytic_account_id.write({'name': self.name, 'use_tasks': True, 'use_issues': True})
        self.asset_id.write({'name': self._vehicle_name_get(self), 'model': self.model_id.name, 'asset_number': self.license_plate, 'criticality': '0', 'maintenance_state_id': 21})
return res
@api.multi
def unlink(self):
self.env['account.analytic.account'].search([('id', '=', self.analytic_account_id.id)]).unlink()
self.env['asset.asset'].search([('id', '=', self.asset_id.id)]).unlink()
return super(FleetVehicle,self).unlink()
@api.multi
def _compute_mrorequest_count(self):
mrorequest_obj=self.env['mro.request']
self.mrorequest_count=len(mrorequest_obj.search([('asset_id', '=', self.asset_id.id)]).ids)
@api.multi
def _compute_mroorder_count(self):
asset_obj=self.env['asset.asset']
self.mroorder_count=asset_obj.search([('id', '=', self.asset_id.id)]).mro_count
@api.multi
def _compute_attached_docs_count(self):
project_obj = self.env['project.project']
self.doc_count=project_obj.search([('analytic_account_id', '=', self.analytic_account_id.id)]).doc_count
@api.multi
def _count_vehicle_task(self):
project_obj = self.env['project.project']
self.task_count=len(project_obj.search([('analytic_account_id', '=', self.analytic_account_id.id)]).task_ids)
@api.multi
def _count_vehicle_issue(self):
issue_obj = self.env['project.project']
self.issue_count=len(issue_obj.search([('analytic_account_id', '=', self.analytic_account_id.id)]).issue_ids)
@api.multi
def _vehicle_name_get(self,record):
res = (record.model_id.brand_id.name + '/' + record.model_id.name + '/' + record.license_plate).strip(" ")
return res
@api.multi
def action_view_alltasks(self):
action = self.env.ref('project.act_project_project_2_project_task_all')
active_id = self.env['project.project'].search([('analytic_account_id', '=', self.analytic_account_id.id)]).id
context = {'group_by': 'stage_id', 'search_default_project_id': [active_id], 'default_project_id': active_id, }
return {
'key2':'tree_but_open',
'name': action.name,
'res_model': 'project.task',
'help': action.help,
'type': action.type,
'view_type': action.view_type,
'view_mode': action.view_mode,
'res_id': active_id,
'views': action.views,
'target': action.target,
'context':context,
'nodestroy': True,
'flags': {'form': {'action_buttons': True}}
}
@api.multi
def action_view_allissues(self):
action = self.env.ref('project_issue.act_project_project_2_project_issue_all')
active_id = self.env['project.project'].search([('analytic_account_id', '=', self.analytic_account_id.id)]).id
context = {'group_by': 'stage_id', 'search_default_project_id': [active_id], 'default_project_id': active_id,}
return {
'name': action.name,
'res_model': 'project.issue',
'help': action.help,
'type': action.type,
'view_type': action.view_type,
'view_mode': action.view_mode,
'views': action.views,
'target': action.target,
'res_id': active_id,
'context':context,
'nodestroy': True,
'flags': {'form': {'action_buttons': True}}
}
@api.multi
def action_view_attachments(self):
order_by ='state DESC'
return self.env['project.project'].search([('analytic_account_id', '=', self.analytic_account_id.id)]).attachment_tree_view()
@api.multi
def action_view_mro_request(self):
active_ids = self.env['asset.asset'].search([('id', '=', self.asset_id.id)]).ids
domain = "[('asset_id','in',[" + ','.join(map(str, active_ids)) + "])]"
action = self.env.ref('mro.action_requests')
context={'search_default_open': 1,'search_default_asset_id': [self.asset_id.id],'default_asset_id': self.asset_id.id,}
return {
'name': action.name,
'res_model': 'mro.request',
'help': action.help,
'type': action.type,
'view_type': action.view_type,
'view_mode': action.view_mode,
'views': action.views,
'res_id': active_ids,
'domain':domain,
'context':context,
'target': action.target,
'nodestroy': True,
'flags': {'form': {'action_buttons': True}}
}
@api.multi
def action_view_mroorders(self):
active_ids = self.env['asset.asset'].search([('id', '=', self.asset_id.id)]).ids
domain = "[('asset_id','in',[" + ','.join(map(str, active_ids)) + "])]"
action = self.env.ref('mro.action_orders')
context={'search_default_open': 1,'search_default_asset_id': [self.asset_id.id],'default_asset_id': self.asset_id.id,}
return {
'name': action.name,
'res_model': 'mro.order',
'help': action.help,
'type': action.type,
'view_type': action.view_type,
'view_mode': action.view_mode,
'views': action.views,
'res_id': active_ids,
'domain':domain,
'context':context,
'target': action.target,
'nodestroy': True,
'flags': {'form': {'action_buttons': True}}
}
analytic_account_id = fields.Many2one('account.analytic.account',string='Analytic Account')
asset_id = fields.Many2one('asset.asset',string='Asset id')
task_count = fields.Integer(compute=_count_vehicle_task, string="Vehicle Tasks" , multi=True)
issue_count = fields.Integer(compute=_count_vehicle_issue, string="Vehicle Issues" , multi=True)
doc_count = fields.Integer(compute=_compute_attached_docs_count, string="Number of documents attached",multi=True)
mroorder_count = fields.Integer(compute= _compute_mroorder_count, string="Number of mro orders",multi=True)
mrorequest_count = fields.Integer(compute= _compute_mrorequest_count, string="Number of mro request",multi=True)
class fleet_vehicle_log_services(models.Model):
_inherit = 'fleet.vehicle.log.services'
invoice_id = fields.Many2one('account.invoice',string='Facture')
class Project(models.Model):
_inherit = "project.project"
@api.multi
def attachment_tree_view(self):
self.ensure_one()
domain = [
'|',
'&', ('res_model', '=', 'project.project'), ('res_id', 'in', self.ids),
'&', ('res_model', '=', 'project.task'), ('res_id', 'in', self.task_ids.ids)]
order_by ='create_date DESC'
return {
'name': _('Attachments'),
'domain': domain,
'res_model': 'ir.attachment',
'type': 'ir.actions.act_window',
'view_id': False,
'view_mode': 'tree,kanban,form',
'view_type': 'form',
'help': _('''<p class="oe_view_nocontent_create">
Documents are attached to the tasks and issues of your project.</p><p>
Send messages or log internal notes with attachments to link
documents to your project.
</p>'''),
'limit': 80,
'context': "{'default_res_model': '%s','default_res_id': %d}" % (self._name, self.id)
}
# --- src/data_collect/data_collect.py (currylien/mbed08new) ---
import serial
import sys
import time
serdev = '/dev/ttyACM3'
s = serial.Serial(serdev)
data = []
data_new = []
while True:
try:
line = s.readline().decode()
if '---start---' in line:
print("---start---")
data_new.clear()
elif '---stop---' in line:
print("---stop---")
if len(data_new) > 0:
print("Data saved:")
print(data_new)
data.append(data_new.copy())
data_new.clear()
print("Data Num =", len(data))
else:
print(line, end="")
data_new.append(line)
except KeyboardInterrupt:
filename = "gesture_"+str(time.strftime("%Y%m%d%H%M%S"))+".txt"
with open(filename, "w") as f:
for lines in data:
f.write("-,-,-\n")
for line in lines:
f.write(line)
print("Exiting...")
print("Save file in", filename)
s.close()
        sys.exit()
# --- Chapter 03/exercise-3-1.py (tanakorn-dev/code-for-python-book) ---
# Assigning Values to Variables
counter = 100 # An integer assignment
miles = 1000.0 # A floating point
name = "John" # A string
print(counter) # Respond: 100
print(miles) # Respond: 1000.0
print(name) # Respond: John
# Multiple Assignment Type 1
a = b = c = 1
print(a) # Respond: 1
print(b) # Respond: 1
print(c) # Respond: 1
# Multiple Assignment Type 2
a, b, c = 1, 2, "John"
print(a) # Respond: 1
print(b) # Respond: 2
print(c) # Respond: John
# Change Values of Variables
x = 5
print(x) # Respond: 5
x = 10
print(x) # Respond: 10
# Change Type of Variables
y = 5
print(y) # Respond: 5
y = 10.25
print(y) # Respond: 10.25
y = "John"
print(y) # Respond: John
# --- COM/imageProc/QCT_ConvertToShort.py (jamesben6688/FemurSegmentation-MSKI2017) ---
# History:
# 2017.04.06 babesler Created
#
# Description:
# Convert an image to short. This is needed for Elastix
#
# Notes:
# - No range checking, because that seems like a pain
# - Runs a connectivity filter over the image since MITK-GEM introduces weird
# noise at the edge of images.
#
# Usage:
# python QCT_ConvertToShort.py input output
import vtk
import argparse
import os
# Setup and parse command line arguments
parser = argparse.ArgumentParser(description='Subget medical data',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('inputImage',
help='The input NIfTI (*.nii) image)')
parser.add_argument('outputImage',
help='The output NIfTI (*.nii) image)')
parser.add_argument('-f', '--force',
action='store_true',
help='Set to overwrite output without asking')
args = parser.parse_args()
# Check that input file/dir exists
if not os.path.isfile(args.inputImage):
os.sys.exit('Input file \"{inputImage}\" does not exist. Exiting...'.format(inputImage=args.inputImage))
# Check that input and output are both NIfTI (.nii) files
for fileName in [args.inputImage, args.outputImage]:
    if not fileName.lower().endswith('.nii'):
        os.sys.exit('File \"{fileName}\" is not a .nii file. Exiting...'.format(fileName=fileName))

# Check that the output does not exist, or that we may overwrite it
if os.path.isfile(args.outputImage):
    if not args.force:
        answer = raw_input('Output file \"{outputImage}\" exists. Overwrite? [Y/n]'.format(outputImage=args.outputImage))
        if str(answer).lower() not in set(['yes', 'y', 'ye', '']):
            os.sys.exit('Will not overwrite \"{outputFile}\". Exiting...'.
                        format(outputFile=args.outputImage))
# Set reader
reader = vtk.vtkNIFTIImageReader()
reader.SetFileName(args.inputImage)
print("Loading data...")
reader.Update()
dimensions = reader.GetOutput().GetDimensions()
print("Loaded data with dimensions {dims}".format(dims=dimensions))
# Connected components
cc = vtk.vtkImageConnectivityFilter()
cc.SetInputConnection(reader.GetOutputPort())
cc.SetExtractionModeToLargestRegion()
cc.SetScalarRange(1,1)
print('Component labelling for bones')
cc.Update()
# Convert
caster = vtk.vtkImageCast()
caster.SetInputConnection(cc.GetOutputPort())
caster.SetOutputScalarTypeToShort()
caster.ClampOverflowOn()
print('Casting')
caster.Update()
# Writer
writer = vtk.vtkNIFTIImageWriter()
writer.SetFileName(args.outputImage)
writer.SetInputConnection(caster.GetOutputPort())
print("Writing to {}".format(args.outputImage))
writer.Update()
# --- aae/__init__.py (watsonjiang/nodetool) ---
#!/usr/bin/python
from treebuilder import TreeBuilder
from treebuilder import tree_cmp
from console import AaeConsoleFactory
from filterlist import FilterList
from config import Config
from metadata import Metadata
# --- day02.py (jeremystephencobb/AdventOfCode_2019) ---
inp = open("inputs/day02.txt")
arr = inp.read().split(",")
def part1(noun, verb):
codes = [int(a) for a in arr]
codes[1] = noun
codes[2] = verb
    for i in range(0, len(codes), 4):
        opcode = codes[i]
        # Check for halt before reading operands, so a trailing 99 near the
        # end of the program cannot trigger an IndexError.
        if opcode == 99:
            break
        oper1 = codes[i+1]
        oper2 = codes[i+2]
        oper3 = codes[i+3]
        if opcode == 1:
            codes[oper3] = codes[oper1] + codes[oper2]
        elif opcode == 2:
            codes[oper3] = codes[oper1] * codes[oper2]
return codes[0]
def part2(target):
for noun in range(100):
for verb in range(100):
if part1(noun, verb) == target:
return 100 * noun + verb
return -1
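The opcode loop in `part1` can be sanity-checked in isolation. Below is a self-contained sketch (the `run_intcode` helper name is hypothetical, not part of this file) of the same add/multiply/halt semantics, run on the small worked example commonly used for this puzzle:

```python
def run_intcode(codes):
    # Same opcode semantics as part1: 1 = add, 2 = multiply, 99 = halt.
    codes = list(codes)  # copy so the caller's program is untouched
    for i in range(0, len(codes), 4):
        if codes[i] == 99:
            break
        a, b, dest = codes[i + 1], codes[i + 2], codes[i + 3]
        if codes[i] == 1:
            codes[dest] = codes[a] + codes[b]
        elif codes[i] == 2:
            codes[dest] = codes[a] * codes[b]
    return codes

# Worked example: 30+40=70 stored at 3, then 70*50=3500 stored at 0.
program = [1, 9, 10, 3, 2, 3, 11, 0, 99, 30, 40, 50]
print(run_intcode(program)[0])  # → 3500
```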
print(part1(12, 2))
print(part2(19690720))
# --- utils.py (oguzserbetci/discrete-ehr, MIT license) ---
import csv
import gzip
import importlib
import logging
import math
import os
import sys
from collections import defaultdict
from glob import glob
from os import makedirs
from pathlib import Path
from typing import Dict, List, Callable
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import torch
import yaml
from colorlog import ColoredFormatter
from ignite.engine import Engine
from sklearn.metrics import auc
import wandb
from dataloader.data import DemographicsFeature, TabularFeature, JointTabularFeature
class EarlyStopping(object):
"""EarlyStopping handler can be used to stop the training if no improvement after a given number of events.
Args:
patience (int):
Number of events to wait if no improvement and then stop the training.
score_function (callable):
It should be a function taking a single argument, an :class:`~ignite.engine.Engine` object,
and return a score `float`. An improvement is considered if the score is higher.
trainer (Engine):
trainer engine to stop the run if no improvement.
callback (callable):
You can pass a function to be called everytime an early stopping point is marked using the score_function.
Examples:
.. code-block:: python
from ignite.engine import Engine, Events
from ignite.handlers import EarlyStopping
def score_function(engine):
val_loss = engine.state.metrics['nll']
return -val_loss
handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)
# Note: the handler is attached to an *Evaluator* (runs one epoch on validation dataset).
evaluator.add_event_handler(Events.COMPLETED, handler)
"""
def __init__(self, patience, score_function, trainer):
if not callable(score_function):
raise TypeError("Argument score_function should be a function.")
if patience < 1:
raise ValueError("Argument patience should be positive integer.")
if not isinstance(trainer, Engine):
raise TypeError("Argument trainer should be an instance of Engine.")
self.score_function = score_function
self.patience = patience
self.trainer = trainer
self.counter = 0
self.best_score = None
self._logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
self._logger.addHandler(logging.NullHandler())
def __call__(self, engine):
score = self.score_function(engine)
if self.best_score is None:
self.best_score = score
engine.state.best_metrics = {k: v for k, v in engine.state.metrics.items() if 'skip' not in k}
wandb.run.summary.update({f'best_val_{k}': v for k, v in engine.state.metrics.items() if 'plot' not in k})
self.trainer.state.stop_epoch = self.trainer.state.epoch
elif score <= self.best_score:
self.counter += 1
self._logger.debug("EarlyStopping: %i / %i" % (self.counter, self.patience))
if self.counter >= self.patience:
self._logger.info("EarlyStopping: Stop training")
self.trainer.terminate()
else:
self.best_score = score
engine.state.best_metrics = {k: v for k, v in engine.state.metrics.items() if 'skip' not in k}
wandb.run.summary.update({f'best_val_{k}': v for k, v in engine.state.metrics.items() if 'plot' not in k})
self.trainer.state.stop_epoch = self.trainer.state.epoch
self.counter = 0
class ActivationHandler:
def __init__(self, n_bins: int, global_step_transform=None):
self.n_bins = n_bins
self.global_step_transform = global_step_transform
self._reset()
def _reset(self):
self.histograms = defaultdict(lambda: np.zeros(self.n_bins))
self.bins = defaultdict(lambda: np.zeros(self.n_bins + 1))
def __call__(self, engine):
if engine.state.iteration == 1:
self.histograms['timesteps'], self.bins['timesteps'] = np.histogram(torch.cat(engine.state.output['timesteps'], -1).cpu(), self.n_bins)
self.histograms['patient'], self.bins['patient'] = np.histogram(engine.state.output['patient'].cpu(), self.n_bins)
elif engine.state.iteration <= len(engine.state.dataloader):
self.histograms['timesteps'] += np.histogram(torch.cat(engine.state.output['timesteps'], -1).cpu(), self.bins['timesteps'])[0]
self.histograms['patient'] += np.histogram(engine.state.output['patient'].cpu(), self.bins['patient'])[0]
else:
raise RuntimeError
def setup_logger(path, level="INFO"):
formatter = ColoredFormatter('%(asctime)s|%(funcName)s|%(levelname)s: %(message)s')
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(formatter)
stdout_handler.setLevel(level)
makedirs(Path(path).parent, exist_ok=True)
file_handler = logging.FileHandler(path, mode='a')
file_handler.setFormatter(formatter)
file_handler.setLevel(level)
logger = logging.getLogger()
logger.handlers = []
logger.setLevel(level)
logger.addHandler(file_handler)
logger.addHandler(stdout_handler)
def create_model_on_gpu(pc, device_name="cuda:0", **kwargs):
torch.cuda.empty_cache()
model = load_class(pc['modelcls'])(**pc, **kwargs)
logging.info(model)
device = torch.device(device_name)
return model.to(device), device
def multidim_pad_sequences(sequences, batch_first, padding_value=0):
max_size = sequences[0].size()
trailing_dims = max_size[2:]
max_len = np.max([[s.size(0), s.size(1)] for s in sequences], 0)
if batch_first:
out_dims = (len(sequences), *max_len) + trailing_dims
else:
out_dims = (*max_len, len(sequences)) + trailing_dims
out_tensor = sequences[0].data.new(*out_dims).fill_(padding_value)
for i, tensor in enumerate(sequences):
length1 = tensor.size(0)
length2 = tensor.size(1)
# use index notation to prevent duplicate references to the tensor
if batch_first:
out_tensor[i, :length1, :length2, ...] = tensor
else:
out_tensor[:length1, i, :length2, ...] = tensor
return out_tensor
def pad_batch(batch, tables=[], labels={}, limit=None, event_limit=None):
x = {}
x['extra'] = {}
for key in batch[0]['extra'].keys():
x['extra'][key] = [sample['extra'][key] for sample in batch]
x['inputs'] = {}
for table in tables:
if isinstance(table, TabularFeature) or isinstance(table, JointTabularFeature):
# N, L, L, C
x['inputs'][table.table] = multidim_pad_sequences([sample['inputs'][table.table][:limit] for sample in batch], batch_first=True)
x['inputs'][table.table] = x['inputs'][table.table][:,:,:event_limit]
elif isinstance(table, DemographicsFeature):
x['inputs'][table.table] = torch.stack([sample['inputs'][table.table] for sample in batch])
if x['inputs'][table.table].device.type == 'cuda':
x['inputs'][table.table] = x['inputs'][table.table].pin_memory()
x['targets'] = {}
for key, label in labels.items():
x['targets'][key] = label.batch(batch)
if x['targets'][key].device.type == 'cuda':
x['targets'][key] = x['targets'][key].pin_memory()
return x
def multidim_shortest_sequences(sequences, batch_first=True, event_limit=None, padding_value=0):
'''Truncates and pads a list of patient histories. Permutes events.
sequences: N length sorted sequences list of tensors with shape L1, L2, *.
returns N, truncated timesteps dimension L1, padded event dimension L2, *
'''
trailing_dims = sequences[0].shape[2:]
# Assume length sorted sequences
min_length = int(np.mean([s.size(0) for s in sequences]))
if event_limit:
max_events = min(event_limit, int(np.mean([s.size(1) for s in sequences])))
else:
max_events = int(np.mean([s.size(1) for s in sequences]))
length1 = min_length
length2 = max_events
if batch_first:
out_dims = (len(sequences), length1, length2, *trailing_dims)
else:
out_dims = (*length1, len(sequences), length2, *trailing_dims)
out_tensor = sequences[0].data.new(*out_dims).fill_(padding_value)
for i, tensor in enumerate(sequences):
if length1 >= tensor.size(0):
L1 = length1
repeat_times = math.ceil(length1 / tensor.size(0))
tensor = tensor.repeat(repeat_times, *([1] * (tensor.ndim - 1)))
else:
L1 = length1
if length2 >= tensor.size(1):
L2 = tensor.size(1)
else:
L2 = length2
# randomly sample the measures
L2ix = np.random.permutation(range(L2))
# use index notation to prevent duplicate references to the tensor
if batch_first:
out_tensor[i, :L1, :L2, ...] = tensor[:L1, L2ix]
else:
out_tensor[:L1, i, :L2, ...] = tensor[:L1, L2ix]
return out_tensor
def min_batch(batch, tables=[], labels={}, limit=None, event_limit=None):
x = {}
# sort batch to make the positive sample first and thus with least padding
# https://stackoverflow.com/questions/6618515/sorting-list-based-on-values-from-another-list
time_lengths = [list(s['inputs'].values())[-1].size(0) for s in batch]
batch = [sample for _, sample in sorted(zip(time_lengths, batch),
key=lambda pair: pair[0],
reverse=True)]
x['extra'] = {}
for key in batch[0]['extra'].keys():
x['extra'][key] = [sample['extra'][key] for sample in batch]
x['inputs'] = {}
for table in tables:
if isinstance(table, TabularFeature) or isinstance(table, JointTabularFeature):
# N, L, L, C
x['inputs'][table.table] = multidim_shortest_sequences([sample['inputs'][table.table][:limit] for sample in batch], batch_first=True, event_limit=event_limit)
elif isinstance(table, DemographicsFeature):
x['inputs'][table.table] = torch.stack([sample['inputs'][table.table] for sample in batch])
if x['inputs'][table.table].device.type == 'cuda':
x['inputs'][table.table] = x['inputs'][table.table].pin_memory()
x['targets'] = {}
for key, label in labels.items():
x['targets'][key] = label.batch(batch)
if x['targets'][key].device.type == 'cuda':
x['targets'][key] = x['targets'][key].pin_memory()
return x
def plot_confusion_matrix(cm, label, ax=None, annot=True,fmt='d', square=True, **kwargs):
"""
Keyword Arguments:
correct_labels -- These are your true classification categories.
predict_labels -- These are you predicted classification categories
label -- This is a list of string labels corresponding labels
Returns: Figure
"""
if not ax:
fig = plt.figure(figsize=(6, 6), dpi=72, facecolor='w', edgecolor='k')
ax = fig.add_subplot(1, 1, 1)
df = pd.DataFrame(cm, label.classes, label.classes)
ax = sns.heatmap(df, annot=annot, cmap='Oranges', fmt=fmt, cbar=False, square=square, ax=ax, **kwargs)
ax.set_xlabel('Predicted')
ax.set_ylabel('True label')
plt.tight_layout()
return ax, cm
def plot_pr_curve(precision, recall):
"""
Keyword Arguments:
correct_labels -- These are your true classification categories.
predict_labels -- These are you predicted classification categories
labels -- This is a list of values that occur in y_true
classes -- This is a list of string labels corresponding labels
Returns: Figure
"""
aucpr = auc(recall, precision)
fig = plt.figure(figsize=(6, 6), dpi=72, facecolor='w', edgecolor='k')
ax = fig.add_subplot(1, 1, 1)
ax.plot(recall, precision)
ax.set_xlabel('Recall')
ax.set_ylabel('Precision')
ax.set_title(f'AUCPR={aucpr}')
fig.tight_layout()
return fig
def plot_heatmap(arr, **kwargs):
"""
Keyword Arguments:
arr: array to heatmap
Returns: Figure
"""
fig = plt.figure(figsize=(6, 6), dpi=72, facecolor='w', edgecolor='k')
ax = fig.add_subplot(1, 1, 1)
sns.heatmap(arr, ax=ax, **kwargs)
fig.tight_layout()
return fig
def load_class(full_class_string):
"""
dynamically load a class from a string
via https://thomassileo.name/blog/2012/12/21/dynamically-load-python-modules-or-classes/
"""
class_data = full_class_string.split(".")
module_path = ".".join(class_data[:-1])
class_str = class_data[-1]
module = importlib.import_module(module_path)
# Finally, we retrieve the Class
return getattr(module, class_str)
def prepare_batch(batch:Dict, device=None):
'''
Input: sample dict
Output: (input, targets, extras)
'''
x, y_true = {}, {}
for k, v in batch['inputs'].items():
x[k] = v.to(device)
for k, v in batch['targets'].items():
y_true[k] = v.to(device)
return x, y_true, batch['extra']
def load_model(params, joint_vocab, tables, device):
model = load_class(params['modelcls'])(joint_vocab, tables, **params).to(device)
epoch_paths = glob(f'wandb/run-*{params["wandb_id"]}/**/best_checkpoint*.pt', recursive=True)
latest_epoch_path = sorted(epoch_paths)[-1]
    logging.info('LOAD LATEST BEST MODEL %s', latest_epoch_path)
params['model_path'] = latest_epoch_path
state_dict = torch.load(latest_epoch_path, map_location=device)
model.load_state_dict(state_dict['model'], strict=False)
return model
def load_config(wandb_id):
config_filepath = glob(f'wandb/run-*{wandb_id}/**/config.yaml', recursive=True)
# Load earliest config to make sure we don't load a modified version.
file_path = sorted(config_filepath)[0]
logging.info(f'LOAD EARLIEST CONFIG AT {file_path}')
    with open(file_path) as f:
        c = yaml.safe_load(f)
config = {k: v['value'] for k, v in c.items() if 'wandb' not in k}
    # Deal with stringified True/False in the wandb config
    for k, v in config.items():
        if v in ['True', 'False']:
            config[k] = (v == 'True')
config['wandb_id'] = wandb_id
config['config_path'] = file_path
return config
def load_latest_checkpoint(glob_str, wandb_id):
model_paths = glob(f'wandb/run-*-{wandb_id}/**/{glob_str}', recursive=True)
latest_model_path = sorted(model_paths)[-1]
logging.info(f'LOAD LATEST CHECKPOINT AT {latest_model_path}')
with open(latest_model_path, 'rb') as checkpoint_file:
checkpoint = torch.load(checkpoint_file, map_location=torch.device('cpu'))
epoch = latest_model_path.split('_')[2]
    return checkpoint, epoch
# --- research/cv/tgcn/src/model/loss.py (mindspore-ai/models, Apache-2.0) ---
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
T-GCN loss cell
"""
import mindspore.nn as nn
import mindspore.numpy as np
class TGCNLoss(nn.Cell):
"""
Custom T-GCN loss cell
"""
def construct(self, predictions, targets):
"""
Calculate loss
Args:
predictions(Tensor): predictions from models
targets(Tensor): ground truth
Returns:
loss: loss value
"""
targets = targets.reshape((-1, targets.shape[2]))
return np.sum((predictions - targets) ** 2) / 2
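The formula in the docstring — the sum of squared errors over the flattened batch, halved — can be illustrated without MindSpore. A minimal pure-Python sketch (the `tgcn_loss` name is hypothetical):

```python
def tgcn_loss(predictions, targets):
    # sum((pred - target)^2) / 2, matching the construct() above
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / 2

print(tgcn_loss([2.0, 3.0], [1.0, 1.0]))  # → 2.5  ((1 + 4) / 2)
```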
# --- player.py (Gleb-Vagin/survival) ---
import pygame
import settings
import utilite
PLAYER_STATE_STAND = 1
PLAYER_STATE_DOWN = 2
PLAYER_STATE_UP = 3
PLAYER_IMAGE_STAND = pygame.image.load(utilite.get_path('assets/player_stand.png'))
PLAYER_IMAGE_DOWN = pygame.image.load(utilite.get_path('assets/player_down.png'))
PLAYER_IMAGE_UP = pygame.image.load(utilite.get_path('assets/player_up.png'))
PLAYER_STATE_LEFT = 4
PLAYER_STATE_RIGHT = 5
class Player(pygame.sprite.Sprite):
def __init__(self, name, x, y):
pygame.sprite.Sprite.__init__(self)
self.name = name
self.x = x
self.y = y
self.rect = pygame.Rect(x, y, 85, 92)
self.rect.centerx = x
self.rect.centery = y
self.speed_y = 0
self.speed_x = 0
self.state_x = PLAYER_STATE_LEFT
self.state_y = PLAYER_STATE_STAND
self.__update_state()
def __update_state(self):
if self.state_y == PLAYER_STATE_STAND:
self.image = PLAYER_IMAGE_STAND
elif self.state_y == PLAYER_STATE_UP:
self.image = PLAYER_IMAGE_UP
elif self.state_y == PLAYER_STATE_DOWN:
self.image = PLAYER_IMAGE_DOWN
if self.state_x == PLAYER_STATE_LEFT:
self.image = pygame.transform.flip(self.image, True, False)
def jump(self):
if self.state_y == PLAYER_STATE_STAND:
self.speed_y = - 60
def update(self, group_blocks):
self.speed_y += 4
if self.speed_y < 0:
self.state_y = PLAYER_STATE_UP
if self.speed_y > 0:
self.state_y = PLAYER_STATE_DOWN
# if self.speed_x < 3:
# self.speed_x += 1
self.rect.y += self.speed_y
f = pygame.sprite.spritecollideany(self, group_blocks)
if f is not None:
if self.speed_y > 0:
self.rect.bottom = f.rect.top
self.state_y = PLAYER_STATE_STAND
if self.speed_y < 0:
self.rect.top = f.rect.bottom
self.speed_y = 0
if self.rect.top < 0:
self.rect.top = 0
if self.rect.bottom > settings.GROUND_HEIGHT:
self.rect.bottom = settings.GROUND_HEIGHT
self.speed_y = 0
if self.rect.bottom == settings.GROUND_HEIGHT:
self.state_y = PLAYER_STATE_STAND
self.rect.x += self.speed_x
# Проверяем на столкновение с блоками по x
f = pygame.sprite.spritecollideany(self, group_blocks)
if f is not None:
if self.speed_x > 0:
self.rect.right = f.rect.left
if self.speed_x < 0:
self.rect.left = f.rect.right
# Делаем так, что бы человечек но мог выйти за границы экрана
# if self.rect.right > settings.SCREEN_WIDTH:
# self.rect.right = settings.SCREEN_WIDTH
# if self.rect.left < 0:
# self.rect.left = 0
if self.speed_x < 0:
self.state_x = PLAYER_STATE_LEFT
if self.speed_x > 0:
self.state_x = PLAYER_STATE_RIGHT
        self.__update_state()
# --- days/10/bestsellers.py (shaversj/100-days-of-code) ---
bestsellers_list = []
def main():
    filename = input('Please enter the filename: ')
read_data(filename)
answer = 0
while answer != 'Q':
print()
        answer = input(
            'What would you like to do? \n1: Look up year range \n2: Look up month/year \n3: Search for author \n4: Search for title \nQ: Quit \n')
if answer == 'Q':
break
answer = int(answer)
if answer == 1:
            beginning_year = int(input('Enter beginning year: '))
            ending_year = int(input('Enter ending year: '))
search_by_years(beginning_year, ending_year)
elif answer == 2:
            search_month = int(input('Enter month (as a number, 1-12): '))
            search_year = int(input('Enter year: '))
search_by_month_and_year(search_month, search_year)
elif answer == 3:
author_search_string = input(
                "Enter an author's name (or part of a name): ")
author_search_string = author_search_string.capitalize()
search_by_author(author_search_string)
elif answer == 4:
            title_search_string = input('Enter a title (or part of a title): ')
title_search_string = title_search_string.capitalize()
search_by_title(title_search_string)
def read_data(filename):
"""
The program will input the data set and construct a list of books. If the list of books cannot be constructed, the program will display an appropriate error message and halt.
"""
with open(filename, 'r', encoding='utf-8') as f:
for line in f:
line = line.strip().split("\t")
bestsellers_list.append(line)
def search_by_years(beginning_year, ending_year):
"""
Prompt the user for two years (a starting year and an ending year), then display all books which reached the #1 spot between those two years (inclusive). For example, if the user entered “1970” and “1973”, display all books which reached #1 in 1970, 1971, 1972 or 1973.
"""
# beginning_year = 1960
# ending_year = 1962
    for title, author, publisher, date, genre in bestsellers_list:
        # The year is the last field of the M/D/YYYY date string.
        year = int(date.strip().split('/')[-1])
        if beginning_year <= year <= ending_year:
            print(f'{title.strip()}, by {author.strip()} ({date.strip()})')
def search_by_month_and_year(search_month, search_year):
"""
Prompt the user to enter a month and year, then display all books which reached #1 during that month. For example, if the user entered “7” and “1985”, display all books which reached #1 during the month of July in 1985.
>>> search_by_month_and_year(9, 1990)
Four Past Midnight, by Stephen King (9/16/1990)
Memories of Midnight, by Sidney Sheldon (9/2/1990)
Darkness Visible, by William Styron (9/16/1990)
Millie's Book, by Barbara Bush (9/30/1990)
Trump: Surviving at the Top, by Donald Trump (9/9/1990)
"""
# search_month = 9
# search_year = 1990
    for title, author, publisher, date, genre in bestsellers_list:
        month, _, year = date.strip().split('/')
        if int(month) == search_month and int(year) == search_year:
            print(f'{title.strip()}, by {author.strip()} ({date.strip()})')
def search_by_author(author_search_string: str):
"""
Prompt the user for a string, then display all books whose author’s name contains that string (regardless of case). For example, if the user enters “ST”, display all books whose author’s name contains (or matches) the string “ST”, “St”, “sT” or “st”.
>>> search_by_author('Tolkein')
Silmarillion, by J. R. R. Tolkein (10/2/1977)
The Children of the Hurin, by J.R.R. Tolkein (5/6/2007)
"""
for title, author, publisher, date, genre in bestsellers_list:
        if author_search_string.lower() in author.lower():
print(f'{title.strip()}, by {author.strip()} ({date.strip()})')
def search_by_title(title_search_string: str):
"""
Prompt the user for a string, then display all books whose title contains that string (regardless of case). For example, if the user enters “secret”, three books are found: “The Secret of Santa Vittoria” by Robert Crichton, “The Secret Pilgrim” by John le Carré, and “Harry Potter and the Chamber of Secrets”.
>>> search_by_title('Secret')
Harry Potter and the Chamber of Secrets, by J. K. Rowling (6/20/1999)
The Secret of Santa Vittoria, by Robert Crichton (11/20/1966)
The Secret Pilgrim, by John le Carre (1/20/1991)
"""
# Use str.strip() to remove the whitespace before the string is printed.
for title, author, publisher, date, genre in bestsellers_list:
        if title_search_string.lower() in title.lower():
print(f'{title.strip()}, by {author.strip()} ({date.strip()})')
if __name__ == "__main__":
main()
| [
"shaversj@gmail.com"
] | shaversj@gmail.com |
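The dates in this data set follow an `M/D/YYYY` layout, so matching months by string prefix is fragile: a search for month 1 would also match `10/...`, `11/...` and `12/...`. A small sketch of field-wise parsing that sidesteps this (the helper names are invented for illustration):

```python
def parse_mdy(date_str):
    """Split an 'M/D/YYYY' string into integer (month, day, year)."""
    month, day, year = date_str.strip().split('/')
    return int(month), int(day), int(year)


def matches_month_year(date_str, month, year):
    """True when date_str falls in the given month and year."""
    m, _, y = parse_mdy(date_str)
    return m == month and y == year


print(matches_month_year('10/2/1977', 1, 1977))  # → False; a prefix test on '1' would say True
```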
5c7ce8feb82bd96466bdcc23bd4a7765826cf693 | d08d442320edeb36323cea858b88ce4b42813102 | /usuario/urls.py | ebd1328864177c52da8f33067d3e6bbfbb367097 | [] | no_license | boabner/treecePythonDjango | 0177604fa63ddb7637a577802028bfa9cd770592 | cb41b409b18ea4ff945fc9f03926e182bc3b162f | refs/heads/master | 2022-12-12T08:14:01.752494 | 2020-09-09T05:02:59 | 2020-09-09T05:02:59 | 294,000,971 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 790 | py | from django.conf import settings
from django.conf.urls.static import static
from django.urls import path
from usuario.viewss import UsuarioDeleteView, RecuperarView, PostCreateView, PostUpdateView
app_name = 'usuario'
urlpatterns = [
path('usuarios/cadastrar/', PostCreateView.as_view(), name="cadastra_usuario"),
path('usuarios/recuperar/<op>', RecuperarView.as_view(), name="recuperar_senha"),
path('usuarios/recuperar/sendemail', RecuperarView.as_view(), name="recuperar"),
path('usuarios/<pk>', PostUpdateView.as_view(), name="atualiza_usuario"),
path('usuarios/excluir/<pk>', UsuarioDeleteView.as_view(), name="deleta_usuario"),
]
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL,
document_root=settings.MEDIA_ROOT)
| [
"boabner@gmail.com"
] | boabner@gmail.com |
534a63d88de1f26d7ee6eba727a6b55f06405dad | 2efd470ee359d547f475bae0c5b2546abc86208c | /Useful_annex_codes/Intersection_over_union/IOU_ratio.py | 7290ba360a1b0500ca09613e86f8e81e3041875b | [
"MIT"
] | permissive | oceam/EasyMPE | 109a9a3ec6ad7877989edb32be3fbc0a0c62a56d | b4c828aba4e82984b3e7fbc7e31165dbb885f85e | refs/heads/master | 2021-06-17T10:07:39.430284 | 2019-12-25T10:22:34 | 2019-12-25T10:22:34 | 167,278,171 | 5 | 5 | MIT | 2021-02-08T17:02:13 | 2019-01-24T01:02:28 | Python | UTF-8 | Python | false | false | 2,800 | py | # -*- coding: utf-8 -*-
"""
Created on Sun Jan 6 09:03:21 2019
@author: leatr
calculate the Intersection Over Union ratio based on specific inputs
Text file must have 8 columns as:
y_1, x_1, y_2, x_2, y_3, x_3, y_4, x_4
which can be obtained by running:
- Get_IOU_coordinates.py over the Plot_original_whole folder of the EasyMPE
output
- get_coordinates_from_shp.py over the SHP_files folder of the EasyMPE output
(georeferenced coordinates will be outputted, which might be less precise as
the coordinates will not have decimals)
"""
###############################################################################
#################################### ENV ######################################
###############################################################################
from shapely.geometry import Polygon
import numpy as np
from path import Path
###############################################################################
################################## INPUTS #####################################
###############################################################################
FIELDNAME = '2017_Memuro_production_LATEST'
f_prog = open(r'D:/LEA/2017MEMURO_sugarbeat_production/IOU/coordinates_program_made_LATEST.txt', 'r')
box_prog = f_prog.readlines()
f_hand = open(r'D:/LEA/2017MEMURO_sugarbeat_production/IOU/coordinates_handmade_LATEST.txt', 'r')
box_handmade = f_hand.readlines()
###############################################################################
#################################### CODE ######################################
###############################################################################
def intersection_over_union(boxA, boxB):
    # build shapely polygons from the corner coordinates
a = Polygon(boxA)
b = Polygon(boxB)
# get the intersection of both box
inter = a.intersection(b).area
# and the union
union = a.union(b).area
#calculate the inter over union percent
iou = inter/union
iou = "%.3f" % iou
# return
return (a.area, b.area, "%.3f" % inter, "%.3f" % union, iou)
rows_csv = []
for k in range(len(box_prog)):
boxA = eval(box_prog[k][:-1].split(' ; ')[1])
boxB = eval(box_handmade[k][:-1].split(' ; ')[1])
A_area, B_area, inter, union, iou = intersection_over_union(boxA, boxB)
values = [str(k), float(A_area), float(B_area), float(inter), float(union), float(iou)]
rows_csv.append(values)
csvfile = Path(r'D:\LEA\Semi_automatic_segmentation\IOU_results_and_codes\iou_'+FIELDNAME+'.csv')
np.savetxt(csvfile, rows_csv, delimiter = ';', newline='\n', header = 'Plot;Program_box_area;Handmade_box_area;Intersection;Union;IOU', comments = '', fmt='%s')
f_prog.close()
f_hand.close()
| [
"noreply@github.com"
] | noreply@github.com |
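shapely handles arbitrary quadrilaterals, but for axis-aligned boxes the same intersection-over-union ratio reduces to plain arithmetic, which makes a handy cross-check of the script's output (this helper is a sketch and not part of the script):

```python
def iou_axis_aligned(a, b):
    """IOU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax) tuples."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


print(iou_axis_aligned((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.14285714285714285 (1/7)
```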
12c6166544742411950cb2b3abd36c9d9dbe6081 | 794d9dfdc092264be377d611d81fea1531d75356 | /src/TaskAndFlow/migrations/0012_custominfo.py | 939844c5ac61e0b55520c44c3e1cb16fc205dd95 | [] | no_license | 56907dzq/11111111 | ba86b41c59522bc4ede66351a2d41dd2384b90cc | bae27a13e973c057f0aa3f26d96a32dae01e766b | refs/heads/master | 2020-09-23T13:41:44.820501 | 2017-12-29T04:00:59 | 2017-12-29T04:00:59 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 901 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('TaskAndFlow', '0011_unitproject_isdefault'),
]
operations = [
migrations.CreateModel(
name='CustomInfo',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('doctype', models.CharField(max_length=20, choices=[(b'pblist', b'pblist')])),
('custominfo', models.CharField(max_length=120, null=True, blank=True)),
],
options={
'verbose_name': '\u7528\u6237\u81ea\u5b9a\u4e49\u4fe1\u606f',
'verbose_name_plural': '\u7528\u6237\u81ea\u5b9a\u4e49\u4fe1\u606f',
},
bases=(models.Model,),
),
]
| [
"13865363107@163.com"
] | 13865363107@163.com |
bad546783b9b98f9247c0f019612677ae5645e1c | 7146f78c9a72e1fbb9b4c33c06fe692cd1d0b7d0 | /board/urls.py | c968b63e981573dcd6ea53a6f920dd207b6e0e5d | [] | no_license | kimhagyeong/LikeLion_GASILI_Project | 031919c7aa64539149d1d96cbeaa79d7e33fcf87 | 791a18542cf9aee245ded1fa37123315d8b1f24f | refs/heads/master | 2022-02-20T09:38:36.237129 | 2019-08-19T02:21:41 | 2019-08-19T02:21:41 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 430 | py | from django.urls import path
from . import views
urlpatterns = [
path('new/',views.board_new, name="board_new"),
path('create/',views.create, name="board_create"),
path('test/<int:board_id>',views.test, name="test"),
path('test/<int:board_id>/create/comment',views.createcomment, name="createcomment"),
path('test/chart/<int:board_id>', views.chart, name='chart'),
path('', views.board, name="board"),
]
| [
"hong7511059@koreatech.ac.kr"
] | hong7511059@koreatech.ac.kr |
f9b3093a80598c1ebdcda0c19e3f5fb6c3bdbe2c | 37c5185268936be570e26f2c99f97c64c25dba30 | /3_1_3/parse_qs.py | 2b8dda0615102db9a5d00571dd3fa885650eecec | [] | no_license | JasonSam1996/Python3CrawlerDemo | 7b19beb96b28691d9f386f383f63801dce2a3d23 | 7ad2c78e3e2ff9e804b99ef88d8ebbaf11fa3e97 | refs/heads/master | 2020-04-08T04:09:32.109601 | 2018-12-09T10:34:07 | 2018-12-09T10:34:07 | 159,005,281 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 116 | py | from urllib.parse import parse_qs
# 反序列化,返回字典
query = 'name=germey&age=22'
print(parse_qs(query)) | [
"json_sam@json-samdeiMac.local"
] | json_sam@json-samdeiMac.local |
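For reference, `parse_qs` maps every key to a *list* of values, and `urlencode(..., doseq=True)` reverses the transformation; a slightly expanded, runnable variant of the snippet above (standard library only):

```python
from urllib.parse import parse_qs, urlencode

query = 'name=germey&age=22'
parsed = parse_qs(query)
print(parsed)  # → {'name': ['germey'], 'age': ['22']}

# Round-trip: doseq=True expands the list values back into key=value pairs.
print(urlencode(parsed, doseq=True))  # → name=germey&age=22
```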
a7aee24e288378ec3f0696bd416597f7e660d0f0 | 47628df1cc45d9f04489c4a8061431f34bfd4184 | /portfolio/settings.py | b7899977f99b97351a21d798dda397ac90edfed2 | [] | no_license | mahesh190495/portfolio123 | a2c053088ab624b761a934a7002c61445d1f2274 | ae191c43105fd9bc55010d00a48233f8765e5d76 | refs/heads/master | 2023-04-30T07:09:58.231973 | 2020-01-03T08:29:23 | 2020-01-03T08:29:23 | 231,352,361 | 0 | 0 | null | 2023-04-21T20:44:31 | 2020-01-02T09:47:42 | HTML | UTF-8 | Python | false | false | 3,486 | py | """
Django settings for portfolio project.
Generated by 'django-admin startproject' using Django 2.2.8.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'n02qc4@al7k8t_3=kh9^c8=v=!f_q_8uvv0p(9ee3ps=mwc*ka'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'blog.apps.BlogConfig',
'jobs.apps.JobsConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'portfolio.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'portfolio.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': ' ',
'USER':'postgres',
'PASSWORD':'mahesh908809',
'HOST':'localhost',
'PORT':'5432',
}
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'portfolio/static')
]
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
try:
from local_settings import *
except ImportError:
pass
| [
"“mp526613@gmail.com”"
] | “mp526613@gmail.com” |
7892120a6d6a7156cc8d1cd0c3b45f581a21e5bc | b23d294fdffabe72c336644f119860f5ce704eef | /python_1000phone/语言基础-老师代码/day12-生成器和模块/day12-生成器和模块/game/image.py | fd2e2576d1f4c32bae8d829d16fda3308ee0debd | [] | no_license | ikaros274556330/my_code | 65232758fd20820e9f4fa8cb5a6c91a1969862a2 | 92db21c4abcbd88b7bd77e78d9f660b4534b5071 | refs/heads/master | 2020-11-26T09:43:58.200990 | 2019-12-23T02:08:39 | 2019-12-23T02:08:39 | 229,032,315 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 48 | py | """__author__=余婷"""
print('image module executed') | [
"274556330@qq.com"
] | 274556330@qq.com |
cb9e5f6db6e0b3873a0632324314e9ab5a8908c9 | d211d9f7635a301ee44f57b8a853c6b6271ab68f | /55/steam.py | e047d9bfecd7c39c82aa0464c34fde567ba63555 | [] | no_license | kincerb/bitesofpy | 77c8a01208e9928e2382c506abd1e2970bf8200e | 401bf1bd66934baeee33a82825b140668dfaff56 | refs/heads/master | 2020-12-27T11:33:05.175158 | 2020-07-28T01:12:29 | 2020-07-28T01:12:29 | 237,887,760 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 542 | py | from collections import namedtuple
import feedparser
# cached version to have predictable results for testing
FEED_URL = "https://bites-data.s3.us-east-2.amazonaws.com/steam_gaming.xml"
Game = namedtuple('Game', 'title link')
def get_games():
"""Parses Steam's RSS feed and returns a list of Game namedtuples"""
return [game for game in _get_games_iter(FEED_URL)]
def _get_games_iter(url):
parser = feedparser.parse(url)
for entry in parser.get('entries', []):
yield Game(entry.get('title'), entry.get('link'))
| [
"dev.bkincer@gmail.com"
] | dev.bkincer@gmail.com |
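A `feedparser` result behaves like a dictionary with an `entries` list, so the generator pattern above can be exercised offline against a hand-built stub (the stub titles and links below are invented):

```python
from collections import namedtuple

Game = namedtuple('Game', 'title link')


def games_from_feed(feed):
    """Yield Game tuples from a feedparser-style result (a dict with 'entries')."""
    for entry in feed.get('entries', []):
        yield Game(entry.get('title'), entry.get('link'))


# Stub standing in for feedparser.parse(FEED_URL); titles and links are invented.
stub = {'entries': [{'title': 'Game A', 'link': 'https://example.com/a'},
                    {'title': 'Game B', 'link': 'https://example.com/b'}]}
print([g.title for g in games_from_feed(stub)])  # → ['Game A', 'Game B']
```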
1f3bac0bbd584f3c5e69e06a55233271cc44667a | 0f0df335f4aa65f86e901fba4836e9b4c50e4a83 | /phase3/lbevents/migrations/0001_initial.py | 358fe8eaf973bdb612598dd8eba15209c42a41cb | [] | no_license | btezergil/Location-Based-Events | f781df16eb078879190ea31d23814aebf3b0d950 | 00a6d7d192867d6b4e537929339de98b30e88956 | refs/heads/master | 2021-09-10T07:48:26.307900 | 2018-03-22T10:56:19 | 2018-03-22T10:56:19 | 111,220,278 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,498 | py | # Generated by Django 2.0 on 2017-12-29 13:53
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Event',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('lon', models.DecimalField(decimal_places=6, max_digits=9)),
('lat', models.DecimalField(decimal_places=6, max_digits=9)),
('locname', models.CharField(max_length=256)),
('title', models.CharField(max_length=256)),
('desc', models.CharField(max_length=256)),
('catlist', models.CharField(max_length=256)),
('stime', models.DateTimeField()),
('to', models.DateTimeField()),
('timetoann', models.DateTimeField()),
],
),
migrations.CreateModel(
name='EventMap',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=256)),
],
),
migrations.AddField(
model_name='event',
name='Map',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='lbevents.EventMap'),
),
]
| [
"ege.cikla@gmail.com"
] | ege.cikla@gmail.com |
b00a21dd568d108fa15dddaabca63ca0aa04260b | 56cf8e324d088d4ab5232c395a7e0ebaa2c621d6 | /venv/bin/pasteurize | 646874e5de9e9a6fae1cbc158a03f31518192138 | [] | no_license | jussupov/sdukz_bot | 18722671c8cb6ad78f21d62a329eb5759b15419b | 42905995e3ff899ee19f957273d0381df80747b5 | refs/heads/master | 2020-07-09T01:21:36.417407 | 2019-10-20T03:21:33 | 2019-10-20T03:21:33 | 203,834,088 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 420 | #!/Users/jussupov/Desktop/work/sdu/venv/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'future==0.17.1','console_scripts','pasteurize'
__requires__ = 'future==0.17.1'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('future==0.17.1', 'console_scripts', 'pasteurize')()
)
| [
"jus.kz09@gmail.com"
] | jus.kz09@gmail.com | |
752815958bfdfa5e32e199e674509154536a9079 | 8aab760584ec587f173ed19e00f4b73d4ce9ccdc | /common_util/DictUtil.py | 91a6c96bf17482426a5bd27208fa8d87e74ae341 | [] | no_license | chenhz2284/python_lib | 4650a751e8de6514f31c7e23af31e6d4e299833e | f3b8c27ea14c5c7694716fdb42eeed22922f20c7 | refs/heads/master | 2021-01-10T13:36:50.607299 | 2015-11-04T09:17:56 | 2015-11-04T09:17:56 | 43,418,286 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,931 | py | # encoding: utf8
import types
from common_util import TypesUtil
import logging
from collections import OrderedDict
from operator import isMappingType
import datetime
from logging import Logger
import threading
def put(varDict, key, value):
varDict[key] = value
def get(varDict, key, defValue=None):
if varDict.has_key(key):
return varDict[key]
return defValue
def removeKey(varDict, key):
if varDict.has_key(key):
del(varDict[key])
# Transform an object into plain dictionary data
# (recursive, limited to max_deep levels)
def toDictionary(obj, max_deep=3, cur_deep=1):
# print("toDictionary()", type(obj), obj)
if obj==None:
return obj;
elif isinstance(obj, int):
return obj;
elif isinstance(obj, long):
return obj;
elif isinstance(obj, float):
return obj;
elif isinstance(obj, bool):
return obj;
elif isinstance(obj, str):
return obj;
elif isinstance(obj, datetime.datetime):
return str(obj);
elif isinstance(obj, Logger):
return "logger [%s] of %s" % (obj.name, str(obj));
elif type(obj)==types.TypeType:
return str(obj);
elif callable(obj):
return str(obj)
else:
if cur_deep > max_deep:
return "<--(%s)(reach max deep: %s)-->" % (obj, max_deep)
cur_deep += 1
if isMappingType(obj):
# dict || OrderedDict
_rt = {}
for _p in obj:
_rt[_p] = toDictionary(obj[_p], max_deep=max_deep, cur_deep=cur_deep)
return _rt;
else:
# list || set
try:
iter(obj) # if iterable
_rt = []
for _p in obj:
_rt.append(toDictionary(_p, max_deep=max_deep, cur_deep=cur_deep))
return _rt;
except:
pass
# every type of class
_rt = {"__type__":str(obj)}
for _p in dir(obj):
if type(_p)==types.StringType and _p.startswith("__"):
continue
_value = getattr(obj, _p)
if callable(_value):
continue
_rt[_p] = toDictionary(_value, max_deep=max_deep, cur_deep=cur_deep)
return _rt;
#--------------------
# _dict = {
# "a" : "1a",
# "b" : {
# "a" : "2a",
# "b" : {
# "a" : "3a",
# "b" : {
# "a" : "4a",
# "b" : {
# "a" : "5a",
# "b" : "5b"
# }
# }
# }
# }
# }
#
#
# print toDictionary(_dict)
| [
"chenhongzhen@zhicloud.com"
] | chenhongzhen@zhicloud.com |
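The module above is Python 2 code: `types.TypeType`, `dict.has_key` and `operator.isMappingType` were all removed in Python 3. A condensed Python 3 sketch of the same depth-limited serialization idea (simplified, not a drop-in replacement for `toDictionary`):

```python
from collections.abc import Mapping


def to_plain(obj, max_deep=3, cur_deep=1):
    """Recursively convert obj into plain dict/list data, stopping at max_deep."""
    if obj is None or isinstance(obj, (int, float, bool, str)):
        return obj
    if cur_deep > max_deep:
        return "<--(%s)(reach max deep: %s)-->" % (obj, max_deep)
    if isinstance(obj, Mapping):
        return {k: to_plain(v, max_deep, cur_deep + 1) for k, v in obj.items()}
    if isinstance(obj, (list, tuple, set)):
        return [to_plain(v, max_deep, cur_deep + 1) for v in obj]
    # Fall back to the object's public, non-callable attributes.
    return {p: to_plain(getattr(obj, p), max_deep, cur_deep + 1)
            for p in dir(obj)
            if not p.startswith("__") and not callable(getattr(obj, p))}
```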
1d292c1d3a6db92cbb96b393c1ac4d37ab18ce94 | f192f8007e8ef49377333da89c3c7a33d6b7e5cb | /heroku.py | 366ce97c7fdfce69b1c885114fbfc65f5130328e | [] | no_license | JumaKahiga/Send-IT-API | 94315d9bebd72f4d6db5588bc99e8d6dbb3cca36 | 74918c594422b9d1e0e3d89d9c2e0149b8e5e342 | refs/heads/develop | 2022-12-17T00:30:14.918174 | 2019-02-06T16:38:15 | 2019-02-06T16:38:15 | 156,717,216 | 0 | 1 | null | 2022-05-25T00:44:52 | 2018-11-08T14:15:18 | Python | UTF-8 | Python | false | false | 104 | py | from app import create_app
app = create_app(config='production')
if __name__ == '__main__':
app.run() | [
"kabirumwangi@gmail.com"
] | kabirumwangi@gmail.com |
6b94e747755d5a2a25c566732d1a63a93bbab36f | f043a0935aac61b9d726668809505a19b93480a7 | /bullet.py | 9a0bbfded60eb0fcff43a95e9aba21a5aeef6911 | [] | no_license | zhenggl/alien_invation | 85f52ec7dd82dcab3adc7b009648b00faa205c67 | 877ff0a70fda9228963b83994a8be9a24c9237f5 | refs/heads/master | 2022-10-30T21:05:48.585248 | 2020-06-18T15:20:18 | 2020-06-18T15:20:18 | 271,929,982 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,217 | py | # _*_ coding:utf-8 _*_
# Team:
# Developer: Administrator
# Created: 2020/6/10 6:21
# File name: bullet
# IDE: PyCharm
import pygame
from pygame.sprite import Sprite
class Bullet(Sprite):
"""对飞船发射的子单管理的类"""
def __init__(self, ai_settings, screen, ship):
"""在飞船所在处创建一个子弹"""
super().__init__()
self.screen = screen
# 在(0,0)处创建一个表示子弹的矩形,再设置正取位置
self.rect = pygame.Rect(0, 0, ai_settings.bullet_width, ai_settings.bullet_height)
self.rect.centerx = ship.rect.centerx
self.rect.top = ship.rect.top
# 存储用小数表示的子弹位置
self.y = float(self.rect.y)
self.color = ai_settings.bullet_color
self.speed_factor = ai_settings.bullet_speed_factor
def update(self):
"""向上移动子弹"""
"""更新表示子弹位置的小数值"""
self.y -= self.speed_factor
"""更新表示子弹的rect位置"""
self.rect.y = self.y
def draw_bullet(self):
"""在屏幕上绘制子弹"""
pygame.draw.rect(self.screen, self.color, self.rect)
| [
"zhenggl_ing@163.com"
] | zhenggl_ing@163.com |
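`Bullet` shadows the integer `rect.y` with a float `self.y` because rect coordinates are whole pixels: a per-frame speed below one pixel would otherwise be truncated away every update. A pygame-free sketch of the difference (speeds and frame counts are illustrative):

```python
def advance_int_only(y, speed, frames):
    """Naive update: store the truncated position back each frame."""
    for _ in range(frames):
        y = int(y + speed)  # rect coordinates are whole pixels -> fraction lost
    return y


def advance_with_float(y, speed, frames):
    """Track an exact float position; truncate only when displaying."""
    pos = float(y)
    for _ in range(frames):
        pos += speed
    return int(pos)


print(advance_int_only(0, 0.5, 10))    # → 0: the half-pixel step vanishes every frame
print(advance_with_float(0, 0.5, 10))  # → 5
```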
24285adff16977ae58d4235dd26177b2aa1cd910 | 96c18f190e4850db3aaca45ce62e4b2aa30f4c5e | /landside/catkin_ws/src/gui/src/gui/camera_status_widget.py | 0892c181921616a23eea9fece5810f52f60f6f16 | [] | no_license | DukeRobotics/robosub-ros | 085c878f7fe1a22401309cc3aa19c47141594685 | e2fd7ab924d143bf6354806a104f49d982f32fb1 | refs/heads/master | 2023-08-10T14:32:10.684964 | 2023-07-31T00:35:50 | 2023-07-31T00:35:50 | 176,614,524 | 24 | 33 | null | 2023-09-14T00:00:35 | 2019-03-19T23:29:57 | C++ | UTF-8 | Python | false | false | 16,235 | py | from datetime import datetime
from enum import Enum
from python_qt_binding import loadUi
from python_qt_binding.QtWidgets import (
QWidget,
QTableWidget,
QHeaderView,
QTableWidgetItem,
QTabWidget,
QDialog,
QGridLayout,
QMessageBox,
QAbstractItemView,
QLineEdit,
QDialogButtonBox,
QFormLayout,
QCheckBox,
QLabel
)
from python_qt_binding.QtCore import QTimer, QObject, QRunnable, QThreadPool, pyqtProperty, pyqtSignal, pyqtSlot
from python_qt_binding.QtGui import QColor, QIntValidator
import rospy
import rosgraph
import rostopic
import resource_retriever as rr
import rosservice
from custom_msgs.srv import ConnectUSBCamera, ConnectDepthAICamera
from diagnostic_msgs.msg import DiagnosticArray
class CameraStatusDataType(Enum):
PING = 0
STEREO = 1
MONO = 2
CAMERA_STATUS_CAMERA_TYPES = [CameraStatusDataType.STEREO, CameraStatusDataType.MONO]
CAMERA_STATUS_DATA_TYPE_INFORMATION = {
CameraStatusDataType.PING: {
'name': 'Ping',
'index': 0,
'topic_name': '/ping_host'
},
CameraStatusDataType.STEREO: {
'name': 'Stereo',
'index': 1,
'service_name': '/connect_depthai_camera',
'service_type': ConnectDepthAICamera
},
CameraStatusDataType.MONO: {
'name': 'Mono',
'index': 2,
'service_name': '/connect_usb_camera',
'service_type': ConnectUSBCamera
}
}
class CallConnectCameraServiceSignals(QObject):
connected_signal = pyqtSignal(CameraStatusDataType, bool, str)
class CallConnectCameraService(QRunnable):
def __init__(self, camera_type, service_args):
super(CallConnectCameraService, self).__init__()
if camera_type not in CAMERA_STATUS_CAMERA_TYPES:
raise ValueError('Invalid camera type')
self.service_name = CAMERA_STATUS_DATA_TYPE_INFORMATION[camera_type]['service_name']
self.service_type = CAMERA_STATUS_DATA_TYPE_INFORMATION[camera_type]['service_type']
self.camera_type = camera_type
self.signals = CallConnectCameraServiceSignals()
self.service_args = service_args
@pyqtSlot()
def run(self):
try:
rospy.wait_for_service(self.service_name, timeout=1)
connect_camera_service = rospy.ServiceProxy(self.service_name, self.service_type)
status = connect_camera_service(*self.service_args).success
timestamp = datetime.now().strftime("%H:%M:%S")
self.signals.connected_signal.emit(self.camera_type, status, timestamp)
except Exception:
self.signals.connected_signal.emit(self.camera_type, False, None)
class CameraStatusWidget(QWidget):
data_updated = pyqtSignal(CameraStatusDataType, bool, str, str, name='data_updated')
def __init__(self):
super(CameraStatusWidget, self).__init__()
ui_file = rr.get_filename('package://gui/resource/CameraStatusWidget.ui', use_protocol=False)
loadUi(ui_file, self)
self.ping_hostname = ''
self.usb_channel = -1
self.log = None
self.threadpool = QThreadPool()
self.logs_button.clicked.connect(self.open_conection_log)
self.check_camera_buttons = {
CameraStatusDataType.STEREO: self.check_stereo_button,
CameraStatusDataType.MONO: self.check_mono_button
}
for camera_type in self.check_camera_buttons:
self.check_camera_buttons[camera_type].clicked.connect(
lambda _, camera_type=camera_type: self.check_camera_connection(camera_type)
)
self.camera_service_args = {
CameraStatusDataType.STEREO: lambda: (),
CameraStatusDataType.MONO: lambda: (self.channel,)
}
self.checking = {}
for camera_type in CAMERA_STATUS_CAMERA_TYPES:
self.checking[camera_type] = False
self.status_logs = {}
for data_type in CameraStatusDataType:
self.status_logs[data_type] = []
self.status_table.horizontalHeader().setSectionResizeMode(QHeaderView.Stretch)
self.timer = QTimer(self)
self.timer.timeout.connect(self.timer_check)
self.timer.start(100)
self.subscriber = rospy.Subscriber(
CAMERA_STATUS_DATA_TYPE_INFORMATION[CameraStatusDataType.PING]["topic_name"],
DiagnosticArray,
self.ping_response
)
self.init_table()
rospy.loginfo('Camera Status Widget successfully initialized')
@pyqtProperty(str)
def hostname(self):
return self.ping_hostname
@hostname.setter
def hostname(self, value):
self.ping_hostname = value
@pyqtProperty(int)
def channel(self):
return self.usb_channel
@channel.setter
def channel(self, value):
self.usb_channel = value
def timer_check(self):
self.check_buttons_enabled()
self.check_ping_publisher()
def check_ping_publisher(self):
master = rosgraph.Master('/rostopic')
pubs, _ = rostopic.get_topic_list(master=master)
for topic_name, _, publishing_nodes in pubs:
if topic_name == CAMERA_STATUS_DATA_TYPE_INFORMATION[CameraStatusDataType.PING]["topic_name"] and \
len(publishing_nodes) > 0:
if self.subscriber is None:
self.create_new_subscriber()
return
if self.subscriber is not None:
self.remove_subscriber()
def create_new_subscriber(self):
self.subscriber = rospy.Subscriber(
CAMERA_STATUS_DATA_TYPE_INFORMATION[CameraStatusDataType.PING]["topic_name"],
DiagnosticArray,
self.ping_response
)
def remove_subscriber(self):
self.subscriber.unregister()
self.subscriber = None
def check_buttons_enabled(self):
service_list = rosservice.get_service_list()
for camera_type in self.check_camera_buttons:
self.check_camera_buttons[camera_type].setEnabled(
CAMERA_STATUS_DATA_TYPE_INFORMATION[camera_type]['service_name'] in service_list and
not self.checking[camera_type]
)
def open_conection_log(self):
self.log = CameraStatusLog(self.data_updated, self.status_logs)
self.log.exec()
def check_camera_connection(self, camera_type):
call_connect_camera_service = CallConnectCameraService(camera_type, self.camera_service_args[camera_type]())
call_connect_camera_service.signals.connected_signal.connect(self.connected_camera)
self.threadpool.start(call_connect_camera_service)
self.checking[camera_type] = True
self.check_camera_buttons[camera_type].setText("Checking...")
def connected_camera(self, camera_type, status, timestamp):
self.checking[camera_type] = False
self.check_camera_buttons[camera_type].setText(CAMERA_STATUS_DATA_TYPE_INFORMATION[camera_type]['name'])
if timestamp:
self.status_logs[camera_type].append({"status": status, "timestamp": timestamp, "message": None})
self.data_updated.emit(camera_type, status, timestamp, None)
self.update_table(camera_type, status, timestamp)
else:
# Display an alert indicating that the service call failed
alert = QMessageBox()
alert.setIcon(QMessageBox.Warning)
alert.setText("Could not complete the service call to connect to the " +
f"{CAMERA_STATUS_DATA_TYPE_INFORMATION[camera_type]['name']} camera.")
alert.exec_()
def ping_response(self, response):
# This method is called when a new message is published to the ping topic
# Make sure response hostname matches self.ping_hostname before proceeding
if response.status[0].name != self.ping_hostname:
return
data_type = CameraStatusDataType.PING
status_info = {}
status_info["status"] = response.status[0].level == 0
status_info["message"] = response.status[0].message
status_info["timestamp"] = datetime.fromtimestamp(response.header.stamp.secs).strftime("%H:%M:%S")
self.status_logs[data_type].append(status_info)
self.data_updated.emit(data_type, status_info["status"], status_info["timestamp"], status_info["message"])
self.update_table(data_type, status_info["status"], status_info["timestamp"])
def init_table(self):
for _, data_dict in CAMERA_STATUS_DATA_TYPE_INFORMATION.items():
self.status_table.insertRow(data_dict["index"])
self.status_table.setItem(data_dict["index"], 0, QTableWidgetItem(data_dict["name"]))
self.status_table.setItem(data_dict["index"], 1, QTableWidgetItem("-"))
self.status_table.setItem(data_dict["index"], 2, QTableWidgetItem("-"))
self.status_table.setRowHeight(data_dict["index"], 10)
def update_table(self, type, status, timestamp):
type_info = CAMERA_STATUS_DATA_TYPE_INFORMATION[type]
status_msg = "Successful" if status else "Failed"
color = "green" if status else "red"
name_item = QTableWidgetItem(type_info["name"])
status_item = QTableWidgetItem(status_msg)
status_item.setForeground(QColor(color))
timestamp_item = QTableWidgetItem(timestamp)
self.status_table.setItem(type_info["index"], 0, name_item)
self.status_table.setItem(type_info["index"], 1, status_item)
self.status_table.setItem(type_info["index"], 2, timestamp_item)
def help(self):
text = "This widget allows you to check the status of the cameras on the robot.\n\n" + \
"To check if the stereo camera can be pinged, launch cv/ping_host.launch. This plugin will only " + \
f"display the ping status for {self.ping_hostname}.\n\n" + \
"To check if the mono and stereo cameras are connected, launch cv/camera_test_connect.launch and click " + \
"the 'Mono' and 'Stereo' buttons. If camera_test_connect.launch is not running, the buttons will be " + \
f"disabled. The channel used for the mono camera is {self.usb_channel}.\n\n" + \
"To change the ping hostname or mono camera channel, click the settings icon. If the plugin appears to " + \
"be unresponsive to publishing ping messages, you can restart the ping subscriber from settings."
alert = QMessageBox()
alert.setWindowTitle("Camera Status Widget Help")
alert.setIcon(QMessageBox.Information)
alert.setText(text)
alert.exec_()
def settings(self):
settings = CameraStatusWidgetSettings(self, self.ping_hostname, self.usb_channel)
if settings.exec_():
            self.ping_hostname, self.usb_channel, restart_ping = settings.get_values()
if restart_ping and self.subscriber is not None:
self.remove_subscriber()
self.create_new_subscriber()
def close(self):
self.timer.stop()
if self.log and self.log.isVisible():
self.log.close()
if self.subscriber is not None:
self.remove_subscriber()
self.threadpool.clear()
if self.threadpool.activeThreadCount() > 0:
message = f"Camera Status Widget waiting for {self.threadpool.activeThreadCount()} threads to finish. " + \
"It will close automatically when all threads are finished."
rospy.loginfo(message)
alert = QMessageBox()
alert.setWindowTitle("Waiting for Threads to Finish")
alert.setIcon(QMessageBox.Information)
alert.setText(message)
alert.exec_()
self.threadpool.waitForDone()
rospy.loginfo("Camera Status Widget has finished all threads and is successfully closed.")
class CameraStatusLog(QDialog):
def __init__(self, data_updated_signal, init_data):
super(CameraStatusLog, self).__init__()
data_updated_signal.connect(self.update)
layout = QGridLayout()
tab_widget = QTabWidget()
self.data = {}
self.log_tables = {}
for data_type in init_data:
self.data[data_type] = []
table = QTableWidget()
table.horizontalHeader().setVisible(False)
table.verticalHeader().setVisible(False)
table.horizontalHeader().setSectionResizeMode(QHeaderView.Stretch)
table.setColumnCount(2)
table.setEditTriggers(QAbstractItemView.NoEditTriggers)
if data_type == CameraStatusDataType.PING:
table.cellDoubleClicked.connect(self.table_clicked)
tab_widget.addTab(table, CAMERA_STATUS_DATA_TYPE_INFORMATION[data_type]["name"])
self.log_tables[data_type] = table
layout.addWidget(tab_widget, 0, 0)
self.setLayout(layout)
for type, data in init_data.items():
for row in data:
self.update(type, row["status"], row["timestamp"], row["message"])
def update(self, type, status, timestamp, message):
table = self.log_tables[type]
status_msg = "Successful" if status else "Failed"
color = "green" if status else "red"
status_item = QTableWidgetItem(status_msg)
status_item.setForeground(QColor(color))
timestamp_item = QTableWidgetItem(timestamp)
rowPosition = 0
table.insertRow(rowPosition)
table.setItem(rowPosition, 0, status_item)
table.setItem(rowPosition, 1, timestamp_item)
self.data[type].insert(0, {"status": status, "timestamp": timestamp, "message": message})
def table_clicked(self, index):
message = self.data[CameraStatusDataType.PING][index]["message"]
alert = QMessageBox()
alert.setWindowTitle("Ping Message")
alert.setIcon(QMessageBox.Information)
alert.setText(message)
alert.exec_()
class CameraStatusWidgetSettings(QDialog):
def __init__(self, parent, ping_hostname, usb_channel):
super().__init__(parent)
self.given_usb_channel = usb_channel
self.ping_hostname_line_edit = QLineEdit(self)
self.ping_hostname_line_edit.setText(str(ping_hostname))
self.usb_channel_line_edit = QLineEdit(self)
self.usb_channel_line_edit.setText(str(usb_channel))
validator = QIntValidator(self)
validator.setBottom(0)
self.usb_channel_line_edit.setValidator(validator)
self.restart_ping_subscriber_checkbox = QCheckBox(self)
self.restart_ping_subscriber_label = QLabel("Restart Ping Subscriber (?)", self)
self.restart_ping_subscriber_label.setToolTip("If checked, the ping subscriber will be restarted when the " +
"settings are saved. This is useful if the ping hostname has " +
"changed, ping_host.launch has been recently restarted, or " +
                                                      "if the plugin does not appear to receive ping messages even " +
"though ping_host.launch is running.")
buttonBox = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel, self)
layout = QFormLayout(self)
layout.addRow("Ping Hostname", self.ping_hostname_line_edit)
layout.addRow("USB Channel", self.usb_channel_line_edit)
layout.addRow(self.restart_ping_subscriber_label, self.restart_ping_subscriber_checkbox)
layout.addWidget(buttonBox)
buttonBox.accepted.connect(self.accept)
buttonBox.rejected.connect(self.reject)
def get_values(self):
channel = self.usb_channel_line_edit.text()
try:
channel = int(channel)
except Exception:
rospy.logwarn("Invalid USB channel (not an integer). The USB channel has not been changed.")
channel = self.given_usb_channel
return (self.ping_hostname_line_edit.text(), channel, self.restart_ping_subscriber_checkbox.isChecked())
| [
"noreply@github.com"
] | noreply@github.com |
2ca98c5399ca4e051f6ba3b6370573ba00678d56 | ee3e0a69093e82deff1bddf607f6ce0dde372c48 | /ndb769/개념/linked_list.py | 96c6f60d40eb889dc16b88a18505851cfabdcac7 | [] | no_license | cndqjacndqja/algorithm_python | 202f9990ea367629aecdd14304201eb6fa2aa37e | 843269cdf8fb9d4c215c92a97fc2d007a8f96699 | refs/heads/master | 2023-06-24T08:12:29.639424 | 2021-07-24T05:08:46 | 2021-07-24T05:08:46 | 255,552,956 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,117 | py | class Node:
def __init__(self, data):
self.data = data
self.next = None
class LinkedList:
def __init__(self, data):
self.head = Node(data)
def append(self, data):
cur = self.head
while cur.next is not None:
cur = cur.next
cur.next = Node(data)
def get_node(self, index):
cnt = 0
node = self.head
while cnt < index:
cnt += 1
node = node.next
return node
def add_node(self, index, value):
new_node = Node(value)
if index == 0:
new_node.next = self.head
self.head = new_node
return
node = self.get_node(index - 1)
new_node.next = node.next
node.next = new_node
def delete_node(self, index):
if index == 0:
self.head = self.head.next
return
node = self.get_node(index-1)
node.next = node.next.next
if __name__ == "__main__":
    data = list(input())
    linked_list = LinkedList(data[0])
    for i in range(1, len(data)):
        linked_list.append(data[i])
| [
"cndqjacndqja@gmail.com"
] | cndqjacndqja@gmail.com |
a23f86e441d5f4f5da7915919e8b469c935713f3 | 4a3fa0a12e840a78a3a0e6a9e5791bf9b5d171cf | /KNN.py | 31ceb81b182022ded07bfa81063a109d2dc04e00 | [] | no_license | mylesdoolan/knn | 1fb2baef475b5f04089ed69d4fe07c381880c20b | 88ce319b2973d3dbb884d568289e4d81bd2f5f5a | refs/heads/master | 2022-04-23T06:03:26.908906 | 2020-04-23T16:54:45 | 2020-04-23T16:54:45 | 257,062,569 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,195 | py | #K Nearest Neighbour
import math
import csv
# C:\Users\Myles\Downloads\CFIR-dataset-2020.csv
def main():
unchecked = []
checked = []
k = 3
#change
print("Please supply the CSV file you wish to have KNN'd:\n")
file = input()
print("Opening: ", file)
with open(file, 'r') as csvfile:
csvfile.seek(0)
reader = csv.DictReader(csvfile)
data = list(reader)
k = k_calc(len(data))
for record in data:
if record.get('State') == '':
unchecked.append(record)
else:
checked.append(record)
results = status_calculator(unchecked, checked, k)
keys = results[0].keys()
    with open('results.csv', 'w', newline='') as output_file:
dict_writer = csv.DictWriter(output_file, keys)
dict_writer.writeheader()
dict_writer.writerows(results)
def status_calculator(unchecked, checked, k):
for uncheckedValue in unchecked:
euc_vals = []
for checkedValue in checked:
# print(uncheckedValue.get('x'))
euc_dist = euclidean_distance(uncheckedValue.get('x'), uncheckedValue.get('y'), checkedValue.get('x'), checkedValue.get('y'))
            # Keep only the k nearest neighbours seen so far
            if len(euc_vals) < k:
                euc_vals.append({'Host': checkedValue.get('Host'), 'State': checkedValue.get('State'), 'Distance': euc_dist})
else:
euc_vals = sorted(euc_vals, key=lambda i: i['Distance'])
if euc_vals[-1].get('Distance') > euc_dist:
euc_vals[-1] = {'Host': checkedValue.get('Host'), 'State': checkedValue.get('State'), 'Distance': euc_dist}
uncheckedValue['State'] = count_states(euc_vals, k)
return checked + unchecked
def euclidean_distance(x1, y1, x2, y2):
return math.sqrt((float(x1) - float(x2))**2 + (float(y1) - float(y2))**2)
def count_states(euc_vals, k):
normal = 0
for neighbour in euc_vals:
if neighbour.get('State') == 'Normal':
normal += 1
if normal > (k / 2):
return 'Normal'
else:
return 'Infected'
def k_calc(count):
return round(math.sqrt(count))
if __name__ == "__main__":
main() | [
"mylesdoolan@gmail.com"
] | mylesdoolan@gmail.com |
80d5e8488b998c3a964276ac08c88b9427f99959 | 342a719a93627a5fc79ba687c42e5d02ff5c5185 | /SICOS/facturacion/migrations/0014_auto_20201002_1324.py | 6fc61ebb3d9ea59bec97dd40d90d8b2e4f640f2a | [] | no_license | wilmerurango/sicos | bee82bac95c05af5c9479a74dad51defb0634e38 | cac42d2a0c009697ca1c5bfd4531ea72c785a533 | refs/heads/master | 2023-04-11T12:07:08.427674 | 2021-04-11T16:46:45 | 2021-04-11T16:46:45 | 321,412,147 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,039 | py | # Generated by Django 3.1 on 2020-10-02 18:24
import datetime
from django.db import migrations, models
from django.utils.timezone import utc
class Migration(migrations.Migration):
dependencies = [
('facturacion', '0013_auto_20201002_1321'),
]
operations = [
migrations.AlterField(
model_name='contrato',
name='fecha',
field=models.DateField(default=datetime.datetime(2020, 10, 2, 18, 24, 41, 520242, tzinfo=utc), verbose_name='Fecha'),
),
migrations.AlterField(
model_name='especialista',
name='fechafact_esp',
field=models.DateField(default=datetime.datetime(2020, 10, 2, 18, 24, 41, 520242, tzinfo=utc), verbose_name='Fecha de Registro'),
),
migrations.AlterField(
model_name='fac_especialista',
name='fechafac_esp',
field=models.DateField(default=datetime.datetime(2020, 10, 2, 18, 24, 41, 520242, tzinfo=utc), verbose_name='Fecha de Registro'),
),
]
| [
"analista.costos@clinicadelrio.org"
] | analista.costos@clinicadelrio.org |
e81831a09161f0c07cee9bea289671b167dea5db | 1f235736e804be5a614bc492028081e9c169f976 | /minions.py | 503d13041e57e6bf9d4a21606df52a31efe02160 | [
"Apache-2.0"
] | permissive | Diavolo/minions | 9d781d2789b34a21d6761b29ba685a4d77ca81af | 18fc278e46d5c06c3b0805979a7c5fd0db9b2d19 | refs/heads/master | 2021-09-25T01:46:16.234359 | 2021-09-11T03:27:13 | 2021-09-11T03:27:13 | 79,663,675 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,373 | py | #!/usr/bin/env python3
#
# ██████╗ █████╗ ██╗ ██╗██████╗ ███╗ ██╗███████╗████████╗
# ██╔════╝ ██╔══██╗██║ ██║██╔══██╗ ████╗ ██║██╔════╝╚══██╔══╝
# ██║ ███╗███████║███████║██║ ██║ ██╔██╗ ██║█████╗ ██║
# ██║ ██║██╔══██║██╔══██║██║ ██║ ██║╚██╗██║██╔══╝ ██║
# ╚██████╔╝██║ ██║██║ ██║██████╔╝██╗██║ ╚████║███████╗ ██║
# ╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚═════╝ ╚═╝╚═╝ ╚═══╝╚══════╝ ╚═╝
# ____ ___ _ _ ____
# / ___|/ _ \| | | | _ \ minions/minions.py
# | | _| |_| | |_| | | | |
# | |_| | _ | _ | |_| | Gustavo Huarcaya
# \____|_| |_|_| |_|____/ https://gahd.net
#
# Python minions main menu
#
import os
import sys
from python.isocalendar import today_isocalendar, custom_isocalendar
from python.util import SEPARATOR, menu_header
__filename = os.path.basename(__file__).upper()
header = menu_header(__filename)
menu = dict([
("1", "Today Isocalendar"),
("2", "Custom Isocalendar"),
("0", "Exit"),
])
menu_options = tuple(set(tuple(menu.keys()) + ("q",)))
def clear():
"""Clear console"""
os.system("clear")
def display_header():
"""Show menu header"""
clear()
for i in header:
print(i)
def display_menu():
"""Display menu"""
for k, v in menu.items():
print(f" [{k}] {v}")
print(SEPARATOR)
def ask_option():
"""Ask the user which minion he wants to use"""
valid_option = False
while(not valid_option):
user_input = input("Select an option: ")
valid_option = user_input in menu_options
return user_input
def ask_continue():
"""Ask the user if he wants to continue using another minion"""
user_input = input("Press [1] to return to the main menu, [0] to exit: ")
if user_input == "1":
minions()
elif user_input == "0" or user_input.lower() == "q":
sys.exit()
else:
ask_continue()
def principal():
opt = ask_option()
if opt == "1":
clear()
iso_calendar = today_isocalendar()
        os.system("cal -1")
print(f"Week: \t\t{iso_calendar['week']}")
print(f"Weekday: \t{iso_calendar['weekday']}")
print()
ask_continue()
elif opt == "2":
clear()
year = input("Year: ")
month = input("Month (from Jan = 1 to Dec = 12): ")
day = input("Day of the month: ")
clear()
        os.system(f"cal {day} {month} {year}")
        iso_calendar = custom_isocalendar(int(year), int(month), int(day))
        print(f"Week: \t\t{iso_calendar['week']}")
        print(f"Weekday: \t{iso_calendar['weekday']}")
        print()
        ask_continue()
    elif opt in ("0", "q"):
clear()
sys.exit()
def minions():
display_header()
display_menu()
principal()
if __name__ == "__main__":
minions()
| [
"diavolo@gahd.net"
] | diavolo@gahd.net |
cb663256f0b048e27918df956633071fb141407d | 24c050e1c2053b7b5e240de14f4fb3f8bdb8f1b7 | /algorithm/dPPOcC.py | a83b51c6a3403b5c69c74c71e4143552dc0246ac | [] | no_license | hybug/test_ppo | 9d4d426888ebecd1763c97f85f6d716249125090 | 4c4db6924be47e1cca92a877feff379400e139eb | refs/heads/master | 2023-02-05T06:35:43.108082 | 2020-12-16T03:28:14 | 2020-12-16T03:28:14 | 321,856,460 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 650 | py | # coding: utf-8
from collections import namedtuple
from algorithm import dPPOc
from module import mse
PPOcCloss = namedtuple("PPOcCloss", ["p_loss", "v_loss"])
def dPPOcC(act, policy_logits, behavior_logits, advantage, policy_clip, vf, vf_target, value_clip, old_vf):
a_loss = dPPOc(act=act,
policy_logits=policy_logits,
behavior_logits=behavior_logits,
advantage=advantage,
clip=policy_clip)
c_loss = mse(y_hat=vf,
y_target=vf_target,
clip=value_clip,
clip_center=old_vf)
return PPOcCloss(a_loss, c_loss)
| [
"hanyu01@mail.jj.cn"
] | hanyu01@mail.jj.cn |
eff87412bf6ca4715fe05272080cc15cbc15f11a | a9aa0bce4e45b8712ce77045d0ec52eb4014692f | /manage.py | b001c976f32b96704d291a9e4d06d4bf5c399154 | [] | no_license | Aleleonel/BionicoBusiness | 93431e94a750f86c86d99925952cfc7b7c8cdb6d | 85326614f64235d8348aebac8e8b42a9e0764e18 | refs/heads/master | 2023-08-17T00:41:53.660157 | 2020-06-01T22:04:06 | 2020-06-01T22:04:06 | 268,640,058 | 0 | 0 | null | 2021-06-10T19:24:12 | 2020-06-01T21:49:19 | Python | UTF-8 | Python | false | false | 627 | py | #!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'bionico.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
| [
"aleleonel@gmail.com"
] | aleleonel@gmail.com |
9b61853924eb18e6ee43387888b12daaf7b0dea5 | 0b0ca6853f351530384fcb9f3f9c91d4c034512b | /website/opensource/views.py | ba63fbd77e3145251be0ac3cead121e00526bdd4 | [] | no_license | thanhleviet/syrusakbary.com | d767129c6b00c092816e3cb58f063d1b052f0df0 | ca04f55462db72bb603bfc0453b9404b04ee6687 | refs/heads/master | 2021-01-18T09:30:31.278757 | 2012-07-22T20:32:12 | 2012-07-22T20:32:12 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 930 | py | # Create your views here.
#from django.http import HttpResponse
#from coffin.template import add_to_builtins
#add_to_builtins('jinja2-mediasync.media')
#from coffin.shortcuts import render_to_response
#from django.shortcuts import render_to_response
#from django.template import add_to_builtins
#add_to_builtins('mediasync.templatetags.media')
#from django.template import RequestContext
from django.views.generic import ListView, DetailView
from .models import Project
class OpenSourceDetailView(DetailView):
template_name='opensource/opensource_detail.jade'
#queryset = Project.objects.all()
queryset = []
class OpenSourceListView(ListView):
template_name='opensource/opensource_list.jade'
queryset = Project.objects.all()
context_object_name = "project_list"
#queryset = []
#def index(request):
# return render_to_response('projects/index.html',context_instance=RequestContext(request))
| [
"me@syrusakbary.com"
] | me@syrusakbary.com |
ab0546c97ea3957a9270775cbf4888ba4f60f9f7 | f211198984faad9ee550f79cd6db727347b6ad94 | /yolov3/proj_code/trainer.py | 85c90d0522a77f74a80c19dcc7488ba571382a48 | [] | no_license | Morris88826/yolo-depth-estimation | 30fa484fc64d243e4ca36bf2565c07e160ae023e | d9f6f19317b3661fa0b03e3590e5fff1d75a5241 | refs/heads/master | 2022-09-20T11:10:59.118888 | 2020-06-03T17:31:47 | 2020-06-03T17:31:47 | 246,574,792 | 6 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,586 | py | import tensorflow as tf
class Trainer():
def __init__(self, model, lr=1e-5, decay=0.9):
self.model = model
        self.optimizer = tf.optimizers.RMSprop(learning_rate=lr, decay=decay)
self.lr = lr
def train(self, X, y):
with tf.GradientTape() as tape:
_, y_predict = self.model(X)
loss_value = self.MSE_Loss(y, y_predict)
grads = tape.gradient(loss_value, self.model.trainable_variables)
self.optimizer.apply_gradients(zip(grads, self.model.trainable_variables))
return loss_value.numpy().mean()
def MSE_Loss(self, target_y, predicted_y):
# Use mean square error
return tf.reduce_mean(tf.square(target_y - predicted_y))
def depth_loss_function(self, y_true, y_pred, theta=0.1, maxDepthVal=1000.0/10.0):
# y_true = tf.convert_to_tensor(y_true)
# Point-wise depth
l_depth = tf.keras.backend.mean(tf.keras.backend.abs(y_pred - y_true), axis=-1)
# Edges
dy_true, dx_true = tf.image.image_gradients(y_true)
dy_pred, dx_pred = tf.image.image_gradients(y_pred)
l_edges = tf.keras.backend.mean(tf.keras.backend.abs(dy_pred - dy_true) + tf.keras.backend.abs(dx_pred - dx_true), axis=-1)
# Structural similarity (SSIM) index
l_ssim = tf.keras.backend.clip((1 - tf.image.ssim(y_true, y_pred, maxDepthVal)) * 0.5, 0, 1)
# Weights
w1 = 1.0
w2 = 1.0
w3 = theta
return (w1 * l_ssim) + (w2 * tf.keras.backend.mean(l_edges)) + (w3 * tf.keras.backend.mean(l_depth)) | [
"morris88826@gmail.com"
] | morris88826@gmail.com |
7b095e3e066626392f53c9d6e431e87be22263e4 | 2a7fe1988b9a9aaf5e301637883319c43d38bcb9 | /users/serializers.py | e18ccb36912540ec79d98c4ac36785882da3fc0c | [] | no_license | kenassash/django_rest_notes | 65d799b32f520faef2dbd02fae2e05efa8535797 | eab022e6e57aaa06918ee5ab80586c8a1a8894c3 | refs/heads/master | 2023-09-03T19:36:28.304504 | 2021-10-27T08:19:14 | 2021-10-27T08:19:14 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 747 | py | from rest_framework.serializers import HyperlinkedModelSerializer
# from .models import NoteUser
from .models import User
class UserModelSerializer(HyperlinkedModelSerializer):
class Meta:
# model = NoteUser
# fields = ('username', 'firstname', 'lastname', 'email')
# fields = '__all__'
model = User
fields = ('id', 'url', 'username', 'first_name', 'last_name', 'email', 'is_superuser', 'is_staff')
# fields = ('username', 'first_name', 'last_name', 'email')
# fields = '__all__'
class UserModelSerializerV2(HyperlinkedModelSerializer):
class Meta:
model = User
# fields = '__all__'
fields = ('id', 'url', 'username', 'first_name', 'last_name', 'email')
| [
"travis@travis-ci.org"
] | travis@travis-ci.org |
3998ac44feab7bf948caa2d13bff1d8ba307d16f | 1b2f82c41677a73f8ad68536d56f9a68a99e54a7 | /CryptoAttacks/tests/Block/cbc_oracles.py | ee8396ea5460ebbf112924f40b1c4daf64cd21cc | [
"MIT"
] | permissive | akbarszcz/CryptoAttacks | 0cced001bf245bdc7c1ffeb4621797db54636ff6 | ae675d016b314414a3dc9b23c7d8a32da4c62457 | refs/heads/master | 2020-09-13T03:11:23.414940 | 2019-11-19T08:32:27 | 2019-11-19T08:32:27 | 222,640,867 | 0 | 0 | MIT | 2019-11-19T07:59:34 | 2019-11-19T07:59:32 | null | UTF-8 | Python | false | false | 1,883 | py | #!/usr/bin/python
from __future__ import print_function
import sys
from builtins import bytes
# aes returns bytes or strings depending on python version
from Crypto.Cipher import AES
from CryptoAttacks.Utils import (add_padding, b2h, bytes, h2b, print_function,
random_bytes, strip_padding, xor)
KEY = bytes(b'asdf'*4)
iv_as_key = False
block_size = AES.block_size
class BadPadding(RuntimeError):
pass
def encrypt(data, iv_as_key=False):
iv = random_bytes(block_size)
if iv_as_key:
iv = KEY
aes = AES.new(KEY, AES.MODE_CBC, iv)
return iv + bytes(aes.encrypt(add_padding(data)))
def decrypt(data, iv_as_key=False):
if iv_as_key:
iv = KEY
else:
iv = data[:block_size]
data = data[block_size:]
aes = AES.new(KEY, AES.MODE_CBC, iv)
p = bytes(aes.decrypt(data))
try:
p = strip_padding(p, block_size)
return p
except:
raise BadPadding
def padding_oracle(payload, iv):
global iv_as_key
payload = iv + payload
try:
decrypt(payload, iv_as_key)
except BadPadding as e:
return False
return True
blocks_with_correct_padding = encrypt(bytes(b'A' * (block_size + 5)))[block_size:]
def decryption_oracle(payload):
global iv_as_key
iv = bytes(b'A' * block_size)
payload = iv + payload + blocks_with_correct_padding
plaintext = decrypt(payload, iv_as_key)
if iv_as_key:
return xor(plaintext[block_size:block_size*2], iv)
return xor(plaintext[:block_size], iv)
if __name__ == '__main__':
if len(sys.argv) != 3 or sys.argv[1] not in ['encrypt', 'decrypt']:
print("Usage: {} encrypt|decrypt data".format(sys.argv[0]))
sys.exit(1)
if sys.argv[1] == 'encrypt':
print(b2h(encrypt(h2b(sys.argv[2]))))
else:
print(b2h(decrypt(h2b(sys.argv[2]))))
| [
"e2.8a.95@gmail.com"
] | e2.8a.95@gmail.com |
f2501ce2955e913fdc89e767db67ecad2564420a | 925bdd2e81d678e5e428afb07e9cd77e1c7dca4f | /dalsil_pkg_basic/model/supplier_return/__init__.py | 5b35f9dd35831b12520b9a7aaca2db95873bc48c | [] | no_license | Irawan123/dalsilsoft | 78b607515d43753ed8bb9dfb6211eead129c19cc | 1862560375b5dfd4d7f4fade7c4491bf7235b58e | refs/heads/master | 2021-01-16T18:34:07.176625 | 2018-11-12T09:07:57 | 2018-11-12T09:07:57 | 100,094,412 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 50 | py | import supplier_return_line
import supplier_return | [
"michaelputrawijaya@gmail.com"
] | michaelputrawijaya@gmail.com |
156cee855fb4337b19f958444b0a101e766d89a5 | cee0df2a184f3f99306193b9f34aba16889cc57c | /pvextractor/utils/wcs_utils.py | 88cd249959d5af27eeb395dc415eeb0160e41646 | [] | no_license | teuben/pvextractor | 169f3317eb2d53013eb981fca18f69d17fa3a8b3 | 889c108a964d8130b1a17066890c7325b57daf4c | refs/heads/master | 2021-01-14T13:16:48.485846 | 2014-04-18T23:29:17 | 2014-04-18T23:29:17 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,278 | py | import numpy as np
from astropy import units as u
from astropy import wcs
def get_pixel_scales(mywcs, assert_square=True):
# borrowed from aplpy
mywcs = mywcs.sub([wcs.WCSSUB_CELESTIAL])
cdelt = np.matrix(mywcs.wcs.get_cdelt())
pc = np.matrix(mywcs.wcs.get_pc())
scale = np.array(cdelt * pc)
if (assert_square and
(abs(cdelt[0,0]) != abs(cdelt[0,1]) or
abs(pc[0,0]) != abs(pc[1,1]) or
abs(scale[0,0]) != abs(scale[0,1]))):
raise ValueError("Non-square pixels. Please resample data.")
return abs(scale[0,0])
def sanitize_wcs(mywcs):
pc = np.matrix(mywcs.wcs.get_pc())
if (pc[:,2].sum() != pc[2,2] or pc[2,:].sum() != pc[2,2]):
raise ValueError("Non-independent 3rd axis.")
axtypes = mywcs.get_axis_types()
if ((axtypes[0]['coordinate_type'] != 'celestial' or
axtypes[1]['coordinate_type'] != 'celestial' or
axtypes[2]['coordinate_type'] != 'spectral')):
cunit3 = mywcs.wcs.cunit[2]
ctype3 = mywcs.wcs.ctype[2]
if cunit3 != '':
cunit3 = u.Unit(cunit3)
if cunit3.is_equivalent(u.m/u.s):
mywcs.wcs.ctype[2] = 'VELO'
elif cunit3.is_equivalent(u.Hz):
mywcs.wcs.ctype[2] = 'FREQ'
elif cunit3.is_equivalent(u.m):
mywcs.wcs.ctype[2] = 'WAVE'
else:
raise ValueError("Could not determine type of 3rd axis.")
elif ctype3 != '':
if 'VELO' in ctype3:
mywcs.wcs.ctype[2] = 'VELO'
elif 'FELO' in ctype3:
mywcs.wcs.ctype[2] = 'VELO-F2V'
elif 'FREQ' in ctype3:
mywcs.wcs.ctype[2] = 'FREQ'
elif 'WAVE' in ctype3:
mywcs.wcs.ctype[2] = 'WAVE'
else:
raise ValueError("Could not determine type of 3rd axis.")
else:
raise ValueError("Cube axes not in expected orientation: PPV")
return mywcs
def wcs_spacing(mywcs, spacing):
"""
Return spacing in pixels
Parameters
----------
wcs : `~astropy.wcs.WCS`
spacing : `~astropy.units.Quantity` or float
"""
if spacing is not None:
if hasattr(spacing,'unit'):
if not spacing.unit.is_equivalent(u.arcsec):
raise TypeError("Spacing is not in angular units.")
else:
platescale = get_pixel_scales(mywcs)
newspacing = spacing.to(u.deg).value / platescale
else:
# if no units, assume pixels already
newspacing = spacing
else:
# if no spacing, return pixscale
newspacing = 1
return newspacing
def pixel_to_wcs_spacing(mywcs, pspacing):
"""
Return spacing in degrees
Parameters
----------
wcs : `~astropy.wcs.WCS`
spacing : float
"""
platescale = get_pixel_scales(mywcs)
wspacing = platescale * pspacing * u.deg
return wspacing
def get_wcs_system_name(mywcs):
"""TODO: move to astropy.wcs.utils"""
ct = mywcs.sub([wcs.WCSSUB_CELESTIAL]).wcs.ctype
if 'GLON' in ct[0]:
return 'galactic'
elif 'RA' in ct[0]:
return 'icrs'
else:
raise ValueError("Unrecognized coordinate system")
| [
"keflavich@gmail.com"
] | keflavich@gmail.com |
3cea7cfc0e0c7f0ce28a2297adca573ec9a9e999 | eb9c3dac0dca0ecd184df14b1fda62e61cc8c7d7 | /google/devtools/cloudbuild/v1/devtools-cloudbuild-v1-py/google/devtools/cloudbuild_v1/services/cloud_build/transports/base.py | 36c52c54ceb1c9b4ddd4224c7b0ac3a69084ee44 | [
"Apache-2.0"
] | permissive | Tryweirder/googleapis-gen | 2e5daf46574c3af3d448f1177eaebe809100c346 | 45d8e9377379f9d1d4e166e80415a8c1737f284d | refs/heads/master | 2023-04-05T06:30:04.726589 | 2021-04-13T23:35:20 | 2021-04-13T23:35:20 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 15,757 | py | # -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import abc
import typing
import pkg_resources
from google import auth # type: ignore
from google.api_core import exceptions # type: ignore
from google.api_core import gapic_v1 # type: ignore
from google.api_core import retry as retries # type: ignore
from google.api_core import operations_v1 # type: ignore
from google.auth import credentials # type: ignore
from google.devtools.cloudbuild_v1.types import cloudbuild
from google.longrunning import operations_pb2 as operations # type: ignore
from google.protobuf import empty_pb2 as empty # type: ignore
try:
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo(
gapic_version=pkg_resources.get_distribution(
'google-devtools-cloudbuild',
).version,
)
except pkg_resources.DistributionNotFound:
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo()
class CloudBuildTransport(abc.ABC):
"""Abstract transport class for CloudBuild."""
AUTH_SCOPES = (
'https://www.googleapis.com/auth/cloud-platform',
)
def __init__(
self, *,
host: str = 'cloudbuild.googleapis.com',
credentials: credentials.Credentials = None,
credentials_file: typing.Optional[str] = None,
scopes: typing.Optional[typing.Sequence[str]] = AUTH_SCOPES,
quota_project_id: typing.Optional[str] = None,
client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
**kwargs,
) -> None:
"""Instantiate the transport.
Args:
host (Optional[str]): The hostname to connect to.
credentials (Optional[google.auth.credentials.Credentials]): The
authorization credentials to attach to requests. These
credentials identify the application to the service; if none
are specified, the client will attempt to ascertain the
credentials from the environment.
credentials_file (Optional[str]): A file with credentials that can
be loaded with :func:`google.auth.load_credentials_from_file`.
This argument is mutually exclusive with credentials.
scope (Optional[Sequence[str]]): A list of scopes.
quota_project_id (Optional[str]): An optional project to use for billing
and quota.
client_info (google.api_core.gapic_v1.client_info.ClientInfo):
The client info used to send a user-agent string along with
API requests. If ``None``, then default info will be used.
Generally, you only need to set this if you're developing
your own client library.
"""
# Save the hostname. Default to port 443 (HTTPS) if none is specified.
if ':' not in host:
host += ':443'
self._host = host
# Save the scopes.
self._scopes = scopes or self.AUTH_SCOPES
# If no credentials are provided, then determine the appropriate
# defaults.
if credentials and credentials_file:
raise exceptions.DuplicateCredentialArgs("'credentials_file' and 'credentials' are mutually exclusive")
if credentials_file is not None:
credentials, _ = auth.load_credentials_from_file(
credentials_file,
scopes=self._scopes,
quota_project_id=quota_project_id
)
elif credentials is None:
credentials, _ = auth.default(scopes=self._scopes, quota_project_id=quota_project_id)
# Save the credentials.
self._credentials = credentials
def _prep_wrapped_messages(self, client_info):
# Precompute the wrapped methods.
self._wrapped_methods = {
self.create_build: gapic_v1.method.wrap_method(
self.create_build,
default_timeout=600.0,
client_info=client_info,
),
self.get_build: gapic_v1.method.wrap_method(
self.get_build,
default_retry=retries.Retry(
initial=0.1,
maximum=60.0,
multiplier=1.3,
predicate=retries.if_exception_type(
exceptions.DeadlineExceeded,
exceptions.ServiceUnavailable,
),
deadline=600.0,
),
default_timeout=600.0,
client_info=client_info,
),
self.list_builds: gapic_v1.method.wrap_method(
self.list_builds,
default_retry=retries.Retry(
initial=0.1,
maximum=60.0,
multiplier=1.3,
predicate=retries.if_exception_type(
exceptions.DeadlineExceeded,
exceptions.ServiceUnavailable,
),
deadline=600.0,
),
default_timeout=600.0,
client_info=client_info,
),
self.cancel_build: gapic_v1.method.wrap_method(
self.cancel_build,
default_timeout=600.0,
client_info=client_info,
),
self.retry_build: gapic_v1.method.wrap_method(
self.retry_build,
default_timeout=600.0,
client_info=client_info,
),
self.create_build_trigger: gapic_v1.method.wrap_method(
self.create_build_trigger,
default_timeout=600.0,
client_info=client_info,
),
self.get_build_trigger: gapic_v1.method.wrap_method(
self.get_build_trigger,
default_retry=retries.Retry(
initial=0.1,
maximum=60.0,
multiplier=1.3,
predicate=retries.if_exception_type(
exceptions.DeadlineExceeded,
exceptions.ServiceUnavailable,
),
deadline=600.0,
),
default_timeout=600.0,
client_info=client_info,
),
self.list_build_triggers: gapic_v1.method.wrap_method(
self.list_build_triggers,
default_retry=retries.Retry(
initial=0.1,
maximum=60.0,
multiplier=1.3,
predicate=retries.if_exception_type(
exceptions.DeadlineExceeded,
exceptions.ServiceUnavailable,
),
deadline=600.0,
),
default_timeout=600.0,
client_info=client_info,
),
self.delete_build_trigger: gapic_v1.method.wrap_method(
self.delete_build_trigger,
default_retry=retries.Retry(
initial=0.1,
maximum=60.0,
multiplier=1.3,
predicate=retries.if_exception_type(
exceptions.DeadlineExceeded,
exceptions.ServiceUnavailable,
),
deadline=600.0,
),
default_timeout=600.0,
client_info=client_info,
),
self.update_build_trigger: gapic_v1.method.wrap_method(
self.update_build_trigger,
default_timeout=600.0,
client_info=client_info,
),
self.run_build_trigger: gapic_v1.method.wrap_method(
self.run_build_trigger,
default_timeout=600.0,
client_info=client_info,
),
self.receive_trigger_webhook: gapic_v1.method.wrap_method(
self.receive_trigger_webhook,
default_timeout=None,
client_info=client_info,
),
self.create_worker_pool: gapic_v1.method.wrap_method(
self.create_worker_pool,
default_timeout=600.0,
client_info=client_info,
),
self.get_worker_pool: gapic_v1.method.wrap_method(
self.get_worker_pool,
default_retry=retries.Retry(
initial=0.1,
maximum=60.0,
multiplier=1.3,
predicate=retries.if_exception_type(
exceptions.DeadlineExceeded,
exceptions.ServiceUnavailable,
),
deadline=600.0,
),
default_timeout=600.0,
client_info=client_info,
),
self.delete_worker_pool: gapic_v1.method.wrap_method(
self.delete_worker_pool,
default_timeout=600.0,
client_info=client_info,
),
self.update_worker_pool: gapic_v1.method.wrap_method(
self.update_worker_pool,
default_timeout=600.0,
client_info=client_info,
),
self.list_worker_pools: gapic_v1.method.wrap_method(
self.list_worker_pools,
default_retry=retries.Retry(
initial=0.1,
maximum=60.0,
multiplier=1.3,
predicate=retries.if_exception_type(
exceptions.DeadlineExceeded,
exceptions.ServiceUnavailable,
),
deadline=600.0,
),
default_timeout=600.0,
client_info=client_info,
),
}
@property
def operations_client(self) -> operations_v1.OperationsClient:
"""Return the client designed to process long-running operations."""
raise NotImplementedError()
@property
def create_build(self) -> typing.Callable[
[cloudbuild.CreateBuildRequest],
typing.Union[
operations.Operation,
typing.Awaitable[operations.Operation]
]]:
raise NotImplementedError()
@property
def get_build(self) -> typing.Callable[
[cloudbuild.GetBuildRequest],
typing.Union[
cloudbuild.Build,
typing.Awaitable[cloudbuild.Build]
]]:
raise NotImplementedError()
@property
def list_builds(self) -> typing.Callable[
[cloudbuild.ListBuildsRequest],
typing.Union[
cloudbuild.ListBuildsResponse,
typing.Awaitable[cloudbuild.ListBuildsResponse]
]]:
raise NotImplementedError()
@property
def cancel_build(self) -> typing.Callable[
[cloudbuild.CancelBuildRequest],
typing.Union[
cloudbuild.Build,
typing.Awaitable[cloudbuild.Build]
]]:
raise NotImplementedError()
@property
def retry_build(self) -> typing.Callable[
[cloudbuild.RetryBuildRequest],
typing.Union[
operations.Operation,
typing.Awaitable[operations.Operation]
]]:
raise NotImplementedError()
@property
def create_build_trigger(self) -> typing.Callable[
[cloudbuild.CreateBuildTriggerRequest],
typing.Union[
cloudbuild.BuildTrigger,
typing.Awaitable[cloudbuild.BuildTrigger]
]]:
raise NotImplementedError()
@property
def get_build_trigger(self) -> typing.Callable[
[cloudbuild.GetBuildTriggerRequest],
typing.Union[
cloudbuild.BuildTrigger,
typing.Awaitable[cloudbuild.BuildTrigger]
]]:
raise NotImplementedError()
@property
def list_build_triggers(self) -> typing.Callable[
[cloudbuild.ListBuildTriggersRequest],
typing.Union[
cloudbuild.ListBuildTriggersResponse,
typing.Awaitable[cloudbuild.ListBuildTriggersResponse]
]]:
raise NotImplementedError()
@property
def delete_build_trigger(self) -> typing.Callable[
[cloudbuild.DeleteBuildTriggerRequest],
typing.Union[
empty.Empty,
typing.Awaitable[empty.Empty]
]]:
raise NotImplementedError()
@property
def update_build_trigger(self) -> typing.Callable[
[cloudbuild.UpdateBuildTriggerRequest],
typing.Union[
cloudbuild.BuildTrigger,
typing.Awaitable[cloudbuild.BuildTrigger]
]]:
raise NotImplementedError()
@property
def run_build_trigger(self) -> typing.Callable[
[cloudbuild.RunBuildTriggerRequest],
typing.Union[
operations.Operation,
typing.Awaitable[operations.Operation]
]]:
raise NotImplementedError()
@property
def receive_trigger_webhook(self) -> typing.Callable[
[cloudbuild.ReceiveTriggerWebhookRequest],
typing.Union[
cloudbuild.ReceiveTriggerWebhookResponse,
typing.Awaitable[cloudbuild.ReceiveTriggerWebhookResponse]
]]:
raise NotImplementedError()
@property
def create_worker_pool(self) -> typing.Callable[
[cloudbuild.CreateWorkerPoolRequest],
typing.Union[
cloudbuild.WorkerPool,
typing.Awaitable[cloudbuild.WorkerPool]
]]:
raise NotImplementedError()
@property
def get_worker_pool(self) -> typing.Callable[
[cloudbuild.GetWorkerPoolRequest],
typing.Union[
cloudbuild.WorkerPool,
typing.Awaitable[cloudbuild.WorkerPool]
]]:
raise NotImplementedError()
@property
def delete_worker_pool(self) -> typing.Callable[
[cloudbuild.DeleteWorkerPoolRequest],
typing.Union[
empty.Empty,
typing.Awaitable[empty.Empty]
]]:
raise NotImplementedError()
@property
def update_worker_pool(self) -> typing.Callable[
[cloudbuild.UpdateWorkerPoolRequest],
typing.Union[
cloudbuild.WorkerPool,
typing.Awaitable[cloudbuild.WorkerPool]
]]:
raise NotImplementedError()
@property
def list_worker_pools(self) -> typing.Callable[
[cloudbuild.ListWorkerPoolsRequest],
typing.Union[
cloudbuild.ListWorkerPoolsResponse,
typing.Awaitable[cloudbuild.ListWorkerPoolsResponse]
]]:
raise NotImplementedError()
__all__ = (
'CloudBuildTransport',
)
def insertionSort(arr):
for i in range(1, len(arr)):
key = arr[i]
j = i-1
        while j >= 0 and key < arr[j]:
arr[j+1] = arr[j]
j -= 1
arr[j+1] = key
arr = [12, 11, 13, 5, 6]
insertionSort(arr)
print ("Sorted array is:")
for i in range(len(arr)):
print ("%d" %arr[i])
## @file line.py
# @title lineADT
# @author Polly Yao
# @date 2/19/2017
## @brief This class represents a line
# @details This class represents a point as p1 and p2 define two point of a line
import math
import pointADT
## @brief Constructor for LineT
# @param p1 a point at the front of the line
# @param p2 a point at the back of the line
class LineT:
def __init__(self, p1, p2):
self.p1 = p1
self.p2 = p2
## @brief This function receives p1 from the constructor
# @return the beginning point of the line
def beg(self):
return (self.p1)
## @brief This function receives p2 from the constructor
# @return the back point of the line
def end(self):
return (self.p2)
## @brief This function calculates the length of the line
# @return the length
def len(self):
result = (self.p1).dist(self.p2)
return result
## @brief This function calculates the midpoint of the line
# @return the a midpoint(PointT)
def mdpt(self):
a0 = (self.p1.xcrd())
a1 = (self.p2.xcrd())
af = (a0 + a1)/2
b0 = (self.p1.ycrd())
b1 = (self.p2.ycrd())
bf = (b0 + b1)/2
res = pointADT.PointT(af, bf)
return res
## @brief This function rotates front point and back point of the line
# @param phi a radian
def rot(self, phi):
(self.p1).rot(phi)
(self.p2).rot(phi)
lines = []
with open(r'C:\Users\HP\Desktop\rpc_MasterProvisional2_2.out', 'r') as f:
    for line in f:
        # turn each comma into a newline and drop spaces
        lines.append(line.replace(',', '\n').replace(' ', ''))
with open(r'C:\Users\HP\Desktop\output.txt', 'w') as out_file:
    out_file.write(''.join(lines))
from collections import Counter
import numpy as np
import math
def raw_majority_vote(labels):
    votes = Counter(labels)
    # most_common(1) returns a list of (label, count) pairs, so take the first pair
    winner, _ = votes.most_common(1)[0]
    return winner
def majority_vote(labels):
"""assumes labels are ordered from nearest to farthest"""
vote_counts = Counter(labels)
winner, winner_count = vote_counts.most_common(1)[0]
num_winners = len([count
for count in vote_counts.values()
if count == winner_count])
if num_winners == 1:
return winner # unique winner, so return it
else:
return majority_vote(labels[:-1]) # try again w/o farthest neighbour
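As a quick check of the tie-breaking rule described in the docstring, here is the same function in a standalone form with a tied vote (the example labels are made up):

```python
from collections import Counter

def majority_vote(labels):
    """Standalone copy: labels are ordered nearest-to-farthest, and ties
    are broken by repeatedly dropping the farthest label."""
    vote_counts = Counter(labels)
    winner, winner_count = vote_counts.most_common(1)[0]
    num_winners = len([count for count in vote_counts.values()
                       if count == winner_count])
    if num_winners == 1:
        return winner
    return majority_vote(labels[:-1])  # try again without the farthest

# 'a' and 'b' are tied 2-2; dropping the farthest label leaves 'a' ahead.
print(majority_vote(['a', 'b', 'a', 'b']))  # -> a
```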
def knn_classify(k, labeled_points, new_point):
    """each labeled point should be a pair (point, label)"""
    # order the labeled points from nearest to farthest
    by_distance = sorted(labeled_points,
                         key=lambda point_label: np.linalg.norm(np.subtract(point_label[0], new_point)))
# find the labels for the k closest
k_nearest_labels = [label for _, label in by_distance[:k]]
# let them vote
return majority_vote(k_nearest_labels)
# Note: KNN runs into trouble in higher dimension, since they are vast. Points in high dimensional tend not to be close
# to one another. To visualize this, randomly generate pairs of points in d-dimensional "unit cube" in variety of
# dimensions and calculate the distance between them
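The note above can be turned into a quick experiment; a self-contained sketch (sample counts and printed columns are arbitrary choices, not from the original):

```python
import random

def random_point(dim):
    """A random point in the d-dimensional unit cube."""
    return [random.random() for _ in range(dim)]

def distance(p, q):
    """Plain Euclidean distance between two points."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

random.seed(0)
for dim in [1, 10, 100]:
    dists = [distance(random_point(dim), random_point(dim)) for _ in range(100)]
    avg = sum(dists) / len(dists)
    # As dim grows, the average distance grows and the closest pair is
    # barely closer than the average pair -- "nearest" stops meaning much.
    print(dim, round(avg, 2), round(min(dists) / avg, 2))
```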
#Created on 2015/06/15, last modified on 2016/05/04
#By Teague Xiao
#This script is aim to download the current price of a specific stock from SINA Stock
#The info of each column is: stock number, stock name, opening price today, closing price today, current price, highest price today, lowest price today, buy1 (best bid), sell1 (best ask), stock dealed, price dealed today, date, time
#Sample 1#: http://hq.sinajs.cn/list=sh601003
#Sample 2#: http://hq.sinajs.cn/list=sz000002
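The response body is a JavaScript assignment whose value is one comma-separated string; a minimal sketch of pulling fields out of such a line follows. The sample below is hand-written to match the column list above, not a live response:

```python
# Hand-written sample shaped like a Sina quote response (not fetched live).
sample = ('var hq_str_sz000002="万科A,27.20,27.05,27.40,27.75,26.95,'
          '27.39,27.40,86458340,2365932406.69,2016-05-04,15:05:57,00";')

payload = sample.split('"')[1]   # the quoted comma-separated value
fields = payload.split(',')
quote = {
    'name': fields[0],
    'open': float(fields[1]),
    'close': float(fields[2]),
    'current': float(fields[3]),
    'high': float(fields[4]),
    'low': float(fields[5]),
}
print(quote['name'], quote['current'])
```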
import urllib
#import sqlite3
import MySQLdb
import os
import ConfigParser
#stock_num = raw_input('Please enter the stock number(6 digits): ')
#example stock_num = '000002'
#conn = sqlite3.connect('stock.sqlite')
Config = ConfigParser.ConfigParser()
Config.read("settings.ini")
con = MySQLdb.connect( Config.get('mysql', 'host'), Config.get('mysql', 'username'), Config.get('mysql', 'password'), Config.get('mysql', 'DB'), charset="utf8" )
c = con.cursor()
c.execute('''CREATE TABLE IF NOT EXISTS stk_prc(
stk_num CHAR(20) PRIMARY KEY,
stk_name CHAR(20),
open_prc float,
close_prc float,
current_prc float,
highest_prc float,
lowest_prc float,
buy1 float,
sell1 float,
stock_dealed float,
price_dealed float,
date CHAR(20),
time CHAR(20)
)''')
c.execute("SELECT stk_num from stk_lst")
for stk_num in c.fetchall():
stk_num = stk_num[0]
if stk_num.startswith('6'):
url = 'http://hq.sinajs.cn/list=sh' + stk_num
elif stk_num.startswith('0'):
url = 'http://hq.sinajs.cn/list=sz' + stk_num
elif stk_num.startswith('3'):
url = 'http://hq.sinajs.cn/list=sz' + stk_num
else:
print 'Invalid stock number!'
continue
try:
html = urllib.urlopen(url).read()
except:
print 'Invalid stock number!'
continue
l = html.split(',')
start = l[0]
#stk_name = start[-8:].decode('gb2312','ignore')
stk_name = start[21:].decode('gb2312','ignore')
#Remove the spaces between charaters
stk_name = stk_name.replace(" ", "")
#print len(html)
if len(html) == 24:
continue
else:
open_prc = l[1]
close_prc = l[2]
current_prc = l[3]
highest_prc = l[4]
lowest_prc = l[5]
buy1 = l[6]
sell1 = l[7]
stock_dealed = l[8]
price_dealed = l[9]
date = l[30]
time = l[31]
#print stk_name,open_prc,close_prc,current_prc,highest_prc,lowest_prc,buy1,sell1
c.execute('''REPLACE INTO stk_prc (
stk_num, stk_name, open_prc, close_prc, current_prc, highest_prc, lowest_prc, buy1, sell1, stock_dealed, price_dealed, date, time)
VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)''',
(stk_num,stk_name,open_prc,close_prc,current_prc,highest_prc,lowest_prc,buy1,sell1,stock_dealed,price_dealed,date,time,))
con.commit()
c.close()
#print "stk_prc process done!"
import os
import csv
import torch
import numpy as np
import matplotlib.pylab as plot
import sys
import inspect
current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parent_dir = os.path.dirname(current_dir)
sys.path.insert(0, parent_dir)
from StatisticsUtils import CalculateAUC, ClassificationMetrics, BinaryClassificationMetric, MultipleClassificationMetric
targetResolutionPerSubFigure = 1080
targetDPI = 200
class SingleTaskClassificationAnswer():
def __init__(self):
self.Outputs = torch.Tensor()
self.Labels = torch.Tensor()
self.DataIndexes = []
self.Accuracy = 0
self.Recall = 0
self.Precision = 0
self.Specificity = 0
self.TrainLosses = None
self.ValidationLosses = None
class MultiTaskClassificationAnswer():
def __init__(self):
self.Outputs = [torch.Tensor()] * 11
self.Labels = torch.Tensor()
self.DataIndexes = []
self.Accuracy = [0] * 11
self.Recall = [0] * 11
self.Precision = [0] * 11
self.Specificity = [0] * 11
self.TrainLosses = None
self.TrainLabelLosses = None
self.ValidationLosses = None
self.ValidationLabelLosses = None
def DrawPlots(validationFPRs, validationTPRs, validationAUCs,\
testFPRs, testTPRs, testAUCs,\
ensembleFPR, ensembleTPR, ensembleAUC,\
validationAnswers, saveFolderPath, numOfFold):
gridSize = 2
targetFigureSize = (targetResolutionPerSubFigure * gridSize / targetDPI, targetResolutionPerSubFigure * gridSize / targetDPI)
plot.figure(figsize = targetFigureSize, dpi = targetDPI)
plot.subplot(gridSize, gridSize, 1)
for i in range(5):
plot.title("Validation AUC by folds")
plot.plot(validationFPRs[i], validationTPRs[i], alpha = 0.7, label = ("Fold %d Val AUC = %0.3f" % (i, validationAUCs[i])))
plot.legend(loc = "lower right")
plot.plot([0, 1], [0, 1],"r--")
plot.xlim([0, 1])
plot.ylim([0, 1.05])
plot.ylabel("True Positive Rate")
plot.xlabel("False Positive Rate")
plot.subplot(gridSize, gridSize, 2)
for i in range(5):
plot.title("Test AUC by folds")
plot.plot(testFPRs[i], testTPRs[i], alpha = 0.7, label = ("Fold %d Test AUC = %0.3f" % (i, testAUCs[i])))
plot.legend(loc = "lower right")
plot.plot([0, 1], [0, 1],"r--")
plot.xlim([0, 1])
plot.ylim([0, 1.05])
plot.ylabel("True Positive Rate")
plot.xlabel("False Positive Rate")
plot.subplot(gridSize, gridSize, 3)
plot.title("Test AUC by ensemble")
plot.plot(ensembleFPR, ensembleTPR, alpha = 0.7, label = "Test AUC = %0.3f" % ensembleAUC)
plot.legend(loc = "lower right")
plot.plot([0, 1], [0, 1],"r--")
plot.xlim([0, 1])
plot.ylim([0, 1.05])
plot.ylabel("True Positive Rate")
plot.xlabel("False Positive Rate")
plot.savefig(os.path.join(saveFolderPath, "ROCCurvePlot.png"))
if validationAnswers[0].TrainLosses is None:
return
hasLabelLoss = hasattr(validationAnswers[0], "TrainLabelLosses")
gridSize = 4 if hasLabelLoss else 3
targetFigureSize = (targetResolutionPerSubFigure * gridSize / targetDPI, targetResolutionPerSubFigure * gridSize / targetDPI)
plot.figure(figsize = targetFigureSize, dpi = targetDPI)
for i in range(numOfFold):
plot.subplot(gridSize, gridSize, i + 1)
plot.title("Fold %d Losses" % i)
plot.plot(np.array(validationAnswers[i].TrainLosses), label = "Train Loss")
plot.plot(np.array(validationAnswers[i].ValidationLosses), label = "Validation Loss")
plot.legend(loc = "upper right")
plot.xlabel("Epoch")
plot.ylabel("Loss")
if hasLabelLoss:
plot.subplot(gridSize, gridSize, i + 6)
plot.title("Fold %d Label Losses" % i)
plot.plot(np.array(validationAnswers[i].TrainLabelLosses), label = "Train Label Loss")
plot.plot(np.array(validationAnswers[i].ValidationLabelLosses), label = "Validation Label Loss")
plot.legend(loc = "upper right")
plot.xlabel("Epoch")
plot.ylabel("Loss")
plot.savefig(os.path.join(saveFolderPath, "LossesPlot.png"))
def SingleTaskEnsembleTest(testAnswers, saveFolderPath):
foldPredict = np.array([testAnswer.Outputs[:, 1].numpy() for testAnswer in testAnswers])
label = testAnswers[0].Labels.numpy()
rawResults = np.mean(foldPredict, axis = 0)
    predict = (rawResults > 0.5).astype(int)
    P = (predict == 1).astype(int)
    N = (predict == 0).astype(int)
TP = np.sum(P * label)
FP = np.sum(P * (1 - label))
TN = np.sum(N * (1 - label))
FN = np.sum(N * label)
accuracy, recall, precision, specificity = ClassificationMetrics(TP, FP, TN, FN)
ensembleAUC, ensembleFPR, ensembleTPR = CalculateAUC(rawResults, label)
print("\nEnsemble Test Results:")
print("AUC,%f\nAccuracy,%f\nRecall,%f\nPrecision,%f\nSpecificity,%f" %\
(ensembleAUC, accuracy, recall, precision, specificity))
with open(os.path.join(saveFolderPath, "TestResults.csv"), mode = "w", newline = "") as csvFile:
csvWriter = csv.writer(csvFile)
csvWriter.writerow(["DataIndex", "Ensembled"])
for i, dataIndex in enumerate(testAnswers[0].DataIndexes):
csvWriter.writerow([dataIndex, str(rawResults[i])])
return ensembleAUC, ensembleFPR, ensembleTPR
def MultiTaskEnsembleTest(testAnswers, saveFolderPath):
foldPredict = np.array([testAnswer.Outputs[0][:, 1].numpy() for testAnswer in testAnswers])
label = testAnswers[0].Labels[:, 0].numpy()
rawResults = np.mean(foldPredict, axis = 0)
    predict = (rawResults > 0.5).astype(int)
    P = (predict == 1).astype(int)
    N = (predict == 0).astype(int)
TP = np.sum(P * label)
FP = np.sum(P * (1 - label))
TN = np.sum(N * (1 - label))
FN = np.sum(N * label)
accuracy, recall, precision, specificity = ClassificationMetrics(TP, FP, TN, FN)
ensembleAUC, ensembleFPR, ensembleTPR = CalculateAUC(rawResults, label)
print("\nEnsemble Test Results:")
print("AUC,%f\nAccuracy,%f\nRecall,%f\nPrecision,%f\nSpecificity,%f" %\
(ensembleAUC, accuracy, recall, precision, specificity))
foldPredicts = []
labels = []
for i in range(11):
foldPredict = np.array([testAnswer.Outputs[i].numpy() for testAnswer in testAnswers])
foldPredict = np.mean(foldPredict, axis = 0)
label = testAnswers[0].Labels[:, i].numpy()
foldPredicts.append(foldPredict)
labels.append(label)
with open(os.path.join(saveFolderPath, "TestResults.csv"), mode = "w", newline = "") as csvFile:
csvWriter = csv.writer(csvFile)
csvWriter.writerow(["DataIndex", \
"Malignancy", "", \
"Composition", "", "", "", \
"Echogenicity", "", "", "", "", \
"Shape", "", \
"Margin", "", \
"IrregularOrIobulated", "", \
"ExtraThyroidalExtension", "", \
"LargeCometTail", "", \
"Macrocalcification", "", \
"Peripheral", "", \
"Punctate", ""])
for r, dataIndex in enumerate(testAnswers[0].DataIndexes):
row = [str(dataIndex)]
for i in range(11):
row += list(foldPredicts[i][r, :])
csvWriter.writerow(row)
return ensembleAUC, ensembleFPR, ensembleTPR
def SingleTaskClassificationPrintAndPlot(validationAnswers, testAnswers, saveFolderPath):
numOfFold = len(validationAnswers)
#Accuracy, Recall, Precision, Specificity, AUC
validationAverages = [0] * 5
testAverages = [0] * 5
validationAUCs = []
validationFPRs = []
validationTPRs = []
testAUCs = []
testFPRs = []
testTPRs = []
print(",,,Validation,,,,,,Test,,,,")
print("Fold,Accuracy,Recall,Precision,Specificity,AUC,,Accuracy,Recall,Precision,Specificity,AUC,")
for i in range(numOfFold):
#Validation
validationAUC, validationFPR, validationTPR, validationBestThreshold =\
CalculateAUC(validationAnswers[i].Outputs[:, 1].numpy(), validationAnswers[i].Labels.numpy(), needThreshold = True)
validationAUCs.append(validationAUC)
validationFPRs.append(validationFPR)
validationTPRs.append(validationTPR)
validationAverages[0] += validationAnswers[i].Accuracy
validationAverages[1] += validationAnswers[i].Recall
validationAverages[2] += validationAnswers[i].Precision
validationAverages[3] += validationAnswers[i].Specificity
validationAverages[4] += validationAUC
print("%d," % i, end = "")
print("%f," % validationAnswers[i].Accuracy, end = "")
print("%f," % validationAnswers[i].Recall, end = "")
print("%f," % validationAnswers[i].Precision, end = "")
print("%f," % validationAnswers[i].Specificity, end = "")
print("%f,," % validationAUC, end = "")
#Test
testAUC, testFPR, testTPR, testBestThreshold =\
CalculateAUC(testAnswers[i].Outputs[:, 1].numpy(), testAnswers[i].Labels.numpy(), needThreshold = True)
testAUCs.append(testAUC)
testFPRs.append(testFPR)
testTPRs.append(testTPR)
testAverages[0] += testAnswers[i].Accuracy
testAverages[1] += testAnswers[i].Recall
testAverages[2] += testAnswers[i].Precision
testAverages[3] += testAnswers[i].Specificity
testAverages[4] += testAUC
print("%f," % testAnswers[i].Accuracy, end = "")
print("%f," % testAnswers[i].Recall, end = "")
print("%f," % testAnswers[i].Precision, end = "")
print("%f," % testAnswers[i].Specificity, end = "")
print("%f," % testAUC)
validationAverages = np.array(validationAverages) / numOfFold
testAverages = np.array(testAverages) / numOfFold
print("Average,", end = "")
for v in validationAverages:
print("%f," % v, end = "")
print(",", end = "")
for v in testAverages:
print("%f," % v, end = "")
print()
ensembleAUC, ensembleFPR, ensembleTPR = SingleTaskEnsembleTest(testAnswers, saveFolderPath)
DrawPlots(validationFPRs, validationTPRs, validationAUCs,\
testFPRs, testTPRs, testAUCs,\
ensembleFPR, ensembleTPR, ensembleAUC,\
validationAnswers, saveFolderPath, numOfFold)
def MultiTaskClassificationPrintAndPlot(validationAnswers, testAnswers, saveFolderPath):
numOfFold = len(validationAnswers)
#Accuracy, Recall, Precision, Specificity, AUC
validationAverages = [0] * 5
testAverages = [0] * 5
validationAUCs = []
validationFPRs = []
validationTPRs = []
testAUCs = []
testFPRs = []
testTPRs = []
print(",,,Validation,,,,,,Test,,,,")
print("Fold,Accuracy,Recall,Precision,Specificity,AUC,,Accuracy,Recall,Precision,Specificity,AUC,")
for i in range(numOfFold):
#Validation
validationAUC, validationFPR, validationTPR, validationBestThreshold =\
CalculateAUC(validationAnswers[i].Outputs[0][:, 1].numpy(), validationAnswers[i].Labels[:, 0].numpy(), needThreshold = True)
validationAUCs.append(validationAUC)
validationFPRs.append(validationFPR)
validationTPRs.append(validationTPR)
validationAverages[0] += validationAnswers[i].Accuracy[0]
validationAverages[1] += validationAnswers[i].Recall[0]
validationAverages[2] += validationAnswers[i].Precision[0]
validationAverages[3] += validationAnswers[i].Specificity[0]
validationAverages[4] += validationAUC
print("%d," % i, end = "")
print("%f," % validationAnswers[i].Accuracy[0], end = "")
print("%f," % validationAnswers[i].Recall[0], end = "")
print("%f," % validationAnswers[i].Precision[0], end = "")
print("%f," % validationAnswers[i].Specificity[0], end = "")
print("%f,," % validationAUC, end = "")
#Test
testAUC, testFPR, testTPR, testBestThreshold =\
CalculateAUC(testAnswers[i].Outputs[0][:, 1].numpy(), testAnswers[i].Labels[:, 0].numpy(), needThreshold = True)
testAUCs.append(testAUC)
testFPRs.append(testFPR)
testTPRs.append(testTPR)
testAverages[0] += testAnswers[i].Accuracy[0]
testAverages[1] += testAnswers[i].Recall[0]
testAverages[2] += testAnswers[i].Precision[0]
testAverages[3] += testAnswers[i].Specificity[0]
testAverages[4] += testAUC
print("%f," % testAnswers[i].Accuracy[0], end = "")
print("%f," % testAnswers[i].Recall[0], end = "")
print("%f," % testAnswers[i].Precision[0], end = "")
print("%f," % testAnswers[i].Specificity[0], end = "")
print("%f," % testAUC)
validationAverages = np.array(validationAverages) / numOfFold
testAverages = np.array(testAverages) / numOfFold
print("Average,", end = "")
for v in validationAverages:
print("%f," % v, end = "")
print(",", end = "")
for v in testAverages:
print("%f," % v, end = "")
print()
ensembleAUC, ensembleFPR, ensembleTPR = MultiTaskEnsembleTest(testAnswers, saveFolderPath)
DrawPlots(validationFPRs, validationTPRs, validationAUCs,\
testFPRs, testTPRs, testAUCs,\
ensembleFPR, ensembleTPR, ensembleAUC,\
validationAnswers, saveFolderPath, numOfFold)
def ClassificationPrintAndPlot(validationAnswers, testAnswers, saveFolderPath):
if type(validationAnswers[0]) is SingleTaskClassificationAnswer:
SingleTaskClassificationPrintAndPlot(validationAnswers, testAnswers, saveFolderPath)
if type(validationAnswers[0]) is MultiTaskClassificationAnswer:
        MultiTaskClassificationPrintAndPlot(validationAnswers, testAnswers, saveFolderPath)
import datetime
from django.test import TestCase
from django.utils import timezone
from django.urls import reverse
from .models import Question, Choice
def create_question(question_text, days):
"""
Create a question with the given `question_text` and published the
given number of `days` offset to now (negative for questions published
in the past, positive for questions that have yet to be published).
"""
time = timezone.now() + datetime.timedelta(days=days)
return Question.objects.create(question_text=question_text, pub_date=time)
def add_vote_to_choice(choice):
choice.votes += 1
class QuestionModelTests(TestCase):
def test_was_published_recently_with_future_question(self):
"""
was_published_recently() returns False for questions whose pub_date
is in the future.
"""
time = timezone.now() + datetime.timedelta(days=30)
future_question = Question(pub_date=time)
self.assertIs(future_question.was_published_recently(), False)
def test_was_published_recently_with_old_question(self):
"""
was_published_recently() returns False for questions whose pub_date
is older than 1 day.
"""
time = timezone.now() - datetime.timedelta(days=1, seconds=1)
old_question = Question(pub_date=time)
self.assertIs(old_question.was_published_recently(), False)
def test_was_published_recently_with_recent_question(self):
"""
was_published_recently() returns True for questions whose pub_date
is within the last day.
"""
time = timezone.now() - datetime.timedelta(hours=23, minutes=59, seconds=59)
recent_question = Question(pub_date=time)
self.assertIs(recent_question.was_published_recently(), True)
class QuestionIndexViewTests(TestCase):
def test_no_questions(self):
"""
If no questions exist, an appropriate message is displayed.
"""
response = self.client.get(reverse('polls:index'))
self.assertEqual(response.status_code, 200)
self.assertContains(response, "No polls are available.")
self.assertQuerysetEqual(response.context['latest_question_list'], [])
def test_past_question(self):
"""
Questions with a pub_date in the past are displayed on the
index page.
"""
create_question(question_text="Past question.", days=-30)
response = self.client.get(reverse('polls:index'))
self.assertQuerysetEqual(
response.context['latest_question_list'],
['<Question: Past question.>']
)
def test_future_question(self):
"""
Questions with a pub_date in the future aren't displayed on
the index page.
"""
create_question(question_text="Future question.", days=30)
response = self.client.get(reverse('polls:index'))
self.assertContains(response, "No polls are available.")
self.assertQuerysetEqual(response.context['latest_question_list'], [])
def test_future_question_and_past_question(self):
"""
Even if both past and future questions exist, only past questions
are displayed.
"""
create_question(question_text="Past question.", days=-30)
create_question(question_text="Future question.", days=30)
response = self.client.get(reverse('polls:index'))
self.assertQuerysetEqual(
response.context['latest_question_list'],
['<Question: Past question.>']
)
def test_two_past_questions(self):
"""
The questions index page may display multiple questions.
"""
create_question(question_text="Past question 1.", days=-30)
create_question(question_text="Past question 2.", days=-5)
response = self.client.get(reverse('polls:index'))
self.assertQuerysetEqual(
response.context['latest_question_list'],
['<Question: Past question 2.>', '<Question: Past question 1.>']
)
class QuestionDetailViewTests(TestCase):
def test_future_question(self):
"""
The detail view of a question with a pub_date in the future
returns a 404 not found.
"""
future_question = create_question(question_text='Future question.', days=5)
url = reverse('polls:detail', args=(future_question.id,))
response = self.client.get(url)
self.assertEqual(response.status_code, 404)
def test_past_question(self):
"""
The detail view of a question with a pub_date in the past
displays the question's text.
"""
past_question = create_question(question_text='Past Question.', days=-5)
url = reverse('polls:detail', args=(past_question.id,))
response = self.client.get(url)
self.assertContains(response, past_question.question_text)
class QuestionResultViewTests(TestCase):
    def test_votes_for_choice(self):
        """
        add_vote_to_choice() increments the vote count of a choice.
        """
        question = create_question(question_text='Testing that voting for choices work?', days=-1)
        choice = Choice.objects.create(question=question, choice_text='Yes', votes=0)
        add_vote_to_choice(choice)
        self.assertEqual(choice.votes, 1)
if __name__ != '__main__' or __name__ == '__main__':
    import sqlite3 as sql

    DB = sql.connect('shop.db')
    cursor = DB.cursor()
    cursor.execute("CREATE TABLE IF NOT EXISTS products(ID integer PRIMARY KEY, name Text, price Real, category Text)")

    class Product:
        ID = 0
        name = ''
        price = 0.0
        category = ''

        @classmethod
        def addProduct(cls, name, price, cate):
            cls.name = name
            cls.price = price
            cls.category = cate
            cursor.execute(
                f"INSERT INTO products(name, price, category) VALUES ('{cls.name}', {cls.price}, '{cls.category}')")
            DB.commit()

        @classmethod
        def deleteProduct(cls, id):
            cls.ID = id
            cursor.execute(f"DELETE FROM products WHERE ID = {cls.ID}")
            DB.commit()

        @classmethod
        def modifyProduct(cls, id, name, price, cate):
            cls.ID = id
            cls.name = name
            cls.price = price
            cls.category = cate
            cursor.execute(f"UPDATE products set name = '{cls.name}' where ID = '{cls.ID}'")
            cursor.execute(f"UPDATE products set price = '{cls.price}' where ID = '{cls.ID}'")
            cursor.execute(f"UPDATE products set category = '{cls.category}' where ID = '{cls.ID}'")
            DB.commit()

        @classmethod
        def selectPrice(cls, name):
            cls.name = name
            cursor.execute(f"SELECT price FROM products WHERE name = '{cls.name}'")
            result = cursor.fetchone()
            print(f"Price = {result[0]} $")
            return f"Price = {result[0]} $"

        @classmethod
        def showAll(cls):
            cursor.execute("SELECT * FROM products")
            count = 0
            result = cursor.fetchall()
            result2 = ''
            while count != len(result):
                result1 = f"ID: {result[count][0]}\tName: {result[count][1]}\tPrice: {result[count][2]}\tCategory: {result[count][3]}\n"
                result2 += result1
                count += 1
            return result2

        @classmethod
        def showAProduct(cls, ID):
            cls.ID = ID
            cursor.execute(f"SELECT * FROM products WHERE ID = {cls.ID}")
            result = cursor.fetchone()
            return f"ID: {result[0]} Name: {result[1]} Price: {result[2]} "

        @classmethod
        def showProductID(cls, name):
            cls.name = name
            cursor.execute(f"SELECT ID FROM products WHERE name = '{cls.name}'")
            result = cursor.fetchone()
            return f"Product ID is: {result[0]}"

        @classmethod
        def showAllNames(cls, cate):
            cls.category = cate
            cursor.execute(f"SELECT name FROM products WHERE category = '{cls.category}'")
            result = cursor.fetchall()
            re = []
            cont = 0
            for i in result:
                re += result[cont]
                cont += 1
            return re

        @classmethod
        def saveClose(cls):
            DB.commit()
            DB.close()
| [
"noreply@github.com"
] | noreply@github.com |
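The `Product` methods in `DB.py` above interpolate values straight into SQL with f-strings, which works but leaves the queries open to SQL injection. A minimal sketch of the same insert/select flow using `sqlite3` placeholder parameters — run against a throwaway in-memory database so nothing touches `shop.db`, with a made-up `espresso` row as sample data — might look like this:

```python
import sqlite3

db = sqlite3.connect(':memory:')   # throwaway DB so shop.db is untouched
cur = db.cursor()
cur.execute("CREATE TABLE products(ID integer PRIMARY KEY, name Text, price Real, category Text)")

# '?' placeholders let the sqlite3 driver escape values itself,
# unlike f-string interpolation
cur.execute("INSERT INTO products(name, price, category) VALUES (?, ?, ?)",
            ("espresso", 2.5, "drinks"))
db.commit()

cur.execute("SELECT price FROM products WHERE name = ?", ("espresso",))
price = cur.fetchone()[0]
db.close()
print(price)  # -> 2.5
```

The same placeholder style would drop into each `Product` classmethod with only the argument tuples changing.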
de826dfae0fa88b16b9f28a90bfb267c18c844c7 | ff2b4b972865ab82464d550934329117500d3195 | /bfg9000/tools/mkdir_p.py | f12325a7350d79273dadc255603b865d4c34c859 | [
"BSD-3-Clause"
] | permissive | juntalis/bfg9000 | cd38a9194e6c08a4fbcf3be29f37c00bfa532588 | 594eb2aa7c259855e7658d69fe84acb6dad890fa | refs/heads/master | 2021-08-08T21:35:33.896506 | 2017-11-11T08:47:57 | 2017-11-11T08:47:57 | 110,331,128 | 0 | 0 | null | 2017-11-11T08:46:05 | 2017-11-11T08:46:04 | null | UTF-8 | Python | false | false | 396 | py | from . import tool
from .common import SimpleCommand
@tool('mkdir_p')
class MkdirP(SimpleCommand):
    def __init__(self, env):
        default = 'doppel -p' if env.platform.name == 'windows' else 'mkdir -p'
        SimpleCommand.__init__(self, env, name='mkdir_p', env_var='MKDIR_P',
                               default=default)

    def _call(self, cmd, path):
        return cmd + [path]
| [
"jporter@mozilla.com"
] | jporter@mozilla.com |
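`MkdirP` above picks `doppel -p` on Windows (where `mkdir -p` isn't available natively) and plain `mkdir -p` elsewhere, then simply appends the target path to the command. The same pattern can be sketched without the bfg9000 plumbing — here `platform.system()` stands in for bfg9000's own `env.platform.name` check, which this sketch assumes behaves equivalently:

```python
import platform

def default_mkdir_command():
    # bfg9000 substitutes `doppel -p` on Windows, where `mkdir -p` isn't native
    return 'doppel -p' if platform.system() == 'Windows' else 'mkdir -p'

def mkdir_p_call(cmd, path):
    # mirrors MkdirP._call: append the target directory to the base command
    return cmd + [path]

base = default_mkdir_command().split()
print(mkdir_p_call(base, 'build/out'))
```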
e9324eef34fb658a2a2410163dc0d91ffe9b2fdd | d33e7882ea7152395253149ecceeee0884f6abc6 | /KeyedCipher.py | 4bd4ec0a6f989cb9fac1ffc2da44e9940e73a3ad | [] | no_license | meghna-2210/Ciphers | 794cd9247515d51f0c9f6c74446a1029879f9379 | d4fad54bd45f34ec4a3534408d5f8237e28ef743 | refs/heads/main | 2023-06-11T08:17:25.051546 | 2021-07-01T20:13:10 | 2021-07-01T20:13:10 | 382,145,436 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,163 | py | import string
import secrets
def convert_to_int(key):
    key = list(key)
    return [int(k) for k in key]


def permuteGrid(grid, key):
    numRows, numCols = len(grid), len(grid[0])
    gridPerm = [[''] * numCols for i in range(numRows)]
    for r in range(numRows):
        for idx, val in enumerate(key):
            gridPerm[r][idx] = grid[r][val - 1]
    return gridPerm


def invertKey(key):
    invKey = [0] * len(key)
    for i in range(len(key)):
        invKey[key[i] - 1] = 1 + i
    return invKey


def encryptKeyed(plainText, k):
    k = convert_to_int(k)
    l = len(plainText)
    numCols = len(k)
    numRows = (l + numCols - 1) // numCols
    plainText += ''.join(
        secrets.choice(string.ascii_letters)
        for i in range(numCols * numRows - l))
    print(plainText)
    numRows += 1
    grid = [[''] * numCols for i in range(numRows)]
    for i in range(numCols):
        grid[0][i] = 1 + i
    idx = 0
    for r in range(1, numRows):
        for c in range(numCols):
            grid[r][c] = plainText[idx]
            idx += 1
    gridPerm = permuteGrid(grid, k)
    E = ""
    for col in range(numCols):
        for row in range(1, numRows):
            E += gridPerm[row][col]
    return E


def decryptKeyed(cipherText, k):
    k = invertKey(convert_to_int(k))
    l = len(cipherText)
    numCols = len(k)
    numRows = 1 + l // numCols
    grid = [[''] * numCols for i in range(numRows)]
    gridPerm = [[''] * numCols for i in range(numRows)]
    for i in range(numCols):
        grid[0][i] = 1 + i
    idx = 0
    for c in range(numCols):
        for r in range(1, numRows):
            grid[r][c] = cipherText[idx]
            idx += 1
    gridPerm = permuteGrid(grid, k)
    D = ""
    for r in range(1, numRows):
        for c in range(numCols):
            D += gridPerm[r][c]
    return D


# Driver code
if __name__ == "__main__":
    plain_text = input('Enter the string to be encrypted: ')
    key = input("Enter the key: ")
    cipher = encryptKeyed(plain_text, key)
    print("Encrypted Message: {}".format(cipher))
    print("Decrypted Message: {}".format(decryptKeyed(cipher, key)))
| [
"noreply@github.com"
] | noreply@github.com |
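The keyed columnar transposition in `KeyedCipher.py` above can be checked round-trip without the interactive driver. The following is a condensed re-sketch of the same idea — same 1-based digit key, random letter padding, and column-wise read-out, but without the header row of column numbers — not the file's exact code:

```python
import secrets
import string

def permute(row, key):
    # reorder one grid row according to the 1-based key digits
    return [row[k - 1] for k in key]

def encrypt(plain, key):
    key = [int(c) for c in key]
    cols = len(key)
    rows = -(-len(plain) // cols)                          # ceiling division
    plain += ''.join(secrets.choice(string.ascii_letters)  # random padding
                     for _ in range(rows * cols - len(plain)))
    grid = [list(plain[r * cols:(r + 1) * cols]) for r in range(rows)]
    grid = [permute(row, key) for row in grid]
    # read the permuted grid column by column
    return ''.join(grid[r][c] for c in range(cols) for r in range(rows))

def decrypt(cipher, key):
    key = [int(c) for c in key]
    inv = [0] * len(key)
    for i, k in enumerate(key):
        inv[k - 1] = i + 1                                 # invert the permutation
    cols, rows = len(key), len(cipher) // len(key)
    grid = [[cipher[c * rows + r] for c in range(cols)] for r in range(rows)]
    return ''.join(''.join(permute(row, inv)) for row in grid)

cipher = encrypt('attackatdawn', '3142')
print(decrypt(cipher, '3142'))  # -> attackatdawn (length divides evenly, so no padding)
```

As in the original, random padding survives decryption, so when the plaintext length does not divide the key length the recovered string only starts with the plaintext.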
48f8756bb27e520f9b03a71ef2a2d39744eb78ca | 3c8b2fca42bbc011a645d0df8b3983cdcee99e01 | /Pi_Python_Code/face_recognition.py | c71553bf644a6d4411d8a343f92190bbd45bb38f | [
"MIT"
] | permissive | buvnswrn/Smart-Home-Manager | 6edb3dfa802aca75ce61fc4eb29efbb714b6575b | aa94740f432bb968d00909e2af2a92c89c7c8546 | refs/heads/master | 2022-07-21T03:34:17.874933 | 2022-07-20T04:52:34 | 2022-07-20T04:52:34 | 101,571,333 | 2 | 1 | NOASSERTION | 2022-01-05T18:52:31 | 2017-08-27T18:15:28 | JavaScript | UTF-8 | Python | false | false | 3,106 | py | from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
import numpy as np
import Id as findid
import db
def recognise():
    facecascade = cv2.CascadeClassifier('Haar/haarcascade_frontalcatface.xml')
    eye = cv2.CascadeClassifier('Haar/haarcascade_eye.xml')
    spec = cv2.CascadeClassifier('Haar/haarcascade_eye_tree_eyeglasses.xml')
    count = 0
    recognizer1 = cv2.face.createLBPHFaceRecognizer()
    recognizer2 = cv2.face.createEigenFaceRecognizer()
    recognizer1.load('trainer/trainedData1.xml')
    recognizer2.load('trainer/trainedData2.xml')
    username = "Bhuvan"
    # Initialize and start the video frame capture
    cam = PiCamera()
    cam.resolution = (160, 120)
    cam.framerate = 32
    rawCapture = PiRGBArray(cam, size=(160, 120))
    # allow the camera to warmup
    time.sleep(0.1)
    lastTime = time.time() * 1000.0
    # Loop
    for frame in cam.capture_continuous(rawCapture, format="bgr", use_video_port=True):
        # Read the video frame
        image = frame.array
        # Convert the captured frame into grayscale
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # Get all faces from the video frame
        faces = facecascade.detectMultiScale(
            gray,
            scaleFactor=1.1,
            minNeighbors=5,
            minSize=(30, 30),
            flags=cv2.CASCADE_SCALE_IMAGE
        )
        print(time.time() * 1000.0 - lastTime, " Found {0} faces!".format(len(faces)))
        lastTime = time.time() * 1000.0
        # For each face in faces
        for (x, y, w, h) in faces:
            # Draw a circle around the face
            # cv2.rectangle(img, (x-20,y-20), (x+w+20,y+h+20), (0,255,0), 4)
            cv2.circle(image, (x + w // 2, y + h // 2), int((w + h) / 3), (255, 255, 255), 1)
            facecrp = cv2.resize((gray[y:y + h, x:x + w]), (110, 110))
            # Recognize which ID the face belongs to
            Id, confidence = recognizer1.predict(facecrp)
            Id1, confidence1 = recognizer2.predict(facecrp)
            # Check whether the ID exists
            Name = findid.ID2Name(Id, confidence)
            Name2 = findid.ID2Name(Id1, confidence1 / 100)
            print("LBPH:", Name)
            print("Eigen:", Name2)
            # print(Id1, confidence1, Name, Name2, username, count)
            if count == 0:
                username = Name2
                count += 1
            if count > 0 and username == Name2:
                count += 1
            if count == 10:
                break
            findid.DispID(x, y, w, h, Name, gray)
            if Name2 is not None:
                cv2.putText(image, Name2, (x + w // 2 - (len(Name2) * 7 // 2), y - 20),
                            cv2.FONT_HERSHEY_DUPLEX, .4, [255, 255, 255])
            else:
                findid.DispID(x, y, w, h, "Face Not Recognized", gray)
        cv2.imshow('Face', image)
        rawCapture.truncate(0)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
        if count == 10:
            break
    print(username)
    cv2.imwrite("tmp/face.jpg", image)
    db.fetch(username, "tmp/face.jpg")
    cam.close()
    cv2.destroyAllWindows()
    return username


# recognise()
| [
"noreply@github.com"
] | noreply@github.com |
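`recognise()` above only accepts an identity after the same name has come back from the recognizer ten times. A simplified, stand-alone sketch of that confirmation idea — deliberately mirroring the file's behaviour of not resetting the count on a mismatch — could be:

```python
def confirm_identity(predictions, needed=10):
    # accept a name once it has been predicted `needed` times
    username, count = None, 0
    for name in predictions:
        if count == 0:
            username, count = name, 1
        elif name == username:
            count += 1          # a mismatch neither helps nor resets the count
        if count == needed:
            return username
    return None                 # stream ended before any name was confirmed

print(confirm_identity(['Ana'] * 12, needed=10))  # -> Ana
```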
7eddbc7b95a49c0fbdf23d178fb7efe07f2a9248 | 3b642fe4bda6ab4b5529481285f1f31449c2ce32 | /ex11.py | 011f030da2b402d13703cc12ebce4afb15ba192c | [] | no_license | Arianazarkaman/python-hardway | f0dc490281fc2c5b915b2371863dd2ce04206e14 | a6fd874b8db09d4662d4203bd116eb28fea24163 | refs/heads/master | 2023-01-21T13:15:55.947782 | 2020-11-30T07:03:14 | 2020-11-30T07:03:14 | 286,235,592 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 226 | py | print("How old are you?", end=' ')
age = input()
print("How tall are you?", end=' ')
height = input()
print("How much do you weigh?", end=' ')
weight = input()
print(f"So, you're {age} old, {height} tall and {weight} heavy.")
| [
"arianazarkaman@yahoo.com"
] | arianazarkaman@yahoo.com |
75884c7471e9dd66681bd4ab738f236b9c0c488b | 024808b8fe97561b1df10d7030b00406cdea5ad4 | /map/__manifest__.py | 156437f17ded4ef45cd7533e4cb88199bc5d0661 | [
"MIT"
] | permissive | jaidis/Map-plugin-for-Odoo | ca42fec2fc6cfbc5cecab5ff64c4fbdbe30eb6ed | 09163b94017f1029a668fe14b14e2f2dd97a68ee | refs/heads/master | 2020-04-22T11:38:51.353286 | 2019-02-19T13:10:32 | 2019-02-19T13:10:32 | 170,348,262 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 433 | py | {
    'name': 'Map Plugin',
    'version': '0.1',
    'description': 'Basic model for maps',
    'category': 'Tools',
    'summary': 'Odoo module that uses the Leafletjs and Mapbox tools to display maps',
    'author': 'Manuel Munoz',
    'depends': ['web'],
    'installable': True,
    'auto_install': False,
    'application': True,
    'data': ['views/menu.xml', 'views/location.xml', 'views/contact.xml']
}
| [
"heidiaricel@hotmail.com"
] | heidiaricel@hotmail.com |
4a888d759547b059ec71cec92a291a9997a3c4ba | f2b2b4e6f6f0753433cd092ee09d6b4206cc4ea5 | /convert/convert_new_coreference.py | 76b28e86bbd25a30e650bc9b224eadae62fdf120 | [] | no_license | anshiquanshu66/analyse_bert | 894800d837b8d12b9f00b031f7ad016c6fd91efe | aa9b614fd96971825f71f31d6182aa55919966cb | refs/heads/master | 2022-03-30T14:00:32.929097 | 2020-01-03T06:14:39 | 2020-01-03T06:14:39 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,490 | py | import json
import argparse
from tqdm import tqdm
import re
coreference = {'version': 'coreference_new', 'data': []}
new_coref = {'version': 'coreference_new', 'data': []}
def convert_data_all(data_file, read_data_file, save_file):
    with open(data_file, 'r', encoding='utf-8') as f:
        with open(read_data_file, 'a+', encoding='utf-8') as w:
            data = json.load(f)['data']
            example_count = 0
            for para in tqdm(data):
                context = para['paragraphs'][0]['context']
                question = para['paragraphs'][0]['qas'][0]['question']
                start_pos = [m.start() for m in re.finditer('``', context)]
                if not start_pos:
                    continue
                for pos in start_pos:
                    coref = ''
                    for index in range(pos + 3, len(context)):
                        if context[index] == "'":
                            coref = context[pos + 3:index - 1].lower()
                    if coref != '' and coref in question:
                        new_coref['data'].append(para)
                        example_count += 1
                        # p = para['paragraphs'][0]
                        # qa = p['qas'][0]
                        # qp_pair = {}
                        # qp_pair['question'] = qa['question']
                        #
                        # query_index, para_index = qa['sync_pair'].keys(), qa['sync_pair'].values()
                        # query_index = [i for i in query_index][0]
                        # para_index = [i for i in para_index][0]
                        # # qp_pair['query_sync_tokens'] = [int(query_index)]
                        #
                        # ans_token_pos = p['char_to_word_offset_para'][para_index]
                        # res = expand_sync(ans_token_pos, p['context'].split(), p['qas'][0]['answer']['text'])
                        # if res == -1:
                        #     continue
                        # else:
                        #     qp_pair['query_sync_tokens'] = [res + len(qa['question'].split())]
                        #
                        # qp_pair['para_sync_tokens'] = [i + ans_token_pos + len(qa['question'].split()) for i in range(len(qa['answer']['text'].split()))]
                        # w.write(qa['question'] + '\n')
                        # # w.write(p['doc_tokens_para'][res])
                        # w.write(str(qp_pair['para_sync_tokens']) + '\n')
                        # w.write(str(qp_pair['query_sync_tokens']) + '\n')
                        #
                        # # w.write(str([p['doc_tokens_para'][i - len(qa['question'].split())] for i in qp_pair['para_sync_tokens']]) + ' ')
                        #
                        # qp_pair['paragraph'] = p['context']
                        # w.write(p['context'] + '\n')
                        # w.write('\n')
                        # coreference['data'].append(qp_pair)
                        # example_count += 1
    with open(save_file, 'w', encoding='utf-8') as fout:
        json.dump(new_coref, fout)
    return example_count


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--data_file', required=True, default='')
    parser.add_argument('--read_data_file', required=True, default='')
    parser.add_argument('--save_file', required=True, default='')
    args = parser.parse_args()
    example_count = convert_data_all(args.data_file, args.read_data_file, args.save_file)
    print('convert %d new coreference complete' % example_count)
| [
"caijie@pku.edu.cn"
] | caijie@pku.edu.cn |
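`convert_data_all` above scans the context for LaTeX-style quoted spans (opened by `` and closed by '') and keeps the paragraph when the quoted text reappears in the question. The core of that check can be exercised on its own — this regex version is an assumed equivalent of the file's index arithmetic, with a made-up example sentence:

```python
import re

def quoted_spans(context):
    # spans written as ``text'' (LaTeX-style quotes), lower-cased like the original
    return [m.group(1).lower() for m in re.finditer(r"``\s?(.*?)\s?''", context)]

context = "He said ``The Cat Sat'' and left."
question = "where did the cat sat go?"
spans = quoted_spans(context)
print(spans)                 # -> ['the cat sat']
print(spans[0] in question)  # -> True
```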
8ee5c4ff0ed4aa486b8cac243bd512996bb6c74e | b599e8531a940ee32ea47018ea0aea2789d5ea3f | /flask/lib/python3.5/site-packages/ebcli/operations/useops.py | 28e6940661b775176b1501643c32f1cccd027c94 | [] | no_license | HarshitaSingh97/FetchGoogleTrends | 3bfaba9ac2b365530beeb8f740c6ca8de09e84e0 | 2acc62d42b1a4fc832c78fc1e4290d7531e25dcd | refs/heads/master | 2022-12-10T14:30:47.772224 | 2018-07-13T18:25:46 | 2018-07-13T18:25:46 | 138,040,665 | 3 | 0 | null | 2022-07-06T19:49:43 | 2018-06-20T14:02:16 | Python | UTF-8 | Python | false | false | 1,490 | py | # Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from ebcli.objects.exceptions import ServiceError, NotFoundError
from ebcli.lib import elasticbeanstalk, codecommit
from ebcli.operations import commonops, gitops
def switch_default_environment(env_name):
    __verify_environment_exists(env_name)
    commonops.set_environment_for_current_branch(env_name)


def switch_default_repo_and_branch(repo_name, branch_name):
    __verify_codecommit_branch_and_repository_exist(repo_name, branch_name)
    gitops.set_repo_default_for_current_environment(repo_name)
    gitops.set_branch_default_for_current_environment(branch_name)


def __verify_environment_exists(env_name):
    elasticbeanstalk.get_environment(env_name=env_name)


def __verify_codecommit_branch_and_repository_exist(repo_name, branch_name):
    try:
        codecommit.get_branch(repo_name, branch_name)
    except ServiceError:
        raise NotFoundError("CodeCommit branch not found: {}".format(branch_name))
| [
"harshitasingh1397@gmail.com"
] | harshitasingh1397@gmail.com |
0dc860e73fdb9073bea7b31d75831fe246b55cd2 | 35c3999aa3f6a9e31ae6f9170ac0235da4fe7e11 | /irekua_rest_api/serializers/devices/physical_devices.py | c9b5828b1b4bdec7cd965b6ca25e906f17db16f2 | [
"BSD-2-Clause"
] | permissive | CONABIO-audio/irekua-rest-api | 28cf9806330c8926437542ae9152b8a7da57714f | 35cf5153ed7f54d12ebad2ac07d472585f04e3e7 | refs/heads/master | 2022-12-12T09:24:18.217032 | 2020-08-15T21:01:20 | 2020-08-15T21:01:20 | 219,046,247 | 0 | 4 | BSD-4-Clause | 2022-12-08T10:54:47 | 2019-11-01T19:03:10 | Python | UTF-8 | Python | false | false | 2,098 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from rest_framework import serializers
from irekua_database.models import PhysicalDevice
from irekua_rest_api.serializers.base import IrekuaModelSerializer
from irekua_rest_api.serializers.base import IrekuaHyperlinkedModelSerializer
from irekua_rest_api.serializers.users import users
from . import devices
class SelectSerializer(IrekuaModelSerializer):
    class Meta:
        model = PhysicalDevice
        fields = (
            'url',
            'id',
        )


class ListSerializer(IrekuaModelSerializer):
    type = serializers.CharField(
        read_only=True,
        source='device.device_type.name')
    brand = serializers.CharField(
        read_only=True,
        source='device.brand.name')
    model = serializers.CharField(
        read_only=True,
        source='device.model')

    class Meta:
        model = PhysicalDevice
        fields = (
            'url',
            'id',
            'serial_number',
            'type',
            'brand',
            'model',
        )


class DetailSerializer(IrekuaHyperlinkedModelSerializer):
    device = devices.SelectSerializer(many=False, read_only=True)
    owner = users.SelectSerializer(many=False, read_only=True)

    class Meta:
        model = PhysicalDevice
        fields = (
            'url',
            'serial_number',
            'owner',
            'metadata',
            'bundle',
            'device',
            'created_on',
            'modified_on',
        )


class CreateSerializer(IrekuaModelSerializer):
    class Meta:
        model = PhysicalDevice
        fields = (
            'serial_number',
            'device',
            'metadata',
            'bundle',
        )

    def create(self, validated_data):
        user = self.context['request'].user
        validated_data['owner'] = user
        return super().create(validated_data)


class UpdateSerializer(IrekuaModelSerializer):
    class Meta:
        model = PhysicalDevice
        fields = (
            'serial_number',
            'metadata',
        )
| [
"santiago.mbal@gmail.com"
] | santiago.mbal@gmail.com |