blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 2 616 | content_id stringlengths 40 40 | detected_licenses listlengths 0 69 | license_type stringclasses 2
values | repo_name stringlengths 5 118 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringlengths 4 63 | visit_date timestamp[us] | revision_date timestamp[us] | committer_date timestamp[us] | github_id int64 2.91k 686M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 23
values | gha_event_created_at timestamp[us] | gha_created_at timestamp[us] | gha_language stringclasses 220
values | src_encoding stringclasses 30
values | language stringclasses 1
value | is_vendor bool 2
classes | is_generated bool 2
classes | length_bytes int64 2 10.3M | extension stringclasses 257
values | content stringlengths 2 10.3M | authors listlengths 1 1 | author_id stringlengths 0 212 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4219a4b68fda829e5ffe9f53e3fc479e6f4e4f2f | 26f6313772161851b3b28b32a4f8d255499b3974 | /Python/PseudoPalindromicPathsinaBinaryTree.py | f55438ead603aea16a74885f9461cc385a4c486d | [] | no_license | here0009/LeetCode | 693e634a3096d929e5c842c5c5b989fa388e0fcd | f96a2273c6831a8035e1adacfa452f73c599ae16 | refs/heads/master | 2023-06-30T19:07:23.645941 | 2021-07-31T03:38:51 | 2021-07-31T03:38:51 | 266,287,834 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,315 | py | """
Given a binary tree where node values are digits from 1 to 9. A path in the binary tree is said to be pseudo-palindromic if at least one permutation of the node values in the path is a palindrome.
Return the number of pseudo-palindromic paths going from the root node to leaf nodes.
Example 1:
Input: root = [2,3,1,3,1,null,1]
Output: 2
Explanation: The figure above represents the given binary tree. There are three paths going from the root node to leaf nodes: the red path [2,3,3], the green path [2,1,1], and the path [2,3,1]. Among these paths only red path and green path are pseudo-palindromic paths since the red path [2,3,3] can be rearranged in [3,2,3] (palindrome) and the green path [2,1,1] can be rearranged in [1,2,1] (palindrome).
Example 2:
Input: root = [2,1,1,1,3,null,null,null,null,null,1]
Output: 1
Explanation: The figure above represents the given binary tree. There are three paths going from the root node to leaf nodes: the green path [2,1,1], the path [2,1,3,1], and the path [2,1]. Among these paths only the green path is pseudo-palindromic since [2,1,1] can be rearranged in [1,2,1] (palindrome).
Example 3:
Input: root = [9]
Output: 1
Constraints:
The given binary tree will have between 1 and 10^5 nodes.
Node values are digits from 1 to 9.
"""
# Definition for a binary tree node.
from collections import Counter

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

class Solution:
    def pseudoPalindromicPaths(self, root: TreeNode) -> int:
        def isPseudoPalindrome(string):
            """
            Return whether a string is a pseudo-palindrome:
            some permutation of it is a palindrome iff at most
            one character occurs an odd number of times.
            """
            counts = Counter(string)
            odds = sum(v % 2 for v in counts.values())
            return odds < 2

        def dfs(node, string):
            nonlocal res
            if node:
                string += str(node.val)
                if not node.left and not node.right:
                    res += int(isPseudoPalindrome(string))
                dfs(node.left, string)
                dfs(node.right, string)

        res = 0
        dfs(root, '')
        return res
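A common alternative to building the path string at every leaf is to track digit-count parity in a bitmask; a minimal standalone sketch (not part of the solution above, and the helper name is mine) of the palindrome-permutation test:

```python
# A multiset of digits 1-9 can be permuted into a palindrome iff at most
# one digit has an odd count. XOR toggles the parity bit for each digit,
# so the final mask has at most one bit set exactly in that case.
def can_form_palindrome(digits):
    mask = 0
    for d in digits:
        mask ^= 1 << d  # toggle parity bit for digit d
    return mask & (mask - 1) == 0  # true iff zero or one bit set
```

Carrying this mask down the recursion makes the per-leaf check O(1) instead of O(path length).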
| [
"here0009@163.com"
] | here0009@163.com |
19e7948bc88cf63a1a8cde28fed8b9edcaf91639 | 8e908074317c7260ca6076438e1b5d7ab59891cb | /2_1.py | 4fccfe37ddba158d76f5d2636da5ce2e3ef1e42f | [] | no_license | Kosuke-Szk/cp_seminar | 172f62666ec4cfdc037a9b8d659b27f5134f2c6e | 60daeef2929ae6cd531b8a5c1ea0e96e95cc043f | refs/heads/master | 2020-09-29T00:22:02.461969 | 2019-12-09T15:14:15 | 2019-12-09T15:14:15 | 226,901,359 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 503 | py | # input
# 3
# 2 1 3 13
# 1 3 2 13
# 3 2 1 10
eps = 1e-5
n = int(input())
A = []
for _ in range(n):
    A.append(list(map(int, input().split())))
for i in range(n):
    piv = A[i][i]
    if abs(piv) < eps:
        print('pivot number is too small.')
        exit()
    for j in range(n+1):
        A[i][j] /= piv
    for k in range(n):
        if k != i:
            d = A[k][i]
            for j in range(n+1):
                A[k][j] -= d*A[i][j]
for l in range(n):
    print('x%d' % (l+1), '=', A[l][n])
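The loop above aborts whenever a diagonal pivot happens to be small, even if the system is solvable after a row swap. A sketch of the same Gauss-Jordan elimination with partial pivoting (function name and interface are mine, not from the original file):

```python
def gauss_jordan(A, eps=1e-9):
    # A is an n x (n+1) augmented matrix, modified in place.
    # Partial pivoting: swap in the row with the largest |entry| in the
    # current column instead of giving up on a small diagonal pivot.
    n = len(A)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[p][i]) < eps:
            raise ValueError('matrix is singular (or nearly so)')
        A[i], A[p] = A[p], A[i]
        piv = A[i][i]
        for j in range(i, n + 1):
            A[i][j] /= piv
        for k in range(n):
            if k != i:
                d = A[k][i]
                for j in range(i, n + 1):
                    A[k][j] -= d * A[i][j]
    return [row[n] for row in A]
```

On the sample input in the header comment (rows `2 1 3 13`, `1 3 2 13`, `3 2 1 10`) this yields x1=1, x2=2, x3=3.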
| [
"brave.ksk@gmail.com"
] | brave.ksk@gmail.com |
ab3b07a51d4282d422421a37e93ecf49e4a5aa7c | 666827427e7a14ae2b2a9d4f9d7fa72cddcef7f0 | /default_settings.py | edf210538b2f0f9e624c082986b0bb8a0242f87a | [
"MIT"
] | permissive | cdax/ifttt_uber | 9a3221de49d3c87e16ed14d5560ef59b393739a7 | 3177084558ea4f845856374b25a1f4080bb222a3 | refs/heads/master | 2020-12-25T14:13:30.726811 | 2016-06-09T04:42:48 | 2016-06-09T04:42:48 | 60,712,684 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 343 | py | import os
UBER_BASE_API_URL = 'https://api.uber.com/v1'
UBER_AUTHORIZATION_ENDPOINT = 'https://login.uber.com/oauth/v2/authorize'
UBER_TOKEN_ENDPOINT = 'https://login.uber.com/oauth/v2/token'
UBER_CLIENT_ID = os.environ['UBER_CLIENT_ID']
UBER_CLIENT_SECRET = os.environ['UBER_CLIENT_SECRET']
IFTTT_MAKER_KEY = os.environ['IFTTT_MAKER_KEY']
| [
"das.chitharanjan@gmail.com"
] | das.chitharanjan@gmail.com |
5466f050d0de88a016903000c79668a9e733316f | f46eb0ad334a347b7cf3d95528dca1ad9a1b5301 | /nephele/AwsProcessor.py | 4bd1b897be029eb88ddcc043076437e60a0916a2 | [
"MIT"
] | permissive | earlye/nephele | 10aa2f8162fe80bc5e5de2b48aa81b3037af79ac | a7dadc68f4124671457f09119419978c4d22013e | refs/heads/master | 2021-01-23T00:34:20.709903 | 2018-12-03T16:14:35 | 2018-12-03T16:14:35 | 85,740,680 | 0 | 0 | MIT | 2021-08-19T17:53:23 | 2017-03-21T18:46:36 | Python | UTF-8 | Python | false | false | 15,723 | py | from SilentException import SilentException
from SlashException import SlashException
from stdplusAwsHelpers.AwsConnectionFactory import AwsConnectionFactory
from CommandArgumentParser import *
from stdplus import *
import cmd
import json
import os
import re
import signal
import sys
import traceback
import Config
from botocore.exceptions import ClientError
from pprint import pprint
def sshAddress(address,forwarding,replaceKey,keyscan,background,verbosity=0,command=None,ignoreHostKey=False,echoCommand=True,name=''):
    if replaceKey or keyscan:
        resetKnownHost(address)
    if keyscan:
        keyscanHost(address)
    args=["/usr/bin/ssh",address]
    if ignoreHostKey:
        args.extend(["-o","StrictHostKeyChecking=no",
                     "-o","UpdateHostKeys=yes"])
    if not forwarding == None:
        for forwardInfo in forwarding:
            if isInt(forwardInfo):
                forwardInfo = "{0}:localhost:{0}".format(forwardInfo)
            args.extend(["-L",forwardInfo])
        if background:
            args.extend(["-N","-n"])
    else:
        background = False # Background is ignored if not forwarding
    if verbosity > 0:
        args.append("-" + "v" * verbosity)
    if 'ssh-jump-host' in Config.config['selectedProfile']:
        if 'ssh-jump-user' in Config.config['selectedProfile']:
            args.extend(["-q","-J",'{}@{}'.format(Config.config['selectedProfile']['ssh-jump-user'],Config.config['selectedProfile']['ssh-jump-host'])])
        else:
            args.extend(["-q","-J",Config.config['selectedProfile']['ssh-jump-host']])
    if command:
        args.append(command)
    if echoCommand:
        print "{}{}".format(name," ".join(args))
    pid = fexecvp(args)
    if background:
        print "SSH Started in background. pid:{}".format(pid)
        AwsProcessor.backgroundTasks.append(pid)
    else:
        os.waitpid(pid,0)

def ssh(instanceId,interfaceNumber,forwarding,replaceKey,keyscan,background,verbosity=0,command=None,ignoreHostKey=False,echoCommand=True,name=''):
    if isIp(instanceId):
        sshAddress(instanceId,forwarding,replaceKey,keyscan,background,verbosity,command,ignoreHostKey=ignoreHostKey)
    else:
        client = AwsConnectionFactory.getEc2Client()
        response = client.describe_instances(InstanceIds=[instanceId])
        networkInterfaces = response['Reservations'][0]['Instances'][0]['NetworkInterfaces']
        if None == interfaceNumber:
            number = 0
            for interface in networkInterfaces:
                print "{0:3d} {1}".format(number,interface['PrivateIpAddress'])
                number += 1
        else:
            address = "{}".format(networkInterfaces[interfaceNumber]['PrivateIpAddress'])
            sshAddress(address,forwarding,replaceKey,keyscan,background,verbosity,command,ignoreHostKey=ignoreHostKey,echoCommand=echoCommand,name=name)
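The `-L` handling above expands a bare port into a full forwarding spec. A standalone sketch of that normalization (using `str.isdigit` in place of the `isInt` helper from `stdplus`, which is an assumption on my part):

```python
# A bare port such as "8080" becomes "8080:localhost:8080";
# a full "local:host:remote" spec is passed through unchanged.
def normalize_forward(spec):
    if spec.isdigit():
        return "{0}:localhost:{0}".format(spec)
    return spec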
class AwsProcessor(cmd.Cmd):
    backgroundTasks=[]
    resourceTypeAliases={ 'AWS::AutoScaling::AutoScalingGroup' : 'asg',
                          'AWS::CloudFormation::Stack' : 'stack',
                          'AWS::EC2::NetworkInterface' : 'eni',
                          'AWS::Logs::LogGroup' : 'logGroup' }

    processorFactory = None

    def __init__(self,prompt,parent):
        cmd.Cmd.__init__(self)
        self.raw_prompt = prompt
        self.prompt = prompt + "/: "
        self.parent = parent

    def emptyline(self):
        pass

    @staticmethod
    def killBackgroundTasks():
        for pid in AwsProcessor.backgroundTasks:
            print "Killing pid:{}".format(pid)
            os.kill(pid,signal.SIGQUIT)
    def onecmd(self, line):
        try:
            return cmd.Cmd.onecmd(self,line)
        except SystemExit, e:
            raise e
        except SlashException, e:
            if None == self.parent:
                pass
            else:
                raise e
        except SilentException:
            pass
        except ClientError as e:
            if e.response['Error']['Code'] == 'AccessDenied':
                print "ERROR: Access Denied. Maybe you need to run mfa {code}"
            traceback.print_exc()
        except Exception, other:
            traceback.print_exc()
        except:
            print "Unexpected error:", sys.exc_info()[0]

    def mfa_devices(self, awsProfile='default'):
        list_mfa_devices_command = ["aws","--profile",awsProfile,"--output","json","iam","list-mfa-devices"]
        result = run_cmd(list_mfa_devices_command)
        if result.retCode == 0:
            return json.loads("\n".join(result.stdout))['MFADevices']
        else:
            raise Exception('There was a problem fetching MFA devices from AWS')

    def load_arn_from_aws(self, awsProfile):
        devices = self.mfa_devices(awsProfile)
        if len(devices):
            return devices[0]['SerialNumber']
        else:
            raise Exception('No MFA devices were found for your account')

    def do_mfa(self, args):
        """
        Enter a 6-digit MFA token. Nephele will execute the appropriate
        `aws` command line to authenticate that token.

        mfa -h for more details
        """
        parser = CommandArgumentParser("mfa")
        parser.add_argument(dest='token',help='MFA token value')
        parser.add_argument("-p","--profile",dest='awsProfile',default=AwsConnectionFactory.instance.getProfile(),help='AWS profile to authenticate against')
        args = vars(parser.parse_args(args))

        token = args['token']
        awsProfile = args['awsProfile']
        arn = AwsConnectionFactory.instance.load_arn(awsProfile)

        credentials_command = ["aws","--profile",awsProfile,"--output","json","sts","get-session-token","--serial-number",arn,"--token-code",token]
        output = run_cmd(credentials_command) # Throws on non-zero exit :yey:
        credentials = json.loads("\n".join(output.stdout))['Credentials']
        AwsConnectionFactory.instance.setMfaCredentials(credentials,awsProfile)
    def do_up(self,args):
        """
        Navigate up by one level.
        For example, if you are in `(aws)/stack:.../asg:.../`, executing `up` will place you in `(aws)/stack:.../`.

        up -h for more details
        """
        parser = CommandArgumentParser("up")
        args = vars(parser.parse_args(args))
        if None == self.parent:
            print "You're at the root. Try 'quit' to quit"
        else:
            return True

    def do_slash(self,args):
        """
        Navigate back to the root level.
        For example, if you are in `(aws)/stack:.../asg:.../`, executing `slash` will place you in `(aws)/`.

        slash -h for more details
        """
        parser = CommandArgumentParser("slash")
        args = vars(parser.parse_args(args))
        if None == self.parent:
            print "You're at the root. Try 'quit' to quit"
        else:
            raise SlashException()

    def do_profile(self,args):
        """
        Select nephele profile

        profile -h for more details
        """
        parser = CommandArgumentParser("profile")
        parser.add_argument(dest="profile",help="Profile name")
        parser.add_argument('-v','--verbose',dest="verbose",action='store_true',help='verbose')
        args = vars(parser.parse_args(args))
        profile = args['profile']
        verbose = args['verbose']
        if verbose:
            print "Selecting profile '{}'".format(profile)
        selectedProfile = {}
        if profile in Config.config['profiles']:
            selectedProfile = Config.config['profiles'][profile]
        selectedProfile['name'] = profile
        Config.config['selectedProfile'] = selectedProfile
        awsProfile = profile
        if 'awsProfile' in selectedProfile:
            awsProfile = selectedProfile['awsProfile']
        AwsConnectionFactory.resetInstance(profile=awsProfile)

    def do_quit(self,args):
        """
        Exit nephele
        """
        raise SystemExit

    def childLoop(self,child):
        try:
            child.cmdloop()
        except SilentException, e:
            raise e
        except SlashException, e:
            raise e
        except Exception, e:
            print "Exception: {}".format(e)
            traceback.print_exc()
    def stackResource(self,stackName,logicalId):
        print "loading stack resource {}.{}".format(stackName,logicalId)
        stackResource = AwsConnectionFactory.instance.getCfResource().StackResource(stackName,logicalId)
        pprint(stackResource)
        if "AWS::CloudFormation::Stack" == stackResource.resource_type:
            pprint(stackResource)
            print "Found a stack w/ physical id:{}".format(stackResource.physical_resource_id)
            childStack = AwsConnectionFactory.instance.getCfResource().Stack(stackResource.physical_resource_id)
            print "Creating prompt"
            self.childLoop(AwsProcessor.processorFactory.Stack(childStack,logicalId,self))
        elif "AWS::AutoScaling::AutoScalingGroup" == stackResource.resource_type:
            scalingGroup = stackResource.physical_resource_id
            self.childLoop(AwsProcessor.processorFactory.AutoScalingGroup(scalingGroup,self))
        elif "AWS::EC2::NetworkInterface" == stackResource.resource_type:
            eniId = stackResource.physical_resource_id
            self.childLoop(AwsProcessor.processorFactory.Eni(eniId,self))
        elif "AWS::Logs::LogGroup" == stackResource.resource_type:
            self.childLoop(AwsProcessor.processorFactory.LogGroup(stackResource,self))
        elif "AWS::IAM::Role" == stackResource.resource_type:
            self.childLoop(AwsProcessor.processorFactory.Role(stackResource,self))
        else:
            pprint(stackResource)
            print("- description:{}".format(stackResource.description))
            print("- last_updated_timestamp:{}".format(stackResource.last_updated_timestamp))
            print("- logical_resource_id:{}".format(stackResource.logical_resource_id))
            print("- metadata:{}".format(stackResource.metadata.strip()))
            print("- physical_resource_id:{}".format(stackResource.physical_resource_id))
            print("- resource_status:{}".format(stackResource.resource_status))
            print("- resource_status_reason:{}".format(stackResource.resource_status_reason))
            print("- resource_type:{}".format(stackResource.resource_type))
            print("- stack_id:{}".format(stackResource.stack_id))
    def do_ssh(self,args):
        """
        SSH to an instance.
        Note: This command is extended in more specific contexts, for example inside Autoscaling Groups.

        ssh -h for more details
        """
        parser = CommandArgumentParser("ssh")
        parser.add_argument(dest='id',help='identifier of the instance to ssh to [aws instance-id or ip address]')
        parser.add_argument('-a','--interface-number',dest='interface-number',default='0',help='index of the network interface to ssh to')
        parser.add_argument('-ii','--ignore-host-key',dest='ignore-host-key',default=False,action='store_true',help='Ignore host key')
        parser.add_argument('-ne','--no-echo',dest='no-echo',default=False,action='store_true',help='Do not echo command')
        parser.add_argument('-L',dest='forwarding',nargs='*',help="port forwarding string: {localport}:{host-visible-to-instance}:{remoteport} or {port}")
        parser.add_argument('-R','--replace-key',dest='replaceKey',default=False,action='store_true',help="Replace the host's key. This is useful when AWS recycles an IP address you've seen before.")
        parser.add_argument('-Y','--keyscan',dest='keyscan',default=False,action='store_true',help="Perform a keyscan to avoid having to say 'yes' for a new host. Implies -R.")
        parser.add_argument('-B','--background',dest='background',default=False,action='store_true',help="Run in the background. (e.g., forward an ssh session and then do other stuff in aws-shell).")
        parser.add_argument('-v',dest='verbosity',default=0,action=VAction,nargs='?',help='Verbosity. The more instances, the more verbose')
        parser.add_argument('-m',dest='macro',default=False,action='store_true',help='{command} is a series of macros to execute, not the actual command to run on the host')
        parser.add_argument(dest='command',nargs='*',help="Command to run")
        args = vars(parser.parse_args(args))

        targetId = args['id']
        interfaceNumber = int(args['interface-number'])
        forwarding = args['forwarding']
        replaceKey = args['replaceKey']
        keyscan = args['keyscan']
        background = args['background']
        verbosity = args['verbosity']
        ignoreHostKey = args['ignore-host-key']
        noEcho = args['no-echo']

        if args['macro']:
            if len(args['command']) > 1:
                print("Only one macro may be specified with the -m switch.")
                return
            else:
                macro = args['command'][0]
                print("Macro:{}".format(macro))
                command = Config.config['ssh-macros'][macro]
        else:
            command = ' '.join(args['command'])
        ssh(targetId, interfaceNumber, forwarding, replaceKey, keyscan, background, verbosity, command, ignoreHostKey=ignoreHostKey, echoCommand=not noEcho)
    def do_config(self,args):
        """
        Deal with configuration. Available subcommands:

        * config print - print the current configuration
        * config reload - reload the current configuration from disk
        * config set - change a setting in the configuration
        * config save - save the configuration to disk

        config -h for more details
        """
        parser = CommandArgumentParser("config")
        subparsers = parser.add_subparsers(help='sub-command help',dest='command')
        subparsers._parser_class = argparse.ArgumentParser # This is to work around `TypeError: __init__() got an unexpected keyword argument 'prog'`
        parserPrint = subparsers.add_parser('print',help='Print the current configuration')
        parserPrint.add_argument(dest='keys',nargs='*',help='Key(s) to print')
        parserSet = subparsers.add_parser('set',help='Set a configuration value')
        parserSave = subparsers.add_parser('save',help='Save the current configuration')
        parserReload = subparsers.add_parser('reload',help='Reload the configuration from disk')
        args = vars(parser.parse_args(args))

        print("Command:{}".format(args['command']))
        {
            'print' : AwsProcessor.sub_configPrint,
            'set' : AwsProcessor.sub_configSet,
            'save' : AwsProcessor.sub_configSave,
            'reload' : AwsProcessor.sub_configReload
        }[args['command']]( self, args )

    def sub_configPrint(self,args):
        if not args['keys']:
            print("Current configuration:{}".format(Config.config))
        else:
            for key in args['keys']:
                subkeys = key.split('.')
                entry = Config.config
                for subkey in subkeys:
                    if subkey in entry:
                        entry = entry[subkey]
                    else:
                        entry = None
                        break # stop: remaining subkeys cannot be looked up in None
                print "{}: {}".format(key,entry)

    def sub_configSet(self,args):
        print("Set configuration:{}".format(args))

    def sub_configSave(self,args):
        print("Save configuration:{}".format(args))

    def sub_configReload(self,args):
        Config.loadConfig()
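The dotted-key walk in `sub_configPrint` can be factored into a small standalone helper; a sketch (the function name is mine), returning `None` as soon as any path segment is missing:

```python
# Walk a nested dict one dotted-path segment at a time.
# lookup({'a': {'b': 1}}, 'a.b') -> 1; missing segments yield None.
def lookup(config, dotted_key):
    entry = config
    for part in dotted_key.split('.'):
        if isinstance(entry, dict) and part in entry:
            entry = entry[part]
        else:
            return None
    return entry
```

The `isinstance` check also guards against a leaf value being indexed further, which the in-class version handles with an explicit `break`.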
| [
"earlye@gmail.com"
] | earlye@gmail.com |
4b1e6ff8dcab39ce71d92053b69511dbb5cc419d | 85a9ffeccb64f6159adbd164ff98edf4ac315e33 | /pysnmp-with-texts/APPIAN-STRATUM-MIB.py | 06de6f8880fa3548a8b448db96e72c44e32cc272 | [
"LicenseRef-scancode-warranty-disclaimer",
"LicenseRef-scancode-proprietary-license",
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
] | permissive | agustinhenze/mibs.snmplabs.com | 5d7d5d4da84424c5f5a1ed2752f5043ae00019fb | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | refs/heads/master | 2020-12-26T12:41:41.132395 | 2019-08-16T15:51:41 | 2019-08-16T15:53:57 | 237,512,469 | 0 | 0 | Apache-2.0 | 2020-01-31T20:41:36 | 2020-01-31T20:41:35 | null | UTF-8 | Python | false | false | 19,741 | py | #
# PySNMP MIB module APPIAN-STRATUM-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/APPIAN-STRATUM-MIB
# Produced by pysmi-0.3.4 at Wed May 1 11:23:58 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
acChassisCurrentTime, acChassisRingId = mibBuilder.importSymbols("APPIAN-CHASSIS-MIB", "acChassisCurrentTime", "acChassisRingId")
acOsap, AcOpStatus, AcNodeId = mibBuilder.importSymbols("APPIAN-SMI-MIB", "acOsap", "AcOpStatus", "AcNodeId")
ObjectIdentifier, OctetString, Integer = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "OctetString", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, ConstraintsIntersection, SingleValueConstraint, ConstraintsUnion, ValueSizeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "ConstraintsIntersection", "SingleValueConstraint", "ConstraintsUnion", "ValueSizeConstraint")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
IpAddress, ModuleIdentity, Bits, MibScalar, MibTable, MibTableRow, MibTableColumn, NotificationType, TimeTicks, Counter64, Gauge32, ObjectIdentity, Counter32, MibIdentifier, Integer32, iso, Unsigned32 = mibBuilder.importSymbols("SNMPv2-SMI", "IpAddress", "ModuleIdentity", "Bits", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "NotificationType", "TimeTicks", "Counter64", "Gauge32", "ObjectIdentity", "Counter32", "MibIdentifier", "Integer32", "iso", "Unsigned32")
TruthValue, DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "TruthValue", "DisplayString", "TextualConvention")
acStratum = ModuleIdentity((1, 3, 6, 1, 4, 1, 2785, 2, 9))
acStratum.setRevisions(('1900-08-22 00:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    if mibBuilder.loadTexts: acStratum.setRevisionsDescriptions(('Draft MIB for Engineering use only.',))
if mibBuilder.loadTexts: acStratum.setLastUpdated('0008220000Z')
if mibBuilder.loadTexts: acStratum.setOrganization('Appian Communications, Inc.')
if mibBuilder.loadTexts: acStratum.setContactInfo('Brian Johnson')
if mibBuilder.loadTexts: acStratum.setDescription('Appian Communications Stratum MIB contain the definitions for the configuration and control of Stratum Clock module hardware information and status.')
acStratumTable = MibTable((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1), )
if mibBuilder.loadTexts: acStratumTable.setStatus('current')
if mibBuilder.loadTexts: acStratumTable.setDescription('This table contains two rows for access and control of the Stratum-3 clock modules.')
acStratumEntry = MibTableRow((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1), ).setIndexNames((0, "APPIAN-STRATUM-MIB", "acStratumNodeId"))
if mibBuilder.loadTexts: acStratumEntry.setStatus('current')
if mibBuilder.loadTexts: acStratumEntry.setDescription('A row within the Stratum table containing access control and status information relating to the operation of the Stratum-3 clock module.')
acStratumNodeId = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 1), AcNodeId()).setMaxAccess("accessiblefornotify")
if mibBuilder.loadTexts: acStratumNodeId.setStatus('current')
if mibBuilder.loadTexts: acStratumNodeId.setDescription("The unique node identification number representing a chassis within a ring of OSAP's.")
acStratumClockSource = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("internal", 1), ("bits", 2), ("line", 3))).clone('internal')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumClockSource.setStatus('current')
if mibBuilder.loadTexts: acStratumClockSource.setDescription('This attribute determines the clock source.')
acStratumOpStatusModuleA = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 3), AcOpStatus()).setMaxAccess("readonly")
if mibBuilder.loadTexts: acStratumOpStatusModuleA.setStatus('current')
if mibBuilder.loadTexts: acStratumOpStatusModuleA.setDescription('This field indicates the current operational status for the clock card in slot 16, module A . Only the following values are applicable to the module: operational, offline, initializing, selfTesting, upgrading, standby, shuttingDown, failed, and hw not present.')
acStratumOpStatusModuleB = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 4), AcOpStatus()).setMaxAccess("readonly")
if mibBuilder.loadTexts: acStratumOpStatusModuleB.setStatus('current')
if mibBuilder.loadTexts: acStratumOpStatusModuleB.setDescription('This field indicates the current operational status for the clock card in slot 16, module B . Only the following values are applicable to the module: operational, offline, initializing, selfTesting, upgrading, standby, shuttingDown, failed, and hw not present.')
acStratumAlarmStatusModuleA = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 6))).setMaxAccess("readonly")
if mibBuilder.loadTexts: acStratumAlarmStatusModuleA.setStatus('current')
if mibBuilder.loadTexts: acStratumAlarmStatusModuleA.setDescription('This attribute contains the current status of the clock alarms. The acStratumAlarmStatus is a bit map represented as a sum. Normal may only be set if and only if no other alarms are set. The various bit positions are: 1 normal No alarm present 2 los Loss of Signal 4 lof Loss of Frame ')
acStratumAlarmStatusModuleB = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 6))).setMaxAccess("readonly")
if mibBuilder.loadTexts: acStratumAlarmStatusModuleB.setStatus('current')
if mibBuilder.loadTexts: acStratumAlarmStatusModuleB.setDescription('This attribute contains the current status of the clock alarms. The acStratumAlarmStatus is a bit map represented as a sum. Normal must be set if and oly if no other flash is set. The various bit positions are: 1 normal No alarm present 2 los Loss of Signal 4 lof Loss of Frame ')
acStratumCurrentClockSourceModuleA = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 7), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))).clone(namedValues=NamedValues(("unknown", 0), ("none", 1), ("bits-a", 2), ("bits-b", 3), ("line-slot1-port1", 4), ("line-slot1-port2", 5), ("line-slot2-port1", 6), ("line-slot2-port2", 7), ("holdover", 8), ("internal", 9)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: acStratumCurrentClockSourceModuleA.setStatus('current')
if mibBuilder.loadTexts: acStratumCurrentClockSourceModuleA.setDescription('This attribute displays the current source that the clock card is selecting.')
acStratumCurrentClockSourceModuleB = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 8), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))).clone(namedValues=NamedValues(("unknown", 0), ("none", 1), ("bits-a", 2), ("bits-b", 3), ("line-slot1-port1", 4), ("line-slot1-port2", 5), ("line-slot2-port1", 6), ("line-slot2-port2", 7), ("holdover", 8), ("internal", 9)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: acStratumCurrentClockSourceModuleB.setStatus('current')
if mibBuilder.loadTexts: acStratumCurrentClockSourceModuleB.setDescription('This attribute displays the current source that the clock card is selecting.')
acStratumLockoutReference = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 63))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumLockoutReference.setStatus('current')
if mibBuilder.loadTexts: acStratumLockoutReference.setDescription('This attribute is a bit mask of clock references that should be locked out from selection for the clock source. None can only be selected when no other lockout references are selected. The various bit positions are: 0 none No clock references are locked out from selection. 1 bits-a BITS source from clock module A is locked out. 2 bits-b BITS source from clock module B is locked out. 4 line-slot1 LINE timing source from SONET slot 1 is locked out. 8 line-slot2 LINE timing source from SONET slot 2 is locked out. 16 holdover-a Holdover from clock module A is locked out. 32 holdover-b Holdover from clock module B is locked out. ')
acStratumManualSwitch = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 10), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4))).clone(namedValues=NamedValues(("none", 0), ("bits-a", 1), ("bits-b", 2), ("line-slot1", 3), ("line-slot2", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumManualSwitch.setStatus('current')
if mibBuilder.loadTexts: acStratumManualSwitch.setDescription('This attribute will manually switch the clock references. If the clock reference does not exist, is locked out, or the reference has failed, the switch will not take place.')
acStratumForcedSwitch = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 11), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4))).clone(namedValues=NamedValues(("none", 0), ("bits-a", 1), ("bits-b", 2), ("line-slot1", 3), ("line-slot2", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumForcedSwitch.setStatus('current')
if mibBuilder.loadTexts: acStratumForcedSwitch.setDescription('This attribute will force switch the clock references. If the clock reference does not exist or is locked out, the switch will not take place.')
acStratumRevertiveRefSwitchEnabled = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 12), TruthValue().clone('false')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumRevertiveRefSwitchEnabled.setStatus('current')
if mibBuilder.loadTexts: acStratumRevertiveRefSwitchEnabled.setDescription('Setting of this attribute to true(1) will the reference to revert back to the original reference when that reference become ready again.')
acStratumClearAlarms = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 13), TruthValue().clone('false')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumClearAlarms.setStatus('current')
if mibBuilder.loadTexts: acStratumClearAlarms.setDescription('Setting of this attribute to true(1) will cause the alarm contacts to clear. Reading this attribute will always return false.')
acStratumLineTimingPortSlot1 = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 14), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2)).clone(1)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumLineTimingPortSlot1.setStatus('current')
if mibBuilder.loadTexts: acStratumLineTimingPortSlot1.setDescription('When configured for line timing, this value describes which port on the SONET card will be used to drive the line. This value is not applicable when not configured for line timing.')
acStratumLineTimingPortSlot2 = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 15), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2)).clone(1)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumLineTimingPortSlot2.setStatus('current')
if mibBuilder.loadTexts: acStratumLineTimingPortSlot2.setDescription('When configured for line timing, this value describes which port on the SONET card will be used to drive the line. This value is not applicable when not configured for line timing.')
acStratumBITSFramingType = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 16), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("esf", 1), ("d4", 2))).clone('esf')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: acStratumBITSFramingType.setStatus('current')
if mibBuilder.loadTexts: acStratumBITSFramingType.setDescription('When configured for BITS timing, this value describes the type of framing that will be used on the BITS interface. This value is not applicable when not configured for BITS timing.')
acStratumCurrentClockSourceSystem = MibTableColumn((1, 3, 6, 1, 4, 1, 2785, 2, 9, 1, 1, 17), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12))).clone(namedValues=NamedValues(("unknown", 0), ("bits-a", 1), ("bits-b", 2), ("line-slot1-port1", 3), ("line-slot1-port2", 4), ("line-slot2-port1", 5), ("line-slot2-port2", 6), ("holdover-clock-a", 7), ("holdover-clock-b", 8), ("internal-clock-a", 9), ("internal-clock-b", 10), ("internal-sonet-slot1", 11), ("internal-sonet-slot2", 12)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: acStratumCurrentClockSourceSystem.setStatus('current')
if mibBuilder.loadTexts: acStratumCurrentClockSourceSystem.setDescription('This attribute displays the current clock source that the system is selecting.')
acStratumTraps = MibIdentifier((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0))
acStratumFailedModuleATrap = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 1)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"))
if mibBuilder.loadTexts: acStratumFailedModuleATrap.setStatus('current')
if mibBuilder.loadTexts: acStratumFailedModuleATrap.setDescription('The stratum clock module failed.')
acStratumFailedModuleBTrap = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 2)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"))
if mibBuilder.loadTexts: acStratumFailedModuleBTrap.setStatus('current')
if mibBuilder.loadTexts: acStratumFailedModuleBTrap.setDescription('The stratum clock module failed.')
acStratumClockFailureModuleATrap = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 3)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"), ("APPIAN-STRATUM-MIB", "acStratumAlarmStatusModuleA"))
if mibBuilder.loadTexts: acStratumClockFailureModuleATrap.setStatus('current')
if mibBuilder.loadTexts: acStratumClockFailureModuleATrap.setDescription('Stratum clock agent has detected a clock timing failure.')
acStratumClockFailureModuleBTrap = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 4)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"), ("APPIAN-STRATUM-MIB", "acStratumAlarmStatusModuleB"))
if mibBuilder.loadTexts: acStratumClockFailureModuleBTrap.setStatus('current')
if mibBuilder.loadTexts: acStratumClockFailureModuleBTrap.setDescription('Stratum clock agent has detected a clock timing failure.')
acStratumRemovalModuleATrap = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 5)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"))
if mibBuilder.loadTexts: acStratumRemovalModuleATrap.setStatus('current')
if mibBuilder.loadTexts: acStratumRemovalModuleATrap.setDescription('The stratum clock module has been removed from the system.')
acStratumRemovalModuleBTrap = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 6)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"))
if mibBuilder.loadTexts: acStratumRemovalModuleBTrap.setStatus('current')
if mibBuilder.loadTexts: acStratumRemovalModuleBTrap.setDescription('The stratum clock module has been removed from the system.')
acStratumInsertedModuleATrap = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 7)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"))
if mibBuilder.loadTexts: acStratumInsertedModuleATrap.setStatus('current')
if mibBuilder.loadTexts: acStratumInsertedModuleATrap.setDescription('A stratum clock module has been inserted into the system.')
acStratumInsertedModuleBTrap = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 8)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"))
if mibBuilder.loadTexts: acStratumInsertedModuleBTrap.setStatus('current')
if mibBuilder.loadTexts: acStratumInsertedModuleBTrap.setDescription('A stratum clock module has been inserted into the system.')
acStratumClockModuleAOk = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 9)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"), ("APPIAN-STRATUM-MIB", "acStratumAlarmStatusModuleA"))
if mibBuilder.loadTexts: acStratumClockModuleAOk.setStatus('current')
if mibBuilder.loadTexts: acStratumClockModuleAOk.setDescription('Stratum clock agent has recovered clock timing.')
acStratumClockModuleBOk = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 10)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"), ("APPIAN-STRATUM-MIB", "acStratumAlarmStatusModuleB"))
if mibBuilder.loadTexts: acStratumClockModuleBOk.setStatus('current')
if mibBuilder.loadTexts: acStratumClockModuleBOk.setDescription('Stratum clock agent has recovered clock timing.')
acStratumSystemClockSourceChange = NotificationType((1, 3, 6, 1, 4, 1, 2785, 2, 9, 0, 11)).setObjects(("APPIAN-CHASSIS-MIB", "acChassisCurrentTime"), ("APPIAN-CHASSIS-MIB", "acChassisRingId"), ("APPIAN-STRATUM-MIB", "acStratumNodeId"), ("APPIAN-STRATUM-MIB", "acStratumCurrentClockSourceSystem"))
if mibBuilder.loadTexts: acStratumSystemClockSourceChange.setStatus('current')
if mibBuilder.loadTexts: acStratumSystemClockSourceChange.setDescription('Stratum clock source has changed to acStratumCurrentClockSourceSystem.')
mibBuilder.exportSymbols("APPIAN-STRATUM-MIB", acStratumClockFailureModuleATrap=acStratumClockFailureModuleATrap, acStratumManualSwitch=acStratumManualSwitch, acStratumClockModuleBOk=acStratumClockModuleBOk, acStratumRemovalModuleBTrap=acStratumRemovalModuleBTrap, acStratumBITSFramingType=acStratumBITSFramingType, acStratumTable=acStratumTable, acStratumRevertiveRefSwitchEnabled=acStratumRevertiveRefSwitchEnabled, acStratumRemovalModuleATrap=acStratumRemovalModuleATrap, acStratumFailedModuleBTrap=acStratumFailedModuleBTrap, acStratumLineTimingPortSlot2=acStratumLineTimingPortSlot2, acStratumInsertedModuleATrap=acStratumInsertedModuleATrap, acStratumFailedModuleATrap=acStratumFailedModuleATrap, acStratumTraps=acStratumTraps, acStratumAlarmStatusModuleA=acStratumAlarmStatusModuleA, acStratumNodeId=acStratumNodeId, acStratumClockModuleAOk=acStratumClockModuleAOk, acStratumOpStatusModuleB=acStratumOpStatusModuleB, acStratumForcedSwitch=acStratumForcedSwitch, acStratumCurrentClockSourceModuleA=acStratumCurrentClockSourceModuleA, acStratumAlarmStatusModuleB=acStratumAlarmStatusModuleB, acStratumCurrentClockSourceSystem=acStratumCurrentClockSourceSystem, acStratumClockSource=acStratumClockSource, acStratumCurrentClockSourceModuleB=acStratumCurrentClockSourceModuleB, PYSNMP_MODULE_ID=acStratum, acStratum=acStratum, acStratumLineTimingPortSlot1=acStratumLineTimingPortSlot1, acStratumSystemClockSourceChange=acStratumSystemClockSourceChange, acStratumEntry=acStratumEntry, acStratumOpStatusModuleA=acStratumOpStatusModuleA, acStratumClearAlarms=acStratumClearAlarms, acStratumLockoutReference=acStratumLockoutReference, acStratumClockFailureModuleBTrap=acStratumClockFailureModuleBTrap, acStratumInsertedModuleBTrap=acStratumInsertedModuleBTrap)
| [
"dcwangmit01@gmail.com"
] | dcwangmit01@gmail.com |
1b43082d768a96c889d523cd9c34162a613e63b8 | 5c883c87f337be7ffd52f49f0a4e6c72bbd58932 | /apps/almacenes/migrations/0026_auto_20170322_1009.py | 6bc93dfdfc7667e77ff7d1173f2e6f96fe4acf6f | [] | no_license | DARKDEYMON/Tesis-2-Vidaurre-J.C. | f1b0d8e8a593a9d4a585bdd14b21d4809d55ce9f | 4299cea2e990ee798b02724849d747bfd558b97d | refs/heads/master | 2021-06-20T09:25:53.273225 | 2017-05-25T22:20:31 | 2017-05-25T22:20:31 | 65,408,196 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,348 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.10.6 on 2017-03-22 14:09
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('almacenes', '0025_auto_20161029_1535'),
]
operations = [
migrations.AlterField(
model_name='herramientas',
name='decripcion',
field=models.CharField(max_length=100, unique=True),
),
migrations.AlterField(
model_name='insumos',
name='decripcion',
field=models.CharField(max_length=100, unique=True),
),
migrations.AlterField(
model_name='maquinaria_equipo',
name='decripcion',
field=models.CharField(max_length=100, unique=True),
),
migrations.AlterField(
model_name='material',
name='decripcion',
field=models.CharField(max_length=100, unique=True),
),
migrations.AlterField(
model_name='proveedor',
name='rason_social',
field=models.CharField(max_length=100, unique=True),
),
migrations.AlterField(
model_name='tipoactivo',
name='tipo',
field=models.CharField(max_length=60, unique=True),
),
]
| [
"darkdeymon04@gmail.com"
] | darkdeymon04@gmail.com |
f05403c9405ed398d5a9c6787a185a0aa4a7714c | b643295375635be92723725a3a139cb8b51f2a77 | /customer/migrations/0001_initial.py | 5132d4b5e9450ad2612e78de7b0a70f39c5156a7 | [] | no_license | Minwook11/wehuddling_test_repo | 660ba918e1d31d4515ff668cbf577368e6042fc6 | e4bd3e6972806d030f493e963fa5b8ed55df5242 | refs/heads/master | 2023-01-24T02:46:52.117065 | 2020-11-29T11:57:49 | 2020-11-29T11:57:49 | 315,496,350 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 852 | py | # Generated by Django 3.1.3 on 2020-11-27 02:39
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Customer',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=64)),
('phone_number', models.CharField(max_length=64, unique=True)),
('account', models.CharField(max_length=128, unique=True)),
('password', models.CharField(max_length=256, null=True)),
('new_address', models.CharField(max_length=256)),
('old_address', models.CharField(max_length=256)),
],
),
]
| [
"alsdnr4874@gmail.com"
] | alsdnr4874@gmail.com |
0b0e608d2f19c22e783a4d5221fa6fc1e6713dfb | 0aca1abc5f26938a7ef3b521a8e685223d361ff1 | /basic/wy_atom.py | 3036df64a9e99d10d5850dc762ce299ccc6ad72a | [] | no_license | soloapple/python_study | 92adca71797bb3efa7f9085e73eac19497abb7db | 97f8d09451de13e0a1d462462c397ce4d48eac2d | refs/heads/master | 2020-08-03T20:37:30.913209 | 2016-11-23T15:25:12 | 2016-11-23T15:25:12 | 73,537,441 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 78 | py | count = 1
while count <= 10:
print(count)
count += 1
input("atom")
| [
"root@solos-MacBook-Pro.local"
] | root@solos-MacBook-Pro.local |
21c212f2a026f32103f091e74dba2ff70fdfddde | 245bb7314fe5ce16a7870acee293248b4ef0c783 | /test.py | c1fb0596026504be48c11f784010c1a681bcf990 | [] | no_license | kitiv/Moving_tests | ef2a7823efcc3ba444bb857da5ab24071fe77640 | 7fe56c57f890645ae958fa1b0cd5d0a3eca72508 | refs/heads/master | 2022-12-17T12:48:07.667419 | 2020-09-22T01:56:42 | 2020-09-22T01:56:42 | 256,955,764 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,022 | py | from Phase_detection import *
from split_func import *
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
L = 20 # Distance
H = 180 / 250 # Growth
A = 24 # Age
S = 1 # Sex
W = 84 / 200 # Weigth
id = 'M00001'
dis = 'C:\\Users\\Александр\\Documents\\GitHub\\Moving_tests\\WT901WIFI\\WT901WIFI\\WIFI software-UDP mode pairing network\\data\\20200427'
name = dis + '\\22.log'
[NofS, lNofS] = split_func(name)  # split the array: [sensor IDs in array order, number of sensors]
Incl = np.genfromtxt(name, delimiter=',')[:, 2:26] # [0:-1:lNofS, 2:26]
# Incl2=Incl[[i for i, ltr in enumerate(NofS) if ltr == "WT4700001010"],5]
# x = [i for i, ltr in enumerate(NofS) if ltr == "WT4700000973"]
'Crop the array'
print('Select the end points of the signal')
plt.plot(Incl[:, 3:6])  # plot the angular velocities so the end points of the signal can be picked
th = plt.ginput(2)  # pick the two end points
plt.close()
x_th = [0, 0]  # coordinates of the end points
x_th[0] = int(th[0][0])
x_th[1] = int(th[1][0])
Incl1 = Incl[x_th[0]:x_th[1], :]  # crop the sensor data to the selected end points
NofS1 = NofS[x_th[0]:x_th[1]]  # crop the sensor-name array to the selected end points
d_name = list(set(NofS1))  # extract the unique sensor names
print('Sensors in use: ', d_name)
# sens_names={'W0038':'1 or Left_arm'}
# name_sen=sens_name[str(NofS(0))] -> 'Left_arm'
for i in d_name:  # loop over the sensors
    Incl2 = Incl1[[j for j, ltr in enumerate(NofS1) if ltr == i],
                  :]  # pull the rows belonging to this sensor out of the full array
    [p1, p1y, p2, p2y, p3, p3y, p4, p4y, p5, p5y, phase_time, step_time, NFog, TF] = Phase_detection(
        Incl2)  # step-phase detection function
d = {'p1': p1, 'p1y': p1y["peak_heights"],
'p2': p2, 'p2y': p2y["peak_heights"],
'p3': p3, 'p3y': p3y["peak_heights"],
'p4': p4, 'p4y': p4y["peak_heights"],
         'p5': p5, 'p5y': p5y["peak_heights"]}  # dict of step phases
print(d)
d2 = phase_time.transpose()
    frame = pd.DataFrame(d)  # assemble the frame
frame1 = pd.DataFrame(d2)
frame.to_csv(dis + '\\P0S' + i + '_phases.csv', index=False)
frame1.to_csv(dis + '\\P0S' + i + '_phases_time.csv', mode='w', index=False)
Nstep = len(p2)
    Twalk = p1[-1] - p1[0]  # to be verified
GP1 = L / Nstep / H
GP2 = L / Nstep / Twalk
    # GP3_1=0  # idea: integrate the angular velocity during leg swing to compute the maximum foot lift
GP3_2 = np.mean(p3y)
[GP4, GP5, GP6, GP7, GP8, GP9, GP10, GP11] = [np.mean(phase_time[0, :]), np.mean(phase_time[1, :]),
np.mean(phase_time[2, :]),
np.mean(phase_time[3, :]), np.std(phase_time[0, :]),
np.std(phase_time[1, :]),
np.std(phase_time[2, :]), np.std(phase_time[3, :])]
    GP12_1 = NFog  # point-based detection of N_FOG still to be built in
    GP12_2 = np.mean(TF)  # point-based detection of mean FOG duration still to be built in
    Out_mass = {'Growth': H, 'Sex': S, 'Age': A, 'Weight': W, 'GP1': GP1, 'GP2': GP2, 'GP3.2': GP3_2, 'GP4': GP4,
                'GP5': GP5, 'GP6': GP6, 'GP7': GP7, 'GP8': GP8, 'GP9': GP9, 'GP10': GP10, 'GP11': GP11,
                'GP12.1': GP12_1, 'GP12.2': GP12_2}  # dict of output parameters
    out_frame = pd.DataFrame([Out_mass])  # wrap in a list: a dict of all-scalar values needs an index otherwise
out_frame.to_csv(id + '.csv', index=False)
| [
"kotovsanyru@gmail.com"
] | kotovsanyru@gmail.com |
44bdb24f0256c2a9387cb9aabe3c13a946434d0c | 08a826f86e9c760d83ffe0e116a82414bbdfa03d | /src/powerpoint_image_exporter/powerpoint_image_exporter.py | bdcae1de98192308f7f5dcc4a45f42362dd18d91 | [] | no_license | tjkierzkowski/powerpoint-image-exporter-old | a75cef674ca7a7a8857305df077681f7672a4bbb | b0ec9d3b7961416e36210271bf703071803eb39d | refs/heads/master | 2022-12-12T12:50:09.713411 | 2020-09-20T00:18:55 | 2020-09-20T00:18:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 938 | py | import click
from pptx_export.pptx_export import DEFAULT_DIR, PowerPointImageExporter
from . import __version__
@click.command()
@click.argument(
"pptx_file_path",
metavar="<pptx file>",
required=1,
type=click.Path(exists=True, file_okay=True, dir_okay=False),
)
@click.option(
"-o",
"--output-dir",
"output_directory",
metavar="<output directory>",
help="full or relative path of either an empty or to be created "
"output directory for images.",
default=DEFAULT_DIR,
    show_default=f"{DEFAULT_DIR}",
)
@click.version_option(version=__version__)
def main(pptx_file_path, output_directory):
"""Export all images from a powerpoint lecture (.pptx) into a directory
pptx_file_path: full or relative path to the pptx file
"""
exporter = PowerPointImageExporter(pptx_file_path)
exporter.create_directory_for_images(output_directory)
exporter.copy_images_to_directory()
| [
"52253+tjkierzkowski@users.noreply.github.com"
] | 52253+tjkierzkowski@users.noreply.github.com |
a7a10c869e455f85d0277f3c8391df0683381241 | 742f8aa424b5ef4d9865dee98bebbd5f741a3831 | /tests/test_pregel.py | 8c876136c50ef8db82da2cb79530357b615bc4f3 | [
"MIT"
] | permissive | TZubiri/python-arango | a8be86f2cf9190c2d74d99eb2ef8f5f48b9f45c6 | 232c2d09c7bf9b5e0b71b7ab16fbce6682db383d | refs/heads/master | 2020-04-04T22:24:03.898075 | 2018-11-06T03:59:54 | 2018-11-06T03:59:54 | 156,322,851 | 0 | 0 | null | 2018-11-06T03:51:04 | 2018-11-06T03:51:03 | null | UTF-8 | Python | false | false | 1,823 | py | from __future__ import absolute_import, unicode_literals
from six import string_types
from arango.exceptions import (
PregelJobCreateError,
PregelJobGetError,
PregelJobDeleteError
)
from tests.helpers import (
assert_raises,
generate_string
)
def test_pregel_attributes(db, username):
assert db.pregel.context in ['default', 'async', 'batch', 'transaction']
assert db.pregel.username == username
assert db.pregel.db_name == db.name
assert repr(db.pregel) == '<Pregel in {}>'.format(db.name)
def test_pregel_management(db, graph):
# Test create pregel job
job_id = db.pregel.create_job(
graph.name,
'pagerank',
store=False,
max_gss=100,
thread_count=1,
async_mode=False,
result_field='result',
algorithm_params={'threshold': 0.000001}
)
assert isinstance(job_id, int)
# Test create pregel job with unsupported algorithm
with assert_raises(PregelJobCreateError) as err:
db.pregel.create_job(graph.name, 'invalid')
assert err.value.error_code == 10
# Test get existing pregel job
job = db.pregel.job(job_id)
assert isinstance(job['state'], string_types)
assert isinstance(job['aggregators'], dict)
assert isinstance(job['gss'], int)
assert isinstance(job['received_count'], int)
assert isinstance(job['send_count'], int)
assert isinstance(job['total_runtime'], float)
# Test delete existing pregel job
assert db.pregel.delete_job(job_id) is True
with assert_raises(PregelJobGetError) as err:
db.pregel.job(job_id)
assert err.value.error_code == 10
# Test delete missing pregel job
with assert_raises(PregelJobDeleteError) as err:
db.pregel.delete_job(generate_string())
assert err.value.error_code == 10
| [
"joohwan.oh@outlook.com"
] | joohwan.oh@outlook.com |
ceca0b7c612dbc730d0914f69b1966dcf1b11e30 | 937406981ada6607ab7ba7777da0c91f91a26428 | /diccionario.py | 89269177568820d8a477894c617cdabdf159cadf | [] | no_license | DevSoftw3/Reto1 | d4ea09a7370b35bd6dcd871717a313e23b5b212e | f5d3cbf29f7ead5374398fa0c3369533f7005328 | refs/heads/master | 2023-02-12T16:07:07.195273 | 2021-01-14T04:04:14 | 2021-01-14T04:04:14 | 328,731,475 | 0 | 1 | null | 2021-01-11T18:13:13 | 2021-01-11T16:52:53 | Python | UTF-8 | Python | false | false | 476 | py | # DICCIONARIO
# print(dicionario['azul']) to see what result it returns
# dicionario["Perro"] = 'Dog' to add or modify an entry
# del(dicionario['Azul']) removes an entry
# print(dicionario.get(7, " No existe lo que esta buscando")) to handle lookup errors
# print(dicionario.keys()) shows only the keys
# print(dicionario.values()) shows all the values
dicionario = {'Azul':'Blue','Rojo':'Red','Amerilla':'Lleyow'}
dicionario[7]='Hola Mundo'
print(dicionario.items()) | [
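# Added sketch: .get() and .items() cover the two most common safe-access
# patterns; 'Verde' below is a deliberately missing key used for illustration.
print(dicionario.get('Verde', 'Not found'))  # a missing key returns the default
for clave, valor in dicionario.items():
    print(clave, '->', valor)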
"marvin.roses@gmail.com"
] | marvin.roses@gmail.com |
6244ec064900b8dd809f7c79a459e071ac1fbc06 | cfa26ab2d83f25f88c61b040e385a8e2b80fad49 | /cmsplugin_cascade/cms_plugins.py | 8f455e4e6ff33669d4cff5e3df130c47f22dc72d | [
"MIT"
] | permissive | jrief/djangocms-cascade | e952ed65c5f8ec14a2d81b424b0797bc5a87413d | 6e4d5ec7d5cbcc076aa1ea9e16b7c55c07f0ef25 | refs/heads/master | 2023-07-07T07:40:20.368478 | 2022-09-13T14:52:53 | 2022-09-13T14:52:53 | 12,973,900 | 143 | 95 | MIT | 2022-05-11T08:16:45 | 2013-09-20T13:20:48 | Python | UTF-8 | Python | false | false | 1,088 | py | import sys
from importlib import import_module
from django.core.exceptions import ImproperlyConfigured
from . import app_settings
for module in app_settings.CASCADE_PLUGINS:
try:
# if a module was specified, load all plugins in module settings
module_settings = import_module('{}.settings'.format(module))
module_plugins = getattr(module_settings, 'CASCADE_PLUGINS', [])
for p in module_plugins:
try:
import_module('{}.{}'.format(module, p))
except ImportError as err:
traceback = sys.exc_info()[2]
msg = "Plugin {} as specified in {}.settings.CASCADE_PLUGINS could not be loaded: {}"
raise ImproperlyConfigured(msg.format(p, module, err.with_traceback(traceback)))
except ImportError:
try:
# otherwise try with cms_plugins in the named module
import_module('{}.cms_plugins'.format(module))
except ImportError:
# otherwise just use the named module as plugin
import_module('{}'.format(module))
| [
"jacob.rief@gmail.com"
] | jacob.rief@gmail.com |
211bdc6b9d5cb4a540272ac6391d3c876fc98729 | e1346bd1728483060286fa2a9167673134e830d3 | /findnoversion.py | b4dcb9bb191b83cfb1204a88524374ec72f1c675 | [] | no_license | checko/python_repo | c73078d9fe9c08d9296828b339cbdbdf62c98c4c | b44cd4f488633ccfdde2b961f459fb77713f04cf | refs/heads/master | 2020-07-25T20:20:17.893438 | 2015-08-18T06:52:32 | 2015-08-18T06:52:32 | 40,945,475 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 369 | py | import os
i=0
for root, dirlist, filelist in os.walk("./"):
toremove=[]
for ss in ['.repo','out']:
if ss in dirlist:
toremove.append(ss)
for ss in dirlist:
gpath = os.path.join(root,ss,'.git')
if os.path.isdir(gpath):
toremove.append(ss)
for ss in toremove:
dirlist.remove(ss)
if (len(dirlist)==0) and (len(toremove)==0):
        print(i, root)
i=i+1
| [
"checko@gmail.com"
] | checko@gmail.com |
131d35d1d565d10ef50c1195b5c02c860cd82332 | 8a05f1656094c6b2dcfed2326ea2441e0f2ec3ab | /7_shortestPath.py | 6bcf14f351b19deeb41ba80a70d930e4a39430cf | [] | no_license | HSJung93/codingTest | 5f0a755023089d3e48def7f72b0f8821360432ee | 71a9eb7fcc580e135b9b5a666962b1aecf4c4c7d | refs/heads/main | 2023-07-31T18:49:13.735469 | 2021-09-10T13:36:29 | 2021-09-10T13:36:29 | 317,752,984 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,073 | py | #DijkstraAlgorithm
import sys
input = sys.stdin.readline
INF = int(1e9)
n, m = 6, 11
start = 1
graph = [[] for i in range(n+1)]
visited = [False] * (n+1)
distance = [INF] * (n+1)
graph = [
[],
[(2, 2), (3, 3), (4, 1)],
[(3, 3), (4, 2)],
[(2, 3), (6, 5)],
[(3, 3), (5, 1)],
[(3, 1), (6, 2)],
[]
]
# return unvisited node index which is shortest
def get_smallest_node():
min_value = INF
index = 0
for i in range(1, n+1):
if distance[i] < min_value and not visited[i]:
min_value = distance[i]
index = i
return index
def dijkstra(start):
# distance of start node is 0
distance[start] = 0
visited[start] = True
for j in graph[start]:
distance[j[0]] = j[1]
for i in range(n-1):
now = get_smallest_node()
visited[now] = True
for j in graph[now]:
cost = distance[now] + j[1]
if cost < distance[j[0]]:
distance[j[0]] = cost
dijkstra(start)
for i in range(1, n+1):
if distance[i] == INF:
print("INFINITY")
else:
print(distance[i])
#priorityQueue and Heap
import heapq
#heapSort
def minHeap(iterable):
h = []
result = []
for value in iterable:
heapq.heappush(h, value)
for i in range(len(h)):
result.append(heapq.heappop(h))
return result
def maxHeap(iterable):
h = []
result = []
for value in iterable:
heapq.heappush(h, -value)
for i in range(len(h)):
result.append(-heapq.heappop(h))
return result
mess = [1, 3, 5, 7, 9 ,2, 4, 6, 8, 0]
minh = minHeap(mess)
maxh = maxHeap(mess)
print(minh)
print(maxh)
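# Added note: heapq also provides nsmallest/nlargest, which produce the same
# ordering as the manual push/pop loops in minHeap/maxHeap above.
print(heapq.nsmallest(len(mess), mess))  # same output as minHeap(mess)
print(heapq.nlargest(len(mess), mess))   # same output as maxHeap(mess)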
def dijkstar_heap(start):
q = []
heapq.heappush(q, (0, start))
distance[start] = 0
while q:
dist, now = heapq.heappop(q)
# check whether the node is visited
if distance[now] < dist:
continue
for i in graph[now]:
cost = dist + i[1]
if cost < distance[i[0]]:
distance[i[0]] = cost
heapq.heappush(q, (cost, i[0]))
dijkstar_heap(start)
for i in range(1, n+1):
if distance[i] == INF:
print("INFINITY")
else:
print(distance[i])
# FloydWarshall: shortest distance from every node to every node
# dynamic programming over a 2-dimensional table
# D_ab = min(D_ab, D_ak + D_kb)
n = 4
graph = [
[],
[[], 0, 4, INF, 6],
[[], 3, 0, 7, INF],
[[], 5, INF, 0, 4],
[[], INF, INF, 2, 0]
]
for k in range(1, n+1):
for a in range(1, n+1):
for b in range(1, n+1):
graph[a][b] = min(graph[a][b], graph[a][k] + graph[k][b])
for a in range(1, n+1):
for b in range(1, n+1):
if graph[a][b] == INF:
print("INFINITY", end=" ")
else:
print(graph[a][b], end=" ")
print()
#example
n, m, start= 3, 2, 1
graph = [
[],
[(2, 4), (3, 2)],
[],
[]
]
INF = int(1e9)
distance = [INF] * (n+1)
import heapq
#import sys
#input = sys.stdin.realine
def dijkstra(start):
q = []
heapq.heappush(q, (0, start))
distance[start] = 0
while q:
dist, now = heapq.heappop(q)
if distance[now] < dist:
continue
for i in graph[now]:
cost = dist + i[1]
if cost < distance[i[0]]:
distance[i[0]] = cost
heapq.heappush(q, (cost, i[0]))
dijkstra(start)
count = 0
max_distance = 0
for d in distance:
if d != 1e9:
count += 1
max_distance = max(max_distance, d)
print(count-1, max_distance)
#futureCity
n, m = 5, 7
x, k = 4, 5
graph = [
[INF,INF,INF,INF,INF,INF],
[INF, 0, 1, 1, 1, INF],
[INF, 1, 0, INF, 1, INF],
[INF, 1, INF, 0, 1, 1],
[INF, 1, 1, 1, 0, 1],
[INF, INF, INF, 1, 1,0]
]
for via in range(1, n + 1):  # separate loop variable so the destination node k is not overwritten
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            graph[a][b] = min(graph[a][b], graph[a][via] + graph[via][b])
distance = graph[1][k] + graph[k][x]
if distance >= INF:
print("-1")
else:
print(distance) | [
"jrps2212@gmail.com"
] | jrps2212@gmail.com |
8bdad359dcf597e9a4a118fba408d6d99665be07 | 0102d0999e74deada2aacb8ccfcdc5896a2064a8 | /_request.py | 813f53258f15457ff6a4976a513774c2ea72de36 | [
"Apache-2.0"
] | permissive | CashWin2020/VideoCrawlerEngine | 1b09921add00bb492c8b01dcb0569f5d20c7bed1 | 175bb488dbf29cb0a7d7d15a93536889d022d1fb | refs/heads/master | 2022-10-20T22:24:12.450461 | 2020-06-16T14:32:28 | 2020-06-16T14:32:28 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 14,112 | py | from functools import wraps, partial
from inspect import getfullargspec, iscoroutinefunction
from context import impl_ctx
from utils import current_time
from worker import get_worker
from traceback import format_exc
import threading
from queue import Queue
import re
class Request:
""" Request 请求对象是用来描述从脚本的开始到完成过程中的处理方式。
name: 请求名称
"""
name = None
WEIGHT = 1
__simple__ = None
@property
def progress(self):
return self.__progress__
def start_request(self, context=None):
if context is None:
context = {}
context = impl_ctx(context)
self.progress.enqueue()
return get_worker(self.name).submit(self, context)
def end_request(self):
""" 结束请求。"""
raise NotImplementedError
def subrequest(self):
""" 返回该请求的子请求。 """
return []
def error_handler(self, exception):
""" 异常处理。"""
self.progress.error(format_exc())
def getresponse(self):
""" 返回响应 """
return self.__progress__.details()
def get_data(self, name, default=None):
return self.__progress__.data.get(name, default)
def sketch(self):
sketch = self.__progress__.sketch()
sketch.update({
'name': self.name,
})
return sketch
def details(self, log=False):
return self.__progress__.details(log)
def stop(self):
return self.progress.stop()
def __repr__(self):
return f'<{self.__class__.__name__}>'
def __new__(cls, *args, **kwargs):
inst = object.__new__(cls)
inst.__progress__ = RequestProgress()
return inst
def requester(request_name,
weight=1,
sketch_data=(),
bases_cls=None,
root=False,
auto_search=True):
""" 简单请求构建器。
Args:
request_name: 请求者名称
weight: 当前请求器在百分比percent中所占的权重
sketch_data: 上传upload的数据中被sketch()返回的数据字段组成的列表。
bases_cls:
root:
auto_search:
"""
def wrapper(func):
nonlocal bases_cls
argnames, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations = getfullargspec(func)
@wraps(func)
def wrapped(*args, **kwargs):
_worker = partial(inner_worker, *args, **kwargs)
kws = {}
            # set defaults for the positional parameters
for i, v in enumerate(argnames[len(argnames) - len(defaults or ()):]):
kws[v] = defaults[i]
narg = min(len(args), len(argnames))
            # set the positional parameters
for i in range(narg):
kws[argnames[i]] = args[i]
            # move keyword arguments that name positional parameters
for k in tuple(kwargs):
if k in argnames:
kws[k] = kwargs.pop(k)
            # set defaults for the keyword-only parameters
for k in kwonlyargs:
kws[k] = kwargs.pop(k, kwonlydefaults[k])
            # collect the remaining undeclared arguments
kws.update({
'args': args[narg:],
'kwargs': kwargs
})
req = result(**kws)
req.end_request = _worker
if callable(initializer):
initializer(req)
if auto_search:
subs = _search_request(args)
subs.extend(_search_request(kwargs))
req.__subrequest__ = tuple(subs)
return req
initializer = None
def wrapped_init(init_func):
nonlocal initializer
initializer = init_func
return init_func
wrapped.initializer = wrapped_init
if iscoroutinefunction(func):
async def inner_worker(*args, **kwargs):
return await func(*args, **kwargs)
else:
def inner_worker(*args, **kwargs):
return func(*args, **kwargs)
def __init__(self, **kwargs):
self.args = ()
self.kwargs = {}
_ = {self.__setattr__(k, v) for k, v in kwargs.items()}
def __repr__(self):
return f'<{__name__}>'
def subrequest(self):
return self.__subrequest__
if sketch_data:
def sketch(self):
sk = Request.sketch(self)
for k in sketch_data:
sk[k] = self.get_data(k)
return sk
else:
sketch = Request.sketch
__name__ = f'{request_name.title()}Request'
__slots__ = tuple(list(argnames) + kwonlyargs + ['args', 'kwargs'])
class_namespace = {
'name': request_name,
'subrequest': subrequest,
'sketch': sketch,
'WEIGHT': weight,
'__slots__': __slots__,
'__init__': __init__,
'__repr__': __repr__,
'__doc__': func.__doc__,
'__subrequest__': (),
'__simple__': wrapped,
}
if bases_cls is None:
bases_cls = []
if root:
bases = (RootRequest,)
else:
bases = (Request,)
if bases[0] not in bases_cls:
bases_cls = bases + tuple(bases_cls)
result = type(__name__, bases_cls, class_namespace)
return wrapped
return wrapper
def get_requester(name):
""" 返回指定名称的请求器。
Args:
name: 请求器名称
"""
for req_cls in Request.__subclasses__():
if name == req_cls.name:
if req_cls.__simple__:
return req_cls.__simple__
else:
return req_cls
return None
def _is_related_types(obj):
return isinstance(obj, (Request, Option, Optional))
def _search_request(arg):
def _list_tuple_set(o):
for v in o:
if _is_related_types(v):
rs.append(v)
else:
_do(v)
def _dict(o):
for k, v in o.items():
if _is_related_types(k):
rs.append(k)
else:
_do(k)
if _is_related_types(v):
rs.append(v)
else:
_do(v)
def _do(o):
typ = type(o)
if typ in (list, tuple, set):
_list_tuple_set(o)
elif typ is dict:
_dict(o)
elif _is_related_types(o):
rs.append(o)
rs = []
_do(arg)
return rs
class RequestProgress:
EXPORT_ATTR = frozenset({
'percent', 'speed', 'timeleft', 'status'
})
EXPORT_METH = frozenset({
'upload', 'upload_default', 'start', 'close', 'task_done', 'get_data',
'error', 'success', 'info', 'warning', 'report', 'sketch', 'details', 'add_stopper'
})
__slots__ = ('data', 'logs', '_status', '_percent', '_speed', '_timeleft',
'__worker__', '_stoppers', '_stoppers', '_closed', '_lock', '_started')
def __init__(self):
self.data = {}
self.logs = []
self._status = REQ_READY
self._percent = 0
self._speed = 0
self._timeleft = float('inf')
self.__worker__ = None
self._stoppers = Queue()
self._lock = threading.Lock()
self._closed = False
self._started = False
@property
def status(self):
status = self._status
return status() if callable(status) else status
@status.setter
def status(self, value):
self._status = value
@property
def percent(self):
percent = self._percent
return percent() if callable(percent) else percent
@percent.setter
def percent(self, value):
self._percent = value
@property
def speed(self):
speed = self._speed
return speed() if callable(speed) else speed
@speed.setter
def speed(self, value):
self._speed = value
@property
def timeleft(self):
timeleft = self._timeleft
return timeleft() if callable(timeleft) else timeleft
@timeleft.setter
def timeleft(self, value):
self._timeleft = value
def sketch(self):
return {
'percent': self.percent,
'status': self.status,
'speed': self.speed,
'timeleft': self.timeleft,
'latest': (self.logs and self.logs[-1]) or ''
}
def details(self, log=False):
data = {k: v() if callable(v) else v for k, v in self.data.items()}
info = self.sketch()
info.update({
'data': data,
})
if log:
info['logs'] = self.logs
return info
def get_data(self, key, default=None):
return self.data.get(key, default)
def upload(self, **kwargs):
""" 上传数据。
:param
**kwargs: 描述信息
"""
for k, v in kwargs.items():
self.data[k] = v
def upload_default(self, key, default):
if key not in self.data:
self.data[key] = default
def enqueue(self, message=''):
self._status = REQ_QUEUING
self.percent = 0
self.report('ENQUEUE:' + message)
def start(self, worker=None):
with self._lock:
self._started = True
self._status = REQ_RUNNING
self.percent = 0
self.timeleft = float('inf')
self.report('START:')
self.__worker__ = worker
def stop(self):
self._status = REQ_STOPPED
with self._lock:
if self._started:
if self._closed:
return False
while True:
stopper = self._stoppers.get()
if stopper is None:
break
try:
stopper()
except:
pass
def close(self, *args, **kwargs):
self._stoppers.put(None)
def add_stopper(self, func):
self._stoppers.put(func)
def task_done(self, message=''):
if self.status == REQ_RUNNING:
self._status = REQ_DONE
self.percent = 100
self.timeleft = 0
self.report('TASK DONE:' + message)
def error(self, message):
self._status = REQ_ERROR
self.report('ERROR: ' + message)
def success(self, message):
self.report('SUCCESS: ' + message)
def info(self, message):
self.report('INFO: ' + message)
def warning(self, message):
self.report('WARNING: ' + message)
def report(self, message):
message = current_time() + ' ' + message
self.logs.append(message)
class Optional:
""" 可选请求列表 """
__slots__ = '_options', '_selected'
def __init__(self, options):
"""
:param
list: 可选择的项列表
sort_key: 项目排序的key
"""
self._options = options
self._selected = None
def __iter__(self):
return iter(self._options)
@property
def selected(self):
""" 返回被选择的项。"""
if self._selected is None:
raise ValueError('未选择的列表。')
return self._selected
def select(self, rule):
""" 根据rule来选择最恰当的选项。
:param
rule: 选择规则
- high: 最高质量 100
- middle: 中等质量 50
- low: 最低质量 1
- %d: 1-100 [1,100] - (注意: 倾向于高质量。)
"""
if rule == 'high':
rule = 100
elif rule == 'low':
rule = 1
elif rule == 'middle':
rule = 50
if isinstance(rule, int) and 1 <= rule <= 100:
selected = self._options[max(0, int((100-rule) * len(self._options) / 100) - 1)]
else:
selected = self._options[0]
self._selected = selected
return selected
def __getattr__(self, item):
return getattr(self._selected, item)
def __repr__(self):
return repr(self._selected)
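The quality-to-index mapping inside `Optional.select` above is compact enough to misread; here is a standalone sketch of the same arithmetic (the helper name `select_index` and the sample quality labels are illustrative, not part of the original module):

```python
# Re-derivation of the index formula from Optional.select: rule in [1, 100]
# maps onto the options list, biased toward the high-quality end (index 0).
def select_index(rule, n_options):
    named = {'high': 100, 'middle': 50, 'low': 1}  # same aliases select() accepts
    rule = named.get(rule, rule)
    if isinstance(rule, int) and 1 <= rule <= 100:
        return max(0, int((100 - rule) * n_options / 100) - 1)
    return 0  # fall back to the first (best) option

options = ['1080p', '720p', '480p', '360p']  # hypothetical quality labels
print(select_index('high', len(options)))    # 0 -> best quality
print(select_index('middle', len(options)))  # 1
print(select_index('low', len(options)))     # 2 -> near the low end
```

Note that 'low' does not necessarily reach the last index; the trailing `- 1` keeps the formula biased toward higher quality, as the docstring warns.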
class Option:
""" 可选列表的选项 """
__slots__ = '_content', 'descriptions'
def __init__(self, content, descriptions=None):
self._content = content
if descriptions is None:
descriptions = {}
self.descriptions = descriptions
def __repr__(self):
return str(self._content)
def __getattr__(self, item):
return getattr(self._content, item)
@property
def content(self):
return self._content
class Response:
def __init__(self, request, **desc):
self.__name = request.name
desc.update(request.progress.data)
self.__datadict = desc
def __getattr__(self, item):
return self.__datadict[item]
def __repr__(self):
return '<%s %s>' % (self.__name, str(self.__dict__))
REQ_READY = 0
REQ_QUEUING = 1
REQ_RUNNING = 2
REQ_STOPPED = 3
REQ_WARNING = 4
REQ_ERROR = -1
REQ_DONE = 5
RE_VALID_PATHNAME = re.compile(r'[\\/:*?"<>|\r\n]+')
class RootRequest(Request):
name = 'root'
discard_next = False
def end_request(self):
raise NotImplementedError
def _all_status(iteration):
status = REQ_DONE
for i in iteration:
_b = i.status()
if _b == REQ_ERROR:
status = REQ_ERROR
break
elif _b == REQ_STOPPED:
status = REQ_STOPPED
break
elif _b == REQ_RUNNING:
status = REQ_RUNNING
break
elif _b != REQ_DONE:
status = REQ_QUEUING
break
return status
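The aggregation rule in `_all_status` above is easy to misread: the result is DONE only when every sub-status is DONE, and otherwise the first non-DONE status encountered decides the outcome. A minimal pure-Python sketch (a plain function over a list of status codes, instead of request objects):

```python
REQ_QUEUING, REQ_RUNNING, REQ_STOPPED, REQ_ERROR, REQ_DONE = 1, 2, 3, -1, 5

def all_status(statuses):
    # mirrors _all_status above: stop at the first status that is not DONE
    for s in statuses:
        if s == REQ_ERROR:
            return REQ_ERROR
        if s == REQ_STOPPED:
            return REQ_STOPPED
        if s == REQ_RUNNING:
            return REQ_RUNNING
        if s != REQ_DONE:
            return REQ_QUEUING
    return REQ_DONE

print(all_status([REQ_DONE, REQ_DONE]))      # 5  -> DONE
print(all_status([REQ_DONE, REQ_RUNNING]))   # 2  -> RUNNING
print(all_status([REQ_QUEUING, REQ_ERROR]))  # 1  -> the QUEUING seen first wins
```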
def requester_info():
return_dict = {}
for v in Request.__subclasses__():
return_dict[v.name] = {
'weight': v.WEIGHT
}
return return_dict
| [
"zzsaim@163.com"
] | zzsaim@163.com |
f3d84f9ed925ac895d998d28faf36f9a4170ae7c | 47e9a37046a77034ba276c6f7149755f390fe1da | /home/migrations/0003_auto_20181113_1228.py | eae998a45c6d75517b76a046d5f4c74d404e9fdb | [] | no_license | vikky-noelle/Django-test-website | bf079ca57f84b39c40ed9c5d55996ea8120e8078 | b1cd703506b67e0b6720f56a13dd26d4cce6fe90 | refs/heads/master | 2020-04-06T10:29:50.743050 | 2018-11-27T11:57:07 | 2018-11-27T11:57:07 | 157,382,353 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 373 | py | # Generated by Django 2.1.2 on 2018-11-13 12:28
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('home', '0002_remove_subscribers_name'),
]
operations = [
migrations.RenameField(
model_name='subscribers',
old_name='email',
new_name='your_email',
),
]
| [
"noreply@github.com"
] | vikky-noelle.noreply@github.com |
227bc6a8a7fa36278c9eb571bde20037770c7406 | f962f479b229091864c2cedf76981d3462c1b61b | /python/code/src/intermediate/Generator.py | 8d826aadd6319b54da517295fefa724b4d7f1050 | [] | no_license | RaghavGoyal/Raghav-root | 9ce55c462b73a3fcc3c0a4f94a307e1a13b85d21 | 5b7e624b710e4af45be87c543e2e32be6f826744 | refs/heads/master | 2023-03-16T13:36:48.904209 | 2022-07-21T13:34:38 | 2022-07-21T13:34:38 | 233,890,930 | 0 | 0 | null | 2023-03-06T02:33:51 | 2020-01-14T16:57:46 | Jupyter Notebook | UTF-8 | Python | false | false | 529 | py | def main():
generator()
# a generator is a function that returns a stream of values rather than a single value or object
def generator():
# python also allows function defined inside other function
# this is a generator function because it yields multiple values; one from each iteration of while loop
def inclusiveRange(number):
n = 0
while n <= number:
yield n
n += 1
for n in inclusiveRange(10):
print(n, end=", ")
if __name__ == '__main__':
main()
| [
"raghavgoyal.325@gmail.com"
] | raghavgoyal.325@gmail.com |
fc17438d0f196802015a870f9431509485ada656 | 7503f6fefc6a8c0633b3b093cf1e95b7d8e67866 | /turtle-shell/scripts/Mmovement3.py | 70e083bf35e9d52a605bc5360a109074b9605f38 | [] | no_license | gdwilliams1234/Shell-game-code | 171040ccc5decd1bf208dae43d3a7e3111c7f513 | 49650fa4343c3b6e41fc3e759ba3bb5d06e01958 | refs/heads/master | 2021-01-10T11:18:56.376332 | 2016-02-29T20:40:33 | 2016-02-29T20:40:33 | 52,761,749 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,050 | py | #!/usr/bin/env python
# import the necessary packages
import rospy
from sensor_msgs.msg import Image, CameraInfo
import cv2, cv_bridge
from collections import deque
import numpy as np
import argparse
class Movement:
def __init__(self):
self.bridge = cv_bridge.CvBridge()
#cv2.namedWindow("window",1)
self.image_sub = rospy.Subscriber('camera/rgb/image_raw', Image, self.image_callback)
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video",
help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=32,
help="max buffer size")
args = vars(ap.parse_args())
self.pts = deque(maxlen=args["buffer"])
def image_callback(self, msg):
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video",
help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=32,
help="max buffer size")
args = vars(ap.parse_args())
# define the lower and upper boundaries of the tracked ball color in
# the HSV color space (despite the variable names, these hue bounds
# cover red/pink rather than green)
greenLower = (150, 100, 100)
greenUpper = (180, 220, 220)
# initialize the list of tracked points, the frame counter,
# and the coordinate deltas
counter = 0
(dX, dY) = (0, 0)
direction = ""
# resize the frame, blur it, and convert it to the HSV
# color space
frame = self.bridge.imgmsg_to_cv2(msg,desired_encoding= 'bgr8')
blurred = cv2.GaussianBlur(frame, (11, 11), 0)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# construct a mask for the color "green", then perform
# a series of dilations and erosions to remove any small
# blobs left in the mask
mask = cv2.inRange(hsv, greenLower, greenUpper)
mask = cv2.erode(mask, None, iterations=2)
mask = cv2.dilate(mask, None, iterations=2)
# find contours in the mask and initialize the current
# (x, y) center of the ball
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)[-2]
centers = []
radii = []
# only proceed if at least one contour was found
if len(cnts) > 0:
# compute the minimum enclosing circle and
# centroid for each sufficiently small contour
for contour in cnts:
area = cv2.contourArea(contour)
if area > 500:
continue
br = cv2.boundingRect(contour)
radii.append(br[2])
#c = max(cnts, key=cv2.contourArea)
((x, y), radius) = cv2.minEnclosingCircle(contour)
M = cv2.moments(contour)
center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
centers.append(center)
print center
print("There are {} circles".format(len(centers)))
radius = int(np.average(radii)) +5
for center in centers:
cv2.circle(frame, center, 3, (255,0,0), -1)
cv2.circle(frame, center, radius, (255,0,0), 1)
# only proceed if the radius meets a minimum size
if radius > 10:
# draw the circle and centroid on the frame,
# then update the list of tracked points
cv2.circle(frame, (int(x), int(y)), int(radius),
(0, 255, 255), 10)
cv2.circle(frame, center, 5, (0, 0, 255), 10)
cv2.putText(frame,"Object 1", center, cv2.FONT_HERSHEY_SIMPLEX, 0.65, (200,100, 50), 3)
self.pts.appendleft(center)
# loop over the set of tracked points
for i in np.arange(1, len(self.pts)):
# if either of the tracked points are None, ignore
# them
if self.pts[i - 1] is None or self.pts[i] is None:
continue
# check to see if enough points have been accumulated in
# the buffer
if counter >= 10 and i == 1 and self.pts[-10] is not None:
# compute the difference between the x and y
# coordinates and re-initialize the direction
# text variables
dX = self.pts[-10][0] - self.pts[i][0]
dY = self.pts[-10][1] - self.pts[i][1]
(dirX, dirY) = ("", "")
# ensure there is significant movement in the
# x-direction
if np.abs(dX) > 20:
dirX = "East" if np.sign(dX) == 1 else "West"
# ensure there is significant movement in the
# y-direction
if np.abs(dY) > 20:
dirY = "North" if np.sign(dY) == 1 else "South"
# handle when both directions are non-empty
if dirX != "" and dirY != "":
direction = "{}-{}".format(dirY, dirX)
# otherwise, only one direction is non-empty
else:
direction = dirX if dirX != "" else dirY
# otherwise, compute the thickness of the line and
# draw the connecting lines
thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
cv2.line(frame, self.pts[i - 1], self.pts[i], (0, 0, 255), thickness)
# show the movement deltas and the direction of movement on
# the frame
cv2.putText(frame, direction, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
0.65, (0, 0, 255), 3)
cv2.putText(frame, "dx: {}, dy: {}".format(dX, dY),
(10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX,
0.35, (0, 0, 255), 1)
# show the frame to our screen and increment the frame counter
cv2.imshow("Frame", frame)
cv2.imshow("Mask", mask)
key = cv2.waitKey(1) & 0xFF
counter += 1
rospy.init_node('movement')
movement = Movement()
rospy.spin()
| [
"gwilliams18@csl01-l.cornellcollege.edu"
] | gwilliams18@csl01-l.cornellcollege.edu |
36a5e74c9b4e0ecef196913d6705eefa5ad329b5 | b1afdde12eee6a129adc9534184dd48304aa7e9f | /tests/functional/test_integration.py | 28a415cb3c7806fcd7c74ed71d645aa4a28225b8 | [] | no_license | nicholas-sokolov/Scoring-API | 38d409a28d2b6a30339b1ad21ae9ddf44960eb68 | beebb1ea81cdf94402fc523932fa3ca94423b1ea | refs/heads/master | 2020-04-17T22:22:49.993559 | 2019-08-06T15:18:31 | 2019-08-06T15:18:31 | 166,992,319 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,251 | py | import datetime
import hashlib
import pytest
from src import api
headers = {}
context = {}
store = {}
def get_response(request):
return api.method_handler({"body": request, "headers": headers}, context, store)
def set_auth(request):
if request.get('login') == api.ADMIN_LOGIN:
token = hashlib.sha512((datetime.datetime.now().strftime("%Y%m%d%H") + api.ADMIN_SALT).encode()).hexdigest()
else:
msg = request.get("account", "") + request.get("login", "") + api.SALT
token = hashlib.sha512(msg.encode()).hexdigest()
request['token'] = token
def test_empty_request():
_, code = get_response({})
assert api.INVALID_REQUEST == code
@pytest.mark.parametrize('request', [
{'login': "m&m's", 'token': 'qwerty', 'method': "", 'arguments': {}},
{'token': 'qwerty', 'method': "online_score", 'arguments': {}},
{'login': "m&m's", 'method': "online_score", 'arguments': {}},
{'login': "m&m's", 'token': 'qwerty', 'method': "online_score"},
{'login': "m&m's", 'token': 'qwerty', 'arguments': {}},
{'login': "m&m's", 'token': 'qwerty', 'method': "online_score", 'arguments': "123"},
{'login': "m&m's", 'token': 'qwerty', 'method': "online_score", 'arguments': 123},
{'login': 123, 'token': 'qwerty', 'method': "online_score", 'arguments': {}},
{'login': "m&m's", 'token': 123, 'method': "online_score", 'arguments': {}},
{'login': "m&m's", 'token': 'qwerty', 'method': 123, 'arguments': {}},
])
def test_invalid_request(request):
response, code = get_response(request)
assert api.INVALID_REQUEST == code
@pytest.mark.parametrize('request', [
{'login': "m&m's", 'token': 'qwerty'},
{'login': "m&m's", 'token': ''},
{'login': "admin", 'token': 'qwerty'},
{'login': "admin", 'token': ''},
])
def test_failed_authentication(request):
request.update({
'account': "m_account",
'method': "online_score",
'arguments': {}}
)
response, code = get_response(request)
assert api.FORBIDDEN == code
@pytest.mark.parametrize('arguments', [
{},
{"phone": "79175002040"},
{"phone": "89175002040", "email": "stupnikov@otus.ru"},
{"phone": "79175002040", "email": "stupnikovotus.ru"},
{"phone": "79175002040", "email": "stupnikov@otus.ru", "gender": -1},
{"phone": "79175002040", "email": "stupnikov@otus.ru", "gender": "1"},
{"phone": "79175002040", "email": "stupnikov@otus.ru", "gender": 1, "birthday": "01.01.1890"},
{"phone": "79175002040", "email": "stupnikov@otus.ru", "gender": 1, "birthday": "XXX"},
{"phone": "79175002040", "email": "stupnikov@otus.ru", "gender": 1, "birthday": "01.01.2000", "first_name": 1},
{"phone": "79175002040", "email": "stupnikov@otus.ru", "gender": 1, "birthday": "01.01.2000",
"first_name": "s", "last_name": 2},
{"phone": "79175002040", "birthday": "01.01.2000", "first_name": "s"},
{"email": "stupnikov@otus.ru", "gender": 1, "last_name": 2},
])
def test_invalid_score_request(arguments):
request = {"account": "m_account", "login": "m&m", "method": "online_score", "arguments": arguments}
set_auth(request)
response, code = get_response(request)
assert api.INVALID_REQUEST == code
assert len(response) != 0
| [
"sokolov.nicholas@gmail.com"
] | sokolov.nicholas@gmail.com |
6f1979ea7a08340e2de6eaf8245a25ff5d3c5748 | 230010fe3c38b6262924fb7bb0d90e4376981833 | /original/pythonnoroot/svetelny_panel/svetelny_panel/wiiremote.py | 25e80a2c1d766a73d44861954874233e8cd37bfd | [] | no_license | gymgeek/led_panel | 00842761c683d2ef3dfe9530a41bf830388aa745 | 2d5b989f881b76208332c7416ca720890410b20c | refs/heads/master | 2020-02-26T16:10:16.428431 | 2017-05-27T12:37:41 | 2017-05-27T12:37:41 | 71,040,133 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 963 | py | import cwiid
def winit(address=None, num_of_tries=3):
print "press 1 and 2 button on a wiimote!!!"
wm = None
ok = False
iinit = 0
while not ok and iinit < num_of_tries:
# print iinit
try:
if address is None:
wm = cwiid.Wiimote()
else:
wm = cwiid.Wiimote(address)
wm.rumble = 1
time.sleep(0.2)
wm.rumble = 0
wm.rpt_mode = cwiid.RPT_IR | cwiid.RPT_BTN
ok = True
except:
ok = False
iinit += 1
ok = False
return wm
def test_wii():
"""simple test of wiimote communication"""
w = winit()
print "konec inicializace"
print "end of initialisation"
time.sleep(1)
try:
for i in range(16):
w.led = i
time.sleep(0.5)
time.sleep(1)
w.led = 0
except:
print "nebyla navazana komunikace s ovladacem..."
return w
| [
"jezek.adamek@gmail.com"
] | jezek.adamek@gmail.com |
1774e36be7199ad065c19219e0a076a79ed86c3b | 083ebd921c8f785681e61f6c28bdd1d5948fa38e | /Motion Detector/motion_detector.py | a4782c95f35007a897aaa663f3ea1a0023c05f18 | [] | no_license | AnalyticLabs/Computer_Vision | 5d02852d9db75b11c3cff16d251cb6b9f2a6c13c | fc27c949e2a50c073d08b5a80be1e689a28ea626 | refs/heads/master | 2020-04-11T20:10:45.395313 | 2020-01-12T05:32:49 | 2020-01-12T05:32:49 | 162,062,543 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,582 | py | # -*- coding: utf-8 -*-
"""
Created on Sat Jun 8 13:05:05 2019
@author: arnab
"""
# import the necessary packages
from imutils.video import VideoStream
from imutils.object_detection import non_max_suppression
import argparse
import datetime
import imutils
import time
import cv2
import numpy as np
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", default='test.mp4', help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
ap.add_argument("-t", "--thresh", type=int, default=30, help="threshold for detection")
ap.add_argument("-o", "--output", default='output.avi', help="path to output video file")
ap.add_argument("-f", "--fps", type=int, default=20, help="FPS of output video")
ap.add_argument("-c", "--codec", type=str, default="MJPG", help="codec of output video")
ap.add_argument("-mt","--motion_thresh", type=float, default=0.25,
help="fraction of the total frame in motion")
ap.add_argument("-s","--supress_output", type=bool, default=False, help="supress the output video")
args = vars(ap.parse_args())
threshold_val = args["thresh"]
# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
vs = VideoStream(src=0).start()
time.sleep(2.0)
# otherwise, we are reading from a video file
else:
vs = cv2.VideoCapture(args["video"])
motion_thresh = args["motion_thresh"]
#-------------------------------------------------------------------------------------------
# initialize the FourCC, video writer and the dimensions of the frame
fourcc = cv2.VideoWriter_fourcc(*args["codec"])
writer = None
(h, w) = (None, None)
#-------------------------------------------------------------------------------------------
# initialize the first frame in the video stream
firstFrame = None
input_framecount = 0
output_framecount = 0
# loop over the frames of the video
while True:
# grab the current frame and initialize the occupied/unoccupied
frame = vs.read()
frame = frame if args.get("video", None) is None else frame[1]
input_framecount += 1
# if the frame could not be grabbed, then we have reached the end of the video
if frame is None:
break
# resize the frame, convert it to grayscale, and blur it
frame = imutils.resize(frame, width=1000)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (21, 21), 0)
# if the first frame is None, initialize it
if firstFrame is None:
firstFrame = gray
continue
# compute the absolute difference between the current frame and
# first frame
frameDelta = cv2.absdiff(firstFrame, gray)
thresh = cv2.threshold(frameDelta, threshold_val, 255, cv2.THRESH_BINARY)[1]
# dilate the thresholded image to fill in holes, then find contours
# on thresholded image
thresh = cv2.dilate(thresh, None, iterations=2)
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
rects = []
# loop over the contours
for c in cnts:
# if the contour is too small, ignore it
if cv2.contourArea(c) < args["min_area"]:
continue
rects.append(cv2.boundingRect(c))
# apply non-maximal supression to reduce overlap of multiple frames
rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
pick = non_max_suppression(rects, probs=None, overlapThresh=0.3)
total_motion_area = 0
for (xA, yA, xB, yB) in pick:
# uncomment the following line and comment line 127-128 if you want the detected
# motion boxes overlain on the written video
# cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
# determining the total area in motion
total_motion_area += (xB-xA)*(yB-yA)
#-------------------------------------------------------------------------------------------
#initialize the writer
if writer is None:
# store the image dimensions, initialize the video writer,
(h, w) = frame.shape[:2]
total_area = h*w
writer = cv2.VideoWriter(args["output"], fourcc, args["fps"], (w,h), True)
if total_motion_area >= motion_thresh*total_area:
output = frame
# write the output frame to file
writer.write(output)
output_framecount += 1
#-------------------------------------------------------------------------------------------
if args["supress_output"] is False:
# comment these 2 line (this for loop) if you want the original frames with motion
for (xA, yA, xB, yB) in pick:
cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
(10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
# show the frame and record if the user presses a key
cv2.imshow("Security Feed", frame)
cv2.imshow("Thresh", thresh)
cv2.imshow("Frame Delta", frameDelta)
key = cv2.waitKey(1) & 0xFF
# if the `q` key is pressed, break from the lop
if key == ord("q"):
break
firstFrame = gray # update the reference frame for the next iteration (was a dead 'firstframe' typo)
# cleanup the camera and close any open windows
vs.stop() if args.get("video", None) is None else vs.release()
writer.release()
cv2.destroyAllWindows()
print("\n input frame count = ", input_framecount)
print("\n output frame_count = ", output_framecount) | [
"45927239+AnalyticLabs@users.noreply.github.com"
] | 45927239+AnalyticLabs@users.noreply.github.com |
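The core detection step in the motion_detector.py entry above - an absolute difference against a reference frame followed by a binary threshold - can be illustrated without OpenCV. This pure-Python sketch (hypothetical 4x4 frames, same default threshold of 30 as the `--thresh` argument) shows which pixels end up in the motion mask:

```python
THRESH = 30  # same default as the --thresh argument above

def motion_mask(reference, frame):
    # per-pixel |frame - reference|, binarized to 0/255 like cv2.threshold
    return [[255 if abs(a - b) > THRESH else 0
             for a, b in zip(ref_row, cur_row)]
            for ref_row, cur_row in zip(reference, frame)]

reference = [[0] * 4 for _ in range(4)]
frame = [row[:] for row in reference]
frame[1][1] = frame[1][2] = frame[2][1] = frame[2][2] = 200  # a "moving" 2x2 patch

mask = motion_mask(reference, frame)
print(sum(v == 255 for row in mask for v in row))  # 4 pixels flagged as motion
```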
30e9ba18fa62f2da454db1369713885bbc36b975 | 6d94d551281de39e5fc81f6c063199aba86f8946 | /homeworks/B20840/Homework4-Day9/day9-homework-code.py | 0fe72f699195f584370473bbc329575c784a21d6 | [] | no_license | yuanjun/uband-python-s1 | 8cb15bd7a2fe15a8f1daa123e3219b7decef3f19 | 26da5e8ece60e0d9fbe569ac13bad9c981423f5a | refs/heads/master | 2020-12-03T00:07:48.215687 | 2017-07-09T15:48:28 | 2017-07-09T15:48:28 | 95,633,969 | 0 | 0 | null | 2017-06-28T06:00:35 | 2017-06-28T06:00:35 | null | UTF-8 | Python | false | false | 1,267 | py | #!/usr/bin/python
#_*_ coding:utf-8 _*_
#@author:B20840
def homework1():
# define the contents of the dictionary
dictionary = {'abandon':'to give up to the control or influence of another person or agent',
'abase':'to lower in rank, office, prestige, or esteem',
'abash':'to destroy the self-possession or self-confidence of'
}
print 'Dad is reading an English book; next to him is a dictionary, but it only explains three words: %s' % (dictionary)
print '==========================='
# Dad is about to tear the book
if dictionary.has_key('etiquette'):
print dictionary
else:
del dictionary['abandon'] # this is where the page gets torn out
print "Dad got angry and tore out the page that contains 'abandon'"
print 'Now the dictionary only contains %s' % (dictionary.keys())
print '==========================='
# Dad is about to cheer up
if dictionary.has_key('abase'):
print "Dad found the definition of 'abase': %s" % (dictionary['abase'])
dictionary['abandon'] = 'to give up to the control or influence of another person or agent' # put 'abandon' back (also fixes the 'abandone' key typo)
print "Dad is happy again and added 'abandon' back into the dictionary"
print 'Now the dictionary explains three words again; look: %s' % (dictionary)
if __name__ == '__main__':
homework1() | [
"yuanjun@YJMacAir.local"
] | yuanjun@YJMacAir.local |
7ec4112133d33b3aff667aac27a9a4b8451f92f9 | fbb53a3366a0f10a7eb8070620cacec5101459fb | /company/m-solutions2019/c.py | 16ee9cebc54f31595a786fb0932d2b433b17b306 | [] | no_license | penicillin0/atcoder | 272bf0b9f211907c9f7f2491335f0d34f2dcd43b | 827d5cdc03531d48a44e021bd702f80b305f64d6 | refs/heads/master | 2023-08-05T09:43:50.114694 | 2021-09-20T09:21:07 | 2021-09-20T09:21:07 | 256,395,305 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,161 | py | N = int(input())
par = [-1] * N # roots store -(size of their set)
if N == 1:
print(0)
# xがどのグループに属しているか調べる
def find(x):
if par[x] < 0:
return x
else:
par[x] = find(par[x])
return find(par[x])
# 自分のいるグループの数
def size(x):
return -par[find(x)]
# merge the sets containing x and y
def unite(x, y):
# find the roots
x, y = find(x), find(y)
# same root: already in one set
if x == y:
return
# attach the smaller set to the larger one
if size(x) < size(y):
x, y = y, x
# update the size stored at x
par[x] += par[y]
# make x the parent of y
par[y] = x
# check whether x and y belong to the same set
def same(x, y):
return find(x) == find(y)
AB = [list(map(int, input().split())) for _ in range(N - 1)]
C = list(map(int, input().split()))
for ab in AB:
a, b = ab
a, b = a - 1, b - 1
if same(a, b):
continue
else:
unite(a, b)
n = find(0)
# print(n)
ret = sum(C) - max(C)
print(ret)
m = C.index(max(C))
if n != m:
C[n], C[m] = C[m], C[n]
C = list(map(str, C))
print(' '.join(C))
| [
"a_nakamura@a-nakamuras-MacBook-Air-4.local"
] | a_nakamura@a-nakamuras-MacBook-Air-4.local |
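As a quick sanity check of the union-find helpers in the contest entry above (union by size, with roots storing the negated set size), here is a self-contained demo on a fresh five-element array - illustrative only, independent of the contest input:

```python
par = [-1] * 5  # roots store -(size of their set)

def find(x):
    if par[x] < 0:
        return x
    par[x] = find(par[x])  # path compression
    return par[x]

def unite(x, y):
    x, y = find(x), find(y)
    if x == y:
        return
    if -par[x] < -par[y]:  # attach the smaller set to the larger one
        x, y = y, x
    par[x] += par[y]
    par[y] = x

unite(0, 1)
unite(1, 2)
print(find(2) == find(0))  # True: 0, 1, 2 share one root
print(-par[find(0)])       # 3: size of the merged set
```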
0291e8e42c27493ddb0f48b5b01c154d63e078c4 | 0014deb8b6df6917dd7957598225a9a2c15fd4de | /recotest.py | 125108657149cd3680b63fc095e1d3e4b54ef09b | [] | no_license | Ramhawkz47/Zappy | 4da7a59f7db6d56365357e6b199892198748f1f7 | b3524964be1d59adac48adcdd74409cd870f2ede | refs/heads/master | 2022-11-14T21:13:02.993461 | 2020-07-09T15:06:03 | 2020-07-09T15:06:03 | 278,388,766 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 208 | py | import speech_recognition as sr
r=sr.Recognizer()
mic = sr.Microphone(device_index=0)
print("speak:")
with mic as source:
audio = r.listen(source)
print("done")
print(r.recognize_google(audio))
| [
"noreply@github.com"
] | Ramhawkz47.noreply@github.com |
26ceea2bd05649fa5a3a08f732d696749390941a | f1b1eced362ea0edb64cbe5cd27187a92e259eee | /aws_s3_image.py | d60dd0f92e7308a57a7c6e7adb70e98a034418ed | [] | no_license | Bharathreddy1981/aws_s3_flask_post | 22f1ddc38f8f6a70e967a178a009e03911d5fd1a | e2c72adb7a97f9227a7c8577d80aa41f28d88f7d | refs/heads/main | 2023-01-02T21:28:47.494200 | 2020-10-21T06:07:38 | 2020-10-21T06:07:38 | 305,698,513 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 716 | py | import boto3
from botocore.client import Config
import aws_url_image
def cat(value):
ACCESS_KEY_ID = "AKIAZNN5NBT72GR3NKP7"
ACCESS_SECRET_KEY = "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"
BUCKET_NAME = "flask121"
k=value["image"]
#print(k)
#d = aws_son.mat()
#k = d["image"]
data= open(k, "rb")
s3 = boto3.resource(
"s3",
aws_access_key_id=ACCESS_KEY_ID,
aws_secret_access_key=ACCESS_SECRET_KEY,
config=Config(signature_version="s3v4")
)
s3.Bucket(BUCKET_NAME).put_object(Key=k, Body=data)
print("Done")
url=aws_url_image.fun()
#print(url)
file_name=k
final_url=url+file_name
return {"final":final_url}
| [
"70434245+Bharathreddy1981@users.noreply.github.com"
] | 70434245+Bharathreddy1981@users.noreply.github.com |
dedd9fd8026b1216f496a668c779eef0cdc1ef21 | a4cc48824ea32c24b730a458b65dd0844f113f8c | /task-03/task10.py | 803ab81351ec7136e4f3af8f6cbc7395be75caba | [] | no_license | Pank-aj/amfoss-tasks | 31195ac8a65c273b7a19c0c4ee208a256d4f1f6c | e23175dea1e7d237e6f2e1a317f08b70201c387c | refs/heads/main | 2023-06-12T22:07:06.485389 | 2021-07-01T17:01:25 | 2021-07-01T17:01:25 | 369,417,670 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 529 | py | t=int(input())
while t:
t-=1
n,m=map(int,input().split())
n=str(n)
if len(n)<m:
print(-1)
continue
mi=1000000000000000
pos=0
sum=0
pre=[]
for i in range(m):
sum+=ord(n[i])-48
pre.append(sum)
for i in range(m,len(n)):
sum=ord(n[i])-48+sum-(ord(n[pos])-48)
pos+=1
if(i!=0):
mi=min(mi,abs(pre[-1]-sum))
pre.append(sum)
if(mi!=1000000000000000):
print(mi)
else:
print(-1) | [
"noreply@github.com"
] | Pank-aj.noreply@github.com |
3623f7dea2f82a675fd99637d86022f3c7006302 | 41986b7a1b95784f0a6256ae24d5942c70ced4d7 | /prod/google-cloud-sdk/lib/googlecloudsdk/third_party/apis/gkehub/v1/gkehub_v1_messages.py | a3f4adacead73fa41eb1423f39268a2fe43b0d51 | [
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
] | permissive | wakabayashi-seiya/terraform_gcp | ed829a5a21d5d19d6663804ee5d5f7f3d23b4ec4 | f757e56779f33c2fabd8a8eed9c51ff0b897a38f | refs/heads/master | 2021-07-07T21:51:35.993317 | 2020-03-11T05:42:57 | 2020-03-11T05:42:57 | 239,411,772 | 0 | 1 | null | 2021-04-30T21:05:04 | 2020-02-10T02:32:04 | Python | UTF-8 | Python | false | false | 44,251 | py | """Generated message classes for gkehub version v1.
"""
# NOTE: This file is autogenerated and should not be edited by hand.
from apitools.base.protorpclite import messages as _messages
from apitools.base.py import encoding
from apitools.base.py import extra_types
package = 'gkehub'
class AuditConfig(_messages.Message):
r"""Specifies the audit configuration for a service. The configuration
determines which permission types are logged, and what identities, if any,
are exempted from logging. An AuditConfig must have one or more
AuditLogConfigs. If there are AuditConfigs for both `allServices` and a
specific service, the union of the two AuditConfigs is used for that
service: the log_types specified in each AuditConfig are enabled, and the
exempted_members in each AuditLogConfig are exempted. Example Policy with
multiple AuditConfigs: { "audit_configs": [ {
"service": "allServices" "audit_log_configs": [ {
"log_type": "DATA_READ", "exempted_members": [
"user:jose@example.com" ] }, {
"log_type": "DATA_WRITE", }, {
"log_type": "ADMIN_READ", } ] }, {
"service": "sampleservice.googleapis.com" "audit_log_configs": [
{ "log_type": "DATA_READ", }, {
"log_type": "DATA_WRITE", "exempted_members": [
"user:aliya@example.com" ] } ] }
] } For sampleservice, this policy enables DATA_READ, DATA_WRITE and
ADMIN_READ logging. It also exempts jose@example.com from DATA_READ logging,
and aliya@example.com from DATA_WRITE logging.
Fields:
auditLogConfigs: The configuration for logging of each type of permission.
service: Specifies a service that will be enabled for audit logging. For
example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
`allServices` is a special value that covers all services.
"""
auditLogConfigs = _messages.MessageField('AuditLogConfig', 1, repeated=True)
service = _messages.StringField(2)
class AuditLogConfig(_messages.Message):
r"""Provides the configuration for logging a type of permissions. Example:
{ "audit_log_configs": [ { "log_type": "DATA_READ",
"exempted_members": [ "user:jose@example.com" ]
}, { "log_type": "DATA_WRITE", } ] }
This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
jose@example.com from DATA_READ logging.
Enums:
LogTypeValueValuesEnum: The log type that this config enables.
Fields:
exemptedMembers: Specifies the identities that do not cause logging for
this type of permission. Follows the same format of Binding.members.
logType: The log type that this config enables.
"""
class LogTypeValueValuesEnum(_messages.Enum):
r"""The log type that this config enables.
Values:
LOG_TYPE_UNSPECIFIED: Default case. Should never be this.
ADMIN_READ: Admin reads. Example: CloudIAM getIamPolicy
DATA_WRITE: Data writes. Example: CloudSQL Users create
DATA_READ: Data reads. Example: CloudSQL Users list
"""
LOG_TYPE_UNSPECIFIED = 0
ADMIN_READ = 1
DATA_WRITE = 2
DATA_READ = 3
exemptedMembers = _messages.StringField(1, repeated=True)
logType = _messages.EnumField('LogTypeValueValuesEnum', 2)
class Binding(_messages.Message):
r"""Associates `members` with a `role`.
Fields:
condition: The condition that is associated with this binding. NOTE: An
unsatisfied condition will not allow user access via current binding.
Different bindings, including their conditions, are examined
independently.
members: Specifies the identities requesting access for a Cloud Platform
resource. `members` can have the following values: * `allUsers`: A
special identifier that represents anyone who is on the internet;
with or without a Google account. * `allAuthenticatedUsers`: A special
identifier that represents anyone who is authenticated with a Google
account or a service account. * `user:{emailid}`: An email address that
represents a specific Google account. For example,
`alice@example.com` . * `serviceAccount:{emailid}`: An email address
that represents a service account. For example, `my-other-
app@appspot.gserviceaccount.com`. * `group:{emailid}`: An email address
that represents a Google group. For example, `admins@example.com`. *
`deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
identifier) representing a user that has been recently deleted. For
example, `alice@example.com?uid=123456789012345678901`. If the user is
recovered, this value reverts to `user:{emailid}` and the recovered user
retains the role in the binding. *
`deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address
(plus unique identifier) representing a service account that has been
recently deleted. For example, `my-other-
app@appspot.gserviceaccount.com?uid=123456789012345678901`. If the
service account is undeleted, this value reverts to
`serviceAccount:{emailid}` and the undeleted service account retains the
role in the binding. * `deleted:group:{emailid}?uid={uniqueid}`: An
email address (plus unique identifier) representing a Google group
that has been recently deleted. For example,
`admins@example.com?uid=123456789012345678901`. If the group is
recovered, this value reverts to `group:{emailid}` and the recovered
group retains the role in the binding. * `domain:{domain}`: The G
Suite domain (primary) that represents all the users of that domain.
For example, `google.com` or `example.com`.
role: Role that is assigned to `members`. For example, `roles/viewer`,
`roles/editor`, or `roles/owner`.
"""
condition = _messages.MessageField('Expr', 1)
members = _messages.StringField(2, repeated=True)
role = _messages.StringField(3)
class CancelOperationRequest(_messages.Message):
r"""The request message for Operations.CancelOperation."""
class ConnectAgentResource(_messages.Message):
r"""ConnectAgentResource represents a Kubernetes resource manifest for
Connect agent deployment.
Fields:
manifest: YAML manifest of the resource.
type: Kubernetes type of the resource.
"""
manifest = _messages.StringField(1)
type = _messages.MessageField('TypeMeta', 2)
class Empty(_messages.Message):
r"""A generic empty message that you can re-use to avoid defining duplicated
empty messages in your APIs. A typical example is to use it as the request
or the response type of an API method. For instance: service Foo {
rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The
JSON representation for `Empty` is empty JSON object `{}`.
"""
class Expr(_messages.Message):
r"""Represents a textual expression in the Common Expression Language (CEL)
syntax. CEL is a C-like expression language. The syntax and semantics of CEL
are documented at https://github.com/google/cel-spec. Example (Comparison):
title: "Summary size limit" description: "Determines if a summary is
less than 100 chars" expression: "document.summary.size() < 100"
Example (Equality): title: "Requestor is owner" description:
"Determines if requestor is the document owner" expression:
"document.owner == request.auth.claims.email" Example (Logic): title:
"Public documents" description: "Determine whether the document should
be publicly visible" expression: "document.type != 'private' &&
document.type != 'internal'" Example (Data Manipulation): title:
"Notification string" description: "Create a notification string with a
timestamp." expression: "'New message received at ' +
string(document.create_time)" The exact variables and functions that may be
referenced within an expression are determined by the service that evaluates
it. See the service documentation for additional information.
Fields:
description: Optional. Description of the expression. This is a longer
text which describes the expression, e.g. when hovered over it in a UI.
expression: Textual representation of an expression in Common Expression
Language syntax.
location: Optional. String indicating the location of the expression for
error reporting, e.g. a file name and a position in the file.
title: Optional. Title for the expression, i.e. a short string describing
its purpose. This can be used e.g. in UIs which allow entering the
expression.
"""
description = _messages.StringField(1)
expression = _messages.StringField(2)
location = _messages.StringField(3)
title = _messages.StringField(4)
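# The "Summary size limit" CEL example from the Expr docstring maps onto the
# message's fields as follows. This is an illustrative plain dict (the dict
# name is ours, not part of this module), not a constructed Expr instance:

```python
# Field-by-field rendering of the first CEL example documented above.
summary_limit_expr = {
    'title': 'Summary size limit',
    'description': 'Determines if a summary is less than 100 chars',
    'expression': 'document.summary.size() < 100',
    # 'location' is optional; e.g. a file name and a position in the file.
}
```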
class GenerateConnectManifestResponse(_messages.Message):
r"""Response message for `GkeHubService.GenerateConnectManifest` method.
Fields:
manifest: The ordered list of Kubernetes resources that need to be applied
to the cluster for GKE Connect agent installation/upgrade.
"""
manifest = _messages.MessageField('ConnectAgentResource', 1, repeated=True)
class GkeCluster(_messages.Message):
r"""GkeCluster represents a k8s cluster on GKE.
Fields:
resourceLink: Self-link of the GCP resource for the GKE cluster. For
example: //container.googleapis.com/v1/projects/my-project/zones/us-
west1-a/clusters/my-cluster It can be at the most 1000 characters in
length.
"""
resourceLink = _messages.StringField(1)
class GkehubProjectsLocationsGetRequest(_messages.Message):
r"""A GkehubProjectsLocationsGetRequest object.
Fields:
name: Resource name for the location.
"""
name = _messages.StringField(1, required=True)
class GkehubProjectsLocationsListRequest(_messages.Message):
r"""A GkehubProjectsLocationsListRequest object.
Fields:
filter: The standard list filter.
name: The resource that owns the locations collection, if applicable.
pageSize: The standard list page size.
pageToken: The standard list page token.
"""
filter = _messages.StringField(1)
name = _messages.StringField(2, required=True)
pageSize = _messages.IntegerField(3, variant=_messages.Variant.INT32)
pageToken = _messages.StringField(4)
class GkehubProjectsLocationsMembershipsCreateRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsCreateRequest object.
Fields:
membership: A Membership resource to be passed as the request body.
membershipId: Required. Client chosen ID for the membership. The ID must
be a valid RFC 1123 compliant DNS label. In particular, the ID must be:
1. At most 63 characters in length 2. It must consist of lower case
alphanumeric characters or `-` 3. It must start and end with an
alphanumeric character I.e. ID must match the regex:
`[a-z0-9]([-a-z0-9]*[a-z0-9])?` with at most 63 characters.
parent: Required. The parent in whose context the membership is created.
The parent value is in the format:
`projects/[project_id]/locations/global`.
"""
membership = _messages.MessageField('Membership', 1)
membershipId = _messages.StringField(2)
parent = _messages.StringField(3, required=True)
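# The membershipId constraints documented above (at most 63 characters,
# lowercase alphanumerics or `-`, alphanumeric first and last character) can
# be checked client-side before issuing the create request. A minimal sketch
# using only the standard library; the helper name is ours, not part of this
# module:

```python
import re

# Pattern quoted in the docstring above: an RFC 1123 compliant DNS label.
_DNS_LABEL_RE = re.compile(r'^[a-z0-9]([-a-z0-9]*[a-z0-9])?$')


def is_valid_membership_id(membership_id):
    """Return True if membership_id satisfies the documented constraints."""
    return len(membership_id) <= 63 and bool(_DNS_LABEL_RE.match(membership_id))
```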
class GkehubProjectsLocationsMembershipsDeleteRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsDeleteRequest object.
Fields:
name: Required. The membership resource name in the format:
`projects/[project_id]/locations/global/memberships/[membership_id]`
"""
name = _messages.StringField(1, required=True)
class GkehubProjectsLocationsMembershipsGenerateConnectManifestRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsGenerateConnectManifestRequest
object.
Fields:
imagePullSecretContent: Optional. The image pull secret content for the
registry, if not public.
isUpgrade: Optional. If true, generate the resources for upgrade only.
Some resources (e.g. secrets) generated for installation will be
excluded.
name: Required. The membership resource the connect agent is associated
with.
`projects/[project_id]/locations/global/memberships/[membership_id]`.
namespace: Optional. Namespace for GKE Connect agent resources. If empty,
uses 'gke-connect'.
proxy: Optional. URI of a proxy if connectivity from the agent to
gkeconnect.googleapis.com requires the use of a proxy. Format must be in
the form http(s)://{proxy_address}, depending on the HTTP/HTTPS protocol
supported by the proxy. This will direct the connect agent's outbound
traffic through an HTTP(S) proxy.
registry: Optional. The registry to fetch connect agent image; default to
gcr.io/gkeconnect.
version: Optional. The version to use for connect agent. If empty, the
current default version will be used.
"""
imagePullSecretContent = _messages.BytesField(1)
isUpgrade = _messages.BooleanField(2)
name = _messages.StringField(3, required=True)
namespace = _messages.StringField(4)
proxy = _messages.BytesField(5)
registry = _messages.StringField(6)
version = _messages.StringField(7)
class GkehubProjectsLocationsMembershipsGetIamPolicyRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsGetIamPolicyRequest object.
Fields:
options_requestedPolicyVersion: Optional. The policy format version to be
returned. Valid values are 0, 1, and 3. Requests specifying an invalid
value will be rejected. Requests for policies with any conditional
bindings must specify version 3. Policies without any conditional
bindings may specify any valid value or leave the field unset.
resource: REQUIRED: The resource for which the policy is being requested.
See the operation documentation for the appropriate value for this
field.
"""
options_requestedPolicyVersion = _messages.IntegerField(1, variant=_messages.Variant.INT32)
resource = _messages.StringField(2, required=True)
class GkehubProjectsLocationsMembershipsGetRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsGetRequest object.
Fields:
name: Required. The Membership resource name in the format:
`projects/[project_id]/locations/global/memberships/[membership_id]`
"""
name = _messages.StringField(1, required=True)
class GkehubProjectsLocationsMembershipsListRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsListRequest object.
Fields:
filter: Optional. Lists the Memberships that match the filter expression.
A filter expression filters the resources listed in the response. The
expression must be of the form `{field} {operator} {value}` where
operators: `<`, `>`, `<=`,`>=`, `!=`, `=`, `:` are supported (colon `:`
represents a HAS operator which is roughly synonymous with equality).
`{field}` can refer to a proto or JSON field, or a synthetic field.
Field names can be camelCase or snake_case. Examples: - Filter by name:
name = "projects/foo-proj/locations/global/membership/bar" - Filter by
labels: - Resources that have a key called `foo` labels.foo:* -
Resources that have a key called `foo` whose value is `bar`
labels.foo = bar - Filter by state: - Members in CREATING state.
state = CREATING
orderBy: Optional. Field to use to sort the list.
pageSize: Optional. When requesting a 'page' of resources, `page_size`
specifies number of resources to return. If unspecified or set to 0, all
resources will be returned.
pageToken: Optional. Token returned by previous call to `ListMemberships`
which specifies the position in the list from where to continue listing
the resources.
parent: Required. The parent in whose context the memberships are listed.
The parent value is in the format:
`projects/[project_id]/locations/global`.
"""
filter = _messages.StringField(1)
orderBy = _messages.StringField(2)
pageSize = _messages.IntegerField(3, variant=_messages.Variant.INT32)
pageToken = _messages.StringField(4)
parent = _messages.StringField(5, required=True)
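# The filter grammar documented above is plain text of the form
# `{field} {operator} {value}`; the server parses and evaluates it. A small
# illustrative helper (ours, not part of this module) that composes a single
# term, alongside the docstring's examples:

```python
def make_filter(field, op, value):
    """Compose a single `{field} {operator} {value}` filter term."""
    return '%s %s %s' % (field, op, value)


# The examples from the docstring, expressed with the helper:
FILTER_BY_STATE = make_filter('state', '=', 'CREATING')
FILTER_LABEL_VALUE = make_filter('labels.foo', '=', 'bar')
FILTER_HAS_LABEL = 'labels.foo:*'  # the HAS operator `:` with a wildcard
```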
class GkehubProjectsLocationsMembershipsPatchRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsPatchRequest object.
Fields:
membership: A Membership resource to be passed as the request body.
name: Required. The membership resource name in the format:
`projects/[project_id]/locations/global/memberships/[membership_id]`
updateMask: Required. Mask of fields to update. At least one field path
must be specified in this mask.
"""
membership = _messages.MessageField('Membership', 1)
name = _messages.StringField(2, required=True)
updateMask = _messages.StringField(3)
class GkehubProjectsLocationsMembershipsSetIamPolicyRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsSetIamPolicyRequest object.
Fields:
resource: REQUIRED: The resource for which the policy is being specified.
See the operation documentation for the appropriate value for this
field.
setIamPolicyRequest: A SetIamPolicyRequest resource to be passed as the
request body.
"""
resource = _messages.StringField(1, required=True)
setIamPolicyRequest = _messages.MessageField('SetIamPolicyRequest', 2)
class GkehubProjectsLocationsMembershipsTestIamPermissionsRequest(_messages.Message):
r"""A GkehubProjectsLocationsMembershipsTestIamPermissionsRequest object.
Fields:
resource: REQUIRED: The resource for which the policy detail is being
requested. See the operation documentation for the appropriate value for
this field.
testIamPermissionsRequest: A TestIamPermissionsRequest resource to be
passed as the request body.
"""
resource = _messages.StringField(1, required=True)
testIamPermissionsRequest = _messages.MessageField('TestIamPermissionsRequest', 2)
class GkehubProjectsLocationsOperationsCancelRequest(_messages.Message):
r"""A GkehubProjectsLocationsOperationsCancelRequest object.
Fields:
cancelOperationRequest: A CancelOperationRequest resource to be passed as
the request body.
name: The name of the operation resource to be cancelled.
"""
cancelOperationRequest = _messages.MessageField('CancelOperationRequest', 1)
name = _messages.StringField(2, required=True)
class GkehubProjectsLocationsOperationsDeleteRequest(_messages.Message):
r"""A GkehubProjectsLocationsOperationsDeleteRequest object.
Fields:
name: The name of the operation resource to be deleted.
"""
name = _messages.StringField(1, required=True)
class GkehubProjectsLocationsOperationsGetRequest(_messages.Message):
r"""A GkehubProjectsLocationsOperationsGetRequest object.
Fields:
name: The name of the operation resource.
"""
name = _messages.StringField(1, required=True)
class GkehubProjectsLocationsOperationsListRequest(_messages.Message):
r"""A GkehubProjectsLocationsOperationsListRequest object.
Fields:
filter: The standard list filter.
name: The name of the operation's parent resource.
pageSize: The standard list page size.
pageToken: The standard list page token.
"""
filter = _messages.StringField(1)
name = _messages.StringField(2, required=True)
pageSize = _messages.IntegerField(3, variant=_messages.Variant.INT32)
pageToken = _messages.StringField(4)
class GoogleRpcStatus(_messages.Message):
r"""The `Status` type defines a logical error model that is suitable for
different programming environments, including REST APIs and RPC APIs. It is
used by [gRPC](https://github.com/grpc). Each `Status` message contains
three pieces of data: error code, error message, and error details. You can
find out more about this error model and how to work with it in the [API
Design Guide](https://cloud.google.com/apis/design/errors).
Messages:
DetailsValueListEntry: A DetailsValueListEntry object.
Fields:
code: The status code, which should be an enum value of google.rpc.Code.
details: A list of messages that carry the error details. There is a
common set of message types for APIs to use.
message: A developer-facing error message, which should be in English. Any
user-facing error message should be localized and sent in the
google.rpc.Status.details field, or localized by the client.
"""
@encoding.MapUnrecognizedFields('additionalProperties')
class DetailsValueListEntry(_messages.Message):
r"""A DetailsValueListEntry object.
Messages:
AdditionalProperty: An additional property for a DetailsValueListEntry
object.
Fields:
additionalProperties: Properties of the object. Contains field @type
with type URL.
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a DetailsValueListEntry object.
Fields:
key: Name of the additional property.
value: A extra_types.JsonValue attribute.
"""
key = _messages.StringField(1)
value = _messages.MessageField('extra_types.JsonValue', 2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
code = _messages.IntegerField(1, variant=_messages.Variant.INT32)
details = _messages.MessageField('DetailsValueListEntry', 2, repeated=True)
message = _messages.StringField(3)
class ListLocationsResponse(_messages.Message):
r"""The response message for Locations.ListLocations.
Fields:
locations: A list of locations that matches the specified filter in the
request.
nextPageToken: The standard List next-page token.
"""
locations = _messages.MessageField('Location', 1, repeated=True)
nextPageToken = _messages.StringField(2)
class ListMembershipsResponse(_messages.Message):
r"""Response message for the `GkeHub.ListMemberships` method.
Fields:
nextPageToken: A token to request the next page of resources from the
`ListMemberships` method. The value of an empty string means that there
are no more resources to return.
resources: The list of Memberships contained within the parent.
unreachable: List of locations that could not be reached while fetching
this list.
"""
nextPageToken = _messages.StringField(1)
resources = _messages.MessageField('Membership', 2, repeated=True)
unreachable = _messages.StringField(3, repeated=True)
class ListOperationsResponse(_messages.Message):
r"""The response message for Operations.ListOperations.
Fields:
nextPageToken: The standard List next-page token.
operations: A list of operations that matches the specified filter in the
request.
"""
nextPageToken = _messages.StringField(1)
operations = _messages.MessageField('Operation', 2, repeated=True)
class Location(_messages.Message):
r"""A resource that represents Google Cloud Platform location.
Messages:
LabelsValue: Cross-service attributes for the location. For example
{"cloud.googleapis.com/region": "us-east1"}
MetadataValue: Service-specific metadata. For example the available
capacity at the given location.
Fields:
displayName: The friendly name for this location, typically a nearby city
name. For example, "Tokyo".
labels: Cross-service attributes for the location. For example
{"cloud.googleapis.com/region": "us-east1"}
locationId: The canonical id for this location. For example: `"us-east1"`.
metadata: Service-specific metadata. For example the available capacity at
the given location.
name: Resource name for the location, which may vary between
implementations. For example: `"projects/example-project/locations/us-
east1"`
"""
@encoding.MapUnrecognizedFields('additionalProperties')
class LabelsValue(_messages.Message):
r"""Cross-service attributes for the location. For example
{"cloud.googleapis.com/region": "us-east1"}
Messages:
AdditionalProperty: An additional property for a LabelsValue object.
Fields:
additionalProperties: Additional properties of type LabelsValue
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a LabelsValue object.
Fields:
key: Name of the additional property.
value: A string attribute.
"""
key = _messages.StringField(1)
value = _messages.StringField(2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
@encoding.MapUnrecognizedFields('additionalProperties')
class MetadataValue(_messages.Message):
r"""Service-specific metadata. For example the available capacity at the
given location.
Messages:
AdditionalProperty: An additional property for a MetadataValue object.
Fields:
additionalProperties: Properties of the object. Contains field @type
with type URL.
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a MetadataValue object.
Fields:
key: Name of the additional property.
value: A extra_types.JsonValue attribute.
"""
key = _messages.StringField(1)
value = _messages.MessageField('extra_types.JsonValue', 2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
displayName = _messages.StringField(1)
labels = _messages.MessageField('LabelsValue', 2)
locationId = _messages.StringField(3)
metadata = _messages.MessageField('MetadataValue', 4)
name = _messages.StringField(5)
class Membership(_messages.Message):
r"""Membership contains information about a member cluster.
Messages:
LabelsValue: Optional. GCP labels for this membership.
Fields:
createTime: Output only. Timestamp for when the Membership was created.
deleteTime: Output only. Timestamp for when the Membership was deleted.
description: Output only. Description of this membership, limited to 63
characters. It will match the regex: `a-zA-Z0-9*` This field is present
for legacy purposes.
endpoint: Optional. Endpoint information to reach this member.
externalId: Optional. An externally-generated and managed ID for this
Membership. This ID may still be modified after creation but it is not
recommended to do so. The ID must match the regex: `a-zA-Z0-9*`
labels: Optional. GCP labels for this membership.
lastConnectionTime: Output only. For clusters using Connect, the timestamp
of the most recent connection established with Google Cloud. This time
is updated every several minutes, not continuously. For clusters that do
not use GKE Connect, or that have never connected successfully, this
field will be unset.
name: Output only. The unique name of this domain resource in the format:
`projects/[project_id]/locations/global/memberships/[membership_id]`.
`membership_id` can only be set at creation time using the
`membership_id` field in the creation request. `membership_id` must be a
valid RFC 1123 compliant DNS label. In particular, it must be: 1. At
most 63 characters in length 2. It must consist of lower case
alphanumeric characters or `-` 3. It must start and end with an
alphanumeric character I.e. `membership_id` must match the regex:
`[a-z0-9]([-a-z0-9]*[a-z0-9])?` with at most 63 characters.
state: Output only. State of the Membership resource.
updateTime: Output only. Timestamp for when the Membership was last
updated.
"""
@encoding.MapUnrecognizedFields('additionalProperties')
class LabelsValue(_messages.Message):
r"""Optional. GCP labels for this membership.
Messages:
AdditionalProperty: An additional property for a LabelsValue object.
Fields:
additionalProperties: Additional properties of type LabelsValue
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a LabelsValue object.
Fields:
key: Name of the additional property.
value: A string attribute.
"""
key = _messages.StringField(1)
value = _messages.StringField(2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
createTime = _messages.StringField(1)
deleteTime = _messages.StringField(2)
description = _messages.StringField(3)
endpoint = _messages.MessageField('MembershipEndpoint', 4)
externalId = _messages.StringField(5)
labels = _messages.MessageField('LabelsValue', 6)
lastConnectionTime = _messages.StringField(7)
name = _messages.StringField(8)
state = _messages.MessageField('MembershipState', 9)
updateTime = _messages.StringField(10)
class MembershipEndpoint(_messages.Message):
r"""MembershipEndpoint contains the information to reach a member.
Fields:
gkeCluster: If this Membership is a Kubernetes API server hosted on GKE,
this is a self link to its GCP resource.
"""
gkeCluster = _messages.MessageField('GkeCluster', 1)
class MembershipState(_messages.Message):
r"""State of the Membership resource.
Enums:
CodeValueValuesEnum: Code indicating the state of the Membership resource.
Fields:
code: Code indicating the state of the Membership resource.
description: Human readable description of the issue.
updateTime: The last update time of this state by the controllers.
"""
class CodeValueValuesEnum(_messages.Enum):
r"""Code indicating the state of the Membership resource.
Values:
CODE_UNSPECIFIED: Not set.
CREATING: CREATING indicates the cluster is being registered.
READY: READY indicates the cluster is registered.
DELETING: DELETING indicates that the cluster is being unregistered.
UPDATING: UPDATING indicates that the cluster registration is being
updated.
"""
CODE_UNSPECIFIED = 0
CREATING = 1
READY = 2
DELETING = 3
UPDATING = 4
code = _messages.EnumField('CodeValueValuesEnum', 1)
description = _messages.StringField(2)
updateTime = _messages.StringField(3)
class Operation(_messages.Message):
r"""This resource represents a long-running operation that is the result of
a network API call.
Messages:
MetadataValue: Service-specific metadata associated with the operation.
It typically contains progress information and common metadata such as
create time. Some services might not provide such metadata. Any method
that returns a long-running operation should document the metadata type,
if any.
ResponseValue: The normal response of the operation in case of success.
If the original method returns no data on success, such as `Delete`, the
response is `google.protobuf.Empty`. If the original method is standard
`Get`/`Create`/`Update`, the response should be the resource. For other
methods, the response should have the type `XxxResponse`, where `Xxx` is
the original method name. For example, if the original method name is
`TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
Fields:
done: If the value is `false`, it means the operation is still in
progress. If `true`, the operation is completed, and either `error` or
`response` is available.
error: The error result of the operation in case of failure or
cancellation.
metadata: Service-specific metadata associated with the operation. It
typically contains progress information and common metadata such as
create time. Some services might not provide such metadata. Any method
that returns a long-running operation should document the metadata type,
if any.
name: The server-assigned name, which is only unique within the same
service that originally returns it. If you use the default HTTP mapping,
the `name` should be a resource name ending with
`operations/{unique_id}`.
response: The normal response of the operation in case of success. If the
original method returns no data on success, such as `Delete`, the
response is `google.protobuf.Empty`. If the original method is standard
`Get`/`Create`/`Update`, the response should be the resource. For other
methods, the response should have the type `XxxResponse`, where `Xxx` is
the original method name. For example, if the original method name is
`TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
"""
@encoding.MapUnrecognizedFields('additionalProperties')
class MetadataValue(_messages.Message):
r"""Service-specific metadata associated with the operation. It typically
contains progress information and common metadata such as create time.
Some services might not provide such metadata. Any method that returns a
long-running operation should document the metadata type, if any.
Messages:
AdditionalProperty: An additional property for a MetadataValue object.
Fields:
additionalProperties: Properties of the object. Contains field @type
with type URL.
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a MetadataValue object.
Fields:
key: Name of the additional property.
value: A extra_types.JsonValue attribute.
"""
key = _messages.StringField(1)
value = _messages.MessageField('extra_types.JsonValue', 2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
@encoding.MapUnrecognizedFields('additionalProperties')
class ResponseValue(_messages.Message):
r"""The normal response of the operation in case of success. If the
original method returns no data on success, such as `Delete`, the response
is `google.protobuf.Empty`. If the original method is standard
`Get`/`Create`/`Update`, the response should be the resource. For other
methods, the response should have the type `XxxResponse`, where `Xxx` is
the original method name. For example, if the original method name is
`TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
Messages:
AdditionalProperty: An additional property for a ResponseValue object.
Fields:
additionalProperties: Properties of the object. Contains field @type
with type URL.
"""
class AdditionalProperty(_messages.Message):
r"""An additional property for a ResponseValue object.
Fields:
key: Name of the additional property.
value: A extra_types.JsonValue attribute.
"""
key = _messages.StringField(1)
value = _messages.MessageField('extra_types.JsonValue', 2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
done = _messages.BooleanField(1)
error = _messages.MessageField('GoogleRpcStatus', 2)
metadata = _messages.MessageField('MetadataValue', 3)
name = _messages.StringField(4)
response = _messages.MessageField('ResponseValue', 5)
class Policy(_messages.Message):
r"""An Identity and Access Management (IAM) policy, which specifies access
controls for Google Cloud resources. A `Policy` is a collection of
`bindings`. A `binding` binds one or more `members` to a single `role`.
Members can be user accounts, service accounts, Google groups, and domains
(such as G Suite). A `role` is a named list of permissions; each `role` can
be an IAM predefined role or a user-created custom role. Optionally, a
`binding` can specify a `condition`, which is a logical expression that
allows access to a resource only if the expression evaluates to `true`. A
condition can add constraints based on attributes of the request, the
resource, or both. **JSON example:** { "bindings": [ {
"role": "roles/resourcemanager.organizationAdmin", "members": [
"user:mike@example.com", "group:admins@example.com",
"domain:google.com", "serviceAccount:my-project-
id@appspot.gserviceaccount.com" ] }, {
"role": "roles/resourcemanager.organizationViewer", "members":
["user:eve@example.com"], "condition": { "title":
"expirable access", "description": "Does not grant access after
Sep 2020", "expression": "request.time <
timestamp('2020-10-01T00:00:00.000Z')", } } ],
"etag": "BwWWja0YfJA=", "version": 3 } **YAML example:**
bindings: - members: - user:mike@example.com -
group:admins@example.com - domain:google.com - serviceAccount
:my-project-id@appspot.gserviceaccount.com role:
roles/resourcemanager.organizationAdmin - members: -
user:eve@example.com role: roles/resourcemanager.organizationViewer
condition: title: expirable access description: Does not
grant access after Sep 2020 expression: request.time <
timestamp('2020-10-01T00:00:00.000Z') - etag: BwWWja0YfJA= -
version: 3 For a description of IAM and its features, see the [IAM
documentation](https://cloud.google.com/iam/docs/).
Fields:
auditConfigs: Specifies cloud audit logging configuration for this policy.
bindings: Associates a list of `members` to a `role`. Optionally, may
specify a `condition` that determines how and when the `bindings` are
applied. Each of the `bindings` must contain at least one member.
etag: `etag` is used for optimistic concurrency control as a way to help
prevent simultaneous updates of a policy from overwriting each other. It
is strongly suggested that systems make use of the `etag` in the read-
modify-write cycle to perform policy updates in order to avoid race
conditions: An `etag` is returned in the response to `getIamPolicy`, and
systems are expected to put that etag in the request to `setIamPolicy`
to ensure that their change will be applied to the same version of the
policy. **Important:** If you use IAM Conditions, you must include the
`etag` field whenever you call `setIamPolicy`. If you omit this field,
then IAM allows you to overwrite a version `3` policy with a version `1`
policy, and all of the conditions in the version `3` policy are lost.
version: Specifies the format of the policy. Valid values are `0`, `1`,
and `3`. Requests that specify an invalid value are rejected. Any
operation that affects conditional role bindings must specify version
`3`. This requirement applies to the following operations: * Getting a
policy that includes a conditional role binding * Adding a conditional
role binding to a policy * Changing a conditional role binding in a
policy * Removing any role binding, with or without a condition, from a
policy that includes conditions **Important:** If you use IAM
Conditions, you must include the `etag` field whenever you call
`setIamPolicy`. If you omit this field, then IAM allows you to overwrite
a version `3` policy with a version `1` policy, and all of the
conditions in the version `3` policy are lost. If a policy does not
include any conditions, operations on that policy may specify any valid
version or leave the field unset.
"""
auditConfigs = _messages.MessageField('AuditConfig', 1, repeated=True)
bindings = _messages.MessageField('Binding', 2, repeated=True)
etag = _messages.BytesField(3)
version = _messages.IntegerField(4, variant=_messages.Variant.INT32)
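# The JSON example in the Policy docstring above, written out as a plain
# Python structure. This is useful as a reference when composing a
# SetIamPolicyRequest body by hand; values are copied from the docstring and
# the etag is illustrative:

```python
# Version 3 is required because the second binding carries a condition.
example_policy = {
    'bindings': [
        {
            'role': 'roles/resourcemanager.organizationAdmin',
            'members': [
                'user:mike@example.com',
                'group:admins@example.com',
                'domain:google.com',
                'serviceAccount:my-project-id@appspot.gserviceaccount.com',
            ],
        },
        {
            'role': 'roles/resourcemanager.organizationViewer',
            'members': ['user:eve@example.com'],
            'condition': {
                'title': 'expirable access',
                'description': 'Does not grant access after Sep 2020',
                'expression':
                    "request.time < timestamp('2020-10-01T00:00:00.000Z')",
            },
        },
    ],
    'etag': 'BwWWja0YfJA=',
    'version': 3,
}
```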
class SetIamPolicyRequest(_messages.Message):
r"""Request message for `SetIamPolicy` method.
Fields:
policy: REQUIRED: The complete policy to be applied to the `resource`. The
size of the policy is limited to a few 10s of KB. An empty policy is a
valid policy but certain Cloud Platform services (such as Projects)
might reject them.
updateMask: OPTIONAL: A FieldMask specifying which fields of the policy to
modify. Only the fields in the mask will be modified. If no mask is
provided, the following default mask is used: paths: "bindings, etag"
This field is only used by Cloud IAM.
"""
policy = _messages.MessageField('Policy', 1)
updateMask = _messages.StringField(2)
class StandardQueryParameters(_messages.Message):
r"""Query parameters accepted by all methods.
Enums:
FXgafvValueValuesEnum: V1 error format.
AltValueValuesEnum: Data format for response.
Fields:
f__xgafv: V1 error format.
access_token: OAuth access token.
alt: Data format for response.
callback: JSONP
fields: Selector specifying which fields to include in a partial response.
key: API key. Your API key identifies your project and provides you with
API access, quota, and reports. Required unless you provide an OAuth 2.0
token.
oauth_token: OAuth 2.0 token for the current user.
prettyPrint: Returns response with indentations and line breaks.
quotaUser: Available to use for quota purposes for server-side
applications. Can be any arbitrary string assigned to a user, but should
not exceed 40 characters.
trace: A tracing token of the form "token:<tokenid>" to include in api
requests.
uploadType: Legacy upload protocol for media (e.g. "media", "multipart").
upload_protocol: Upload protocol for media (e.g. "raw", "multipart").
"""
class AltValueValuesEnum(_messages.Enum):
r"""Data format for response.
Values:
json: Responses with Content-Type of application/json
media: Media download with context-dependent Content-Type
proto: Responses with Content-Type of application/x-protobuf
"""
json = 0
media = 1
proto = 2
class FXgafvValueValuesEnum(_messages.Enum):
r"""V1 error format.
Values:
_1: v1 error format
_2: v2 error format
"""
_1 = 0
_2 = 1
f__xgafv = _messages.EnumField('FXgafvValueValuesEnum', 1)
access_token = _messages.StringField(2)
alt = _messages.EnumField('AltValueValuesEnum', 3, default=u'json')
callback = _messages.StringField(4)
fields = _messages.StringField(5)
key = _messages.StringField(6)
oauth_token = _messages.StringField(7)
prettyPrint = _messages.BooleanField(8, default=True)
quotaUser = _messages.StringField(9)
trace = _messages.StringField(10)
uploadType = _messages.StringField(11)
upload_protocol = _messages.StringField(12)
class TestIamPermissionsRequest(_messages.Message):
r"""Request message for `TestIamPermissions` method.
Fields:
permissions: The set of permissions to check for the `resource`.
Permissions with wildcards (such as '*' or 'storage.*') are not allowed.
For more information see [IAM
Overview](https://cloud.google.com/iam/docs/overview#permissions).
"""
permissions = _messages.StringField(1, repeated=True)
class TestIamPermissionsResponse(_messages.Message):
r"""Response message for `TestIamPermissions` method.
Fields:
permissions: A subset of `TestPermissionsRequest.permissions` that the
caller is allowed.
"""
permissions = _messages.StringField(1, repeated=True)
class TypeMeta(_messages.Message):
r"""TypeMeta is the type information needed for content unmarshalling of the
Kubernetes resources in the manifest.
Fields:
apiVersion: APIVersion of the resource (e.g. v1).
kind: Kind of the resource (e.g. Deployment).
"""
apiVersion = _messages.StringField(1)
kind = _messages.StringField(2)
encoding.AddCustomJsonFieldMapping(
StandardQueryParameters, 'f__xgafv', '$.xgafv')
encoding.AddCustomJsonEnumMapping(
StandardQueryParameters.FXgafvValueValuesEnum, '_1', '1')
encoding.AddCustomJsonEnumMapping(
StandardQueryParameters.FXgafvValueValuesEnum, '_2', '2')
| [
"e1517234@soka-u.jp"
] | e1517234@soka-u.jp |
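The `AddCustomJsonEnumMapping` calls above exist because a Python identifier cannot begin with a digit, so the wire values `1` and `2` must be exposed as the names `_1` and `_2`. A dependency-free sketch of that two-way name mapping (the dicts and function names here are illustrative, not apitools APIs):

```python
# Python identifiers cannot start with a digit, so the JSON wire values
# "1" and "2" are surfaced as the names "_1" and "_2".
PY_TO_JSON = {"_1": "1", "_2": "2"}
JSON_TO_PY = {v: k for k, v in PY_TO_JSON.items()}

def encode_enum(py_name):
    """Translate a Python-safe enum name to its JSON wire value."""
    return PY_TO_JSON.get(py_name, py_name)

def decode_enum(wire_value):
    """Translate a JSON wire value back to the Python-safe enum name."""
    return JSON_TO_PY.get(wire_value, wire_value)

print(encode_enum("_1"))  # prints 1
print(decode_enum("2"))   # prints _2
```

Names without a custom mapping (such as `json`) pass through unchanged, mirroring how only the digit-prefixed enum values need special handling.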
413a7be6ca4ace1d3d30219808ad7200771171d3 | 4d384867f2cad0cf2ca87c1649418418e5568b3d | /Figus y sus códigos/Códigos/sonido.py | 317c16f15266b4b5d4b18904fcff1b1247406a0b | [] | no_license | mwappner/ComoSuenaLaFisica | ce0df9e6ae6773bb4ccf203ac678dec1c47a9d6d | ffc00d7d3b82488ba2a77072a489f6ac56345c4f | refs/heads/master | 2020-06-03T02:29:46.877906 | 2019-10-30T16:37:34 | 2019-10-30T16:37:34 | 191,395,693 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 952 | py | # -*- coding: utf-8 -*-
"""
Created on Mon Oct 28 18:02:14 2019
@author: Marcos
"""
import numpy as np
import matplotlib.pyplot as plt
#%%
cantx = 500 #horizontal point density
canty = 20 #vertical point density
r = np.random.rand(cantx, canty)*2.4
x = np.linspace(0, 20, cantx)
s = np.sin(x) + 1.1
donde = np.array([r[:,i]<s for i in range(r.shape[-1])]).T
plt.plot(r[donde], 'k.')
y = np.random.rand(*r.shape)
y[donde] = np.nan
#with plt.xkcd():
fig, (ax1, ax2) = plt.subplots(2,1, sharex=True, squeeze=True,
        gridspec_kw={'height_ratios':(3,1)}) #height ratios: make the bottom axis smaller
fig.set_size_inches([9, 2]) #figure size
fig.subplots_adjust(hspace=0) #keep the axes flush against each other
ax1.plot(x, y, 'k.')
ax1.axis('off')
ax2.plot(x, -s, lw=5)
ax2.set_ylim((0.1, -2.3))
ax2.axis('off')
fig.subplots_adjust(bottom = 0, top = 1, left = 0, right = 1)
plt.savefig('Figus/sonido.png', dpi=1000)
| [
"42619775+mwappner@users.noreply.github.com"
] | 42619775+mwappner@users.noreply.github.com |
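`sonido.py` above builds the boolean `donde` mask to keep only the random points that fall under a sine envelope. The same rejection-sampling idea in a dependency-free sketch (no numpy or matplotlib; the ranges are copied from the script, the names are illustrative):

```python
import math
import random

def sample_under_curve(n, seed=0):
    """Keep uniform random points (x, y) that fall below sin(x) + 1.1."""
    rng = random.Random(seed)          # seeded for reproducibility
    kept = []
    for _ in range(n):
        x = rng.uniform(0.0, 20.0)     # same horizontal range as np.linspace above
        y = rng.uniform(0.0, 2.4)      # same vertical scale as r * 2.4 above
        if y < math.sin(x) + 1.1:      # the sine envelope from sonido.py
            kept.append((x, y))
    return kept

points = sample_under_curve(1000)
# Every kept point lies under the envelope by construction.
```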
f151e8badd6b1cb50965d9bd65e92835c2ea1db8 | e5abf2028b9e0b39a5bf905f14c401d3645bdb9a | /display.py | 2bcbbfdf5758468c37a0db038d2334e6b808bfba | [
"MIT"
] | permissive | vieirafrancisco/car-adventure | 2d2723e44fcb216f2ea37c1b35a1ec5f6f6fba8a | 79a86d830699f131fd4e4aa2031969aa7eae1a50 | refs/heads/master | 2020-03-30T00:01:11.845899 | 2018-09-28T22:27:57 | 2018-09-28T22:27:57 | 150,501,069 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 407 | py | import pygame
class DisplaySurface:
def __init__(self, width, height):
self.width = width
self.height = height
self._size = (self.width, self.height)
self._display_surface = pygame.display.set_mode(self._size, pygame.HWSURFACE | pygame.DOUBLEBUF)
def get_display_surface(self):
return self._display_surface
def get_size(self):
return self._size | [
"fvcneto.master98@gmail.com"
] | fvcneto.master98@gmail.com |
789e2a917fbd75552d71f5326c08785dc1fb86bb | d915e22f4217267e91e89f3ca34fd056e7bcd101 | /Planilhas e PPTs/plot_model.py | a57a5c8cb39c07364ea94e2bf5fb67cc3ea0ee8b | [] | no_license | Carol2212/tcc-matheuslima | 0bc6e2fd3db6b64acf0a19938b5f753d6dd2df5c | a0639e6097b4837224162b348ba3215507c49a6d | refs/heads/master | 2023-03-24T22:40:30.725333 | 2021-03-24T17:34:33 | 2021-03-24T17:34:33 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,373 | py | import matplotlib.pyplot as plt
import numpy as np
import os
# -- File names
RESULTS_PATH = 'results-true'
REAL_CHECK_DATANAME = 'res_real_checkpoint_test.npy'
REAL_LAST_DATANAME = 'res_real_lastepoch_test.npy'
PRED_CHECK_DATANAME = 'res_prediction_checkpoint_test.npy'
PRED_LAST_DATANAME = 'res_prediction_lastepoch_test.npy'
MODEL_BASENAME = 'model_foldtraining_'
FOLD_BASENAME = 'fold_'
NMB_OF_FOLDS = 10
# -- Plotting function
def plotAudioPowerWithPrediction(testSamples,predictedSamples):
plt.close('all')
plt.figure("Audio Power")
audio_length = testSamples.shape[0]
time = np.linspace(0., 0.33333333*audio_length, audio_length)
plt.plot(time, testSamples, label="Test Samples")
plt.plot(time, predictedSamples, label="Predicted Samples")
plt.legend()
plt.xlabel("Time [s]")
plt.ylabel("Amplitude")
plt.title("Audio timeline")
plt.show()
# -- Moving average
def moving_average(a, n):
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
if __name__ == '__main__':
model = input('Selecione o modelo: ')
fold = input("Selecione o fold (obrigatorio): ")
checkpoint = input("checkpoint ou last epoch? (check: 0; last: 1): ")
ma_n = input("plotar grafico com filtro de moving average? (0 para nao, N para valor de moving average): ")
if ma_n == "0" or ma_n == '':
ma_n = "1"
model = MODEL_BASENAME+model
fold = FOLD_BASENAME+fold
print("carregando model",model,"do fold",fold,"e moving average",ma_n)
pred_check = np.load(os.path.join(RESULTS_PATH, fold, model, PRED_CHECK_DATANAME))
real_check = np.load(os.path.join(RESULTS_PATH, fold, model, REAL_CHECK_DATANAME))
pred_last = np.load(os.path.join(RESULTS_PATH, fold, model, PRED_LAST_DATANAME))
real_last = np.load(os.path.join(RESULTS_PATH, fold, model, REAL_LAST_DATANAME))
pred_check = moving_average(pred_check, int(ma_n))
real_check = moving_average(real_check, int(ma_n))
pred_last = moving_average(pred_last, int(ma_n))
real_last = moving_average(real_last, int(ma_n))
if checkpoint == "0":
print("Plotando checkpoint")
plotAudioPowerWithPrediction(real_check, pred_check)
elif checkpoint == "1":
print("Plotando last epoch")
plotAudioPowerWithPrediction(real_last, pred_last)
| [
"m.mathelima@gmail.com"
] | m.mathelima@gmail.com |
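`moving_average` above uses `np.cumsum` so that every window sum costs O(1) instead of O(n). A pure-Python version of the same cumulative-sum trick, checked against the naive definition (illustrative, not part of the original script):

```python
def moving_average(a, n):
    """O(len(a)) moving average via a running cumulative sum."""
    cums = [0.0]
    for x in a:
        cums.append(cums[-1] + x)
    # Each window sum is a difference of two prefix sums: cums[i+n] - cums[i].
    return [(cums[i + n] - cums[i]) / n for i in range(len(a) - n + 1)]

data = [1, 2, 3, 4, 5]
print(moving_average(data, 2))  # [1.5, 2.5, 3.5, 4.5]
```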
da7713dfd819e7b455506bb5b7e7568ae13edd35 | 40b4138ad35c39c23157be91b65e0c7c8f2504c1 | /data_structure/df_plot.py | 31c583a88220dbd9adb05cb6121452dcc0298d61 | [] | no_license | kar00034/python_pandas_study | 1a750724598ac310ff3e6c23660f2e3cc29668e3 | 3488477386ac29e23b63d69bfe173a632bf060d3 | refs/heads/master | 2020-12-02T01:27:53.004396 | 2020-01-03T03:17:38 | 2020-01-03T03:17:38 | 230,169,896 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 598 | py | import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
#한글설정
matplotlib.rcParams['font.family'] = 'NanumGothic'
matplotlib.rcParams['axes.unicode_minus'] = False
df = pd.read_excel('./남북한발전전력량.xlsx')
df_ns = df.iloc[[0,5],3:]
df_ns.index = ['South','North']
df_ns.columns = df_ns.columns.map(int)
print(df_ns.head())
print()
#draw a line graph
df_ns.plot(title = '선그래프 그리기')
# transpose rows and columns, then plot again
tdf_ns = df_ns.T
print(tdf_ns.head())
#plt.subplot(132)
tdf_ns.plot(title = '전치하여 다시 그리기')
plt.show() | [
"kar00034@naver.com"
] | kar00034@naver.com |
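`df_ns.T` above transposes the frame so that years become the plotting index. The underlying row/column swap can be sketched without pandas (the region names mirror the script, the numbers are made up for illustration):

```python
# Rows: one per region; columns: one per year -- mirroring df_ns above.
years = [1991, 1992, 1993]
rows = {"South": [100, 110, 120], "North": [50, 55, 60]}

# Transpose: one entry per year, each holding one value per region.
regions = list(rows)
transposed = {year: [rows[r][i] for r in regions]
              for i, year in enumerate(years)}
print(transposed[1992])  # [110, 55]
```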
15b646de20871780709cb26e3cf827bbafb5170f | ce3f114a0bc62affce0a474999c76a1e7f605b9d | /anekdoter/anekdoter/asgi.py | fe522d5c4adb51d80b629d9ecabf453c05cb1b07 | [] | no_license | mmm-da/prikolambus | 6cb6eeb1414931ffb65229c762a01ed6880c7207 | 8f486887273b8a465716b18f7b77e342f6dcf596 | refs/heads/main | 2023-03-19T05:36:46.862922 | 2021-03-13T16:24:32 | 2021-03-13T16:24:32 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 170 | py | import os
from django.core.asgi import get_asgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'anekdoter.settings')
application = get_asgi_application()
| [
"39336091+spAm25@users.noreply.github.com"
] | 39336091+spAm25@users.noreply.github.com |
c032c946646a001372909011d81229fc40d7d405 | 2c01504656368ebde4cb8187df253affa5eed280 | /repositories/player_repository.py | 1b45976f1afa642df67a1ad7ad39999d69b11c48 | [] | no_license | saadtarikk/Project1_sports_score_app | 26e71ef1d09a1b957228f1cdf57b992c82353ac4 | 350f32c0631b7a7ef4123bf1c101e1fc101f915d | refs/heads/main | 2023-06-21T01:12:01.398068 | 2021-07-26T12:35:45 | 2021-07-26T12:35:45 | 371,654,381 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,624 | py | from pdb import run
from db.run_sql import run_sql
from repositories import player_repository, team_repository
from models.player import Player
from models.team import Team
def save(player):
sql = "INSERT INTO players (player_name, fouls, goals, team_id) VALUES (%s, %s, %s, %s) RETURNING *"
values = [player.player_name, player.fouls,
player.goals, player.team_id.id]
results = run_sql(sql, values)
id = results[0]['id']
player.id = id
return player
def select_all():
players = []
sql = "SELECT * FROM players"
results = run_sql(sql)
for row in results:
team = team_repository.select(row['team_id'])
player = Player(row['player_name'], row['fouls'],
row['goals'], team, row['id'])
players.append(player)
return players
def select(id):
player = None
sql = "SELECT * FROM players WHERE id = %s"
values = [id]
result = run_sql(sql, values)[0]
if result is not None:
team = team_repository.select(result['team_id'])
player = Player(result['player_name'], result['fouls'],
result['goals'], team, result['id'])
return player
def delete_all():
sql = "DELETE FROM players"
run_sql(sql)
def delete(id):
sql = "DELETE FROM players WHERE id = %s"
values = [id]
run_sql(sql, values)
def update(player):
sql = "UPDATE players SET (player_name, fouls, goals, team_id) = (%s, %s, %s, %s) WHERE id = %s"
    values = [player.player_name, player.fouls,
              player.goals, player.team_id.id, player.id]
run_sql(sql, values)
| [
"saad.tarik@outlook.com"
] | saad.tarik@outlook.com |
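The repository above always passes values separately from the SQL text, which is what makes the queries injection-safe. A self-contained sketch of the same pattern using the stdlib `sqlite3`, which takes `?` placeholders instead of `%s` (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (id INTEGER PRIMARY KEY, "
             "player_name TEXT, fouls INTEGER, goals INTEGER)")

# Parameterised INSERT: the values are never concatenated into the SQL string.
cur = conn.execute("INSERT INTO players (player_name, fouls, goals) "
                   "VALUES (?, ?, ?)", ("Ada", 2, 5))
new_id = cur.lastrowid

row = conn.execute("SELECT player_name, goals FROM players WHERE id = ?",
                   (new_id,)).fetchone()
print(row)  # ('Ada', 5)

conn.execute("UPDATE players SET goals = ? WHERE id = ?", (6, new_id))
goals = conn.execute("SELECT goals FROM players WHERE id = ?",
                     (new_id,)).fetchone()[0]
print(goals)  # 6
```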
b2b1b8bd829c84c9304597d47c6e4be521da222e | dc47139cd8401e30e200e51db5dd9cf2fdc00a62 | /nju_mobile_scrapper.py | 138e37eaeeae6ae85bdfb897e60da4e9898cf28a | [
"MIT"
] | permissive | startled-cat/nju_mobile_web_scrapper | 1516c79486c8aa62f00e5290b086ffefc658be55 | 267d13eeeea759d3e8c7a3995baf7a7211efef49 | refs/heads/main | 2023-04-15T01:16:55.214539 | 2021-04-22T15:08:37 | 2021-04-22T15:08:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,532 | py | # scrapper
import requests
from lxml import html
import time
PHONE_NR = "123456789"
PASSWORD = "password"
print("initializing ...")
sleep_time = 2
maxFetchTries = 5
# https://www.njumobile.pl/logowanie?backUrl=/mojekonto/stan-konta
session_requests = requests.session()
login_url = "https://www.njumobile.pl/logowanie?backUrl=/mojekonto/stan-konta"
result = session_requests.get(login_url)
tree = html.fromstring(result.text)
authenticity_token = list(set(tree.xpath("//input[@name='_dynSessConf']/@value")))[0]
post_url = "https://www.njumobile.pl/logowanie?_DARGS=/profile-processes/login/login.jsp.portal-login-form"
payload = {
"login-form": PHONE_NR,
"password-form": PASSWORD,
"/ptk/sun/login/formhandler/LoginFormHandler.backUrl": "/mojekonto/stan-konta",
"_dynSessConf": authenticity_token,
"_dyncharset": "UTF-8",
"login-submit": "zaloguj się",
"_DARGS": "/profile-processes/login/login.jsp.portal-login-form",
"_D:/ptk/sun/login/formhandler/LoginFormHandler.backUrl": "",
"/ptk/sun/login/formhandler/LoginFormHandler.hashMsisdn": "",
"_D:/ptk/sun/login/formhandler/LoginFormHandler.hashMsisdn": "",
"_D:login-form": "",
"_D:password-form": "",
"_D:login-submit": "",
}
headers = {
#"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
#"Accept-Encoding": "gzip, deflate, br",
#"Accept-Language": "en-GB,en;q=0.9,pl;q=0.8,en-US;q=0.7",
#"Content-Type": "application/x-www-form-urlencoded",
#"Cookie": DMP=DMP-NJU--2020.10.15.12.21.02.212-gxyLxKktNl; USID=44a62e38e822e0d717aa993f15bead7b; _fbp=fb.1.1602757271824.113644313; DMP_PROFILE_ID=ac232cbf9e55e1d95212f02a34be792b58ce59b14a18943017c443b249cd617d; DMP_HASH_GLOBAL_ID_2=6380E39D052A8400F12FBC9C013764CF76B6BDCD61DB02421487A8C75205BAAE; _snrs_uuid=21b378da-32c3-4a2c-aeef-b1e8b0ddc6e3; _snrs_puuid=21b378da-32c3-4a2c-aeef-b1e8b0ddc6e3; high-contrast=false; userAccessCookie=f6725115ba5748f5bacf51d959787f42acf63858; TS3f940b6d027=08cb46268eab2000acb00d14c4b42faa11d3e46dea60a2c268564d95d3fd84d551b4129b095df71408201c1389113000efd204e7e980b30b14043a290546e6e8c4b7c1c19efb408dd85efd9ec5515a5069267426470f2af19b942ff57dd90b7b; SECURED_SESSION_TOKEN=; JSESSIONID=2252D83D2C9DD9AC0C6B9C5969E05918.sunwww305; TS0180bd77=01b0228c7548a59397ffd68015354fb37158fdcb6003f7cb3b9b3f44fa5335c0b5a937cc5d9ffbef3b21bc216eaa0a1ae83efcffe7e15dbf13fba68e4cb8b8cf16b49f9db1bb42aea13aea9bb664a8ed3c3fc356f1756876961c49efe50e16a669e03bb2cfa33344393fbef8ecffefa971a79af3c2cbcfae1346ff6efdec566562a95e9c6d9b71a383b6404fbb298fffe1c48dd15c; _snrs_sa=ssuid:3af323b2-a6e8-459e-a3cb-6ba241932b5f&appear:1612458186&sessionVisits:10; _snrs_sb=ssuid:3af323b2-a6e8-459e-a3cb-6ba241932b5f&leaves:1612458579; _snrs_p=host:www.njumobile.pl&permUuid:21b378da-32c3-4a2c-aeef-b1e8b0ddc6e3&uuid:21b378da-32c3-4a2c-aeef-b1e8b0ddc6e3&emailHash:&user_hash:&init:undefined&last:1612377682.902¤t:1612458579&uniqueVisits:13&allVisits:184
#"Host": "www.njumobile.pl",
"Origin": "https://www.njumobile.pl",
#"Pragma": "no-cache",
"Referer": "https://www.njumobile.pl/logowanie?backUrl=/mojekonto/stan-konta",
#"save-data": "on",
#"Sec-Fetch-Dest": "document",
#"Sec-Fetch-Mode": "navigate",
#"Sec-Fetch-Site": "same-origin",
#"Sec-Fetch-User": "?1",
#"Sec-GPC": "1",
#"Upgrade-Insecure-Requests": "1",
#"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.146 Safari/537.36",
}
print("logging in ...")
result = session_requests.post(
post_url,
data = payload,
headers = headers
#headers = dict(referer=login_url)
)
#print('res: {}'.format(result.text))
print("login result: {}".format(result.ok))
print("sleeping for " + str(sleep_time) + " seconds ...")
#time.sleep(sleep_time)
fetched = False
triesLeft = maxFetchTries
while not fetched and triesLeft > 0:
triesLeft = triesLeft - 1
try:
#raise Exception("xdxdxd")
print("fetching status data ...")
url = 'https://www.njumobile.pl/mojekonto/stan-konta'
result = session_requests.get(
url,
headers = dict(referer = url)
)
tree = html.fromstring(result.content)
print("scrapping ...")
money_left = None
extra_mb_left = None
when_end = None
when_extra_end = None
monthly_gb_left = None
money_and_pakiet= tree.xpath("//div[@class='small-comment mobile-text-right tablet-text-right']/div/text()")
if(len(money_and_pakiet) >= 2):
money_left = money_and_pakiet[0]
extra_mb_left = money_and_pakiet[1]
when_end_raw = tree.xpath("//div[@class='four columns tablet-six mobile-twelve']/strong/text()")
if(len(when_end_raw) >= 3):
when_end = when_end_raw[2]
when_extra_end_raw = tree.xpath("//div[@class='four columns mobile-six']/strong/text()")
if(len(when_extra_end_raw) >= 3):
when_extra_end = when_extra_end_raw[2]
when_extra_end = when_extra_end[11:21]
monthly_gb_left_raw = tree.xpath("//div[@class='eleven columns']/p/strong/text()")
if(len(monthly_gb_left_raw) >= 1):
monthly_gb_left = monthly_gb_left_raw[0]
print("===============================================")
if(money_left is not None):
print("money left : " + str(money_left))
if(monthly_gb_left is not None and when_end is not None):
print("---")
print("main gb left : " + str(monthly_gb_left))
print("main valid untill : " + str(when_end))
if(extra_mb_left is not None and when_extra_end is not None):
print("---")
print("extra mb left : " + str(extra_mb_left))
print("extra valid untill : " + str(when_extra_end))
print("================================================")
fetched = True
break
except Exception as e:
print("=============== fetch try " + str(maxFetchTries-triesLeft) + " of " + str(maxFetchTries) + " ================")
print("error: ")
print(e)
if fetched :
break
else:
print("sleep 1s before retrying ...")
time.sleep(1)
if(not fetched):
print("failed to fetch data after " + str(maxFetchTries-triesLeft) + " tries")
print("(enter to exit)")
wait = input() | [
"38471959+kombajno678@users.noreply.github.com"
] | 38471959+kombajno678@users.noreply.github.com |
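The fetch loop above retries up to `maxFetchTries` times with a sleep between attempts. The same control flow extracted into a reusable helper (the names and the zero delay are illustrative):

```python
import time

def retry(func, max_tries=5, delay=0.0):
    """Call func() until it succeeds or max_tries attempts are exhausted."""
    for attempt in range(1, max_tries + 1):
        try:
            return func()
        except Exception:
            if attempt == max_tries:
                raise                 # out of tries: re-raise the last error
            time.sleep(delay)         # back off before the next attempt

calls = {"n": 0}

def flaky():
    """Fails twice, then succeeds, to exercise the retry path."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # 'ok', on the third attempt
```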
d9f2bd86ef4c1dbb84c013de8eff80ed75ed73ef | dda12fd346caea00fd2b1ee4158f84b0b3e09b32 | /GetGwebsitesPicture/myThread.py | b8da7705fdf2cfe99924450a342230b76e77aa3f | [] | no_license | Centurywang/PythonCode | 4186bf0cae19915e0bb4aedb0fe44521c101b599 | d8b10f56969e9937f33291c67590ccfd49ec8056 | refs/heads/master | 2020-04-14T12:36:16.306929 | 2019-01-02T13:50:45 | 2019-01-02T13:50:45 | 163,845,241 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 547 | py | import threading
exitFlag = 0
class myThread (threading.Thread):
def __init__(self, name, counter,urls,function):
threading.Thread.__init__(self)
self.name = name
self.counter = counter
self.urls = urls
self.function = function
def run(self):
print ("开始线程:" + self.name)
self.function(self.urls)
print ("退出线程:" + self.name)
if __name__ == "__main__":
import json
with open('PictureUrls.json') as f:
z = json.load(f)
print(len(z)) | [
"34159085+Centurywang@users.noreply.github.com"
] | 34159085+Centurywang@users.noreply.github.com |
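`myThread` above wraps a target function in a `threading.Thread` subclass, but the `__main__` block never actually starts one. A minimal runnable sketch of the same pattern, including the `join()` needed before reading results (names are illustrative):

```python
import threading

class Worker(threading.Thread):
    """Thread that applies `function` to `urls`, like myThread above."""
    def __init__(self, name, urls, function):
        super().__init__(name=name)
        self.urls = urls
        self.function = function

    def run(self):
        self.function(self.urls)

results = []
t = Worker("w1", ["u1", "u2"], lambda urls: results.extend(urls))
t.start()
t.join()          # wait for the worker before reading `results`
print(results)    # ['u1', 'u2']
```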
6f310f436ac9574a69159a506b99a3faa814ef2b | f9b6c56cec99eb2147777c4448b4b8ad757ff074 | /longest_harmounious_subsequence.py | 1f2b1f59cd7bdf90c4e192bd21e008bf7b4f26d3 | [] | no_license | zhrmrz/longest_harmounious_subsequence | 268676d4c1d7f76cddb10fcaa42fb8718689f3c6 | 71ddac4edd4d3948d462aae430ba7154f4aa921f | refs/heads/master | 2020-08-29T03:39:23.468859 | 2019-10-27T20:34:52 | 2019-10-27T20:34:52 | 217,913,179 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 273 | py | from collections import Counter
class Sol:
    def longest_harmounious_subsequence(self, nums):
        max_subarr = 0
        freq = Counter(nums)
        for num, count in freq.items():
            if num + 1 in freq:
                max_subarr = max(max_subarr, count + freq[num + 1])
        return max_subarr
| [
"noreply@github.com"
] | zhrmrz.noreply@github.com |
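For reference, the frequency-count idea above run end to end on the classic example; the expected answer 5 comes from the subsequence [3, 2, 2, 2, 3]. This standalone version includes the `return` the class method needs:

```python
from collections import Counter

def find_lhs(nums):
    """Longest subsequence whose max and min differ by exactly 1."""
    freq = Counter(nums)
    best = 0
    for num, count in freq.items():
        if num + 1 in freq:                          # harmonious pair (num, num+1)
            best = max(best, count + freq[num + 1])
    return best

print(find_lhs([1, 3, 2, 2, 5, 2, 3, 7]))  # 5
```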
9c07b913f0c62b385a538a897e807a101752ace2 | 156153580ff91a297b8aacb905fdf4470895daed | /Tests/Section2/test_ConnectedCell.py | 6654088c9b57ef3db99efd9a219a5da6dab01c53 | [] | no_license | fiona-young/PythonHackerRank | 2bad43530088a3138ce0f9e5bc8dbabe30686980 | f24e1b6b63b41f49e2232e8878809bdad66ab5e5 | refs/heads/master | 2022-09-26T10:51:27.391699 | 2016-05-30T09:41:10 | 2016-05-30T09:41:10 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,013 | py | from unittest import TestCase
import sys
import io
from Section2.ConnectedCellDfs import main
class TestConnectedCell(TestCase):
def test_initial_case(self):
input_string = '''4
4
1 1 0 0
0 1 1 0
0 0 1 0
1 0 0 0
'''
result = '''5
'''
sys.stdin = io.StringIO(input_string)
sys.stdout = io.StringIO()
main()
self.assertEqual(result, sys.stdout.getvalue())
def test_case1(self):
input_string = '''7
5
1 1 1 0 1
0 0 1 0 0
1 1 0 1 0
0 1 1 0 0
0 0 0 0 0
0 1 0 0 0
0 0 1 1 0
'''
result = '''9
'''
sys.stdin = io.StringIO(input_string)
sys.stdout = io.StringIO()
main()
self.assertEqual(result, sys.stdout.getvalue())
def test_case2(self):
input_string = '''5
5
0 1 1 1 1
1 0 0 0 1
1 1 0 1 0
0 1 0 1 1
0 1 1 1 0
'''
result = '''15
'''
sys.stdin = io.StringIO(input_string)
sys.stdout = io.StringIO()
main()
self.assertEqual(result, sys.stdout.getvalue())
| [
"fionalmatters@gmail.com"
] | fionalmatters@gmail.com |
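The unit tests above drive a `main()` in `ConnectedCellDfs` that reads a grid and prints the size of the largest 8-connected region of 1s. A sketch of the core search those tests imply, written iteratively to avoid recursion limits (this is an illustration, not the tested module):

```python
def largest_region(grid):
    """Size of the largest 8-connected component of 1-cells."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    best = 0
    for sr in range(rows):
        for sc in range(cols):
            if grid[sr][sc] != 1 or (sr, sc) in seen:
                continue
            # Iterative DFS over the component containing (sr, sc).
            stack, size = [(sr, sc)], 0
            seen.add((sr, sc))
            while stack:
                r, c = stack.pop()
                size += 1
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == 1
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
            best = max(best, size)
    return best

grid = [[1, 1, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0]]
print(largest_region(grid))  # 5, matching test_initial_case
```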
1c5a8082db1f42da68347fd178cb6dc4015a76ac | 6ec8a9edfba0a2619ede68250d4bda705d0d0893 | /1/binderPerEpitope/get_absent_ranks.py | 3a40e5f45f3aee645a0fd1199f0a92e6294faaaa | [] | no_license | lgdc-ufpa/predictive-immunogenetic-markers-in-covid-19 | f7499b8e8fad0f4da79296044697f38babdfb3fc | 06d58bd9921a758deb4951dfadf388d86c127dc0 | refs/heads/master | 2023-02-17T08:43:20.639751 | 2021-01-17T17:14:55 | 2021-01-17T17:14:55 | 330,265,115 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,146 | py | import os
from openpyxl import load_workbook, Workbook
hla_directory = os.getcwd() + '/files/'
hla_files = sorted(os.listdir(hla_directory), reverse=False)
for elem in hla_files:
if '.xls.xlsx' in elem:
os.rename(str(hla_directory + elem), str(hla_directory + elem[:2] + 'xlsx'))
hla_directory = os.getcwd() + '/files/'
hla_files = sorted(os.listdir(hla_directory), reverse=False)
for index_hla, hla_file in enumerate(hla_files):
li_absent_rank_files = []
# proteins[protein_name] = [NumBinders, NumStrongBinders]
hla_book = load_workbook(hla_directory + hla_file)
hla_sheet = hla_book.active
hla_name = str(hla_sheet['D1'].value).replace(':', '_')
print(hla_file, hla_name)
index = 3
while str(hla_sheet['C' + str(index)].value) != 'None':
if str(hla_sheet['H' + str(index)].value) == 'None':
li_absent_rank_files.append([hla_file, hla_name,
str(hla_sheet['A' + str(index)].value),
hla_sheet['B' + str(index)].value,
hla_sheet['C' + str(index)].value,
hla_sheet['D' + str(index)].value,
hla_sheet['E' + str(index)].value,
str(hla_sheet['F' + str(index)].value),
str(hla_sheet['G' + str(index)].value),
str(hla_sheet['H' + str(index)].value),
str(hla_sheet['I' + str(index)].value),
str(hla_sheet['J' + str(index)].value)])
index += 1
hla_book.close()
    # TODO: store the result in a csv file
with open(os.getcwd() + f'{os.sep}absent_ranks.csv', 'a') as f:
header = 'File,hla_name,Pos,Peptide,ID,core,icore,l-log50k,nM,Rank,Ave,Nb\n'
f.write(header)
for elem in li_absent_rank_files:
line = ['None' if v is None else v for v in elem]
line = ",".join(line).replace('None', '') + '\n'
f.write(line)
f.close()
| [
"brunoconde.ufpa@gmail.com"
] | brunoconde.ufpa@gmail.com |
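The manual joining above (mapping `None` to `''` and comma-joining each row) is what the stdlib `csv` module does more robustly, including quoting fields that themselves contain commas. A sketch using a shortened version of the script's header (the rows are made up):

```python
import csv
import io

header = ["File", "hla_name", "Pos", "Peptide", "Rank"]
rows = [
    ["a.xlsx", "HLA-A_01", "12", "SIINFEKL", None],   # missing Rank
    ["b.xlsx", "HLA-B_07", "3", "AAAWYLWEV", None],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)                      # header written once, not per file
for row in rows:
    writer.writerow(["" if v is None else v for v in row])

print(buf.getvalue().splitlines()[1])  # a.xlsx,HLA-A_01,12,SIINFEKL,
```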
611a2b09ca927db5d34e83c7de96170e37583a7a | fdc0b72a3782a06952df4d723783dfa1bae65753 | /admin_request_for_information/models/__init__.py | a840dc8f0a87fc036e57720fc0f08c1b21d27938 | [] | no_license | Denbho/vendor_portal | 0878ad82bf3c40d38f6e123f6b25a358bfebce4f | 341a7ca77cbd310f3835d4b43de5012354a307c5 | refs/heads/main | 2023-04-19T21:26:56.115346 | 2021-05-17T04:16:53 | 2021-05-17T04:16:53 | 364,744,567 | 2 | 0 | null | 2021-05-06T04:34:51 | 2021-05-06T00:52:09 | Python | UTF-8 | Python | false | false | 62 | py | # -*- coding: utf-8 -*-
from . import request_for_information
| [
"dennisboysilva@gmail.com"
] | dennisboysilva@gmail.com |
18c6afd4da565049a1e5345d0f1a679be78687d7 | 21931bf5647ccd5058b760527bf6c7b35e585cc7 | /common_methods.py | 5ec5b143fcc566b19dc48934d221432aee10b31d | [
"MIT"
] | permissive | kakakacool/nyx | 3a7f5bbe9527cfa5c8ddcb8fd6e91c8bfbcc1c6b | 70d550294bd1b957a504b961480eff16c6ee3c50 | refs/heads/master | 2021-01-21T07:26:25.811449 | 2015-01-07T05:41:31 | 2015-01-07T05:41:31 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,533 | py | def get_sources(indicator):
""" appends the sources of an indicator in a string"""
source_arr=[]
if 'source' in indicator.keys():
for source in indicator['source']:
            if source['name'] not in source_arr:
source_arr.append(source['name'])
if source_arr:
return ','.join(source_arr)
else:
return "CRITs"
def get_intel_confidence(indicator):
""" sets the confidence to the highest confidence source.
I am starting the confidence level with the first campaign, then adding some points for each subsequent one.
The idea is that the more distinct campaigns this indicator is a part of, the more certain we can be that
it is not a false positive"""
initial_score = {'low':30, 'medium':50, 'high':75}
add_score={'low':5,'medium':10,'high':25}
    # setting the confidence to parallel the highest-confidence source
processed_campaigns=[indicator['campaign'][0]['name']]
confidence=initial_score[indicator['campaign'][0]['confidence']]
for campaign in indicator['campaign']:
if not campaign['name'] in processed_campaigns:
confidence+=add_score[campaign['confidence']]
processed_campaigns.append(campaign['name'])
if confidence in range(0,50):
return 'low'
elif confidence in range(50,75):
return 'medium'
elif confidence > 74:
return 'high'
else:
syslog.syslog(syslog.LOG_ERR,'something got messed up in trying to gauge the confidence.')
return 'low' | [
"Paul.Poputa-Clean@dvn.com"
] | Paul.Poputa-Clean@dvn.com |
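`get_intel_confidence` above seeds the score from the first campaign and adds a smaller bump for each additional distinct campaign. The same scoring logic as a side-effect-free sketch (thresholds copied from the function above; the campaign data is illustrative):

```python
INITIAL = {"low": 30, "medium": 50, "high": 75}
BONUS = {"low": 5, "medium": 10, "high": 25}

def score_confidence(campaigns):
    """campaigns: list of (name, confidence) pairs; the first entry seeds the score."""
    seen = {campaigns[0][0]}
    score = INITIAL[campaigns[0][1]]
    for name, conf in campaigns:
        if name not in seen:          # only distinct campaigns add points
            score += BONUS[conf]
            seen.add(name)
    if score < 50:
        return "low"
    if score < 75:
        return "medium"
    return "high"

print(score_confidence([("APT-X", "medium")]))                     # medium
print(score_confidence([("APT-X", "medium"), ("APT-Y", "high")]))  # high
```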
3915f222d6b405a3aaff0b29d9923d7188d58082 | 81d220b8e45ad167e283ffda2feefd2bfb7fb1a4 | /pira_truths/urls.py | 4afbee57470ffd7b82cd7a805f1b40597a36d122 | [] | no_license | danieljcs/pira_truths | c57333f9b116f0f387ee0cc45e633fca13a8b5b3 | f9d9c51335a5d01ed5f0bafad5a614c14049d60b | refs/heads/main | 2023-01-13T16:32:32.814092 | 2020-11-18T12:53:21 | 2020-11-18T12:53:21 | 313,933,295 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,013 | py | """pira_truths URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.1/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path, include
from django.conf import settings
from django.conf.urls.static import static
from django.views.generic import TemplateView
from webapp.views import *
from django.contrib.auth.decorators import *
urlpatterns = [
path('admin/', admin.site.urls),
path('', IndexMainView.as_view(), name="home"),
]
| [
"50522425+danieljcs@users.noreply.github.com"
] | 50522425+danieljcs@users.noreply.github.com |
bb6725d49870ba291430088b9eb965e6adf618ee | 4a30934e1a744c25e8d853c1337cbbfed00cbd3a | /vision/unit_tests/test__gax.py | 31383936d0df5ace0c18eddc1b0822f44a21a1db | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | ammayathrajeshnair/googlecloudpython | 554d34692acb924c8e082845a56c9e838345cbbd | 22ded3be30dda0206e23a7846b5883a2caeeeddc | refs/heads/master | 2021-01-22T05:33:55.708557 | 2017-03-16T18:08:04 | 2017-03-16T18:08:04 | 81,677,403 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,801 | py | # Copyright 2016 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import mock
class TestGAXClient(unittest.TestCase):
def _get_target_class(self):
from google.cloud.vision._gax import _GAPICVisionAPI
return _GAPICVisionAPI
def _make_one(self, *args, **kwargs):
return self._get_target_class()(*args, **kwargs)
def test_ctor(self):
client = mock.Mock()
with mock.patch('google.cloud.vision._gax.image_annotator_client.'
'ImageAnnotatorClient'):
api = self._make_one(client)
self.assertIs(api._client, client)
def test_annotation(self):
from google.cloud.vision.feature import Feature
from google.cloud.vision.feature import FeatureTypes
from google.cloud.vision.image import Image
client = mock.Mock(spec_set=[])
feature = Feature(FeatureTypes.LABEL_DETECTION, 5)
image_content = b'abc 1 2 3'
image = Image(client, content=image_content)
with mock.patch('google.cloud.vision._gax.image_annotator_client.'
'ImageAnnotatorClient'):
gax_api = self._make_one(client)
mock_response = {
'batch_annotate_images.return_value':
mock.Mock(responses=['mock response data']),
}
gax_api._annotator_client = mock.Mock(
spec_set=['batch_annotate_images'], **mock_response)
with mock.patch('google.cloud.vision._gax.Annotations') as mock_anno:
images = ((image, [feature]),)
gax_api.annotate(images)
mock_anno.from_pb.assert_called_with('mock response data')
gax_api._annotator_client.batch_annotate_images.assert_called()
def test_annotate_no_results(self):
from google.cloud.vision.feature import Feature
from google.cloud.vision.feature import FeatureTypes
from google.cloud.vision.image import Image
client = mock.Mock(spec_set=[])
feature = Feature(FeatureTypes.LABEL_DETECTION, 5)
image_content = b'abc 1 2 3'
image = Image(client, content=image_content)
with mock.patch('google.cloud.vision._gax.image_annotator_client.'
'ImageAnnotatorClient'):
gax_api = self._make_one(client)
mock_response = {
'batch_annotate_images.return_value': mock.Mock(responses=[]),
}
gax_api._annotator_client = mock.Mock(
spec_set=['batch_annotate_images'], **mock_response)
with mock.patch('google.cloud.vision._gax.Annotations'):
images = ((image, [feature]),)
response = gax_api.annotate(images)
self.assertEqual(len(response), 0)
self.assertIsInstance(response, list)
gax_api._annotator_client.batch_annotate_images.assert_called()
def test_annotate_multiple_results(self):
from google.cloud.grpc.vision.v1 import image_annotator_pb2
from google.cloud.vision.annotations import Annotations
from google.cloud.vision.feature import Feature
from google.cloud.vision.feature import FeatureTypes
from google.cloud.vision.image import Image
client = mock.Mock(spec_set=[])
feature = Feature(FeatureTypes.LABEL_DETECTION, 5)
image_content = b'abc 1 2 3'
image = Image(client, content=image_content)
with mock.patch('google.cloud.vision._gax.image_annotator_client.'
'ImageAnnotatorClient'):
gax_api = self._make_one(client)
responses = [
image_annotator_pb2.AnnotateImageResponse(),
image_annotator_pb2.AnnotateImageResponse(),
]
response = image_annotator_pb2.BatchAnnotateImagesResponse(
responses=responses)
gax_api._annotator_client = mock.Mock(
spec_set=['batch_annotate_images'])
gax_api._annotator_client.batch_annotate_images.return_value = response
images = ((image, [feature]),)
responses = gax_api.annotate(images)
self.assertEqual(len(responses), 2)
self.assertIsInstance(responses[0], Annotations)
self.assertIsInstance(responses[1], Annotations)
gax_api._annotator_client.batch_annotate_images.assert_called()
class Test__to_gapic_feature(unittest.TestCase):
def _call_fut(self, feature):
from google.cloud.vision._gax import _to_gapic_feature
return _to_gapic_feature(feature)
def test__to_gapic_feature(self):
from google.cloud.vision.feature import Feature
from google.cloud.vision.feature import FeatureTypes
from google.cloud.grpc.vision.v1 import image_annotator_pb2
feature = Feature(FeatureTypes.LABEL_DETECTION, 5)
feature_pb = self._call_fut(feature)
self.assertIsInstance(feature_pb, image_annotator_pb2.Feature)
self.assertEqual(feature_pb.type, 4)
self.assertEqual(feature_pb.max_results, 5)
class Test__to_gapic_image(unittest.TestCase):
def _call_fut(self, image):
from google.cloud.vision._gax import _to_gapic_image
return _to_gapic_image(image)
def test__to_gapic_image_content(self):
from google.cloud.vision.image import Image
from google.cloud.grpc.vision.v1 import image_annotator_pb2
image_content = b'abc 1 2 3'
client = object()
image = Image(client, content=image_content)
image_pb = self._call_fut(image)
self.assertIsInstance(image_pb, image_annotator_pb2.Image)
self.assertEqual(image_pb.content, image_content)
def test__to_gapic_image_uri(self):
from google.cloud.vision.image import Image
from google.cloud.grpc.vision.v1 import image_annotator_pb2
image_uri = 'gs://1234/34.jpg'
client = object()
image = Image(client, source_uri=image_uri)
image_pb = self._call_fut(image)
self.assertIsInstance(image_pb, image_annotator_pb2.Image)
self.assertEqual(image_pb.source.gcs_image_uri, image_uri)
def test__to_gapic_with_empty_image(self):
image = mock.Mock(
content=None, source=None, spec=['content', 'source'])
with self.assertRaises(ValueError):
self._call_fut(image)
| ["rajesh.2.nair@gmail.com"] | rajesh.2.nair@gmail.com |
c9d709435701538afd446f3ecf155416a4f174ed 8b6ee4844b99caf288a2f95085d4594b0bb15a4f /scripts/test.py 0c652b3b0cd43a57263db26895850d0d0f85e1bf [] no_license wuyou33/cdcpd 65511e235cb921d422e8de8b5c1961f9738975b6 b7cef8ed51d1402ae500c4f58c5f09127fc6934c refs/heads/master 2020-09-06T16:27:22.196848 2019-09-16T17:42:45 2019-09-16T17:42:45 220,479,838 1 0 null 2019-11-08T14:09:32 2019-11-08T14:09:32 null UTF-8 Python false false 12,051 py
#!/usr/bin/env python3
import numpy as np
import gurobipy
import cdcpd
import gurobi_utils as grb_utils
from optimize_eqn import opt_equations
verts = [[ 0.10715991, 0.04673988, 0.80860759],
[ 0.09638239, 0.01433933, 0.81983173],
[ 0.08617341, -0.01759939, 0.83085142],
[ 0.07782873, -0.0542619 , 0.84353298],
[ 0.06686021, -0.0774444 , 0.85133865],
[ 0.05289089, -0.08822634, 0.85543967],
[ 0.04637606, -0.12401531, 0.86811271],
[ 0.03562015, -0.13741997, 0.87270725],
[ 0.0248786 , -0.1413001 , 0.87320879],
[ 0.01468532, -0.14773981, 0.87562408],
[ 0.00357272, -0.14664085, 0.87595227],
[-0.00820486, -0.13195502, 0.87113355],
[-0.01745581, -0.10721477, 0.86086335],
[-0.02305258, -0.09941736, 0.85730128],
[-0.02878705, -0.09312096, 0.85570456],
[-0.03524446, -0.08421447, 0.85446885],
[-0.04015845, -0.06663123, 0.84921755],
[-0.04169592, -0.03363333, 0.83572083],
[-0.03347042, -0.0436415 , 0.83657182],
[-0.02979077, -0.07286137, 0.84963182],
[-0.03598519, -0.06596182, 0.85352851],
[-0.04282322, -0.01328143, 0.83862685],
[-0.03340218, 0.03100616, 0.81762132],
[-0.01035984, -0.01179704, 0.826102 ],
[-0.00166537, -0.06197893, 0.84648769],
[-0.00956164, -0.0242219 , 0.84116302],
[-0.00439734, 0.04134061, 0.81730814],
[ 0.01767483, 0.04656646, 0.80910828],
[ 0.04330447, 0.01360924, 0.81378065],
[ 0.04192999, 0.03358617, 0.8143961 ],
[ 0.05483742, 0.07409452, 0.80050367],
[ 0.07551139, 0.07664875, 0.79726915],
[ 0.09787113, 0.0644173 , 0.79847118],
[ 0.10994615, 0.07539072, 0.79767696],
[ 0.12614477, 0.07059715, 0.80021128],
[ 0.14983799, 0.04895134, 0.80346403],
[ 0.16857508, 0.00464203, 0.81560652],
[ 0.18039214, -0.04256719, 0.83159597],
[ 0.18869711, -0.09296316, 0.84949368],
[ 0.19966908, -0.12754858, 0.86138094],
[ 0.2097545 , -0.121052 , 0.86174037],
[ 0.21820921, -0.14896334, 0.87206294],
[ 0.23307842, -0.17789114, 0.87864879],
[ 0.24600141, -0.15242748, 0.87122104],
[ 0.25934295, -0.11600969, 0.86038443],
[ 0.27538173, -0.07380787, 0.84651199],
[ 0.29097631, -0.03616908, 0.83405798],
[ 0.30768158, 0.00909728, 0.81887261],
[ 0.32403388, 0.05319622, 0.80403907],
[ 0.3124735 , 0.0396488 , 0.8087228 ]]
prev_verts = [[-0.34377405, -0.33031026, 0.9278386 ],
[-0.3263928 , -0.32014287, 0.9249202 ],
[-0.30906639, -0.31003934, 0.92196596],
[-0.29113135, -0.30068174, 0.9192748 ],
[-0.27464664, -0.2895189 , 0.91576207],
[-0.2593275 , -0.27651927, 0.9125789 ],
[-0.24122125, -0.267361 , 0.9103944 ],
[-0.22545305, -0.2551129 , 0.9068141 ],
[-0.21077898, -0.24186993, 0.90173674],
[-0.19544971, -0.22885841, 0.89824384],
[-0.18084057, -0.21492821, 0.8952414 ],
[-0.16798946, -0.19961706, 0.89112914],
[-0.15702881, -0.18399456, 0.88389766],
[-0.14309183, -0.17012313, 0.8784354 ],
[-0.12847269, -0.15627009, 0.8751388 ],
[-0.11387897, -0.14208335, 0.87363803],
[-0.10035085, -0.1271662 , 0.87032723],
[-0.08988503, -0.11159595, 0.8622943 ],
[-0.07337007, -0.1019052 , 0.8552348 ],
[-0.05437598, -0.09480536, 0.85753894],
[-0.03930293, -0.08228814, 0.86324996],
[-0.03024845, -0.06401387, 0.8625017 ],
[-0.0233805 , -0.0501565 , 0.8491864 ],
[-0.00565448, -0.0494107 , 0.83910054],
[ 0.01425894, -0.04849402, 0.84347177],
[ 0.02558225, -0.03295129, 0.8503053 ],
[ 0.03193247, -0.01519542, 0.84250176],
[ 0.04593866, -0.00740468, 0.82986754],
[ 0.06325684, -0.00670256, 0.8190934 ],
[ 0.07793314, 0.0053124 , 0.8266251 ],
[ 0.09129542, 0.01992236, 0.8216763 ],
[ 0.10909576, 0.02755744, 0.8152463 ],
[ 0.12801644, 0.03185325, 0.80891806],
[ 0.14625542, 0.04090825, 0.81027406],
[ 0.16592996, 0.04628979, 0.8096061 ],
[ 0.18514465, 0.04683311, 0.802751 ],
[ 0.2047815 , 0.04242228, 0.79937005],
[ 0.22448874, 0.0372933 , 0.8007156 ],
[ 0.24367481, 0.0310424 , 0.8037671 ],
[ 0.26374963, 0.02757846, 0.80498976],
[ 0.28351188, 0.03205861, 0.8074131 ],
[ 0.3035415 , 0.02889915, 0.8097148 ],
[ 0.3230263 , 0.02440747, 0.8062649 ],
[ 0.34244183, 0.03069315, 0.8061259 ],
[ 0.36127096, 0.03856179, 0.8059291 ],
[ 0.37981704, 0.04684576, 0.8039517 ],
[ 0.3989821 , 0.05365665, 0.8022766 ],
[ 0.41773614, 0.06139682, 0.8000704 ],
[ 0.43681917, 0.06834923, 0.7980717 ],
[ 0.41643342, 0.06742887, 0.79832375]]
edges = [[ 0 , 1],
[ 1, 2],
[ 2 , 3],
[ 3, 4],
[ 4 , 5],
[ 5, 6],
[ 6, 7],
[ 7, 8],
[ 8, 9],
[ 9, 10],
[10, 11],
[11, 12],
[12, 13],
[13, 14],
[14, 15],
[15, 16],
[16, 17],
[17, 18],
[18, 19],
[19, 20],
[20, 21],
[21, 22],
[22 ,23],
[23, 24],
[24, 25],
[25, 26],
[26, 27],
[27, 28],
[28, 29],
[29, 30],
[30, 31],
[31, 32],
[32, 33],
[33, 34],
[34, 35],
[35, 36],
[36, 37],
[37, 38],
[38, 39],
[39, 40],
[40, 41],
[41, 42],
[42, 43],
[43, 44],
[44, 45],
[45, 46],
[46, 47],
[47, 48],
[48, 49],]
iteration = 1
def squared_norm(points):
sqr_dist = np.sum(np.square(points))
return sqr_dist
def isNeighbour(i, j):
if(np.abs(i-j)<=1):
return True
else:
return False
def edge_squared_distances(points, edges):
diff = points[edges[:, 0]] - points[edges[:, 1]]
sqr_dist = np.sum(np.square(diff), axis=1)
return sqr_dist
def DistBetween2Segment(p1, p2, p3, p4):
u = p1 - p2
v = p3 - p4
w = p2 - p4
a = np.dot(u,u)
b = np.dot(u,v)
c = np.dot(v,v)
d = np.dot(u,w)
e = np.dot(v,w)
D = a*c - b*b
sD = D
tD = D
case = 1
comp1 = (u[0]*v[1]-u[1]*v[0])
# comp2 = abs(u[2]*v[1]-u[1]*v[2])
# comp3 = abs(u[0]*v[2]-u[2]*v[0])
SMALL_NUM = 0.00000001
# compute the line parameters of the two closest points
#if (D < SMALL_NUM): # the lines are almost parallel
#if(comp1<SMALL_NUM and comp2<SMALL_NUM and comp3<SMALL_NUM):
if(comp1<SMALL_NUM and comp1>-SMALL_NUM):
sN = 0.0 #force using point P0 on segment S1
sD = 1.0 #to prevent possible division by 0.0 later
tN = e
tD = c
case = 2
else: # get the closest points on the infinite lines
sN = (b*e - c*d)
tN = (a*e - b*d)
if (sN < 0.0): # sc < 0 => the s=0 edge is visible
sN = 0.0
tN = e
tD = c
case = 2
elif (sN > sD):# sc > 1 => the s=1 edge is visible
sN = sD
tN = e + b
tD = c
case = 3
if (tN < 0.0): #tc < 0 => the t=0 edge is visible
tN = 0.0
# recompute sc for this edge
if (-d < 0.0):
case = 5
sN = 0.0
elif (-d > a):
case = 6
sN = sD
else:
case = 4
sN = -d
sD = a
elif (tN > tD): # tc > 1 => the t=1 edge is visible
tN = tD
# recompute sc for this edge
if ((-d + b) < 0.0):
case = 8
sN = 0
elif ((-d + b) > a):
case = 9
sN = sD
else:
case = 7
sN = (-d + b)
sD = a
# finally do the division to get sc and tc
if(np.absolute(sN) < SMALL_NUM):
sc = 0.0
else:
sc = sN / sD
if(np.absolute(tN) < SMALL_NUM):
tc = 0.0
else:
tc = tN / tD
# get the difference of the two closest points
dP = w + (sc * u) - (tc * v) # = S1(sc) - S2(tc)
distance = np.linalg.norm(dP)
# print(distance)
# print(np.sqrt(distance))
# print(np.linalg.norm(dP))
# print(dP)
return case, distance
def test_gurobi():
A = np.empty([4,3], dtype=float)
A[0] = np.array([0, 0, 0])
A[1] = np.array([0.5, 0, 0])
A[2] = np.array([0.5, -0.5, 0.0])
A[3] = np.array([0.5, 0.5, 0.0])
model = gurobipy.Model()
model.setParam('OutputFlag', False)
model.setParam('ScaleFlag', 2)
g_A = grb_utils.create_gurobi_arr(model, A.shape, name="A")
# g_B = grb_utils.create_gurobi_arr(model, B.shape, name="B")
# g_C = grb_utils.create_gurobi_arr(model, C.shape, name="C")
# g_D = grb_utils.create_gurobi_arr(model, D.shape, name="D")
#object passing through itself constraint
d_min = 0.01 # change
# lhs = np.empty(A.shape[0], dtype=float)
# rhs = np.full(lhs.shape, d_min)
#print((verts.shape))
delta = [g_A[0][0]-A[0][0], g_A[0][1]-A[0][1], g_A[0][2]-A[0][2], g_A[1][0]-A[1][0], g_A[1][1]-A[1][1], g_A[1][2]-A[1][2], g_A[2][0]-A[2][0], g_A[2][1]-A[2][1], g_A[2][2]-A[2][2], g_A[3][0]-A[3][0], g_A[3][1]-A[3][1], g_A[3][2]-A[3][2]]
    # DistBetween2Segment returns two values (case, distance), matching the call in test_optimizer.
    case, diff = DistBetween2Segment(A[0], A[1], A[2], A[3])
    print(case, diff)
    derivative = opt_equations(A[0], A[1], A[2], A[3], case)
    print(derivative)
    lhs = derivative * delta
rhs = np.full(12,d_min - diff)
grb_utils.add_constraints(model, lhs, ">=" , rhs , name="collision")
# objective function
g_objective = np.sum(np.square(g_A - A))
model.setObjective(g_objective, gurobipy.GRB.MINIMIZE)
model.update()
model.optimize()
verts_result = grb_utils.get_value(g_A)
# print(verts_result)
# print("end")
print(verts_result)
def test_optimizer(verts, prev_verts, edges):
model = gurobipy.Model()
#model.setParam('OutputFlag', False)
model.setParam('ScaleFlag', 2)
verts = np.asarray(verts)
prev_verts = np.asarray(prev_verts)
g_verts = grb_utils.create_gurobi_arr(model, verts.shape, name="verts")
# distance constraint
rhs = (1 ** 2) * edge_squared_distances(prev_verts, np.asarray(edges))
lhs = edge_squared_distances(g_verts, np.asarray(edges))
grb_utils.add_constraints(model, lhs, "<=", rhs, name="edge")
#object passing through itself constraint
d_max = 0.2 # change
d_min = 0.01 # change
count = 0
if(iteration != 0):
lhs = []
rhs = []
dist = np.empty((g_verts.shape[0], g_verts.shape[0]))
for i in range(g_verts.shape[0]-1):
for j in range(i+1,g_verts.shape[0]-1):
if(isNeighbour(i,j)==False):
case,diff = DistBetween2Segment(prev_verts[i], prev_verts[i+1], prev_verts[j], prev_verts[j+1])
dist[i,j]=diff
if( diff<d_max and diff>0.00000001):
delta = [g_verts[i][0] - prev_verts[i][0], g_verts[i][1] - prev_verts[i][1], g_verts[i][2] - prev_verts[i][2],
g_verts[i+1][0] - prev_verts[i+1][0], g_verts[i+1][1] - prev_verts[i+1][1], g_verts[i+1][2] - prev_verts[i+1][2],
g_verts[j][0] - prev_verts[j][0], g_verts[j][1] - prev_verts[j][1], g_verts[j][2] - prev_verts[j][2],
g_verts[j+1][0] - prev_verts[j+1][0], g_verts[j+1][1] - prev_verts[j+1][1], g_verts[j+1][2] - prev_verts[j+1][2]]
derivative = opt_equations(prev_verts[i], prev_verts[i+1], prev_verts[j], prev_verts[j+1], case)
temp = np.dot(derivative, delta)
count+=1
lhs.append(temp)
rhs.append(d_min - diff)
if(len(rhs) != 0):
grb_utils.add_constraints(model, np.asarray(lhs), ">=" , np.asarray(rhs) , name="collision")
# objective function
g_objective = np.sum(np.square(g_verts - verts))
model.setObjective(g_objective, gurobipy.GRB.MINIMIZE)
model.update()
model.optimize()
print(count)
# print(grb_utils.get_value(g_objective))
verts_result = grb_utils.get_value(g_verts)
# print("end")
return verts_result
test_optimizer(verts, prev_verts, edges)
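As a quick standalone sanity check (not part of the original script) of the closest-distance-between-segments idea behind `DistBetween2Segment`, a brute-force sampling version can be run on a simple invented case, two parallel unit segments one unit apart:

```python
import math

def lerp(p, q, t):
    # Point on segment p->q at parameter t in [0, 1].
    return tuple(p[i] + t * (q[i] - p[i]) for i in range(3))

def brute_force_seg_dist(p1, p2, p3, p4, n=200):
    # Densely sample both segments and take the minimum pairwise distance.
    ts = [i / (n - 1) for i in range(n)]
    best = float("inf")
    for ta in ts:
        a = lerp(p1, p2, ta)
        for tb in ts:
            best = min(best, math.dist(a, lerp(p3, p4, tb)))
    return best

# Two parallel unit segments one unit apart -> minimum distance 1.0.
d = brute_force_seg_dist((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0))
print(d)
```

Sampling only approximates the true minimum, so this is a cross-check for the closed-form routine, not a replacement.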
| ["anveepnaik@gmail.com"] | anveepnaik@gmail.com |
81ca32d7661a077e47039a5f78868c9fc5d381a8 66fda6586a902f8043b1f5e9532699babc7b591a /lib_openshift/models/v1_build_config_status.py fd78c8cbca2d5966a2f4258b5b0d00f8861062a6 ["Apache-2.0"] permissive chouseknecht/lib_openshift 86eff74b4659f05dfbab1f07d2d7f42b21e2252d 02b0e4348631e088e72a982a55c214b30a4ab9d9 refs/heads/master 2020-12-11T05:23:17.081794 2016-07-28T20:15:39 2016-07-28T20:15:39 null 0 0 null null null null UTF-8 Python false false 3,610 py
# coding: utf-8
"""
OpenAPI spec version:
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from pprint import pformat
from six import iteritems
import re
class V1BuildConfigStatus(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
operations = [
]
def __init__(self, last_version=None):
"""
V1BuildConfigStatus - a model defined in Swagger
:param dict swaggerTypes: The key is attribute name
and the value is attribute type.
:param dict attributeMap: The key is attribute name
and the value is json key in definition.
"""
self.swagger_types = {
'last_version': 'int'
}
self.attribute_map = {
'last_version': 'lastVersion'
}
self._last_version = last_version
@property
def last_version(self):
"""
Gets the last_version of this V1BuildConfigStatus.
LastVersion is used to inform about number of last triggered build.
:return: The last_version of this V1BuildConfigStatus.
:rtype: int
"""
return self._last_version
@last_version.setter
def last_version(self, last_version):
"""
Sets the last_version of this V1BuildConfigStatus.
LastVersion is used to inform about number of last triggered build.
:param last_version: The last_version of this V1BuildConfigStatus.
:type: int
"""
self._last_version = last_version
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
def __eq__(self, other):
"""
Returns true if both objects are equal
"""
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""
Returns true if both objects are not equal
"""
return not self == other
| ["jdetiber@redhat.com"] | jdetiber@redhat.com |
99feb886c99e47dfe7923b755be4235896a46e3f 2e7621459c8d1ddef2ec01175b7ebb59b7ecb9bd /mainpage/migrations/0006_auto_20190607_0723.py b9a8118e1d102fc948fad5e7730a27cef2bf1046 [] no_license nwihardjo/personal-website f3bfd8998eac6fa9c90c06124b38ece446e37058 cc0e927acb3a9d01667cebed922dc114f71c7cc1 refs/heads/master 2022-12-12T08:52:57.670523 2021-11-20T22:04:07 2021-11-20T22:04:07 189,973,933 2 0 null 2022-12-08T05:16:55 2019-06-03T09:21:18 JavaScript UTF-8 Python false false 371 py
# Generated by Django 2.2.1 on 2019-06-07 07:23
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('mainpage', '0005_project_url'),
]
operations = [
migrations.AlterModelOptions(
name='project',
options={'ordering': ['-end_date__year', '-end_date__month']},
),
]
| ["nwihardjo@connect.ust.hk"] | nwihardjo@connect.ust.hk |
b13f38f3e8d8a5795b2d0d326e3fc93575f01d54 ca7aa979e7059467e158830b76673f5b77a0f5a3 /Python_codes/p02577/s751792218.py eea7ba3706c9f7076b7627d00c4c3aa5626f695a [] no_license Aasthaengg/IBMdataset 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 refs/heads/main 2023-04-22T10:22:44.763102 2021-05-13T17:27:22 2021-05-13T17:27:22 367,112,348 0 0 null null null null UTF-8 Python false false 122 py
N = input()
N_list = []
for i in N:
    N_list.append(int(i))
if sum(N_list) % 9 == 0:
    print("Yes")
else:
    print("No")
| ["66529651+Aastha2104@users.noreply.github.com"] | 66529651+Aastha2104@users.noreply.github.com |
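For context (not part of the submitted solution above): the snippet relies on the rule that an integer is divisible by 9 exactly when its digit sum is. A standalone check on a few invented inputs:

```python
# Divisibility-by-9 via digit sum matches direct modulo, even for big ints.
for n in ("18", "12345", "999999999", "31415926535897932384626433832795028841"):
    digit_sum_ok = sum(int(c) for c in n) % 9 == 0
    assert digit_sum_ok == (int(n) % 9 == 0)
print("ok")
```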
d080a80308e02553e9baac9420d73834f92a2979 c026581b6c3855c75e7c9f9c6397acadc7833fb7 idm_core/name/urls.py 5778785362e85f4443b71c0f79b76a31eb6f7cbe [] no_license mans0954/idm-core 5734fd08a3c8c5deaec62167c9470336f0c6c6ef 2a3cf326e0bb3db469e2b318b122033a7dd92b83 refs/heads/master 2021-07-24T04:13:47.021951 2017-11-02T22:09:25 2017-11-02T22:09:25 109,317,967 1 0 null 2017-11-02T20:56:01 2017-11-02T20:55:58 null UTF-8 Python false false 745 py
from django.conf.urls import url
from . import views
uuid_re = '[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}'
urlpatterns = [
url(r'^name/$',
views.NameListView.as_view(), name='name-list-self'),
url(r'^(?P<identity_type>[a-z-]+)/(?P<identity_id>' + uuid_re + ')/name/$',
views.NameListView.as_view(), name='name-list'),
url(r'^name/(?P<pk>[1-9][0-9]*)/$', views.NameDetailView.as_view(), name='name-detail'),
url(r'^name/new:(?P<context>[\w-]+)/$',
views.NameCreateView.as_view(), name='name-create-self'),
url(r'^(?P<identity_type>[a-z-]+)/(?P<identity_id>' + uuid_re + ')/name/new:(?P<context>[\w-]+)/$',
views.NameCreateView.as_view(), name='name-create'),
]
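A quick standalone check (not part of the app) of the `uuid_re` pattern above, using made-up identifiers:

```python
import re

uuid_re = '[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}'

# A version-5-style UUID with a valid variant nibble matches...
assert re.fullmatch(uuid_re, '12345678-1234-5123-8123-123456789abc')
# ...while uppercase or malformed strings do not.
assert re.fullmatch(uuid_re, '12345678-1234-5123-8123-123456789ABC') is None
assert re.fullmatch(uuid_re, 'not-a-uuid') is None
print("ok")
```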
| ["alexander.dutton@it.ox.ac.uk"] | alexander.dutton@it.ox.ac.uk |
7e7c9a2a2aafa141006bb83d5fa4c038197efa19 07ef8aa9d5060e4ec86052b0d985c51d27a4195a /zhihu_spider.py 895b26aa151d443281a3dd0a9f5c9493fdb8345c [] no_license lyshenshou99/zhihu_spider 2fa79e3049f9e8dcdeebc69299a4a090c8348dea 52effe7311e8b0d9dc46be7d53b013862bd6f77d refs/heads/master 2020-07-26T23:02:09.279289 2019-09-16T12:16:05 2019-09-16T12:16:05 208,791,140 1 0 null null null null UTF-8 Python false false 5,549 py
import requests
import re
import os
import time
import csv
from queue import Queue
# key is the image URL; value is the id of the question the image belongs to (i.e. which question it was posted under)
image_url_dict = {}
img_tag = re.compile(r"""<img\s.*?\s?data-original\s*=\s*['|"]?([^\s'"]+).*?>""", re.I)
# '292901966': '有着一双大长腿是什么感觉',
# '26297181': '大胸女生如何穿衣搭配',
# '274143680': '男生会主动搭讪一个长得很高并且长得好看的女生吗',
# '266695575': '当你有一双好看的腿之后会不会觉得差一张好看的脸',
# '297715922': '有一副令人羡慕的好身材是怎样的体验',
# '26037846': '身材好是一种怎样的体验',
# '28997505': '有个漂亮女朋友是什么体验',
# '29815334': '女生腿长是什么感觉',
# '35255031': '你的身材不配你的脸是一种怎样的体验',
# '274638737': '大胸妹子夏季如何穿搭',
# '264568089': '你坚持健身的理由是什么现在身材怎么样敢不敢发一张照片来看看',
# '49075464': '在知乎上爆照是一种什么样的体验',
# '22918070': '女生如何健身练出好身材',
# '56378769': '女生身高170cm以上是什么样的体验',
# '22132862': '女生如何选购适合自己的泳装',
# '46936305': '为什么包臀裙大部分人穿都不好看',
# '266354731': '被人关注胸部是种怎样的体验',
# '51863354': '你觉得自己身体哪个部位最漂亮',
# '66313867': '身为真正的素颜美女是种怎样的体验',
# '34243513': '你见过最漂亮的女生长什么样',
# '21052148': '有哪些评价女性身材好的标准',
# '52308383': '在校女学生如何才能穿搭得低调又时尚',
# '50426133': '平常人可以漂亮到什么程度',
# '268395554': '你最照骗的一张照片是什么样子',
# '277593543': '什么时候下定决心一定要瘦的',
# '277242822': '室友认为我的穿着很轻浮我该如何回应',
# '36523379': '穿和服是怎样的体验'
question_id_dict = {'62972819': '你们见过最好看的coser长什么样'}
def to_csv(image_url_dict):
with open('image_urls.csv', 'w', encoding='utf-8', newline='') as f:
writer = csv.writer(f)
for k, v in image_url_dict.items():
writer.writerow([k, v])
def get_pic_urls():
for question_id in question_id_dict.keys():
headers = {
'referer': 'https://www.zhihu.com/question/' + question_id,
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.67 Safari/537.36',
'cookie': '_zap=df060be0-1f62-4eb0-bbb4-e25a7df5d057; _xsrf=rE8vojikPuQr6BmPHoQ3vvyYb4p4yopH; d_c0="AJDiN_6IzA6PTtlScILp1OYKQEbXWyS9E24=|1547024288"; capsion_ticket="2|1:0|10:1547024292|14:capsion_ticket|44:MzYwODQ3OTEyYzg5NGQ1MDg1ZDJlYzM3NjM4NDllYTg=|c9a7c7c195e31124acde99d18f503a97dabe44ce4dd1082d20908438d41a3336"; z_c0="2|1:0|10:1547024293|4:z_c0|92:Mi4xcDljekFBQUFBQUFBa09JM19vak1EaVlBQUFCZ0FsVk5wUVVqWFFDWnZrRXNsaVRPckNNSUF2ZGRnY0pSbjl0Rlp3|15b49d1d4fc22680d78e82410f22a516be708ae88ddc690df30fe2a6d8faebd4"; q_c1=50ec85be93ed4ae99a970b47b56568fe|1547024294000|1547024294000; __gads=ID=12d6e4ce61c46133:T=1547024296:S=ALNI_MaUpRRzsIqkrSCpk4BGSWbuKPPZCg; __utmv=51854390.100-1|2=registration_date=20140204=1^3=entry_date=20140204=1; __utma=51854390.1237612516.1547692926.1547692926.1547792023.2; __utmz=51854390.1547792023.2.2.utmcsr=zhihu.com|utmccn=(referral)|utmcmd=referral|utmcct=/people/mrxian-sheng-65/collections; tst=r; tgw_l7_route=73af20938a97f63d9b695ad561c4c10c'
}
for i in range(0, 500, 5):
try:
url = 'https://www.zhihu.com/api/v4/questions/'+question_id+'/answers?include=data%5B%2A%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cis_labeled%3Bdata%5B%2A%5D.mark_infos%5B%2A%5D.url%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%2A%5D.topics&limit=5&offset='+str(i)+'&platform=desktop&sort_by=default'
res = requests.get(url, headers=headers)
# print(res.status_code)
if res.status_code == 200:
data = res.json()
if not data['data']:
print('没有数据!(%s)' % url)
break
for answer in data['data']:
content = answer.get('content', '')
if content:
# print(content)
image_url_list = img_tag.findall(content)
for image_url in image_url_list:
print('图片url: %s, 问题id: %s' % (image_url, question_id))
image_url_dict[image_url] = question_id
else:
print('返回值: %s, url: %s' % (res.status_code, url))
                # throttle: avoid requesting too frequently
time.sleep(1.1)
except Exception as e:
print('请求出错, (%s)' % e)
time.sleep(1.1)
continue
def main():
get_pic_urls()
to_csv(image_url_dict)
if __name__ == '__main__':
main()
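As an aside (not part of the original spider): the `data-original` image regex used above can be exercised on its own. The HTML snippet and URLs below are invented for illustration:

```python
import re

# Same pattern as the spider's img_tag.
img_tag = re.compile(r"""<img\s.*?\s?data-original\s*=\s*['|"]?([^\s'"]+).*?>""", re.I)

html = ('<p><img src="thumb.jpg" data-original="https://pic1.zhimg.com/abc.jpg" width="400">'
        "<img data-original='https://pic2.zhimg.com/def.png'></p>")
# findall with one capturing group returns just the captured URLs.
print(img_tag.findall(html))
# -> ['https://pic1.zhimg.com/abc.jpg', 'https://pic2.zhimg.com/def.png']
```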
| ["noreply@github.com"] | lyshenshou99.noreply@github.com |
f12808fb8cb55ad0a7a87aaf5e966e8b24773f9f 35c79cf663a904902600fb636fe38541a54c2e63 python_001_datatypes/NumericTypes.py 940e760576912b5a1669aa1d06f12d480e2530e6 [] no_license DanielW1987/python-basics bed5cedefc0336036461f0533f060383186314c9 fe3a8787874d997a4a840fc43c718aebc37eccfa refs/heads/master 2020-08-15T09:58:47.065512 2019-10-29T14:06:25 2019-10-29T14:06:25 null 0 0 null null null null UTF-8 Python false false 374 py
# integers
a = 10
intValue: int = 10
print(type(a), type(intValue))
# floats
b = 3.141
floatValue: float = 3.141
print(type(b), type(floatValue))
# complex
c = 3 + 5j
print(type(c))
# binary: starts with 0B
e = 0B1010
print(type(e))
print(e)
# hexadecimal: starts with 0X
f = 0XFF
print(type(f))
print(f)
# octal: start with 0o
o = 0o530
print(type(o))
print(o)
| ["wagner.daniel87@gmail.com"] | wagner.daniel87@gmail.com |
e975d3130d15c06f25eb1ff044339c484eeb55e1 dabd7cc52a84a5ed49673f0cb3376d3166d60700 Backtracking Contra con GUI/TkInter.py a987dea2e0aab8584b3a8cd12ed79d20d71a92ad [] no_license eladiomejias/Python e05d26480c71ea3111b0a6d06739eea485f83e86 c4cb06509f258f7e9b11b14f9856fd2e518e4246 refs/heads/master 2021-01-10T12:35:08.679583 2016-04-29T19:42:28 2016-04-29T19:42:28 43,497,078 0 0 null null null null UTF-8 Python false false 3,294 py
from Tkinter import *
import rarfile
import Tkinter, tkFileDialog
import tkMessageBox
import os
def explorer():
entry1.configure(state="normal")
file_path = tkFileDialog.askopenfilename()
value = os.path.basename(file_path)
entry1.delete(0, END)
entry1.insert(0,file_path)
entry1.configure(state="readonly")
text.set(value)
contra.set("")
if (value!=""):
label4.configure(bg="#f8f8f8")#Testing.
def buscarContra():
if entry1.get()!="Pulsa examinar..":
dir = entry1.get()
        # From here down, this could be made object-oriented by splitting this part out.
validarArchivo(dir)
else:
tkMessageBox.showinfo("ERROR!!", "Ingrese un archivo.")
def validarArchivo(dir):
    # Split the name to get the file extension
type = dir.split(".")[-1]
if type=="rar":
calcularBack(dir)
else:
tkMessageBox.showinfo("ERROR!!", "El archivo no es rar..")
def calcularBack(dir):
rf = rarfile.RarFile(dir)
if rf.needs_password()==True:
busco = metodoComun(rf)
if busco==False:
print "si"
metodoBacktracking(rf)
else:
tkMessageBox.showwarning("Aviso", "El rar no tiene clave.")
def metodoComun(rf):
    # Dictionary of common and uncommon PINs.
contras = ["0000","1111","2222","3333","4444","5555","6666",
"7777","8888","9999","1234","1212","1004","2000","1122","6969","1313",
"4321","2001","1010",
    # Uncommon ones.
"8557","9047","8438","0439","9539","8196","7063","6093","6827","7394",
"0859","8957","9480","6793","8398","0738","7637","6835","9629","8093",
"8068"]
tempo = False
value = ""
for i in range(0,len(contras)):
try:
rf.extractall(None,None,contras[i])
value = contras[i]
tempo = True
break
except rarfile.RarCRCError:
continue
if tempo==True:
        # Not really sure about this..
contra.set(value)
return tempo
def metodoBacktracking(rf):
var = 0
while True:
psw = str(var)
if len(psw) == 1:
psw = "000"+psw
elif len(psw) == 2:
psw = "00"+psw
elif len(psw) == 3:
psw = "0"+psw
print(psw)
try:
rf.extractall(None,None,psw)
break
except rarfile.RarCRCError:
var = var + 1
#print("Ready the pass is: "+psw)
    # This probably shouldn't be done this way given the class inheritance; I imagine it should be a static method..
contra.set(psw)
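An aside on `metodoBacktracking` above (not part of the original code): the manual zero-padding if-chain is equivalent to `str.zfill(4)`, which a small standalone check confirms:

```python
# The if-chain pads a PIN candidate to 4 digits; zfill(4) does the same.
for var in (7, 42, 999, 1234):
    psw = str(var)
    if len(psw) == 1:
        psw = "000" + psw
    elif len(psw) == 2:
        psw = "00" + psw
    elif len(psw) == 3:
        psw = "0" + psw
    assert psw == str(var).zfill(4)
print("ok")
```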
ventana = Tk()
ventana.config()
ventana.geometry("350x300")
ventana.title('Backtracking Program')
text = StringVar()
contra = StringVar()
#vent.iconbitmap('icon-short.ico')
label1 = Label(ventana,text="Bienvenido a RAR-Backtrack")#,font=('Calibri',12)
label2 = Label(ventana,text="Ruta del archivo")
label3 = Label(ventana,text="Nombre del archivo: ")
label4 = Label(ventana, textvariable=text)
label6 = Label(ventana,text="Contrasena: ")
label5 = Label(ventana,textvariable=contra)
b1 = Button(ventana, text="Examinar", command = explorer, bd=0, bg="#d7ccc8",activebackground="#837062")
b2 = Button(ventana, text="Unrar now!", command= buscarContra)
entry1 = Entry(ventana, width=30)
entry1.insert(0,"Pulsa examinar..")
entry1.configure(state="readonly")
# How the widgets are laid out.
label1.grid(padx=0, pady=20, row=1, column=2)
label2.grid(row=2,column=2)
entry1.grid(row=3,column=2, padx=10, pady = 10)
b1.grid(row=3,column=3)
label3.grid(row=4,column=2, pady=20, padx=15)
label4.grid(row=4,column=3)
label6.grid(row=5,column=2,pady=20)
label5.grid(row=5, column=3)
b2.grid(row=6,column=2, pady=5, padx=0)
ventana.mainloop()
| ["Eladio"] | Eladio |
89811b7b59ae289ac3db4306984a6aee03c8e688 a6040f46e86971180d5aa97ca738fe319a1f3e88 forms/items.py 006fafc55999b7260b897929d0fe21b84567ecd1 [] no_license CaptainKryuk/python-scrapy-forms ab92c781fd6755ee40e035c3e739efb451610de4 719915940dfb0e50c24c4ec791695a6ff155f5ba refs/heads/master 2020-03-20T00:25:07.722970 2018-06-12T08:46:22 2018-06-12T08:46:22 137,043,067 1 0 null null null null UTF-8 Python false false 285 py
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class FormsItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
pass
| ["35116092+CaptainKryuk@users.noreply.github.com"] | 35116092+CaptainKryuk@users.noreply.github.com |
98be232db980c474e72cc385c76f632c276d10bd 8b79c18497f84890f6c261f3e67e41d0b8955f5c /pallinder +.py e4012e7f9d579119f56c10856998712af0303a9 [] no_license alexmasterblack/python_programming f16ab287734327779e1c1137eee2d574352e366b 04da78840951fa686a1d824debeadf1dee45f3da refs/heads/master 2021-02-12T22:34:35.916374 2020-07-11T00:30:45 2020-07-11T00:30:45 244,638,083 0 0 null null null null UTF-8 Python false false 93 py
word = ''.join([str(n)for n in input().split()])
print('YES' if word == word[::-1] else 'NO')
| ["noreply@github.com"] | alexmasterblack.noreply@github.com |
37d18cddc7cd04f237cb183c58d0244a8489f42e a9c3c0c958ed33646a6acfe97780d4939e1e0308 tensorflow/contrib/distribute/python/estimator_training_test.py bd643bdbb4f4793433f41577484ae6545ba7d1bf ["Apache-2.0"] permissive therladbsgh/tensorflow 458fa3d34a48449845ded366cc8243fd177bfe49 9d5d35bf74c2dd4b65303a76b817fd1cf060df9b refs/heads/master 2020-05-15T00:33:30.533332 2019-04-18T01:15:45 2019-04-18T01:30:06 null 0 0 null null null null UTF-8 Python false false 24,119 py
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests that show Distribute Coordinator works with Estimator."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import glob
import json
import os
import sys
import tempfile
from absl.testing import parameterized
import numpy as np
from tensorflow.contrib.distribute.python import collective_all_reduce_strategy
from tensorflow.contrib.distribute.python import mirrored_strategy
from tensorflow.contrib.distribute.python import parameter_server_strategy
from tensorflow.contrib.optimizer_v2 import adagrad
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.distribute import combinations
from tensorflow.python.distribute import cross_device_ops as cross_device_ops_lib
from tensorflow.python.distribute import distribute_coordinator as dc
from tensorflow.python.distribute import estimator_training as dc_training
from tensorflow.python.distribute import multi_worker_test_base
from tensorflow.python.distribute.distribute_config import DistributeConfig
from tensorflow.python.eager import context
from tensorflow.python.estimator import exporter as exporter_lib
from tensorflow.python.estimator import run_config as run_config_lib
from tensorflow.python.estimator import training as estimator_training
from tensorflow.python.estimator.canned import dnn_linear_combined
from tensorflow.python.estimator.canned import prediction_keys
from tensorflow.python.estimator.export import export as export_lib
from tensorflow.python.feature_column import feature_column_lib as feature_column
from tensorflow.python.platform import gfile
from tensorflow.python.platform import test
from tensorflow.python.summary import summary_iterator
from tensorflow.python.summary.writer import writer_cache
from tensorflow.python.training import session_manager
BATCH_SIZE = 10
LABEL_DIMENSION = 2
DATA = np.linspace(
0., 2., BATCH_SIZE * LABEL_DIMENSION, dtype=np.float32).reshape(
BATCH_SIZE, LABEL_DIMENSION)
EVAL_NAME = "foo"
EXPORTER_NAME = "saved_model_exporter"
MAX_STEPS = 10
CHIEF = dc._TaskType.CHIEF
EVALUATOR = dc._TaskType.EVALUATOR
WORKER = dc._TaskType.WORKER
PS = dc._TaskType.PS
original_run_std_server = dc._run_std_server
class DistributeCoordinatorIntegrationTest(
multi_worker_test_base.IndependentWorkerTestBase, parameterized.TestCase):
@classmethod
def setUpClass(cls):
"""Create a local cluster with 2 workers."""
super(DistributeCoordinatorIntegrationTest, cls).setUpClass()
cls._cluster_spec = multi_worker_test_base.create_in_process_cluster(
num_workers=3, num_ps=2, has_eval=True)
def setUp(self):
self._model_dir = tempfile.mkdtemp()
super(DistributeCoordinatorIntegrationTest, self).setUp()
def dataset_input_fn(self, x, y, batch_size, shuffle):
def input_fn():
dataset = dataset_ops.Dataset.from_tensor_slices((x, y))
if shuffle:
dataset = dataset.shuffle(batch_size)
dataset = dataset.repeat(100).batch(batch_size)
return dataset
return input_fn
def _get_exporter(self, name, fc):
feature_spec = feature_column.make_parse_example_spec(fc)
serving_input_receiver_fn = (
export_lib.build_parsing_serving_input_receiver_fn(feature_spec))
return exporter_lib.LatestExporter(
name, serving_input_receiver_fn=serving_input_receiver_fn)
def _extract_loss_and_global_step(self, event_folder):
"""Returns the loss and global step in last event."""
event_paths = glob.glob(os.path.join(event_folder, "events*"))
self.assertNotEmpty(
event_paths, msg="Event file not found in dir %s" % event_folder)
loss = None
global_step_count = None
for e in summary_iterator.summary_iterator(event_paths[-1]):
current_loss = None
for v in e.summary.value:
if v.tag == "loss":
current_loss = v.simple_value
# If loss is not found, global step is meaningless.
if current_loss is None:
continue
current_global_step = e.step
if global_step_count is None or current_global_step > global_step_count:
global_step_count = current_global_step
loss = current_loss
return (loss, global_step_count)
def _get_estimator(self,
train_distribute,
eval_distribute,
remote_cluster=None):
input_dimension = LABEL_DIMENSION
linear_feature_columns = [
feature_column.numeric_column("x", shape=(input_dimension,))
]
dnn_feature_columns = [
feature_column.numeric_column("x", shape=(input_dimension,))
]
return dnn_linear_combined.DNNLinearCombinedRegressor(
linear_feature_columns=linear_feature_columns,
dnn_hidden_units=(2, 2),
dnn_feature_columns=dnn_feature_columns,
label_dimension=LABEL_DIMENSION,
model_dir=self._model_dir,
dnn_optimizer=adagrad.AdagradOptimizer(0.001),
linear_optimizer=adagrad.AdagradOptimizer(0.001),
config=run_config_lib.RunConfig(
experimental_distribute=DistributeConfig(
train_distribute=train_distribute,
eval_distribute=eval_distribute,
remote_cluster=remote_cluster)))
def _complete_flow(self,
train_distribute,
eval_distribute,
remote_cluster=None,
use_train_and_evaluate=True):
estimator = self._get_estimator(train_distribute, eval_distribute,
remote_cluster)
input_dimension = LABEL_DIMENSION
train_input_fn = self.dataset_input_fn(
x={"x": DATA},
y=DATA,
batch_size=BATCH_SIZE // train_distribute.num_replicas_in_sync,
shuffle=True)
if eval_distribute:
eval_batch_size = BATCH_SIZE // eval_distribute.num_replicas_in_sync
else:
eval_batch_size = BATCH_SIZE
eval_input_fn = self.dataset_input_fn(
x={"x": DATA}, y=DATA, batch_size=eval_batch_size, shuffle=False)
linear_feature_columns = [
feature_column.numeric_column("x", shape=(input_dimension,))
]
dnn_feature_columns = [
feature_column.numeric_column("x", shape=(input_dimension,))
]
feature_columns = linear_feature_columns + dnn_feature_columns
eval_spec = estimator_training.EvalSpec(
name=EVAL_NAME,
input_fn=eval_input_fn,
steps=None,
exporters=self._get_exporter(EXPORTER_NAME, feature_columns),
start_delay_secs=0,
throttle_secs=1)
if use_train_and_evaluate:
estimator_training.train_and_evaluate(
estimator,
estimator_training.TrainSpec(train_input_fn, max_steps=MAX_STEPS),
eval_spec)
else:
estimator.train(train_input_fn, max_steps=MAX_STEPS)
latest_ckpt_path = estimator.latest_checkpoint()
metrics = estimator.evaluate(eval_input_fn,
checkpoint_path=latest_ckpt_path,
name=EVAL_NAME)
# Export the eval result to files.
eval_result = estimator_training._EvalResult(
status=estimator_training._EvalStatus.EVALUATED,
metrics=metrics,
checkpoint_path=latest_ckpt_path)
evaluator = estimator_training._TrainingExecutor._Evaluator(estimator,
eval_spec,
None)
evaluator._export_eval_result(eval_result, True)
return estimator
def _inspect_train_and_eval_events(self, estimator):
# Make sure nothing is stuck in limbo.
writer_cache.FileWriterCache.clear()
# Examine the training events. Use a range to check global step to avoid
    # flakiness due to global step race condition.
training_loss, _ = self._extract_loss_and_global_step(self._model_dir)
self.assertIsNotNone(training_loss)
# Examine the eval events. The global step should be accurate.
eval_dir = os.path.join(self._model_dir, "eval_" + EVAL_NAME)
eval_loss, eval_global_step = self._extract_loss_and_global_step(
event_folder=eval_dir)
self.assertIsNotNone(eval_loss)
self.assertGreaterEqual(eval_global_step, MAX_STEPS)
# Examine the export folder.
export_dir = os.path.join(
os.path.join(self._model_dir, "export"), EXPORTER_NAME)
self.assertTrue(gfile.Exists(export_dir))
# Examine the ckpt for predict.
def predict_input_fn():
return dataset_ops.Dataset.from_tensor_slices({
"x": DATA
}).batch(BATCH_SIZE)
predicted_proba = np.array([
x[prediction_keys.PredictionKeys.PREDICTIONS]
for x in estimator.predict(predict_input_fn)
])
self.assertAllEqual((BATCH_SIZE, LABEL_DIMENSION), predicted_proba.shape)
def _make_cross_device_ops(self, num_gpus_per_worker):
return cross_device_ops_lib.MultiWorkerAllReduce(
["/job:worker/task:0", "/job:worker/task:1", "/job:worker/task:2"],
num_gpus_per_worker)
def _get_strategy_object(self, strategy_cls):
if strategy_cls == mirrored_strategy.CoreMirroredStrategy:
return strategy_cls(
cross_device_ops=self._make_cross_device_ops(
num_gpus_per_worker=context.num_gpus()))
elif strategy_cls == mirrored_strategy.MirroredStrategy:
return strategy_cls(
num_gpus_per_worker=context.num_gpus(),
cross_device_ops=self._make_cross_device_ops(
num_gpus_per_worker=context.num_gpus()))
else:
return strategy_cls(num_gpus_per_worker=context.num_gpus())
@combinations.generate(
combinations.combine(
mode=["graph"],
train_distribute_cls=[
collective_all_reduce_strategy.CollectiveAllReduceStrategy,
mirrored_strategy.MirroredStrategy,
mirrored_strategy.CoreMirroredStrategy,
parameter_server_strategy.ParameterServerStrategy
],
eval_distribute_cls=[
None,
mirrored_strategy.MirroredStrategy,
mirrored_strategy.CoreMirroredStrategy,
parameter_server_strategy.ParameterServerStrategy,
collective_all_reduce_strategy.CollectiveAllReduceStrategy,
],
required_gpus=[0, 1]))
def test_complete_flow_standalone_client(self, train_distribute_cls,
eval_distribute_cls):
train_distribute = self._get_strategy_object(train_distribute_cls)
if eval_distribute_cls:
eval_distribute = self._get_strategy_object(eval_distribute_cls)
else:
eval_distribute = None
cluster_spec = copy.deepcopy(self._cluster_spec)
if (train_distribute_cls !=
parameter_server_strategy.ParameterServerStrategy):
cluster_spec.pop("ps", None)
estimator = self._complete_flow(train_distribute, eval_distribute,
cluster_spec)
self._inspect_train_and_eval_events(estimator)
@combinations.generate(
combinations.combine(
mode=["graph"],
eval_distribute_class=[
None,
mirrored_strategy.MirroredStrategy,
mirrored_strategy.CoreMirroredStrategy,
parameter_server_strategy.ParameterServerStrategy,
],
required_gpus=[0, 1]))
def test_complete_flow_standalone_client_collective_nccl(
self, eval_distribute_class):
train_distribute = (
collective_all_reduce_strategy.CollectiveAllReduceStrategy(
num_gpus_per_worker=context.num_gpus(),
communication=cross_device_ops_lib.CollectiveCommunication.NCCL))
if eval_distribute_class:
eval_distribute = self._get_strategy_object(eval_distribute_class)
else:
eval_distribute = None
cluster_spec = copy.deepcopy(self._cluster_spec)
cluster_spec.pop("ps", None)
estimator = self._complete_flow(train_distribute, eval_distribute,
cluster_spec)
self._inspect_train_and_eval_events(estimator)
@combinations.generate(
combinations.combine(
mode=["graph"],
train_distribute_cls=[
mirrored_strategy.MirroredStrategy,
mirrored_strategy.CoreMirroredStrategy,
],
eval_distribute_cls=[
None,
mirrored_strategy.MirroredStrategy,
mirrored_strategy.CoreMirroredStrategy,
],
required_gpus=[0, 1]))
def test_estimator_standalone_client(self, train_distribute_cls,
eval_distribute_cls):
train_distribute = self._get_strategy_object(train_distribute_cls)
if eval_distribute_cls:
eval_distribute = self._get_strategy_object(eval_distribute_cls)
else:
eval_distribute = None
# We use the whole cluster for evaluation.
cluster = copy.deepcopy(self._cluster_spec)
cluster.pop("evaluator", None)
estimator = self._complete_flow(
train_distribute, eval_distribute, remote_cluster=cluster,
use_train_and_evaluate=False)
self._inspect_train_and_eval_events(estimator)
def _mock_run_std_server(self, *args, **kwargs):
ret = original_run_std_server(*args, **kwargs)
# Wait for all std servers to be brought up in order to reduce the chance of
# remote sessions taking local ports that have been assigned to std servers.
self._barrier.wait()
return ret
def _independent_worker_fn(
self,
train_distribute,
eval_distribute,
):
with test.mock.patch.object(dc, "_run_std_server",
self._mock_run_std_server):
self._complete_flow(train_distribute, eval_distribute)
@combinations.generate(
combinations.combine(
mode=["graph"],
train_distribute_cls=[
collective_all_reduce_strategy.CollectiveAllReduceStrategy,
parameter_server_strategy.ParameterServerStrategy,
],
eval_distribute_cls=[
None,
mirrored_strategy.MirroredStrategy,
mirrored_strategy.CoreMirroredStrategy,
parameter_server_strategy.ParameterServerStrategy,
collective_all_reduce_strategy.CollectiveAllReduceStrategy,
],
required_gpus=[0, 1]))
def test_complete_flow_independent_worker_between_graph(
self, train_distribute_cls, eval_distribute_cls):
if (context.num_gpus() < 2 and eval_distribute_cls ==
collective_all_reduce_strategy.CollectiveAllReduceStrategy):
self.skipTest("`CollectiveAllReduceStrategy` needs at least two towers.")
train_distribute = self._get_strategy_object(train_distribute_cls)
if eval_distribute_cls:
eval_distribute = self._get_strategy_object(eval_distribute_cls)
else:
eval_distribute = None
if (train_distribute_cls == parameter_server_strategy
.ParameterServerStrategy):
cluster_spec = multi_worker_test_base.create_cluster_spec(
num_workers=3, num_ps=2, has_eval=True)
# 3 workers, 2 ps and 1 evaluator.
self._barrier = dc._Barrier(6)
else:
cluster_spec = multi_worker_test_base.create_cluster_spec(
num_workers=3, num_ps=0, has_eval=True)
# 3 workers and 1 evaluator.
self._barrier = dc._Barrier(4)
threads = self.run_multiple_tasks_in_threads(self._independent_worker_fn,
cluster_spec, train_distribute,
eval_distribute)
threads_to_join = []
for task_type, ts in threads.items():
if task_type == PS:
continue
for t in ts:
threads_to_join.append(t)
self.join_independent_workers(threads_to_join)
estimator = self._get_estimator(train_distribute, eval_distribute)
self._inspect_train_and_eval_events(estimator)
@combinations.generate(
combinations.combine(
mode=["graph"],
train_distribute_cls=[
mirrored_strategy.MirroredStrategy,
mirrored_strategy.CoreMirroredStrategy
],
eval_distribute_cls=[
None,
mirrored_strategy.MirroredStrategy,
mirrored_strategy.CoreMirroredStrategy
],
required_gpus=[0, 1]))
def test_complete_flow_independent_worker_in_graph(self, train_distribute_cls,
eval_distribute_cls):
train_distribute = self._get_strategy_object(train_distribute_cls)
if eval_distribute_cls:
eval_distribute = self._get_strategy_object(eval_distribute_cls)
else:
eval_distribute = None
cluster_spec = multi_worker_test_base.create_cluster_spec(
num_workers=3, num_ps=0, has_eval=True)
# 3 workers and 1 evaluator.
self._barrier = dc._Barrier(4)
threads = self.run_multiple_tasks_in_threads(self._independent_worker_fn,
cluster_spec, train_distribute,
eval_distribute)
self.join_independent_workers([threads[WORKER][0], threads[EVALUATOR][0]])
estimator = self._get_estimator(train_distribute, eval_distribute)
self._inspect_train_and_eval_events(estimator)
TF_CONFIG_WITH_CHIEF = {
"cluster": {
"chief": ["fake_chief"],
},
"task": {
"type": "chief",
"index": 0
}
}
TF_CONFIG_WITH_MASTER = {
"cluster": {
"master": ["fake_master"],
},
"task": {
"type": "master",
"index": 0
}
}
TF_CONFIG_WITHOUT_TASK = {"cluster": {"chief": ["fake_worker"]}}
class RunConfigTest(test.TestCase):
def test_previously_unexpected_cluster_spec(self):
with test.mock.patch.dict(
"os.environ", {"TF_CONFIG": json.dumps(TF_CONFIG_WITHOUT_TASK)}):
run_config_lib.RunConfig(
experimental_distribute=DistributeConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy(
["/device:GPU:0", "/device:GPU:1"])))
def test_should_run_distribute_coordinator(self):
"""Tests that should_run_distribute_coordinator return a correct value."""
# We don't use distribute coordinator for local training.
self.assertFalse(
dc_training.should_run_distribute_coordinator(
run_config_lib.RunConfig()))
# When `train_distribute` is not specified, don't use distribute
# coordinator.
with test.mock.patch.dict("os.environ",
{"TF_CONFIG": json.dumps(TF_CONFIG_WITH_CHIEF)}):
self.assertFalse(
dc_training.should_run_distribute_coordinator(
run_config_lib.RunConfig()))
# When `train_distribute` is specified and TF_CONFIG is detected, use
# distribute coordinator.
with test.mock.patch.dict("os.environ",
{"TF_CONFIG": json.dumps(TF_CONFIG_WITH_CHIEF)}):
config_with_train_distribute = run_config_lib.RunConfig(
experimental_distribute=DistributeConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy(
["/device:GPU:0", "/device:GPU:1"])))
config_with_eval_distribute = run_config_lib.RunConfig(
experimental_distribute=DistributeConfig(
eval_distribute=mirrored_strategy.CoreMirroredStrategy(
["/device:GPU:0", "/device:GPU:1"])))
self.assertTrue(
dc_training.should_run_distribute_coordinator(
config_with_train_distribute))
self.assertFalse(
dc_training.should_run_distribute_coordinator(
config_with_eval_distribute))
# With a master in the cluster, don't run distribute coordinator.
with test.mock.patch.dict("os.environ",
{"TF_CONFIG": json.dumps(TF_CONFIG_WITH_MASTER)}):
config = run_config_lib.RunConfig(
experimental_distribute=DistributeConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy(
["/device:GPU:0", "/device:GPU:1"])))
self.assertFalse(dc_training.should_run_distribute_coordinator(config))
def test_init_run_config_duplicate_distribute(self):
with self.assertRaises(ValueError):
run_config_lib.RunConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy(),
experimental_distribute=DistributeConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy()))
with self.assertRaises(ValueError):
run_config_lib.RunConfig(
eval_distribute=mirrored_strategy.CoreMirroredStrategy(),
experimental_distribute=DistributeConfig(
eval_distribute=mirrored_strategy.CoreMirroredStrategy()))
def test_init_run_config_none_distribute_coordinator_mode(self):
# We don't use distribute coordinator for local training.
config = run_config_lib.RunConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy())
dc_training.init_run_config(config, {})
self.assertIsNone(config._distribute_coordinator_mode)
# With a master in the cluster, don't run distribute coordinator.
with test.mock.patch.dict("os.environ",
{"TF_CONFIG": json.dumps(TF_CONFIG_WITH_MASTER)}):
config = run_config_lib.RunConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy())
self.assertIsNone(config._distribute_coordinator_mode)
# When `train_distribute` is not specified, don't use distribute
# coordinator.
with test.mock.patch.dict("os.environ",
{"TF_CONFIG": json.dumps(TF_CONFIG_WITH_CHIEF)}):
config = run_config_lib.RunConfig()
self.assertFalse(hasattr(config, "_distribute_coordinator_mode"))
def test_init_run_config_independent_worker(self):
# When `train_distribute` is specified and TF_CONFIG is detected, use
# distribute coordinator with INDEPENDENT_WORKER mode.
with test.mock.patch.dict("os.environ",
{"TF_CONFIG": json.dumps(TF_CONFIG_WITH_CHIEF)}):
config = run_config_lib.RunConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy())
self.assertEqual(config._distribute_coordinator_mode,
dc.CoordinatorMode.INDEPENDENT_WORKER)
def test_init_run_config_standalone_client(self):
# When `train_distribute` is specified, TF_CONFIG is detected and
# `experimental.remote_cluster` is set use distribute coordinator with
# STANDALONE_CLIENT mode.
config = run_config_lib.RunConfig(
train_distribute=mirrored_strategy.CoreMirroredStrategy(),
experimental_distribute=DistributeConfig(
remote_cluster={"chief": ["fake_worker"]}))
self.assertEqual(config._distribute_coordinator_mode,
dc.CoordinatorMode.STANDALONE_CLIENT)
if __name__ == "__main__":
# Reduce `recovery_wait_secs` from 30 seconds so the test completes quickly.
orig_init = session_manager.SessionManager.__init__
def new_init(*args, **kwargs):
kwargs.pop("recovery_wait_secs", None)
kwargs["recovery_wait_secs"] = 0.5
orig_init(*args, **kwargs)
session_manager.SessionManager.__init__ = new_init
with test.mock.patch.object(sys, "exit", os._exit):
test.main()
| [
"gardener@tensorflow.org"
] | gardener@tensorflow.org |
a8cff76094aaf294cea00102085b2551c7c766a1 | 301831aa83397f3cfed0e48283076fea5026aad5 | /src/apps/productos/models.py | 96fd7e6ee7ea151b4c7e44fbff48d73ea81811cf | [] | no_license | hanstakeshi/crehana-project | c49e49313586b854a00f1e4e485bc0fdde0146c3 | 98ad65429c4cc7a8d1c9d0f6553b3060562e7e5f | refs/heads/master | 2021-09-14T16:15:51.092317 | 2018-05-12T01:50:14 | 2018-05-12T01:50:14 | 123,772,643 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,714 | py | # -*- coding: utf-8 -*-
from django.db import models
from uuslug import uuslug
from filebrowser.fields import FileBrowseField
# Create your models here.
class Categoria(models.Model):
nombre = models.CharField('Nombre', max_length=120)
slug = models.SlugField('slug', max_length=180, blank=True)
position = models.SmallIntegerField('Posición', default=0)
def save(self, *args, **kwargs):
        try:
            # Flag the category as an offer if any related presentation has a base price.
            for x in self.producto_presentacion.order_by('position'):
                if x.precio_base > 0:
                    self.oferta = True
                    break
        except Exception:
            # The reverse relation may not be available yet (e.g. on first save).
            pass
self.slug = uuslug(self.nombre, instance=self)
super(Categoria, self).save(*args, **kwargs)
def __str__(self):
return self.nombre
class Curso(models.Model):
mostrar_home = models.BooleanField("¿Mostrar en el Home?", default=False)
fk_categoria = models.ForeignKey(Categoria,
related_name="prod_cat",
verbose_name="Categoria",
blank=True, null=True)
nombre = models.CharField('Nombre del Curso', max_length=120)
position = models.SmallIntegerField(u'Posición', default=0)
codigo = models.CharField(u"Código", max_length=400)
precio = models.DecimalField('Precio Referencia', max_digits=12, decimal_places=2, default=0)
img_curso = FileBrowseField('Imagen del Curso',
max_length=200, blank=True,
extensions=['.jpg', '.png', '.gif'],
directory='imagen_curso')
class Meta:
verbose_name = u'Curso'
verbose_name_plural = u'Curso'
def __str__(self):
return u'%s' % self.nombre
| [
"agurtohans@gmail.com"
] | agurtohans@gmail.com |
7e2557e4813eaa8da98c9b826870dfd81a0e9b88 | e9670ebcd4b554d6ffe2f7b23c89f2982df39ddb | /Django/first_project/first_app/urls.py | e441f16ef0077d78574120d4c1337ebcaa1a7df9 | [] | no_license | Rushi-Bhavsar/BFDL-Full-Stack-Training | 3ab4f58be23522a632a4c346a9738d35c2cb4cc2 | 0648d37568be2406b0027bacb0509e30987e8b38 | refs/heads/main | 2023-06-20T07:05:32.307145 | 2021-07-14T17:00:08 | 2021-07-14T17:00:08 | 374,981,013 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 234 | py | from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
path('images/', views.show_images, name='images'),
path('accessRecords/', views.show_access_records, name='records')
]
| [
"rushi.bhavsar.57@gmail.com"
] | rushi.bhavsar.57@gmail.com |
5c523dcbb32e9869f218b6ef3eac3f283bf5b190 | 5f6f9bdd7f8655f02998944333fe41d7f9282b7b | /SentimentalAnalysis.py | b2dbb6ae1a108d9695d4d5912ff1b7a402183d5c | [] | no_license | Krishnachinya/Twitter | 1fa9eed1236a6a8b18a284e205f2fadfc7c011f2 | 813df86482ba50c61749207438350cd48694a0d0 | refs/heads/master | 2020-03-16T07:23:51.899975 | 2018-05-08T08:02:53 | 2018-05-08T08:02:53 | 132,575,243 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,469 | py | import pandas as pd
import json
import string
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
import nltk
file = open("/Users/KrishnChinya/PycharmProjects/Twitter/data.json","r")
json_file = json.loads(file.read())
file.close()
states_abbrevation = {
'AK': 'Alaska',
'AL': 'Alabama',
'AR': 'Arkansas',
'AS': 'American Samoa',
'AZ': 'Arizona',
'CA': 'California',
'CO': 'Colorado',
'CT': 'Connecticut',
'DC': 'District of Columbia',
'DE': 'Delaware',
'FL': 'Florida',
'GA': 'Georgia',
'GU': 'Guam',
'HI': 'Hawaii',
'IA': 'Iowa',
'ID': 'Idaho',
'IL': 'Illinois',
'IN': 'Indiana',
'KS': 'Kansas',
'KY': 'Kentucky',
'LA': 'Louisiana',
'MA': 'Massachusetts',
'MD': 'Maryland',
'ME': 'Maine',
'MI': 'Michigan',
'MN': 'Minnesota',
'MO': 'Missouri',
'MP': 'Northern Mariana Islands',
'MS': 'Mississippi',
'MT': 'Montana',
'NA': 'National',
'NC': 'North Carolina',
'ND': 'North Dakota',
'NE': 'Nebraska',
'NH': 'New Hampshire',
'NJ': 'New Jersey',
'NM': 'New Mexico',
'NV': 'Nevada',
'NY': 'New York',
'OH': 'Ohio',
'OK': 'Oklahoma',
'OR': 'Oregon',
'PA': 'Pennsylvania',
'PR': 'Puerto Rico',
'RI': 'Rhode Island',
'SC': 'South Carolina',
'SD': 'South Dakota',
'TN': 'Tennessee',
'TX': 'Texas',
'UT': 'Utah',
'VA': 'Virginia',
'VI': 'Virgin Islands',
'VT': 'Vermont',
'WA': 'Washington',
'WI': 'Wisconsin',
'WV': 'West Virginia',
'WY': 'Wyoming'
}
tweets = pd.DataFrame(columns=["States","Text"]);
columns = list(tweets)
length_json = len(json_file)
pos = 1
words = []
state = ""
stopwords = ENGLISH_STOP_WORDS
for json in json_file:
word = json['text']
# print(word)
word = word.lower()
# word = word.decode("utf-8")
#remove puncuatation and special symbols
p = string.punctuation
d = string.digits
table = str.maketrans(p, len(p)*" ")
word = word.translate(table)
table = str.maketrans(d, len(d)*" ")
word = word.translate(table)
word = nltk.word_tokenize(word)
# print(word)
words = [wrd for wrd in word if wrd not in stopwords]
# print(words)
if(json['place']!=None):
state = json['place']['full_name'].split(',')[1].strip()
        if state not in states_abbrevation.keys():
            # The second token was not an abbreviation; try matching the full state name.
            for key, value in states_abbrevation.items():
                if value.lower() == json['place']['full_name'].split(',')[0].lower():
                    state = key
                    break
            else:
                # Only fall back to 'unknown' when no state name matched.
                state = 'unknown'
else:
for key, value in states_abbrevation.items():
if key == state:
state = key
break;
else:
state = 'unknown'
    if (tweets['States'] == state).any():
        # Extend the word list of the row that already holds this state,
        # not row 0 unconditionally.
        idx = tweets.index[tweets['States'] == state][0]
        tweets.at[idx, 'Text'].extend(words)
    else:
        tweets.loc[len(tweets)] = [state, words]
sentiment = {};
# here calculating scores
file = open("/Users/KrishnChinya/PycharmProjects/Twitter/AFINN-111.txt")
for line in file.readlines():
    lst = line.split()
    if len(lst) < 2:
        # Skip blank/malformed lines instead of reusing the previous name/score.
        continue
    # The last token is the score; everything before it is the
    # (possibly multi-word) term.
    name = " ".join(lst[:-1])
    sentiment[name] = int(lst[-1])
file.close()
state_scores = pd.DataFrame(columns=["States", "Score"])
for index, row in tweets.iterrows():
    # Sum the AFINN score of every word collected for this state.
    sentiment_score = sum(sentiment.get(tweet_word, 0) for tweet_word in row[1])
    state_scores.loc[len(state_scores)] = [row[0], sentiment_score]
state_scores.to_csv("/Users/KrishnChinya/PycharmProjects/Twitter/scores1.csv",index=False) | [
"Krishnachinya@gmail.com"
] | Krishnachinya@gmail.com |
d8c5bc377f0aaa256a6d8cc966695d8c742743a6 | b24400d5811f2540d9762c0c21825d542bc4cacb | /Day 2/dilation_erosion.py | 098eca82454cc871acfd265ea1fb38314d6d25cc | [] | no_license | innokaiclub/OpenCV | f6147f7d09e0a8626ff2c41ba913b699f50a67e7 | 26141cd9846859d177f62e2a2f2619faaa037449 | refs/heads/master | 2020-08-01T11:04:22.753837 | 2019-10-18T07:57:11 | 2019-10-18T07:57:11 | 209,515,785 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 315 | py | import cv2
import numpy as np
img = cv2.imread('logo.jpg', 0)
kernel = np.ones((5,5), np.uint8)
img_erosion = cv2.erode(img, kernel, iterations=1)
img_dilation = cv2.dilate(img, kernel, iterations=1)
cv2.imshow('Input', img)
cv2.imshow('Erosion', img_erosion)
cv2.imshow('Dilation', img_dilation)
cv2.waitKey(0)
cv2.destroyAllWindows()
| [
"noreply@github.com"
] | innokaiclub.noreply@github.com |
ff9bc8743fc108d614b69a47dc7b6b336efed993 | 894ed19b0168134e9950a733e05b1e7547c462f3 | /util.py | efee494ed3b73156963cb3b7fdac1d184cc744b0 | [] | no_license | danoan/blink-dev | 705e3de699f0b1b824d7bf3efcbdbac4321d9400 | 0156439ad4ac8cc37b555005e67d54b33a685f17 | refs/heads/master | 2021-01-02T22:51:40.529591 | 2014-01-20T22:18:23 | 2014-01-20T22:18:23 | 14,269,329 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,769 | py | #coding:utf-8
import json
import psycopg2
import urlparse
import datetime
class BlinkException(Exception):
pass
class BlinkCodeNotExist(Exception):
pass
class Util(object):
PATHS_DICT = {"CSS_PATH":"/static/css/", "JS_PATH":"/static/js/", "IMG_PATH":"/static/img/", "MOV_PATH":"/static/mov/"}
LOCALE_DICT = {"af_ZA":"Afrikaans","ar_AR":"Arabic","az_AZ":"Azerbaijani","be_BY":"Belarusian","bg_BG":"Bulgarian","bn_IN":"Bengali","bs_BA":"Bosnian","ca_ES":"Catalan","cs_CZ":"Czech","cy_GB":"Welsh","da_DK":"Danish","de_DE":"German","el_GR":"Greek","en_GB":"English (UK)","en_PI":"English (Pirate)","en_UD":"English (Upside Down)","en_US":"English (US)","eo_EO":"Esperanto","es_ES":"Spanish (Spain)","es_LA":"Spanish","et_EE":"Estonian","eu_ES":"Basque","fa_IR":"Persian","fb_LT":"Leet Speak","fi_FI":"Finnish","fo_FO":"Faroese","fr_CA":"French (Canada)","fr_FR":"French (France)","fy_NL":"Frisian","ga_IE":"Irish","gl_ES":"Galician","he_IL":"Hebrew","hi_IN":"Hindi","hr_HR":"Croatian","hu_HU":"Hungarian","hy_AM":"Armenian","id_ID":"Indonesian","is_IS":"Icelandic","it_IT":"Italian","ja_JP":"Japanese","ka_GE":"Georgian","km_KH":"Khmer","ko_KR":"Korean","ku_TR":"Kurdish","la_VA":"Latin","lt_LT":"Lithuanian","lv_LV":"Latvian","mk_MK":"Macedonian","ml_IN":"Malayalam","ms_MY":"Malay","nb_NO":"Norwegian (bokmal)","ne_NP":"Nepali","nl_NL":"Dutch","nn_NO":"Norwegian (nynorsk)","pa_IN":"Punjabi","pl_PL":"Polish","ps_AF":"Pashto","pt_BR":"Portuguese (Brazil)","pt_PT":"Portuguese (Portugal)","ro_RO":"Romanian","ru_RU":"Russian","sk_SK":"Slovak","sl_SI":"Slovenian","sq_AL":"Albanian","sr_RS":"Serbian","sv_SE":"Swedish","sw_KE":"Swahili","ta_IN":"Tamil","te_IN":"Telugu","th_TH":"Thai","tl_PH":"Filipino","tr_TR":"Turkish","uk_UA":"Ukrainian","vi_VN":"Vietnamese","zh_CN":"Simplified Chinese (China)","zh_HK":"Traditional Chinese (Hong Kong)","zh_TW":"Traditional Chinese (Taiwan)"}
TYPE_EXCEPTION = 0
TYPE_SUCCESS = 1
TYPE_INFORMATION = 2
#Action response Package
@staticmethod
def ARP(pk_type,ex,user_data):
if pk_type==0: #Exception
ex_obj = {"ex_type":ex.args[0],"ex_info":ex.args[1],"ex_msg":ex.args[2],"ex_extra":ex.args[3]}
p = {"type":"exception","ex_obj":ex_obj,"user_data":user_data}
elif pk_type==1: #Success
p = {"type":"success","ex_obj":None,"user_data":user_data}
elif pk_type==2: #Information
p = {"type":"information","ex_obj":None,"user_data":user_data}
return json.dumps(p)
@staticmethod
def get_age_from_facebook_date(birthday):
try:
birthday_list = birthday.split("/")
birthday = datetime.datetime(int(birthday_list[2]),int(birthday_list[0]),int(birthday_list[1]))
today = datetime.datetime.now()
age = (today.year - birthday.year)-1
if today.month >= birthday.month:
if today.day >= birthday.day:
age+=1
return age
except IndexError:
try:
return int(birthday)
except ValueError:
return None
@staticmethod
def get_db_date_format(date):
try:
date = date.split("/")
return datetime.date(date[2],date[0],date[1])
except Exception:
return None
class Database():
@staticmethod
def connect_database(DB_URL):
urlparse.uses_netloc.append("postgres")
url = urlparse.urlparse(DB_URL)
return psycopg2.connect(
database=url.path[1:],
user=url.username,
password=url.password,
host=url.hostname,
port=url.port
) | [
"danoan2008@gmail.com"
] | danoan2008@gmail.com |
240acd9f27afae0df14d392f51b84541d70a1dd9 | 9ea049bcad2c0cb785e09ea1039266028a95a9b2 | /alphabet.py | 82879f598017ab2a679dabe6fee7c31b3489da4e | [] | no_license | vaishaliusha/vaishali | 5835d3cc79cddefbf247d551f0431a8505b2bc06 | 730beffe15a5295af63a6df502eb3d8b738746c3 | refs/heads/master | 2020-04-11T06:42:30.583200 | 2019-02-11T09:44:43 | 2019-02-11T09:44:43 | 161,588,923 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 284 | py | while True:
print("Enter '0' for exit.")
ch = input("Enter any character: ")
if ch == '0':
break
else:
if((ch>='a' and ch<='z') or (ch>='A' and ch<='Z')):
print("the given character",ch, "is an alphabet.\n")
else:
print("the given chracter",ch, "is not an alphabet.\n")
| [
"noreply@github.com"
] | vaishaliusha.noreply@github.com |
5d6cee2f86914a629b9478bf49101d646a4faf12 | 970ae27add331f3f6942117364361f8591aed3e6 | /start_menu.py | 479b209159d46f72c8a27667004ad63b63c62f80 | [
"MIT"
] | permissive | EraSiuS/PygaMone | f4f2eca069ec13f0e0191da2afe266b299629c59 | facc90987254390c5930c84ba883d07fea73fb5e | refs/heads/master | 2023-02-15T03:54:45.535801 | 2020-12-28T00:05:44 | 2020-12-28T00:05:44 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,789 | py | import pygame
import main
import utils
import game
import sound_manager
class StartMenu(object):
    def __init__(self, screen: pygame.Surface):
        self.__screen: pygame.Surface = screen
        self.__display: pygame.Surface = pygame.Surface(main.SCREEN_SIZE)
        self.__display.set_alpha(255)
        self.__open_time = utils.current_milli_time()
        # self.__sound = pygame.mixer.Sound('assets/sound/music/ps_1.mp3')
        self.__sound = pygame.mixer.Sound('assets/sound/music/start_japan.mp3')
        self.__logo = pygame.image.load('assets/textures/hud/logo_full.png')
        self.__bg = pygame.transform.scale(pygame.image.load('assets/textures/hud/main_screen.png'), main.SCREEN_SIZE)
        self.__font = pygame.font.Font("assets/font/MyFont-Regular.otf", 24)
        self.__text = self.__font.render('Press a button to play !', True, (0, 0, 0))
        self.__text_size = self.__text.get_rect().size
        self.__clock = pygame.time.Clock()
        while self.__tick():
            self.__clock.tick(100)

    def dell_var(self):
        del self.__open_time, self.__sound, self.__logo, self.__bg, self.__font, self.__text
        del self.__text_size, self.__display, self.__clock

    def __tick(self):
        dif_t = utils.current_milli_time() - self.__open_time
        if dif_t < 3000:
            self.__display.fill((255, 255, 255))
            self.__display.blit(self.__logo, ((main.SCREEN_SIZE[0] - 600) // 2, (main.SCREEN_SIZE[1] - 128) // 2))
            self.__screen.blit(self.__display, (0, 0))
        else:
            if sound_manager.MUSIC_CHANNEL.get_sound() is None:
                sound_manager.MUSIC_CHANNEL.play(self.__sound)
            self.__screen.blit(self.__bg, (0, 0))
            self.__screen.blit(self.__text, ((main.SCREEN_SIZE[0] - self.__text_size[0]) // 2,
                                             (main.SCREEN_SIZE[1] - self.__text_size[1]) // 2))
            i = self.__display.get_alpha()
            if i > 0:
                self.__display.fill((255, 255, 255))
                if i > 50:
                    self.__display.blit(self.__logo, ((main.SCREEN_SIZE[0] - 600) // 2, (main.SCREEN_SIZE[1] - 128) // 2))
                self.__display.set_alpha(max(0, i - 2))
                self.__screen.blit(self.__display, (0, 0))
        pygame.display.update()
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                self.dell_var()
                return False
            if event.type == pygame.KEYDOWN:
                if dif_t > 3000:
                    sound_manager.MUSIC_CHANNEL.stop()
                    self.dell_var()
                    game.Game(self.__screen)
                    return False
        return True
| [
"aloisboyer58@gmail.com"
] | aloisboyer58@gmail.com |
6193f23e0dae25104c13ea9c9235641eac0d155f | df342ebb48b87bdd763b727740be4fe3efb5fd58 | /201902-aruba-py-1/multi-threadinig-demos/demo13-prime-task-await.py | 08303942075fbd669791cc29d5a8e2b76bacaa8f | [] | no_license | vaidyaenc/vaidya | 335db5e9080878f92859ff0de7f13a47a63196aa | 2fcdc4e1961b0fd8832e719eda74d6b59642960d | refs/heads/master | 2021-05-16T05:48:27.488129 | 2019-03-13T11:45:43 | 2019-03-13T11:45:43 | 103,250,478 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 504 | py | import threading as t
from threadutils import Task
import primeutils
import sys
import time
def main(name,args):
    task1 = Task(primeutils.prime_count, 0, 100)     # count primes in 0..100
    task2 = Task(primeutils.prime_count, 0, 100000)  # count primes in 0..100000
    print('waiting for tasks to finish')
    result1 = task1.wait()
    print('prime_count(0,100) ={}'.format(result1))
    print('prime_count(0,100000) ={}'.format(task2.wait()))
    print('end of program')

if __name__=='__main__':
    main(sys.argv[0],sys.argv[1:]) | [
"sureshv@blrubdev-sureshv.arubanetworks.com"
] | sureshv@blrubdev-sureshv.arubanetworks.com |
877cbcfb1605ae832ac62ac555c862fcd1e8210c | 576e74664e89a15904d3b41cf7a583b83cccf6b6 | /checkout/migrations/0001_initial.py | d6f38c31cf7400561116519a48850b4a27f9801d | [] | no_license | Andy-Osborne/boutique-django | 1a2c30dbe71c263d6a72270a24e92925024dcb71 | 8228bcb84f39ca3ea4ad306838d4300fbf021baf | refs/heads/master | 2022-12-26T00:52:20.180348 | 2020-09-30T10:12:16 | 2020-09-30T10:12:16 | 291,962,174 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,337 | py | # Generated by Django 3.1.1 on 2020-09-11 14:26
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
    initial = True

    dependencies = [
        ('products', '0002_auto_20200907_2031'),
    ]

    operations = [
        migrations.CreateModel(
            name='Order',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('order_number', models.CharField(editable=False, max_length=32)),
                ('full_name', models.CharField(max_length=50)),
                ('email', models.EmailField(max_length=254)),
                ('phone_number', models.CharField(max_length=20)),
                ('country', models.CharField(max_length=40)),
                ('postcode', models.CharField(blank=True, max_length=20, null=True)),
                ('town_or_city', models.CharField(max_length=40)),
                ('street_address1', models.CharField(max_length=80)),
                ('street_address2', models.CharField(blank=True, max_length=80, null=True)),
                ('county', models.CharField(blank=True, max_length=80, null=True)),
                ('date', models.DateField(auto_now=True)),
                ('delivery_cost', models.DecimalField(decimal_places=2, default=0, max_digits=6)),
                ('order_total', models.DecimalField(decimal_places=2, default=0, max_digits=10)),
                ('grand_total', models.DecimalField(decimal_places=2, default=0, max_digits=10)),
            ],
        ),
        migrations.CreateModel(
            name='OrderLineItem',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('product_size', models.CharField(blank=True, max_length=2, null=True)),
                ('quantity', models.IntegerField(default=0)),
                ('lineitem_total', models.DecimalField(decimal_places=2, editable=False, max_digits=6)),
                ('order', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='lineitems', to='checkout.order')),
                ('product', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='products.product')),
            ],
        ),
    ]
| [
"ajaosborne@googlemail.com"
] | ajaosborne@googlemail.com |
85a28501572f11f77fea13dfc65be7674cf6183d | 42cbceac991dd4d16d55cfbfca69c77edee29b9e | /asc.py | 3d9b35b807df4810b3e861f816e62befef346c00 | [] | no_license | BCooper58/python | 75d5c06deab992af6119d6c75d0988a28d242a07 | 91e6c24eafd59c10146ef369f5e0c1ca12ce4b34 | refs/heads/master | 2021-07-24T17:39:58.529282 | 2019-01-14T17:27:01 | 2019-01-14T17:27:01 | 102,133,103 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 145 | py | def asc2():
    for i in range(256):
        c = chr(i)
        print("[",i," ",c,end="")
        if (i % 16 == 0):
            print("\n",end="")

def main():
    asc2()

main()
| [
"noreply@github.com"
] | BCooper58.noreply@github.com |
5ad0428c695af2b019eeb2f0663b66e863d03a50 | c11c27b07086e97c633a833d37787474724bd2d2 | /src/ResNeXt/concateFeature.py | 6d8f98b7b69a10e8dff467812e4cacb8108ba6ef | [
"MIT"
] | permissive | willyspinner/High-Performance-Face-Recognition | d1826a73653dede6b43799439e4fb692f119c70b | c5caad61be97fd20f9c47a727278ff938dc5cc8f | refs/heads/master | 2020-06-22T16:36:29.663302 | 2019-07-19T09:41:47 | 2019-07-19T09:41:47 | 197,746,624 | 0 | 0 | MIT | 2019-07-19T09:42:00 | 2019-07-19T09:41:59 | null | UTF-8 | Python | false | false | 1,953 | py | import scipy.io as sio
import pickle
import numpy as np
import os
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy import spatial
from sklearn.externals import joblib
import time
reducedDim = 2048
pca = PCA(n_components = reducedDim, whiten = True)
path = "/media/zhaojian/6TB/data/extra_general_model_feature/"
with open(path + "NovelSet_List/NovelSet_1.txt", 'r') as f:
    lines = f.readlines()
vggFeatures = np.loadtxt(path + 'NovelSet_Fea/VGG_NOVELSET_1.txt')
print "vggFeatures.shape: ", vggFeatures.shape
inputFeaturePath = "extracted_feature/NovelSet_1IdentityFeature/"
outputFeaturePath = "extracted_feature/NovelSet_1IdentityFeaturePCA2048/"
features = []
labelList = []
for index in range(len(lines)):
    print index
    line = lines[index]
    ID = line.split("/")[-2]
    print ID
    labelList.append(ID)
    vggFeature = feature = vggFeatures[index].flatten()
    print "vggFeature.shape", vggFeature.shape
    # caffeFeature = sio.loadmat(inputFeaturePath + ID + ".mat")["identityFeature"].flatten()
    # print "caffeFeature.shape", caffeFeature.shape
    #
    # identityFeature = np.concatenate((caffeFeature, vggFeature), axis = 0)
    # print "identityFeature.shape: ", identityFeature.shape
    identityFeature = vggFeature
    features.append(identityFeature)
print "features..shape: ", features.shape
# sio.savemat("concatenateFeatures", {"identityFeature": features})
# sio.savemat("vggNovelSet_1_Features", {"identityFeature": features})
features = sio.loadmat("vggNovelSet_1_Features")['identityFeature']
#
# features = pca.fit_transform(features)
#
print "features..shape: ", features.shape
#
#
for index in range(len(features)):
    identityFeature = features[index]
    print "identityFeature.shape: ", identityFeature.shape
    label = labelList[index]
    # print index
    # print label
    sio.savemat(outputFeaturePath + label, {"identityFeature": identityFeature})
| [
"noreply@github.com"
] | willyspinner.noreply@github.com |
35f3e6fc87bf0e774aa1fc4dd0a9fff46bc4aee3 | bd4dcd90d41aa228f0384c9ba03edd105a93d7ec | /products/migrations/0101_auto_20200221_2128.py | 40b496b06fcc159e8132ad5c55c7e06b1c94a954 | [] | no_license | deganoth/mu-shop | 0be0bb0cfa635986b37edbe371daf8373f09aefd | dc1a77ecf6217286c005d762b559fe3f61ef2f6d | refs/heads/master | 2023-02-17T08:23:36.339586 | 2023-01-10T17:51:21 | 2023-01-10T17:51:21 | 243,972,792 | 0 | 1 | null | 2023-02-15T23:10:09 | 2020-02-29T13:22:02 | Python | UTF-8 | Python | false | false | 5,567 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11.24 on 2020-02-21 21:28
from __future__ import unicode_literals
from django.db import migrations
import djmoney.models.fields
class Migration(migrations.Migration):
    dependencies = [
        ('products', '0100_auto_20200221_2125'),
    ]

    operations = [
        migrations.AlterField(
            model_name='product',
            name='price_currency',
field=djmoney.models.fields.CurrencyField(choices=[('XUA', 'ADB Unit of Account'), ('AFN', 'Afghani'), ('DZD', 'Algerian Dinar'), ('ARS', 'Argentine Peso'), ('AMD', 'Armenian Dram'), ('AWG', 'Aruban Guilder'), ('AUD', 'Australian Dollar'), ('AZN', 'Azerbaijanian Manat'), ('BSD', 'Bahamian Dollar'), ('BHD', 'Bahraini Dinar'), ('THB', 'Baht'), ('PAB', 'Balboa'), ('BBD', 'Barbados Dollar'), ('BYN', 'Belarussian Ruble'), ('BYR', 'Belarussian Ruble'), ('BZD', 'Belize Dollar'), ('BMD', 'Bermudian Dollar (customarily known as Bermuda Dollar)'), ('BTN', 'Bhutanese ngultrum'), ('VEF', 'Bolivar Fuerte'), ('BOB', 'Boliviano'), ('XBA', 'Bond Markets Units European Composite Unit (EURCO)'), ('BRL', 'Brazilian Real'), ('BND', 'Brunei Dollar'), ('BGN', 'Bulgarian Lev'), ('BIF', 'Burundi Franc'), ('XOF', 'CFA Franc BCEAO'), ('XAF', 'CFA franc BEAC'), ('XPF', 'CFP Franc'), ('CAD', 'Canadian Dollar'), ('CVE', 'Cape Verde Escudo'), ('KYD', 'Cayman Islands Dollar'), ('CLP', 'Chilean peso'), ('XTS', 'Codes specifically reserved for testing purposes'), ('COP', 'Colombian peso'), ('KMF', 'Comoro Franc'), ('CDF', 'Congolese franc'), ('BAM', 'Convertible Marks'), ('NIO', 'Cordoba Oro'), ('CRC', 'Costa Rican Colon'), ('HRK', 'Croatian Kuna'), ('CUP', 'Cuban Peso'), ('CUC', 'Cuban convertible peso'), ('CZK', 'Czech Koruna'), ('GMD', 'Dalasi'), ('DKK', 'Danish Krone'), ('MKD', 'Denar'), ('DJF', 'Djibouti Franc'), ('STD', 'Dobra'), ('DOP', 'Dominican Peso'), ('VND', 'Dong'), ('XCD', 'East Caribbean Dollar'), ('EGP', 'Egyptian Pound'), ('SVC', 'El Salvador Colon'), ('ETB', 'Ethiopian Birr'), ('EUR', 'Euro'), ('XBB', 'European Monetary Unit (E.M.U.-6)'), ('XBD', 'European Unit of Account 17(E.U.A.-17)'), ('XBC', 'European Unit of Account 9(E.U.A.-9)'), ('FKP', 'Falkland Islands Pound'), ('FJD', 'Fiji Dollar'), ('HUF', 'Forint'), ('GHS', 'Ghana Cedi'), ('GIP', 'Gibraltar Pound'), ('XAU', 'Gold'), ('XFO', 'Gold-Franc'), ('PYG', 'Guarani'), ('GNF', 'Guinea Franc'), ('GYD', 'Guyana Dollar'), ('HTG', 
'Haitian gourde'), ('HKD', 'Hong Kong Dollar'), ('UAH', 'Hryvnia'), ('ISK', 'Iceland Krona'), ('INR', 'Indian Rupee'), ('IRR', 'Iranian Rial'), ('IQD', 'Iraqi Dinar'), ('IMP', 'Isle of Man Pound'), ('JMD', 'Jamaican Dollar'), ('JOD', 'Jordanian Dinar'), ('KES', 'Kenyan Shilling'), ('PGK', 'Kina'), ('LAK', 'Kip'), ('KWD', 'Kuwaiti Dinar'), ('AOA', 'Kwanza'), ('MMK', 'Kyat'), ('GEL', 'Lari'), ('LVL', 'Latvian Lats'), ('LBP', 'Lebanese Pound'), ('ALL', 'Lek'), ('HNL', 'Lempira'), ('SLL', 'Leone'), ('LSL', 'Lesotho loti'), ('LRD', 'Liberian Dollar'), ('LYD', 'Libyan Dinar'), ('SZL', 'Lilangeni'), ('LTL', 'Lithuanian Litas'), ('MGA', 'Malagasy Ariary'), ('MWK', 'Malawian Kwacha'), ('MYR', 'Malaysian Ringgit'), ('TMM', 'Manat'), ('MUR', 'Mauritius Rupee'), ('MZN', 'Metical'), ('MXV', 'Mexican Unidad de Inversion (UDI)'), ('MXN', 'Mexican peso'), ('MDL', 'Moldovan Leu'), ('MAD', 'Moroccan Dirham'), ('BOV', 'Mvdol'), ('NGN', 'Naira'), ('ERN', 'Nakfa'), ('NAD', 'Namibian Dollar'), ('NPR', 'Nepalese Rupee'), ('ANG', 'Netherlands Antillian Guilder'), ('ILS', 'New Israeli Sheqel'), ('RON', 'New Leu'), ('TWD', 'New Taiwan Dollar'), ('NZD', 'New Zealand Dollar'), ('KPW', 'North Korean Won'), ('NOK', 'Norwegian Krone'), ('PEN', 'Nuevo Sol'), ('MRO', 'Ouguiya'), ('TOP', 'Paanga'), ('PKR', 'Pakistan Rupee'), ('XPD', 'Palladium'), ('MOP', 'Pataca'), ('PHP', 'Philippine Peso'), ('XPT', 'Platinum'), ('GBP', 'Pound Sterling'), ('BWP', 'Pula'), ('QAR', 'Qatari Rial'), ('GTQ', 'Quetzal'), ('ZAR', 'Rand'), ('OMR', 'Rial Omani'), ('KHR', 'Riel'), ('MVR', 'Rufiyaa'), ('IDR', 'Rupiah'), ('RUB', 'Russian Ruble'), ('RWF', 'Rwanda Franc'), ('XDR', 'SDR'), ('SHP', 'Saint Helena Pound'), ('SAR', 'Saudi Riyal'), ('RSD', 'Serbian Dinar'), ('SCR', 'Seychelles Rupee'), ('XAG', 'Silver'), ('SGD', 'Singapore Dollar'), ('SBD', 'Solomon Islands Dollar'), ('KGS', 'Som'), ('SOS', 'Somali Shilling'), ('TJS', 'Somoni'), ('SSP', 'South Sudanese Pound'), ('LKR', 'Sri Lanka Rupee'), ('XSU', 'Sucre'), ('SDG', 
'Sudanese Pound'), ('SRD', 'Surinam Dollar'), ('SEK', 'Swedish Krona'), ('CHF', 'Swiss Franc'), ('SYP', 'Syrian Pound'), ('BDT', 'Taka'), ('WST', 'Tala'), ('TZS', 'Tanzanian Shilling'), ('KZT', 'Tenge'), ('XXX', 'The codes assigned for transactions where no currency is involved'), ('TTD', 'Trinidad and Tobago Dollar'), ('MNT', 'Tugrik'), ('TND', 'Tunisian Dinar'), ('TRY', 'Turkish Lira'), ('TMT', 'Turkmenistan New Manat'), ('TVD', 'Tuvalu dollar'), ('AED', 'UAE Dirham'), ('XFU', 'UIC-Franc'), ('USD', 'US Dollar'), ('USN', 'US Dollar (Next day)'), ('UGX', 'Uganda Shilling'), ('CLF', 'Unidad de Fomento'), ('COU', 'Unidad de Valor Real'), ('UYI', 'Uruguay Peso en Unidades Indexadas (URUIURUI)'), ('UYU', 'Uruguayan peso'), ('UZS', 'Uzbekistan Sum'), ('VUV', 'Vatu'), ('CHE', 'WIR Euro'), ('CHW', 'WIR Franc'), ('KRW', 'Won'), ('YER', 'Yemeni Rial'), ('JPY', 'Yen'), ('CNY', 'Yuan Renminbi'), ('ZMK', 'Zambian Kwacha'), ('ZMW', 'Zambian Kwacha'), ('ZWD', 'Zimbabwe Dollar A/06'), ('ZWN', 'Zimbabwe dollar A/08'), ('ZWL', 'Zimbabwe dollar A/09'), ('PLN', 'Zloty')], default=None, editable=False, max_length=3, null=True),
        ),
    ]
| [
"oliver.deegan@gmail.com"
] | oliver.deegan@gmail.com |
5762741a29ba36f2c36980cbe7c87cd3d2f89121 | a01e7f87a0088965e2e0a02476d2df12a49a1a18 | /package/tfi_helper/dhcp/hapack/dhcpparser.py | dea3a1526ea3c35f8b80c04e697d0a60a841bed7 | [] | no_license | gsrr/IFT_jerry | 0456a8a1fb98f84ad5c26dc36bdf32e2d85c750c | 4c2f6900dfd7ae7f6b3cc2150b1c1be236b4c95c | refs/heads/master | 2020-04-04T05:30:10.544252 | 2019-08-22T09:12:03 | 2019-08-22T09:12:03 | 48,145,836 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 686 | py | import argparse
class DHCPParser:
def __init__(self):
self.cmds = ['dhcp_test']
self.parser_dhcp = argparse.ArgumentParser(prog="dhcp", add_help=False)
self.parser_dhcp_test = argparse.ArgumentParser(prog="dhcp_test", add_help=False)
self.parser_dhcp_test.add_argument("-z", nargs="?", required=True)
def find(self, args):
cnt = 0
cmd = "dhcp"
while cnt < len(args):
cmd += ("_" + args[cnt])
if cmd in self.cmds:
break
cnt += 1
args = args[cnt+1:]
namespace = getattr(self, "parser" + "_" + cmd).parse_args(args).__dict__
return cmd, namespace
| [
"jerry.cheng@infortrend.com"
] | jerry.cheng@infortrend.com |
edf13e760e2c79556b59d33cc8cb6c3261ebc614 | ff423429bdc87d96c8ce2d90a3992d8142980fa7 | /Basics Programs in Python/PrintOutput.py | 01648e0bae1f586fffc9be16b7c970eab4f31ba1 | [] | no_license | Gnagdhar/Learning- | 92a3fd86b219290ae9f2bbfaef2d493470fe89ee | e7ac1c9c025ae89c9e7a9345a985fd6805a9763f | refs/heads/master | 2022-06-01T02:26:12.834855 | 2020-04-27T15:14:00 | 2020-04-27T15:14:00 | 257,869,536 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 122 | py | print("Geeks for Geeks")
x=5
print("x=",x)
print("G", "F" ,"D", sep='@')
print("Python", end="@")
print("Geeks for Geeks") | [
"gangadharx.uppin@intel.com"
] | gangadharx.uppin@intel.com |
d3173858f10737bbb574b5291c639096bd42fdb8 | 1ebe5a07e7f6260c2c2ceb6ca00dcf2a0341e544 | /op_impl/built-in/ai_core/tbe/impl/power.py | e29e5eed1d10da730e4062ba4a475b68b162ebd6 | [] | no_license | gekowa/ascend-opp | f5e09905336d85f9974d555d03d37a75cb8185c1 | 5c28a2faf9d2a117ea6f0923efe35fcd53904dd2 | refs/heads/master | 2023-04-09T12:14:40.337104 | 2021-04-19T23:00:59 | 2021-04-19T23:00:59 | 359,620,865 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9,024 | py | # Copyright 2019 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
power
"""
# pylint: disable=redefined-outer-name
import math
from functools import reduce
import te.lang.cce
from te import tvm
from te.platform.fusion_manager import fusion_manager
from te import platform as tbe_platform
from te.utils.op_utils import *
from topi import generic
from topi.cce import util
def positive_compute(base, power, version, input_dtype):
    """
    calculate power for positive elements of base tensor

    Parameters
    ----------
    base: the base tensor
    power: attr power
    version: the product version
    input_dtype: dtype of input

    Returns
    ----------
    res: the result tensor
    """
    base_cast = base
    if input_dtype == "float16" and \
            tbe_platform.cce_conf.api_check_support("te.lang.cce.vexp", "float32") and \
            tbe_platform.cce_conf.api_check_support("te.lang.cce.vlog", "float32"):
        base_cast = te.lang.cce.cast_to(base, "float32")
    log_val = te.lang.cce.vlog(base_cast)
    mul_val = te.lang.cce.vmuls(log_val, power)
    exp_val = te.lang.cce.vexp(mul_val)
    if exp_val.dtype.lower() != input_dtype:
        exp_val = te.lang.cce.cast_to(exp_val, input_dtype)
    return exp_val

def negtive_compute(base, power, nan_values, version, input_dtype):
    """
    calculate power for negative elements of base tensor

    Parameters
    ----------
    base: the base tensor
    power: attr power
    nan_values: a tensor with nan values
    version: the product version
    input_dtype: dtype of input

    Returns
    ----------
    res: the result tensor
    """
    if float(power).is_integer():
        base_cast = base
        if input_dtype == "float16" and \
                tbe_platform.cce_conf.api_check_support("te.lang.cce.vexp", "float32") and \
                tbe_platform.cce_conf.api_check_support("te.lang.cce.vlog", "float32"):
            base_cast = te.lang.cce.cast_to(base, "float32")
        sign_value = math.pow(-1, power)
        abs_base_value = te.lang.cce.vabs(base_cast)
        log_value = te.lang.cce.vlog(abs_base_value)
        mul_value = te.lang.cce.vmuls(log_value, power)
        exp_value = te.lang.cce.vexp(mul_value)
        res = te.lang.cce.vmuls(exp_value, sign_value)
        if res.dtype.lower() != input_dtype:
            res = te.lang.cce.cast_to(res, input_dtype)
        return res
    return nan_values

def zero_compute(power, nan_values, zero_values):
    """
    calculate power for zero elements of base tensor

    Parameters
    ----------
    power: attr power
    nan_values: a tensor with nan values
    zero_values: a tensor with zero values

    Returns
    ----------
    res: the result tensor
    """
    if power > 0.0:
        return zero_values
    return nan_values

def power_scalar(input_x, base, power):
    """
    calculate power when attr scale is 0.0 and attr power is not

    Parameters
    ----------
    input_x: placeholder of input
    base: the base value, equals attr shift
    power: attr power

    Returns
    ----------
    res: the result when attr scale is 0.0 and attr power is not
    """
    tmp_zero = te.lang.cce.vmuls(input_x, 0)
    ones = te.lang.cce.vadds(tmp_zero, 1)
    zeros = tmp_zero
    if base > 0.0:
        res = te.lang.cce.vmuls(ones, math.pow(base, power))
        return res
    if base < 0.0:
        if float(power).is_integer():
            res = te.lang.cce.vmuls(ones, math.pow(base, power))
            return res
        # return abnormal value
        res = te.lang.cce.vrec(zeros)
        return res
    if power > 0:
        return zeros
    # return abnormal value
    res = te.lang.cce.vrec(zeros)
    return res

def zero_diff_scale_compute(input_x, shift, power):
    """
    calculate power when power*scale is 0.0

    Parameters
    ----------
    input_x: placeholder of input
    shift: attr shift
    power: attr power

    Returns
    ----------
    res: the result when power*scale is 0.0
    """
    if power == 0.0:
        tmp_zero = te.lang.cce.vmuls(input_x, 0)
        res = te.lang.cce.vadds(tmp_zero, 1)
        return res
    res = power_scalar(input_x, shift, power)
    return res

# pylint: disable=locally-disabled,unused-argument,too-many-arguments
@fusion_manager.register("power")
def power_compute(input_x, output_y, power=1.0, scale=1.0,
                  shift=0.0, kernel_name="power"):
    """
    calculate power according to different cases

    Parameters
    ----------
    input_x: placeholder of input
    power: attr power
    scale: attr scale
    shift: attr shift

    Returns
    ----------
    res: result of power
    """
    cce_product = tbe_platform.cce_conf.get_soc_spec("SOC_VERSION")
    input_dtype = input_x.dtype.lower()
    diff_scale = power * scale
    if diff_scale == 0.0:
        res = zero_diff_scale_compute(input_x, shift, power)
        return res
    shift_scaled_x = te.lang.cce.vmuls(input_x, scale)
    shift_scaled_x = te.lang.cce.vadds(shift_scaled_x, shift)
    tmp_zero = te.lang.cce.vmuls(input_x, 0)
    zeros = tmp_zero
    nan_value = te.lang.cce.vrec(zeros)
    if power == 1.0:
        res = shift_scaled_x
        return res
    if power == 2.0:
        res = te.lang.cce.vmul(shift_scaled_x, shift_scaled_x)
        return res
    if power == 3.0:
        res = te.lang.cce.vmul(shift_scaled_x, shift_scaled_x)
        res = te.lang.cce.vmul(res, shift_scaled_x)
        return res
    positive_pow_val = \
        positive_compute(shift_scaled_x, power, cce_product, input_dtype)
    negative_pow_val = \
        negtive_compute(shift_scaled_x, power,
                        nan_value, cce_product, input_dtype)
    zero_pow_val = zero_compute(power, nan_value, zeros)
    res = te.lang.cce.vcmpsel(shift_scaled_x, zeros,
                              'gt', positive_pow_val, negative_pow_val)
    res = te.lang.cce.vcmpsel(shift_scaled_x, zeros,
                              'eq', zero_pow_val, res)
    return res

# pylint: disable=redefined-outer-name, too-many-arguments, unused-variable
@check_op_params(REQUIRED_INPUT, REQUIRED_OUTPUT, OPTION_ATTR_FLOAT,
                 OPTION_ATTR_FLOAT, OPTION_ATTR_FLOAT, KERNEL_NAME)
def power(input_x, output_y, power=1.0, scale=1.0,
          shift=0.0, kernel_name="power"):
    """
    calculate power of input tensor according to
    y = (x * scale + shift) ** power

    Parameters
    ----------
    input_x: dict of input, include shape and
             dtype, dtype support float16, float32
    output_y: dict of output, include shape and
              dtype, dtype support float16, float32
    power: attr power, default value is 1.0
    scale: attr scale, default value is 1.0
    shift: attr shift, default value is 0.0
    kernel_name: cce kernel name, default value is "power"

    Returns
    ----------
    None
    """
    shape = input_x.get("shape")
    input_dtype = input_x.get("dtype").lower()
    check_shape(shape, param_name="x")
    type_tuple = ("float16", "float32")
    check_dtype(input_dtype, type_tuple, param_name="x")
    fuseshape = [1]
    fuseshape[0] = reduce(lambda x, y: x*y, shape)
    data_input = tvm.placeholder(fuseshape, name="data_input", dtype=input_dtype)
    cur_cce_product = tbe_platform.cce_conf.get_soc_spec("SOC_VERSION")
    if cur_cce_product in ("Ascend310", "Hi3796CV300ES", "Hi3796CV300CS"):
        if input_dtype == "float32":
            error_info = {}
            error_info['errCode'] = 'E80008'
            error_info['param_name'] = 'input_x'
            error_info['op_name'] = 'power'
            error_info['expect_value'] = "float16"
            error_info['real_value'] = input_dtype
            raise RuntimeError(error_info, "In op[%s], the parameter[%s]'s dtype "
                               "should be [%s], but actually is [%s]."
                               % (error_info['op_name'], error_info['param_name'],
                                  error_info['expect_value'], error_info['real_value']))
        res = power_compute(data_input, output_y, power, scale, shift, kernel_name)
    else:
        res = power_compute(data_input, output_y, power, scale, shift, kernel_name)
    with tvm.target.cce():
        sch = generic.auto_schedule(res)
    config = {"name": kernel_name,
              "tensor_list": [data_input, res],
              "print_ir": True}
    te.lang.cce.cce_build_code(sch, config)
| [
"gekowa@gmail.com"
] | gekowa@gmail.com |
e9f741d96322ebdd229415f2476460ee09b605b7 | 17328efebb4116990038340137b65b29d636315f | /perfis/views.py | 3e6b95d184873f52f982e9c8cff45590edf09c3d | [] | no_license | Richters/alura_django | 3aaa6584d8edc01b1c5322b1076cb5659ac89fa8 | 3a7a49c6adc74c9e48e0d5f9677cdd4f4a32bc68 | refs/heads/master | 2022-04-03T12:38:14.458554 | 2020-02-06T14:37:28 | 2020-02-06T14:37:28 | 198,315,500 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,196 | py | from django.shortcuts import render, redirect
from perfis.models import Perfil, Convite
from django.contrib.auth.decorators import login_required, permission_required
@login_required
def index(request):
    return render(request,'index.html', {'perfis' : Perfil.objects.all(), 'perfil_logado' : get_perfil_logado(request)})

@login_required
def exibir(request, perfil_id):
    perfil = Perfil.objects.get(id=perfil_id)
    perfil_logado = get_perfil_logado(request)
    ja_eh_contato = perfil in perfil_logado.contatos.all()
    return render(request,'perfil.html',{'perfil' : perfil, 'perfil_logado' : get_perfil_logado(request), 'ja_eh_contato' : ja_eh_contato})

@permission_required('perfis.add_convite', raise_exception=True)
@login_required
def convidar(request, perfil_id):
    perfil_a_convidar = Perfil.objects.get(id=perfil_id)
    perfil_logado = get_perfil_logado(request)
    perfil_logado.convidar(perfil_a_convidar)
    return redirect('index')

@login_required
def aceitar(request, convite_id):
    convite = Convite.objects.get(id=convite_id)
    convite.aceitar()
    return redirect('index')

@login_required
def get_perfil_logado(request):
    return request.user.perfil | [
"lucas.s.richter@gmail.com"
] | lucas.s.richter@gmail.com |
9d82a9d1425b1deae0c45fc833fe73e80449e0b6 | 2b7c7e9b00ed9b2dbbac943ee4b79865a96d10de | /Figure_script/Figure_1.py | 7caa0f0d7080d155e2572b49ddd294af94fa11d9 | [] | no_license | YaojieLu/Plant_traits_inversion | ad973e60bb32717d9d718f774c2ec77433c38ced | ec83642ae2a2e6ef96502e58f8074bffdadfefe8 | refs/heads/master | 2021-06-21T15:22:00.225498 | 2020-12-13T22:12:21 | 2020-12-13T22:12:21 | 140,017,309 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,680 | py | import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
from scipy import stats
# load traces
ts = pickle.load(open("../Data/45.pickle", "rb"))
params = ['alpha', 'c', 'g1', 'kxmax', 'p50', 'L']
true_values = [0.02, 16, 50, 7, -4.5, 2]
# figure
labels = ['$\\alpha$', '$\\mathit{c}$', '$\\mathit{g_{1}}$',
'$\\mathit{k_{xmax}}$', '$\\psi_{x50}$', '$\\mathit{L}$']
ranges = [[0.001, 0.2], [2, 20], [10, 100], [1, 10], [-10, -0.1], [0.5, 5]]
fig, axs = plt.subplots(nrows=2, ncols=3, figsize=(30, 20))
for i, row in enumerate(axs):
    for j, col in enumerate(row):
        idx = i*3+j
        param = params[idx]
        df = pd.DataFrame({param: ts[param]}).iloc[:, 0]
        col.hist(df, range=[ranges[idx][0], ranges[idx][1]], bins=100)
        # kde = stats.gaussian_kde(df)
        # param_range = np.linspace(ranges[idx][0], ranges[idx][1], 1000)
        # col.plot(param_range, kde(param_range), linewidth=2.5, color='blue')
        mean, std = df.mean(), df.std()
        cv = abs(round(std/mean, 2))
        col.set_title('RSD = {}'.format(cv), fontsize=30)
        col.axvline(x=true_values[idx], c='black',
                    label='True value', linestyle='dashed')
        col.axes.get_yaxis().set_visible(False)
        col.tick_params(labelsize=30)
        col.set_xlabel(labels[idx], fontsize=30)
        if idx == 0:
            col.legend([Line2D([0], [0], linestyle='dashed', color='black')],
                       ['True value'], loc='upper right', fontsize=30, framealpha=0)
plt.subplots_adjust(hspace=0.25, wspace=0.1)
plt.savefig('../Figures/Figure 45.png', bbox_inches = 'tight')
| [
"="
] | = |
8f28ab12e6205691d69253b9b16c31e06f857774 | b5cc6d7b5f7ccea36fce4eab961979404414f8b0 | /kent-report/py/beam_distances.py | 2cc89895ad6d3fed6c27470bb32f1dfd505d8989 | [] | no_license | MiroK/cutFEM-beam | adf0c925dbe64b370dab48e82335617450675f5d | 2fb3686804e836d4031fbf231a36a0f9ac8a3012 | refs/heads/master | 2021-01-21T23:54:32.868307 | 2015-02-14T13:14:59 | 2015-02-14T13:14:59 | 25,625,143 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,537 | py | from __future__ import division
from sympy import sin, cos, pi, sqrt, symbols, lambdify, legendre
from sympy.mpmath import quad
import numpy as np
x, y, s = symbols('x, y, s')
def eigen_basis(n):
    '''
    Return first n eigenfunctions of Laplacian over biunit interval with homog.
    Dirichlet bcs. at endpoints -1, 1. Functions of x.
    '''
    k = 0
    functions = []
    while k < n:
        alpha = pi/2 + k*pi/2
        if k % 2 == 0:
            functions.append(cos(alpha*x))
        else:
            functions.append(sin(alpha*x))
        k += 1
    return functions

def shen_basis(n):
    '''
    Return first n Shen basis functions. Special polynomials made of Legendre
    polynomials that have 0 values at -1, 1. Functions of x.
    '''
    k = 0
    functions = []
    while k < n:
        weight = 1/sqrt(4*k + 6)
        functions.append(weight*(legendre(k+2, x) - legendre(k, x)))
        k += 1
    return functions

def beam_restrict(A, B, u):
    '''
    Restrict function(s) u of x, y to beam = {(x, y)=0.5*A*(1-s) + 0.5*B*(1+s)}.
    '''
    if isinstance(u, list):
        return [beam_restrict(A, B, v) for v in u]
    else:
        assert x in u.atoms() and y in u.atoms()
        ux = u.subs(x, A[0]/2*(1-s) + B[0]/2*(1+s))
        u = ux.subs(y, A[1]/2*(1-s) + B[1]/2*(1+s))
        return u

def L2_distance(f, g):
    'L2 norm over [-1, 1] of f-g.'
    d = f-g
    d = lambdify(s, d)
    return sqrt(quad(lambda s: d(s)**2, [-1, 1]))

def H10_distance(f, g):
    'H10 norm over [-1, 1] of f-g.'
    d = (f-g).diff(s, 1)
    d = lambdify(s, d)
    return sqrt(quad(lambda s: d(s)**2, [-1, 1]))

def distance_matrices(A, B, Vp, Vb, Q, norm):
    '''
    Given beam specified by A, B return two matrices. The first matrix has
    norm(u-q) where u are functions from Vp restricted to beam and q are
    functions from Q. The other matrix is norm(p-q) for p in Vb and Q in
    Q.
    '''
    if norm == 'L2':
        distance = L2_distance
    elif norm == 'H10':
        distance = H10_distance
    else:
        raise ValueError
    m, n, r = len(Vp), len(Vb), len(Q)
    mat0 = np.zeros((m, r))
    # First do the restriction
    Vp = beam_restrict(A, B, Vp)
    for i, u in enumerate(Vp):
        for j, q in enumerate(Q):
            mat0[i, j] = distance(u, q)
    mat1 = np.zeros((n, r))
    for i, p in enumerate(Vb):
        for j, q in enumerate(Q):
            mat1[i, j] = distance(p, q)
    return mat0, mat1
# -----------------------------------------------------------------------------
if __name__ == '__main__':
    import matplotlib.pyplot as plt
    from itertools import product

    # Number of plate function in 1d, number of beam functions and number of
    # functions for Lagrange multiplier space
    m, n, r = 20, 20, 20
    # Vp basis - functions of x, y
    Vp = [fx*fy.subs(x, y) for fx, fy in product(eigen_basis(m), eigen_basis(m))]
    # Vb basis - functions of s
    Vb = [f.subs(x, s) for f in eigen_basis(n)]
    # Q basis - functions of s
    Q = [f.subs(x, s) for f in eigen_basis(r)]
    # Sample beam
    A = np.array([0, 0])
    B = np.array([1, 1])

    for norm in ['L2', 'H10']:
        matBp, matBb = distance_matrices(A, B, Vp, Vb, Q, norm)

        plt.figure()
        plt.title(norm)
        plt.pcolor(matBp)
        plt.xlabel('$Q$')
        plt.ylabel('$V_p$')
        plt.colorbar()

        plt.figure()
        plt.title(norm)
        plt.pcolor(matBb)
        plt.xlabel('$Q$')
        plt.ylabel('$V_b$')
        plt.colorbar()

    plt.show()
| [
"miroslav.kuchta@gmail.com"
] | miroslav.kuchta@gmail.com |
16f825fced458dded8f650d9b7c3fb8719b670ed | 13c3d54f4daabd4e51af23b8962d4127f334ab84 | /afterlive/contact/urls.py | 271b84ec3b48311c207b4a8f3019f1fa4f98dc69 | [] | no_license | JoshDHoeg/afterlive1.0 | eef2305b992ee5e153f18447e251356005d476f9 | 279b7fd592e87c0e157750fa64fba8d48adaaf0a | refs/heads/master | 2020-02-26T15:04:15.303168 | 2018-01-15T23:28:31 | 2018-01-15T23:28:31 | 83,254,731 | 0 | 0 | null | 2017-05-15T03:57:31 | 2017-02-27T01:17:07 | Python | UTF-8 | Python | false | false | 861 | py | """afterlive URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
    https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
    1. Add an import: from my_app import views
    2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
    1. Add an import: from other_app.views import Home
    2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
    1. Import the include() function: from django.conf.urls import url, include
    2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$',
views.email,
name='email'
),
url(r'^thanks/$',
views.thanks,
name='thanks'
),
]
| [
"jdhoeg97@gmail.com"
] | jdhoeg97@gmail.com |
f4152d966e2a73e2e841c317bc9cff9abf1373a1 | e24888f29b25defbfdddf22167152d78ececd400 | /mpi/migrations/0006_auto_20190802_1801.py | 363a4e29f6c994c7dd2431e020f73e833fa57c73 | [
"MIT"
] | permissive | MaldoCarre/SIIEC-WEB | e9313616d83c5932020279101d1ac5e3f112d58e | 871479792d19e566d05335376c62d3df9fbcb09f | refs/heads/master | 2020-05-22T11:05:07.324355 | 2019-09-19T20:48:42 | 2019-09-19T20:48:42 | 186,314,399 | 0 | 0 | MIT | 2019-06-21T13:57:53 | 2019-05-12T23:21:35 | JavaScript | UTF-8 | Python | false | false | 448 | py | # Generated by Django 2.1.2 on 2019-08-02 21:01
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('mpi', '0005_auto_20190802_1752'),
]
operations = [
migrations.AlterField(
model_name='mpi',
name='cargarchivo',
field=models.FileField(blank=True, max_length=10, null=True, upload_to='misCargas'),
),
]
| [
"noreply@github.com"
] | MaldoCarre.noreply@github.com |
909881d063f682743192e6996ad6866cba712501 | c3a85cdfd83693b8ed4a3e61f21baedc9c10a0f9 | /linepy/config.py | 26cce78d09019bcd3bb64c6558bf52f7d0d1e960 | [
"Apache-2.0"
] | permissive | Gbotline/python3-2020 | cfd5fd9a5d0ab31535ab8fd07e4a54a6612abd0b | 2d28e077a85b8b4efea82f71af7442e64cd752ab | refs/heads/master | 2023-03-17T18:18:58.948062 | 2020-01-31T10:13:32 | 2020-01-31T10:13:32 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,342 | py | # -*- coding: utf-8 -*-
from akad.ttypes import ApplicationType
import re
class Config(object):
LINE_HOST_DOMAIN = 'https://gd2.line.naver.jp'
LINE_OBS_DOMAIN = 'https://obs-sg.line-apps.com'
LINE_TIMELINE_API = 'https://gd2.line.naver.jp/mh/api'
LINE_TIMELINE_MH = 'https://gd2.line.naver.jp/mh'
LINE_LOGIN_QUERY_PATH = '/api/v4p/rs'
LINE_AUTH_QUERY_PATH = '/api/v4/TalkService.do'
LINE_API_QUERY_PATH_FIR = '/S4'
LINE_POLL_QUERY_PATH_FIR = '/P4'
LINE_CALL_QUERY_PATH = '/V4'
LINE_CERTIFICATE_PATH = '/Q'
LINE_CHAN_QUERY_PATH = '/CH4'
LINE_SQUARE_QUERY_PATH = '/SQS1'
CHANNEL_ID = {
'LINE_TIMELINE': '1341209850',
'LINE_WEBTOON': '1401600689',
'LINE_TODAY': '1518712866',
'LINE_STORE': '1376922440',
'LINE_MUSIC': '1381425814',
'LINE_SERVICES': '1459630796'
}
APP_TYPE = "CHROMEOS\t2.1.5\tCHROMEOS\t11.12.5"
APP_VER = '8.9.1'
CARRIER = '51089, 1-0'
SYSTEM_NAME = 'CHROM'
SYSTEM_VER = '11.12.5'
IP_ADDR = '8.8.8.8'
EMAIL_REGEX = re.compile(r"[^@]+@[^@]+\.[^@]+")
def __init__(self):
self.APP_NAME = 'CHROMEOS\t2.1.5\tCHROMEOS\t11.12.5'
self.USER_AGENT = 'Line/%s' % self.APP_VER
| [
"36443319+Cupzaa@users.noreply.github.com"
] | 36443319+Cupzaa@users.noreply.github.com |
1bd68140d32eb41f4a7e8552136f8d5ef1080f18 | 1ab7b3f2aa63de8488ce7c466a67d367771aa1f2 | /Ricardo_OS/Python_backend/venv/lib/python3.8/site-packages/pandas/tests/indexing/test_partial.py | 337ec683ee745d97ace410b4d302af252d40ba04 | [
"MIT"
] | permissive | icl-rocketry/Avionics | 9d39aeb11aba11115826fd73357b415026a7adad | 95b7a061eabd6f2b607fba79e007186030f02720 | refs/heads/master | 2022-07-30T07:54:10.642930 | 2022-07-10T12:19:10 | 2022-07-10T12:19:10 | 216,184,670 | 9 | 1 | MIT | 2022-06-27T10:17:06 | 2019-10-19T09:57:07 | C++ | UTF-8 | Python | false | false | 23,869 | py | """
test setting *parts* of objects both positionally and label based
TODO: these should be split among the indexer tests
"""
import numpy as np
import pytest
import pandas as pd
from pandas import DataFrame, Index, Period, Series, Timestamp, date_range, period_range
import pandas._testing as tm
class TestPartialSetting:
def test_partial_setting(self):
# GH2578, allow ix and friends to partially set
# series
s_orig = Series([1, 2, 3])
s = s_orig.copy()
s[5] = 5
expected = Series([1, 2, 3, 5], index=[0, 1, 2, 5])
tm.assert_series_equal(s, expected)
s = s_orig.copy()
s.loc[5] = 5
expected = Series([1, 2, 3, 5], index=[0, 1, 2, 5])
tm.assert_series_equal(s, expected)
s = s_orig.copy()
s[5] = 5.0
expected = Series([1, 2, 3, 5.0], index=[0, 1, 2, 5])
tm.assert_series_equal(s, expected)
s = s_orig.copy()
s.loc[5] = 5.0
expected = Series([1, 2, 3, 5.0], index=[0, 1, 2, 5])
tm.assert_series_equal(s, expected)
# iloc/iat raise
s = s_orig.copy()
msg = "iloc cannot enlarge its target object"
with pytest.raises(IndexError, match=msg):
s.iloc[3] = 5.0
msg = "index 3 is out of bounds for axis 0 with size 3"
with pytest.raises(IndexError, match=msg):
s.iat[3] = 5.0
# ## frame ##
df_orig = DataFrame(
np.arange(6).reshape(3, 2), columns=["A", "B"], dtype="int64"
)
# iloc/iat raise
df = df_orig.copy()
msg = "iloc cannot enlarge its target object"
with pytest.raises(IndexError, match=msg):
df.iloc[4, 2] = 5.0
msg = "index 2 is out of bounds for axis 0 with size 2"
with pytest.raises(IndexError, match=msg):
df.iat[4, 2] = 5.0
# row setting where it exists
expected = DataFrame(dict({"A": [0, 4, 4], "B": [1, 5, 5]}))
df = df_orig.copy()
df.iloc[1] = df.iloc[2]
tm.assert_frame_equal(df, expected)
expected = DataFrame(dict({"A": [0, 4, 4], "B": [1, 5, 5]}))
df = df_orig.copy()
df.loc[1] = df.loc[2]
tm.assert_frame_equal(df, expected)
# like 2578, partial setting with dtype preservation
expected = DataFrame(dict({"A": [0, 2, 4, 4], "B": [1, 3, 5, 5]}))
df = df_orig.copy()
df.loc[3] = df.loc[2]
tm.assert_frame_equal(df, expected)
# single dtype frame, overwrite
expected = DataFrame(dict({"A": [0, 2, 4], "B": [0, 2, 4]}))
df = df_orig.copy()
df.loc[:, "B"] = df.loc[:, "A"]
tm.assert_frame_equal(df, expected)
# mixed dtype frame, overwrite
expected = DataFrame(dict({"A": [0, 2, 4], "B": Series([0, 2, 4])}))
df = df_orig.copy()
df["B"] = df["B"].astype(np.float64)
df.loc[:, "B"] = df.loc[:, "A"]
tm.assert_frame_equal(df, expected)
# single dtype frame, partial setting
expected = df_orig.copy()
expected["C"] = df["A"]
df = df_orig.copy()
df.loc[:, "C"] = df.loc[:, "A"]
tm.assert_frame_equal(df, expected)
# mixed frame, partial setting
expected = df_orig.copy()
expected["C"] = df["A"]
df = df_orig.copy()
df.loc[:, "C"] = df.loc[:, "A"]
tm.assert_frame_equal(df, expected)
# GH 8473
dates = date_range("1/1/2000", periods=8)
df_orig = DataFrame(
np.random.randn(8, 4), index=dates, columns=["A", "B", "C", "D"]
)
expected = pd.concat(
[df_orig, DataFrame({"A": 7}, index=dates[-1:] + dates.freq)], sort=True
)
df = df_orig.copy()
df.loc[dates[-1] + dates.freq, "A"] = 7
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
df.at[dates[-1] + dates.freq, "A"] = 7
tm.assert_frame_equal(df, expected)
exp_other = DataFrame({0: 7}, index=dates[-1:] + dates.freq)
expected = pd.concat([df_orig, exp_other], axis=1)
df = df_orig.copy()
df.loc[dates[-1] + dates.freq, 0] = 7
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
df.at[dates[-1] + dates.freq, 0] = 7
tm.assert_frame_equal(df, expected)
def test_partial_setting_mixed_dtype(self):
# in a mixed dtype environment, try to preserve dtypes
# by appending
df = DataFrame([[True, 1], [False, 2]], columns=["female", "fitness"])
s = df.loc[1].copy()
s.name = 2
expected = df.append(s)
df.loc[2] = df.loc[1]
tm.assert_frame_equal(df, expected)
# columns will align
df = DataFrame(columns=["A", "B"])
df.loc[0] = Series(1, index=range(4))
tm.assert_frame_equal(df, DataFrame(columns=["A", "B"], index=[0]))
# columns will align
df = DataFrame(columns=["A", "B"])
df.loc[0] = Series(1, index=["B"])
exp = DataFrame([[np.nan, 1]], columns=["A", "B"], index=[0], dtype="float64")
tm.assert_frame_equal(df, exp)
# list-like must conform
df = DataFrame(columns=["A", "B"])
msg = "cannot set a row with mismatched columns"
with pytest.raises(ValueError, match=msg):
df.loc[0] = [1, 2, 3]
# TODO: #15657, these are left as object and not coerced
df = DataFrame(columns=["A", "B"])
df.loc[3] = [6, 7]
exp = DataFrame([[6, 7]], index=[3], columns=["A", "B"], dtype="object")
tm.assert_frame_equal(df, exp)
def test_series_partial_set(self):
# partial set with new index
# Regression from GH4825
ser = Series([0.1, 0.2], index=[1, 2])
# loc equiv to .reindex
expected = Series([np.nan, 0.2, np.nan], index=[3, 2, 3])
with pytest.raises(KeyError, match="with any missing labels"):
result = ser.loc[[3, 2, 3]]
result = ser.reindex([3, 2, 3])
tm.assert_series_equal(result, expected, check_index_type=True)
expected = Series([np.nan, 0.2, np.nan, np.nan], index=[3, 2, 3, "x"])
with pytest.raises(KeyError, match="with any missing labels"):
result = ser.loc[[3, 2, 3, "x"]]
result = ser.reindex([3, 2, 3, "x"])
tm.assert_series_equal(result, expected, check_index_type=True)
expected = Series([0.2, 0.2, 0.1], index=[2, 2, 1])
result = ser.loc[[2, 2, 1]]
tm.assert_series_equal(result, expected, check_index_type=True)
expected = Series([0.2, 0.2, np.nan, 0.1], index=[2, 2, "x", 1])
with pytest.raises(KeyError, match="with any missing labels"):
result = ser.loc[[2, 2, "x", 1]]
result = ser.reindex([2, 2, "x", 1])
tm.assert_series_equal(result, expected, check_index_type=True)
        # raises as nothing is in the index
msg = (
r"\"None of \[Int64Index\(\[3, 3, 3\], dtype='int64'\)\] are "
r"in the \[index\]\""
)
with pytest.raises(KeyError, match=msg):
ser.loc[[3, 3, 3]]
expected = Series([0.2, 0.2, np.nan], index=[2, 2, 3])
with pytest.raises(KeyError, match="with any missing labels"):
ser.loc[[2, 2, 3]]
result = ser.reindex([2, 2, 3])
tm.assert_series_equal(result, expected, check_index_type=True)
s = Series([0.1, 0.2, 0.3], index=[1, 2, 3])
expected = Series([0.3, np.nan, np.nan], index=[3, 4, 4])
with pytest.raises(KeyError, match="with any missing labels"):
s.loc[[3, 4, 4]]
result = s.reindex([3, 4, 4])
tm.assert_series_equal(result, expected, check_index_type=True)
s = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4])
expected = Series([np.nan, 0.3, 0.3], index=[5, 3, 3])
with pytest.raises(KeyError, match="with any missing labels"):
s.loc[[5, 3, 3]]
result = s.reindex([5, 3, 3])
tm.assert_series_equal(result, expected, check_index_type=True)
s = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4])
expected = Series([np.nan, 0.4, 0.4], index=[5, 4, 4])
with pytest.raises(KeyError, match="with any missing labels"):
s.loc[[5, 4, 4]]
result = s.reindex([5, 4, 4])
tm.assert_series_equal(result, expected, check_index_type=True)
s = Series([0.1, 0.2, 0.3, 0.4], index=[4, 5, 6, 7])
expected = Series([0.4, np.nan, np.nan], index=[7, 2, 2])
with pytest.raises(KeyError, match="with any missing labels"):
s.loc[[7, 2, 2]]
result = s.reindex([7, 2, 2])
tm.assert_series_equal(result, expected, check_index_type=True)
s = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4])
expected = Series([0.4, np.nan, np.nan], index=[4, 5, 5])
with pytest.raises(KeyError, match="with any missing labels"):
s.loc[[4, 5, 5]]
result = s.reindex([4, 5, 5])
tm.assert_series_equal(result, expected, check_index_type=True)
# iloc
expected = Series([0.2, 0.2, 0.1, 0.1], index=[2, 2, 1, 1])
result = ser.iloc[[1, 1, 0, 0]]
tm.assert_series_equal(result, expected, check_index_type=True)
def test_series_partial_set_with_name(self):
# GH 11497
idx = Index([1, 2], dtype="int64", name="idx")
ser = Series([0.1, 0.2], index=idx, name="s")
# loc
with pytest.raises(KeyError, match="with any missing labels"):
ser.loc[[3, 2, 3]]
with pytest.raises(KeyError, match="with any missing labels"):
ser.loc[[3, 2, 3, "x"]]
exp_idx = Index([2, 2, 1], dtype="int64", name="idx")
expected = Series([0.2, 0.2, 0.1], index=exp_idx, name="s")
result = ser.loc[[2, 2, 1]]
tm.assert_series_equal(result, expected, check_index_type=True)
with pytest.raises(KeyError, match="with any missing labels"):
ser.loc[[2, 2, "x", 1]]
        # raises as nothing is in the index
msg = (
r"\"None of \[Int64Index\(\[3, 3, 3\], dtype='int64', "
r"name='idx'\)\] are in the \[index\]\""
)
with pytest.raises(KeyError, match=msg):
ser.loc[[3, 3, 3]]
with pytest.raises(KeyError, match="with any missing labels"):
ser.loc[[2, 2, 3]]
idx = Index([1, 2, 3], dtype="int64", name="idx")
with pytest.raises(KeyError, match="with any missing labels"):
Series([0.1, 0.2, 0.3], index=idx, name="s").loc[[3, 4, 4]]
idx = Index([1, 2, 3, 4], dtype="int64", name="idx")
with pytest.raises(KeyError, match="with any missing labels"):
Series([0.1, 0.2, 0.3, 0.4], index=idx, name="s").loc[[5, 3, 3]]
idx = Index([1, 2, 3, 4], dtype="int64", name="idx")
with pytest.raises(KeyError, match="with any missing labels"):
Series([0.1, 0.2, 0.3, 0.4], index=idx, name="s").loc[[5, 4, 4]]
idx = Index([4, 5, 6, 7], dtype="int64", name="idx")
with pytest.raises(KeyError, match="with any missing labels"):
Series([0.1, 0.2, 0.3, 0.4], index=idx, name="s").loc[[7, 2, 2]]
idx = Index([1, 2, 3, 4], dtype="int64", name="idx")
with pytest.raises(KeyError, match="with any missing labels"):
Series([0.1, 0.2, 0.3, 0.4], index=idx, name="s").loc[[4, 5, 5]]
# iloc
exp_idx = Index([2, 2, 1, 1], dtype="int64", name="idx")
expected = Series([0.2, 0.2, 0.1, 0.1], index=exp_idx, name="s")
result = ser.iloc[[1, 1, 0, 0]]
tm.assert_series_equal(result, expected, check_index_type=True)
def test_partial_set_invalid(self):
# GH 4940
# allow only setting of 'valid' values
orig = tm.makeTimeDataFrame()
df = orig.copy()
# don't allow not string inserts
msg = "cannot insert DatetimeArray with incompatible label"
with pytest.raises(TypeError, match=msg):
df.loc[100.0, :] = df.iloc[0]
with pytest.raises(TypeError, match=msg):
df.loc[100, :] = df.iloc[0]
# allow object conversion here
df = orig.copy()
df.loc["a", :] = df.iloc[0]
exp = orig.append(Series(df.iloc[0], name="a"))
tm.assert_frame_equal(df, exp)
tm.assert_index_equal(df.index, Index(orig.index.tolist() + ["a"]))
assert df.index.dtype == "object"
def test_partial_set_empty_series(self):
# GH5226
# partially set with an empty object series
s = Series(dtype=object)
s.loc[1] = 1
tm.assert_series_equal(s, Series([1], index=[1]))
s.loc[3] = 3
tm.assert_series_equal(s, Series([1, 3], index=[1, 3]))
s = Series(dtype=object)
s.loc[1] = 1.0
tm.assert_series_equal(s, Series([1.0], index=[1]))
s.loc[3] = 3.0
tm.assert_series_equal(s, Series([1.0, 3.0], index=[1, 3]))
s = Series(dtype=object)
s.loc["foo"] = 1
tm.assert_series_equal(s, Series([1], index=["foo"]))
s.loc["bar"] = 3
tm.assert_series_equal(s, Series([1, 3], index=["foo", "bar"]))
s.loc[3] = 4
tm.assert_series_equal(s, Series([1, 3, 4], index=["foo", "bar", 3]))
def test_partial_set_empty_frame(self):
# partially set with an empty object
# frame
df = DataFrame()
msg = "cannot set a frame with no defined columns"
with pytest.raises(ValueError, match=msg):
df.loc[1] = 1
with pytest.raises(ValueError, match=msg):
df.loc[1] = Series([1], index=["foo"])
msg = "cannot set a frame with no defined index and a scalar"
with pytest.raises(ValueError, match=msg):
df.loc[:, 1] = 1
# these work as they don't really change
# anything but the index
# GH5632
expected = DataFrame(columns=["foo"], index=Index([], dtype="object"))
def f():
df = DataFrame(index=Index([], dtype="object"))
df["foo"] = Series([], dtype="object")
return df
tm.assert_frame_equal(f(), expected)
def f():
df = DataFrame()
df["foo"] = Series(df.index)
return df
tm.assert_frame_equal(f(), expected)
def f():
df = DataFrame()
df["foo"] = df.index
return df
tm.assert_frame_equal(f(), expected)
expected = DataFrame(columns=["foo"], index=Index([], dtype="int64"))
expected["foo"] = expected["foo"].astype("float64")
def f():
df = DataFrame(index=Index([], dtype="int64"))
df["foo"] = []
return df
tm.assert_frame_equal(f(), expected)
def f():
df = DataFrame(index=Index([], dtype="int64"))
df["foo"] = Series(np.arange(len(df)), dtype="float64")
return df
tm.assert_frame_equal(f(), expected)
def f():
df = DataFrame(index=Index([], dtype="int64"))
df["foo"] = range(len(df))
return df
expected = DataFrame(columns=["foo"], index=Index([], dtype="int64"))
expected["foo"] = expected["foo"].astype("float64")
tm.assert_frame_equal(f(), expected)
df = DataFrame()
tm.assert_index_equal(df.columns, Index([], dtype=object))
df2 = DataFrame()
df2[1] = Series([1], index=["foo"])
df.loc[:, 1] = Series([1], index=["foo"])
tm.assert_frame_equal(df, DataFrame([[1]], index=["foo"], columns=[1]))
tm.assert_frame_equal(df, df2)
# no index to start
expected = DataFrame({0: Series(1, index=range(4))}, columns=["A", "B", 0])
df = DataFrame(columns=["A", "B"])
df[0] = Series(1, index=range(4))
df.dtypes
str(df)
tm.assert_frame_equal(df, expected)
df = DataFrame(columns=["A", "B"])
df.loc[:, 0] = Series(1, index=range(4))
df.dtypes
str(df)
tm.assert_frame_equal(df, expected)
def test_partial_set_empty_frame_row(self):
# GH5720, GH5744
# don't create rows when empty
expected = DataFrame(columns=["A", "B", "New"], index=Index([], dtype="int64"))
expected["A"] = expected["A"].astype("int64")
expected["B"] = expected["B"].astype("float64")
expected["New"] = expected["New"].astype("float64")
df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
y = df[df.A > 5]
y["New"] = np.nan
tm.assert_frame_equal(y, expected)
# tm.assert_frame_equal(y,expected)
expected = DataFrame(columns=["a", "b", "c c", "d"])
expected["d"] = expected["d"].astype("int64")
df = DataFrame(columns=["a", "b", "c c"])
df["d"] = 3
tm.assert_frame_equal(df, expected)
tm.assert_series_equal(df["c c"], Series(name="c c", dtype=object))
# reindex columns is ok
df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
y = df[df.A > 5]
result = y.reindex(columns=["A", "B", "C"])
expected = DataFrame(columns=["A", "B", "C"], index=Index([], dtype="int64"))
expected["A"] = expected["A"].astype("int64")
expected["B"] = expected["B"].astype("float64")
expected["C"] = expected["C"].astype("float64")
tm.assert_frame_equal(result, expected)
def test_partial_set_empty_frame_set_series(self):
# GH 5756
# setting with empty Series
df = DataFrame(Series(dtype=object))
tm.assert_frame_equal(df, DataFrame({0: Series(dtype=object)}))
df = DataFrame(Series(name="foo", dtype=object))
tm.assert_frame_equal(df, DataFrame({"foo": Series(dtype=object)}))
def test_partial_set_empty_frame_empty_copy_assignment(self):
# GH 5932
# copy on empty with assignment fails
df = DataFrame(index=[0])
df = df.copy()
df["a"] = 0
expected = DataFrame(0, index=[0], columns=["a"])
tm.assert_frame_equal(df, expected)
def test_partial_set_empty_frame_empty_consistencies(self):
# GH 6171
# consistency on empty frames
df = DataFrame(columns=["x", "y"])
df["x"] = [1, 2]
expected = DataFrame(dict(x=[1, 2], y=[np.nan, np.nan]))
tm.assert_frame_equal(df, expected, check_dtype=False)
df = DataFrame(columns=["x", "y"])
df["x"] = ["1", "2"]
expected = DataFrame(dict(x=["1", "2"], y=[np.nan, np.nan]), dtype=object)
tm.assert_frame_equal(df, expected)
df = DataFrame(columns=["x", "y"])
df.loc[0, "x"] = 1
expected = DataFrame(dict(x=[1], y=[np.nan]))
tm.assert_frame_equal(df, expected, check_dtype=False)
@pytest.mark.parametrize(
"idx,labels,expected_idx",
[
(
period_range(start="2000", periods=20, freq="D"),
["2000-01-04", "2000-01-08", "2000-01-12"],
[
Period("2000-01-04", freq="D"),
Period("2000-01-08", freq="D"),
Period("2000-01-12", freq="D"),
],
),
(
date_range(start="2000", periods=20, freq="D"),
["2000-01-04", "2000-01-08", "2000-01-12"],
[
Timestamp("2000-01-04", freq="D"),
Timestamp("2000-01-08", freq="D"),
Timestamp("2000-01-12", freq="D"),
],
),
(
pd.timedelta_range(start="1 day", periods=20),
["4D", "8D", "12D"],
[pd.Timedelta("4 day"), pd.Timedelta("8 day"), pd.Timedelta("12 day")],
),
],
)
def test_loc_with_list_of_strings_representing_datetimes(
self, idx, labels, expected_idx
):
# GH 11278
s = Series(range(20), index=idx)
df = DataFrame(range(20), index=idx)
expected_value = [3, 7, 11]
expected_s = Series(expected_value, expected_idx)
expected_df = DataFrame(expected_value, expected_idx)
tm.assert_series_equal(expected_s, s.loc[labels])
tm.assert_series_equal(expected_s, s[labels])
tm.assert_frame_equal(expected_df, df.loc[labels])
@pytest.mark.parametrize(
"idx,labels",
[
(
period_range(start="2000", periods=20, freq="D"),
["2000-01-04", "2000-01-30"],
),
(
date_range(start="2000", periods=20, freq="D"),
["2000-01-04", "2000-01-30"],
),
(pd.timedelta_range(start="1 day", periods=20), ["3 day", "30 day"]),
],
)
def test_loc_with_list_of_strings_representing_datetimes_missing_value(
self, idx, labels
):
# GH 11278
s = Series(range(20), index=idx)
df = DataFrame(range(20), index=idx)
msg = r"with any missing labels"
with pytest.raises(KeyError, match=msg):
s.loc[labels]
with pytest.raises(KeyError, match=msg):
s[labels]
with pytest.raises(KeyError, match=msg):
df.loc[labels]
@pytest.mark.parametrize(
"idx,labels,msg",
[
(
period_range(start="2000", periods=20, freq="D"),
["4D", "8D"],
(
r"None of \[Index\(\['4D', '8D'\], dtype='object'\)\] "
r"are in the \[index\]"
),
),
(
date_range(start="2000", periods=20, freq="D"),
["4D", "8D"],
(
r"None of \[Index\(\['4D', '8D'\], dtype='object'\)\] "
r"are in the \[index\]"
),
),
(
pd.timedelta_range(start="1 day", periods=20),
["2000-01-04", "2000-01-08"],
(
r"None of \[Index\(\['2000-01-04', '2000-01-08'\], "
r"dtype='object'\)\] are in the \[index\]"
),
),
],
)
def test_loc_with_list_of_strings_representing_datetimes_not_matched_type(
self, idx, labels, msg
):
# GH 11278
s = Series(range(20), index=idx)
df = DataFrame(range(20), index=idx)
with pytest.raises(KeyError, match=msg):
s.loc[labels]
with pytest.raises(KeyError, match=msg):
s[labels]
with pytest.raises(KeyError, match=msg):
df.loc[labels]
def test_indexing_timeseries_regression(self):
# Issue 34860
arr = date_range("1/1/2008", "1/1/2009")
result = arr.to_series()["2008"]
rng = date_range(start="2008-01-01", end="2008-12-31")
expected = Series(rng, index=rng)
tm.assert_series_equal(result, expected)
def test_index_name_empty(self):
# GH 31368
df = pd.DataFrame({}, index=pd.RangeIndex(0, name="df_index"))
series = pd.Series(1.23, index=pd.RangeIndex(4, name="series_index"))
df["series"] = series
expected = pd.DataFrame(
{"series": [1.23] * 4}, index=pd.RangeIndex(4, name="df_index")
)
tm.assert_frame_equal(df, expected)
# GH 36527
df = pd.DataFrame()
series = pd.Series(1.23, index=pd.RangeIndex(4, name="series_index"))
df["series"] = series
expected = pd.DataFrame(
{"series": [1.23] * 4}, index=pd.RangeIndex(4, name="series_index")
)
tm.assert_frame_equal(df, expected)
| [
"kd619@ic.ac.uk"
] | kd619@ic.ac.uk |
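The partial-setting behavior exercised by `test_partial.py` above can be sketched in a few lines (illustrative only, not part of the suite): label-based setting with `.loc` may enlarge a Series, while positional setting with `.iloc` may not.

```python
import pandas as pd

s = pd.Series([1, 2, 3])
s.loc[5] = 5                      # label-based setting enlarges the Series
assert list(s.index) == [0, 1, 2, 5]
assert s.loc[5] == 5

enlarged = True
try:
    s.iloc[10] = 0                # positional setting cannot enlarge the object
except IndexError:
    enlarged = False
assert enlarged is False
```

The same asymmetry holds for DataFrames: `df.loc[new_label] = row` appends, while out-of-bounds `iloc`/`iat` assignment raises `IndexError`.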
87caa1c0fc7647e9ee7fa4ae4a4507d441dd5373 | d800e7fd6c81aa8a606ff68acc36d7ea11f63f49 | /Stuff/OOPSLA/closure.py | 36b927a5188418a0ae71adf759eca47fc208b2ba | [] | no_license | ScorcherGray/ProjectF | 1cf8476b79834a12a4c580ba1e190f28961f241f | 80f767114102e09c218536c264d89f86a089cb3e | refs/heads/master | 2022-04-21T04:16:23.987343 | 2020-04-22T03:30:04 | 2020-04-22T03:30:04 | 257,431,481 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 224 | py | def f(n):
def g(x):
print vars()
return x+n
print g(1) # 6 {'x': 1, 'n': 5}
n = 10
print g(1) # 11 {'x': 1, 'n': 10}
return g
h = f(5)
print h(1) # 11 {'x': 1, 'n': 10}
| [
"danielthegray11@gmail.com"
] | danielthegray11@gmail.com |
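The late-binding behavior demonstrated by `closure.py` above can be sketched in Python 3, together with the default-argument idiom that freezes the captured value (`f_frozen` is an illustrative addition, not part of the original file):

```python
def f(n):
    def g(x):
        return x + n      # n is resolved when g is *called*, not when it is defined
    n = 10                # rebinding n is visible to every later call of g
    return g

assert f(5)(1) == 11      # late binding: g sees n == 10

def f_frozen(n):
    def g(x, n=n):        # the default argument captures the value of n right now
        return x + n
    n = 10                # this rebinding no longer affects g
    return g

assert f_frozen(5)(1) == 6
```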
e09b3024881b4ef2b206f18d3610ac4fd3e3c545 | 2609aa6090c178c50b01040ee11ed0d53e007066 | /check-error.py | 5622f7a91c0842f8b7beb295c6ddbd792cc6f9b7 | [
"Apache-2.0"
] | permissive | todorokit/tensorflow_cnn_image_sample | 8d3fddfdd3997d8927c65aef9e2809bdb429b70c | 5f8dee00eebcbada9e03de7742026b2a37963860 | refs/heads/master | 2021-01-20T00:28:07.478468 | 2018-09-26T12:02:04 | 2018-09-26T12:02:04 | 89,137,003 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,874 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import time
import tensorflow as tf
import tensorflow.python.platform
import modelcnn
from util.Container import getContainer
from util.utils import *
from util import image as imgModule
from config.classes import classList
from config import baseConfig
config = Container.get("config")
NUM_CLASSES = config.NUM_CLASSES
IMAGE_SIZE = config.IMAGE_SIZE
NUM_RGB_CHANNEL = config.NUM_RGB_CHANNEL
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('outfile', 'miss.html', 'output html name')
flags.DEFINE_integer('acc_batch_size', 80, 'Accuracy batch size. Take care of memory limit.')
flags.DEFINE_float('memory', 0.90, 'Using gpu memory.')
flags.DEFINE_string('config', "config.celeba", 'config module(file) name (no extension).')
def main(_):
Container = getContainer(FLAGS)
testDataset = Container.get("testdataset")
images_placeholder = tf.placeholder(baseConfig.floatSize, shape=(None, IMAGE_SIZE[0]*IMAGE_SIZE[1]*NUM_RGB_CHANNEL))
labels_placeholder = tf.placeholder(baseConfig.floatSize, shape=(None, NUM_CLASSES))
keep_prob = tf.placeholder(baseConfig.floatSize)
phaseTrain = tf.placeholder(tf.bool, name='phase_train')
with tf.name_scope("tower_0"):
logits, _ = modelcnn.inference(images_placeholder, keep_prob, config, False, phaseTrain)
sess = Container.get("sess")
saver = Container.get("saver")
cwd = os.getcwd()
oks = []
lowscores = []
ngs = []
stat = {}
arg = 0
for images, labels, paths in testDataset.flow():
ix = 0
arrs = sess.run(logits, feed_dict={images_placeholder: images,keep_prob: 1.0, phaseTrain: False})
for arr in arrs:
if config.dataType == "multi-label":
                raise Exception("multi-label not supported")
else:
labelVal = top1(labels[ix])
topVal = top1(arr)
score = arr[topVal]
if (topVal == labelVal):
if ( score < 0.5) :
lowscores.append((paths[ix], classList[topVal], score))
else:
oks.append((paths[ix], classList[topVal], score))
else:
ngs.append((paths[ix], classList[labelVal], classList[topVal], score))
try:
stat[labelVal] = stat[labelVal] + 1
except:
stat[labelVal] = 1
ix += 1
i = 0
tds = []
trs = []
    def img(src):
        return "<img width='25%%' src='file:///%s'/>" % (os.path.join(cwd, src).replace("\\", "/"))
for ng in ngs :
path , labelName, className, score = ng
i+=1
tds.append("<td>%s<br/>%s:%s<br/>%g</td>\n" % (img(path), labelName, className, score))
if (i >= 4):
trs.append("<tr>"+"".join(tds)+"</tr>")
tds = []
i = 0
ngstr = "".join(trs)
i = 0
tds = []
trs = []
for low in lowscores :
path , labelName, score = low
i+=1
tds.append("<td>%s<br/>%s<br/>%g</td>\n" % (img(path), labelName, score))
if (i >= 4):
trs.append("<tr>"+"".join(tds)+"</tr>")
tds = []
i = 0
lowstr = "".join(trs)
trs = []
for label in stat:
trs.append("<tr><td>%s</td><td>%d</td></tr>" % (classList[label], stat[label]))
statstr = "".join(trs)
fp = open(FLAGS.outfile, "w")
fp.write("""
<html><body>
STAT<br>
<table border='1'>%s</table>
MISTAKEN<br>
<table border='1'>%s</table>
LOW SCORES<br>
<table border='1'>%s</table>
</body></html>""" % (statstr,ngstr, lowstr))
if __name__ == '__main__':
    tf.app.run()
| [
"fstest1234ifua@gmail.com"
] | fstest1234ifua@gmail.com |
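The per-example bucketing that `check-error.py` performs (top-1 match goes to ok or low-score, mismatch goes to mistaken) can be sketched without TensorFlow; the helper below is hypothetical, not the script's API:

```python
def bucket(predictions, labels, threshold=0.5):
    """Split top-1 predictions into correct, low-confidence, and wrong."""
    oks, lows, ngs = [], [], []
    for scores, label in zip(predictions, labels):
        top = max(range(len(scores)), key=scores.__getitem__)  # argmax
        if top == label:
            (oks if scores[top] >= threshold else lows).append((top, scores[top]))
        else:
            ngs.append((label, top, scores[top]))
    return oks, lows, ngs

oks, lows, ngs = bucket(
    [[0.9, 0.05, 0.05], [0.4, 0.35, 0.25], [0.2, 0.5, 0.3]],
    [0, 0, 2])
assert oks == [(0, 0.9)]      # confident and correct
assert lows == [(0, 0.4)]     # correct but below the score threshold
assert ngs == [(2, 1, 0.5)]   # (true label, predicted label, score)
```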
946c892022c314f967ecf76ee0af9c2b840fd465 | 2d26967878e9c0ef08ff0f9d26f334e395219647 | /0x0A-python-inheritance/6-base_geometry.py | cf26a6834cf18f2d0ca77cc79be912fc64e70350 | [] | no_license | StaciAF/holbertonschool-higher_level_programming | 01e9e65c5adcefdba59010371a33179ca47f53d4 | 5aec6364ae4cdd35184ad3a2c1dfead7031468c7 | refs/heads/master | 2022-12-18T18:30:24.398046 | 2020-09-25T00:00:35 | 2020-09-25T00:00:35 | 259,436,155 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 236 | py | #!/usr/bin/python3
"""
this module accesses class BaseGeometry
"""
class BaseGeometry:
""" new class instantiated """
def area(self):
""" method to compute area """
raise Exception('area() is not implemented')
| [
"aaen.it19@gmail.com"
] | aaen.it19@gmail.com |
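A subclass is expected to override the `area()` stub above; a minimal sketch (the `Rectangle` class here is hypothetical, not part of the exercise):

```python
class BaseGeometry:
    """base class with an unimplemented area() stub"""

    def area(self):
        """raises until a subclass overrides it"""
        raise Exception('area() is not implemented')


class Rectangle(BaseGeometry):
    """hypothetical subclass that supplies a real area()"""

    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


assert Rectangle(3, 4).area() == 12
try:
    BaseGeometry().area()
except Exception as e:
    assert str(e) == 'area() is not implemented'
```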
a728bf9ae2c46a9eeba638b54da02ebb8ac8ddca | a35b24c8c3c5bdf861f3cda9396f2fa6795ec929 | /abc/abc037/a/main.py | bb4c99a18af37578e976b0d53202738d5e7c3592 | [] | no_license | Msksgm/atcoder_msksgm_practice | 92a19e2d6c034d95e1cfaf963aff5739edb4ab6e | 3ae2dcb7d235a480cdfdfcd6a079e183936979b4 | refs/heads/master | 2021-08-18T16:08:08.551718 | 2020-09-24T07:01:11 | 2020-09-24T07:01:11 | 224,743,360 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 226 | py | def main():
a, b, c = map(int, input().split())
min_price = min(a, b)
max_price = max(a, b)
ans = (c // min_price)
ans += (c % min_price) // max_price
print(ans)
if __name__ == "__main__":
main()
| [
"4419517@ed.tus.ac.jp"
] | 4419517@ed.tus.ac.jp |
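The greedy arithmetic in the ABC037 solution above — spend the whole budget on the cheaper can, then try the remainder on the dearer one — can be sketched as a standalone function (illustrative; note the second term is always zero, since the remainder is below `min(a, b)`):

```python
def max_cans(a, b, c):
    """Maximum number of cans affordable with budget c and unit prices a, b."""
    cheap, costly = min(a, b), max(a, b)
    cans = c // cheap
    cans += (c % cheap) // costly   # always 0: remainder < cheap <= costly
    return cans

assert max_cans(180, 100, 450) == 4
assert max_cans(200, 500, 1000) == 5
```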
9bad890e91484ddcd179a6f0dd858d40ded060ed | 0496be70c261935942f0ee69b10a9202b854a22c | /OrderingSystem/Customer/urls.py | d256615f2dd2bdb6e0367ac3dd68ed019a0b7719 | [] | no_license | akirameng/ordering-system | e89b639e6e3da16fe2613b6d17be87627e4db8a4 | d1a73d0eca7185b1d693d6b4844ad8da4d868698 | refs/heads/master | 2020-04-17T16:17:10.911072 | 2015-09-16T23:59:04 | 2015-09-16T23:59:04 | 42,620,565 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,106 | py | """OrderingSystem URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.8/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Add an import: from blog import urls as blog_urls
2. Add a URL to urlpatterns: url(r'^blog/', include(blog_urls))
"""
from django.conf.urls import patterns, url
from rest_framework.urlpatterns import format_suffix_patterns
from Customer import views
from django.contrib.auth.decorators import login_required
urlpatterns = patterns(
'',
url(r'^$', views.IndexView.as_view(), name='homepage'),
#url(r'^restaurant$', views.RestaurantView.as_view(), name='restaurantPage'),
url(r'^(?P<pk>[0-9]+)/order$', login_required(views.OrderView.as_view()), name='orderPage'),
url(r'^(?P<pk>[0-9]+)/complete$', login_required(views.CompleteOrderView.as_view()), name='completeOrder'),
url(r'^restaurant/(?P<pk>[0-9]+)/$', views.RestaurantAPIView.as_view(), name='resaurant_detail_api'),
url(r'^(?P<pk>[0-9]+)/$', views.RestaurantDetailView.as_view(), name='resaurant_detail'),
url(r'^(?P<pk>[0-9]+)/dishlist$', views.DishListView.as_view(), name='resaurant_dishlist'),
url(r'^dishlist/(?P<dish_id>[0-9]+)/$', views.DetailDish.as_view(), name='resaurant_detaildish'),
url(r'^dishlist/(?P<dish_id>[0-9]+)/like$', views.DetailDishLike.as_view(), name='resaurant_detaildish_like'),
url(r'^dishlist/(?P<dish_id>[0-9]+)/unliked$', views.DetailDishUnlike.as_view(), name='resaurant_detaildish_unlike'),
url(r'^filter$', views.FilterView.as_view(), name='resaurant_filter'),
url(r'^cookie/$',views.CookieView.as_view(),name="order_cookie"),
url(r'^searchresult/$',views.search.as_view(),name="searchpage"),
)
#urlpatterns = format_suffix_patterns(urlpatterns)
| [
"mza57@sfu.ca"
] | mza57@sfu.ca |
8ac0a0616eab681a27b513b3fcf675767ffe7628 | 19daac0031c234f0f0b492d7354f91755d648af9 | /smartlock/scripts/unlock.py | 08f8afc0ddfd525879afa5fbfa01e8c515480cb2 | [] | no_license | gyarab/chytry-lock | c1748f0d5d476ba3050287d229385817c9413815 | a459ecf52d30f29fc9a9f67e27438baf54b760c1 | refs/heads/master | 2020-04-02T12:46:25.400708 | 2019-06-05T05:52:37 | 2019-06-05T05:52:37 | 154,451,156 | 0 | 0 | null | 2019-04-15T11:09:26 | 2018-10-24T06:37:15 | null | UTF-8 | Python | false | false | 143 | py | import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BOARD)
GPIO.setup(37, GPIO.OUT)
GPIO.output(37, True)
time.sleep(0.5)
GPIO.cleanup()
| [
"noreply@github.com"
] | gyarab.noreply@github.com |
18221db11a267ff10a1f1c2c0ce2cdb0f07a762b | cebb0cd9d9c2ca8383a5ef34a28b7f3a387263cb | /lsh/lsh.py | bb53c6ee3f8ab4efd7a3d683eaf924b26d8cf2ca | [] | no_license | ssmike/ml | edbf7b17906cc6e2924eaf8104af43352da780cc | 8d019ddcdf91f57b895c1095951aeeee84e7b81e | refs/heads/master | 2021-01-17T17:46:41.178197 | 2016-08-13T19:08:49 | 2016-08-13T19:08:49 | 65,615,464 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,132 | py | import scipy.stats as st
from collections import defaultdict
from scipy.spatial.distance import euclidean
import numpy as np
import scipy as sp
__all__ = ['LSH']
class LSH:
def __init__(self, bin_distance, n_estimators=10, hash_size=7):
self.n_estimators = n_estimators
self.hash_size = hash_size
self.w = bin_distance
self.bin_distance = bin_distance
def hash(self, point):
result = []
point = np.append(point, 1)
for h in self.hashes:
result.append(tuple((np.floor(np.sum(h * point, axis=1)/self.bin_distance)).astype(int)))
return tuple(result)
def insert(self, point):
for est, hsh in zip(self.estimators, self.hash(point)):
est[hsh].append(point)
def fit(self, X):
self.dim = len(X[0])
self.hashes = []
# dicts in python are hashtables so we don't have to implement them
        self.estimators = [defaultdict(list) for _ in range(self.n_estimators)]
bin_distance = self.bin_distance
for j in range(self.n_estimators):
temp = []
self.hashes.append(temp)
for i in range(self.hash_size):
temp.append(np.append(st.norm(0, 1).rvs(self.dim) / np.sqrt(self.dim),
st.uniform(-bin_distance, bin_distance).rvs(1)))
for x in X:
self.insert(x)
def kneighbours(self, point, k):
result = []
for est, hsh in zip(self.estimators, self.hash(point)):
result += est[hsh]
result.sort(key=lambda x: euclidean(x, point))
prev = None
cleaned = []
for i in range(len(result)):
if prev is None or (prev != result[i]).any():
cleaned.append(result[i])
prev = result[i]
return cleaned[:k]
if __name__ == '__main__':
    from scipy.stats import uniform
data = uniform(loc=0, scale=100).rvs(500 * 2).reshape((500, 2))
index = LSH(10)
index.fit(data)
print(index.kneighbours(data[0], k=2)) | [
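The bucket function behind `LSH.hash` above is the standard p-stable projection hash ⌊(a·x + b)/w⌋, where `a` is a random direction and `b` a random offset drawn in `fit`. A minimal pure-Python sketch of a single bucket computation — the helper name `lsh_bucket` is illustrative, not part of the module:

```python
import math

def lsh_bucket(point, a, b, w):
    # Project `point` onto direction `a`, shift by offset `b`,
    # and quantize into bins of width `w`: floor((a . x + b) / w).
    proj = sum(ai * xi for ai, xi in zip(a, point))
    return math.floor((proj + b) / w)

# Two nearby points land in the same bucket for this hyperplane;
# a distant one does not.
a = [1.0, 0.0]
print(lsh_bucket([5.0, 3.0], a, 0.0, 2.0))   # 2
print(lsh_bucket([5.5, -1.0], a, 0.0, 2.0))  # 2
print(lsh_bucket([50.0, 3.0], a, 0.0, 2.0))  # 25
```

Collisions under this hash become more likely the closer two points are along `a`, which is why the module keeps several independent hash tables and unions their buckets in `kneighbours`.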
"surinmike@gmail.com"
] | surinmike@gmail.com |
79f7198200be4d319c47ef26eb3c57f5f1be53d5 | 5cd0807f442e6d3890167c5d9c4715c32ee4dfcc | /Hello/product/admin.py | 0ac66c87b29eec67df5f5a9cf675bd28596b0216 | [] | no_license | udoy382/PythonForBeginners | 592b2890a71e6895c2db43dbaf39f08156ef5826 | 686f5e982ae40f149688a76ded53b90c6f17af8a | refs/heads/main | 2023-03-27T05:14:08.705961 | 2021-03-25T14:35:19 | 2021-03-25T14:35:19 | 351,468,393 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 89 | py | from django.contrib import admin
from .models import Product
admin.site.register(Product) | [
"srudoy436@gmail.com"
] | srudoy436@gmail.com |
9815087e80c3f0b15bf9a109a3263889e8a125ae | ae2c75fd7f9e86860ee013c8c05416fa9c688f1d | /manage.py | 94f12bedc46b2d514400712c428e7aefc1760406 | [] | no_license | crowdbotics-apps/new-app-chetna-soni--4500 | 733fc277bb085256867f1a8af29d1b444aa9aa86 | 50b03200b0fe8fa6b2b72214634a3beb6c5bf192 | refs/heads/master | 2023-05-27T22:30:32.627736 | 2020-05-13T13:57:29 | 2020-05-13T13:57:29 | 263,645,180 | 0 | 0 | null | 2021-06-12T18:03:30 | 2020-05-13T13:56:59 | Python | UTF-8 | Python | false | false | 645 | py | #!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'new_app_chetna_soni__4500.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
| [
"team@crowdbotics.com"
] | team@crowdbotics.com |
13590eb83cedf7e78563f292ee34f03b3d739622 | a0a288a9563ed4519cfe9f9c24ecc41237753dbc | /thechronic/strange.py | 417876af3a2960256bd2b445292b60da0c62abbd | [
"MIT"
] | permissive | iluxonchik/the-chronic | 99b236456efb9c32dfb9e3978f9e2cc28910a03c | 4dd41ea1a96e4c5cb1741de02d55cf09b2e78979 | refs/heads/master | 2021-04-28T22:40:51.993595 | 2018-04-02T13:38:04 | 2018-04-02T13:38:04 | 77,719,263 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 696 | py | class Strange(object):
"""
    Wrapper around the built-in range() function, which returns str instead of
    int on iteration.
    Just like a range object, an instance of Strange can be iterated over
    multiple times.
"""
def __init__(self, start, stop=None, step=1):
if stop is None:
stop = start
start = 0
self._range = range(start, stop, step)
self._iter = iter(self._range)
def __iter__(self):
return self
def __next__(self):
try:
str_num = str(next(self._iter))
except StopIteration as err:
self._iter = iter(self._range)
raise err
return str_num
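A quick demonstration of the re-iteration behavior promised by the docstring; the class is restated here so the snippet runs standalone:

```python
class Strange:
    """String-yielding range that can be iterated more than once."""
    def __init__(self, start, stop=None, step=1):
        if stop is None:
            start, stop = 0, start
        self._range = range(start, stop, step)
        self._iter = iter(self._range)

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return str(next(self._iter))
        except StopIteration:
            # Reset the internal iterator so the instance is reusable.
            self._iter = iter(self._range)
            raise

s = Strange(3)
print(list(s))  # ['0', '1', '2']
print(list(s))  # ['0', '1', '2'] -- the iterator was reset on exhaustion
```

The reset-on-StopIteration trick is what distinguishes this from a plain generator, which would be empty on the second pass.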
| [
"iluxon4ik@hotmail.com"
] | iluxon4ik@hotmail.com |
a1f386d8f7d24f035c705ed788f7bae6452680ae | ac9e080176d06a898a0dff5707714a83764b1672 | /test/unicode.py | 87de3ac50d773c7d6ef529966eaed70dba4c2398 | [
"MIT"
] | permissive | Paz320/Template-Informe | ecbd873f0b01f2f91bf72a9865bbe020976177ac | e798d952c2e0d0dcde3fa7751e04183459795d7d | refs/heads/master | 2023-09-02T09:17:29.118838 | 2021-11-09T23:07:22 | 2021-11-09T23:07:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,220 | py | """
Generate unicode tests.
"""
myunicodes=''.join(open('src/cfg/unicode.tex', 'r').readlines())
f = open('test/unicode.sty', 'r').readlines()
newkcodes = []
for j in f:
j = j.strip()
if len(j) == 0:
continue
    if j[0] == '%':
continue
if 'DeclareUnicodeCharacter' not in j or '\\def' in j or '\\newcommand' in j:
continue
kcode = j.split('}{')[0].split('{')[1]
if f'{kcode}' in myunicodes:
# print(f'{kcode} repeated')
continue
if '%' in j:
j = j.split('%')[0].strip()
newkcodes.append(j)
newkcodes.sort()
print('New kcodes')
for j in newkcodes:
print(j)
# Iterate through unicodes
write_test = True
if write_test:
f = open('test/unicode.tex', 'w')
f.write('Ejemplos:\n\\begin{itemize}\n')
added = []
for j in myunicodes.split('\n'):
if 'DeclareUnicodeCharacter' not in j or '\\def' in j or '\\ifdefined' in j:
continue
if j[0] == '%':
continue
kcode = j.split('}{')[0].split('{')[1]
if kcode not in added:
added.append(kcode)
else:
print(f'Error, {kcode} repeated')
char = chr(int(f'0x{kcode}', 16))
f.write(f'\t\\item {char}\t% '+kcode+'\n')
    f.write('\\end{itemize}')
f.close()
f = open('test/unicode_replacer.py', 'w')
cmd = []
notcmd = []
addedjval = []
for j in myunicodes.split('\n'):
if 'DeclareUnicodeCharacter' not in j or '\\def' in j or '\\ifdefined' in j:
continue
jsp = j.split('}{')
kcode = jsp.pop(0).split('{')[1]
jval = '}{'.join(jsp).strip()[0:-1]
char = chr(int(f'0x{kcode}', 16))
if '\\ensuremath' in jval:
jval = jval.replace('\\ensuremath{', '')[0:-1]
if jval[0] == '{':
continue
    if 'NOT' in jval or 'NONE' in jval or r'\LOCALunknownchar' in jval:
        continue
    if r'\hbox' in jval or r'\else' in jval or '{ }' in jval or r'\,' in jval or r'\text{' in jval or '!' in jval:
continue
jval = jval.replace('\\', '\\\\')
if jval in addedjval:
# print(f'REPEATED {jval}')
continue
addedjval.append(jval)
txt = f"\t('{jval}', '{char}'),\n"
if jval == char:
continue
if '\\' not in jval:
if len(jval) == 1 or "'" in jval:
continue
notcmd.append(txt)
else:
cmd.append(txt)
cmd.sort(key=lambda v: v.upper())
notcmd.sort(key=lambda v: v.upper())
for j in cmd:
f.write(j)
for j in notcmd:
f.write(j)
f.close() | [
"pablo@ppizarror.com"
] | pablo@ppizarror.com |
f157d656198029b1ab506e3537a92e536b82d242 | f6de15dd01a3e514afb66839856126026b423fd0 | /searching&sorting/soal/tebaktinggi.py | 90650c07d700267213329fa5dad9812752456420 | [] | no_license | NizanHulq/Kuliah-Python | 918e3ab9a72cbabaae6f38c5fea004641926c8b6 | f0cc2abeecc497da2a03cf18408252cb636e89fc | refs/heads/main | 2023-08-21T17:48:51.661188 | 2021-10-08T16:03:37 | 2021-10-08T16:03:37 | 415,047,439 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 359 | py | length = int(input())
list_a = input().split()
for i in range(length-1):
for j in range(length-1):
if list_a[j] > list_a[j+1]:
tmp = list_a[j]
list_a[j] = list_a[j+1]
list_a[j+1] = tmp
index = 0
for i in range(length-1):
if int(list_a[i]) == 165 :
index = i+1
break
print(index)
| [
"nizandiaulhaq@gmail.com"
] | nizandiaulhaq@gmail.com |
b285b77cb3df2d48a8246417b091eb630118214e | 5cb26c6aadac7c860e65d38ef3db18160564c126 | /django_criterion/core/__init__.py | 1102b60cba35d5d672841fb4c472b3bd63ad2df5 | [
"MIT"
] | permissive | flix477/django-criterion | 2b81c8458878cf8f6bc1c00f45076e37daf27b40 | 18abb46be8ceb2f149a5c0a311bddfb0d859f0c4 | refs/heads/master | 2023-01-06T17:58:32.521589 | 2020-10-20T23:40:43 | 2020-10-20T23:40:43 | 305,458,437 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,619 | py | import contextlib
import dataclasses
from dataclasses import dataclass
from datetime import datetime
from importlib import import_module
import inspect
import json
import math
import random
import statistics
import sys
from time import perf_counter
from typing import Dict, TextIO, List
from django.conf import settings
from django.core.management.color import no_style
from django.db import connection, DEFAULT_DB_ALIAS
from django.core.management.sql import sql_flush
from django.test.utils import CaptureQueriesContext
from .timing import Timing, calc_timing_diff
from .queries import Queries
from .result_type import ResultType
@dataclass(frozen=True)
class BenchmarkResult:
queries: Queries
timing: Timing
    @staticmethod
    def from_dict(d):
return BenchmarkResult(
queries=Queries.from_dict(d["queries"]),
timing=Timing.from_dict(d["timing"]),
)
def result_type(self) -> ResultType:
return self.queries.result_type().merge(self.timing.result_type())
class BenchmarkCase:
def setup(self):
pass
def pure(f):
"""
        Benchmark methods that do not modify the database can be declared pure
        with this decorator. This speeds up the run by skipping the database
        flush after each iteration.
"""
f.pure = True
return f
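To see how the `pure` flag round-trips, here is a minimal framework-free sketch (`DemoCase` is illustrative): the decorator sets an attribute on the function, and `run_bench` below reads it back via `hasattr(f, "pure")` on the bound method, which forwards attribute lookups to the underlying function.

```python
class BenchmarkCase:
    def setup(self):
        pass

    def pure(f):
        # Mark a benchmark method as read-only; run_bench checks this flag
        # to decide whether the database flush can be skipped.
        f.pure = True
        return f

class DemoCase(BenchmarkCase):
    @BenchmarkCase.pure
    def bench_read_only(self):
        pass

    def bench_writes(self):
        pass

case = DemoCase()
print(getattr(case.bench_read_only, "pure", False))  # True
print(getattr(case.bench_writes, "pure", False))     # False
```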
@contextlib.contextmanager
def test_database():
connection.creation.create_test_db()
try:
yield connection
finally:
connection.creation.destroy_test_db(DEFAULT_DB_ALIAS)
def get_cases(scripts: List[str]):
cases = []
for script in scripts:
module = import_module(script)
for name, value in inspect.getmembers(module):
if (
inspect.isclass(value)
and issubclass(value, BenchmarkCase)
and value != BenchmarkCase
):
cases.append(value)
return cases
def warmup_op(x: float, y: float) -> float:
return math.sqrt((x / (y ** 2)) * (x + y))
def rand() -> float:
return random.random() * 1000 + 10000
def warmup(for_seconds=10) -> None:
t0 = datetime.now()
while (datetime.now() - t0).seconds < for_seconds:
warmup_op(rand(), rand())
def run_cases(cases, n: int) -> Dict[str, Dict[str, BenchmarkResult]]:
return {qualified_name(x): run_case(x, n) for x in cases}
def run_case(case, n: int) -> Dict[str, BenchmarkResult]:
case = case()
return {
name: run_bench(case, value, n)
for name, value in inspect.getmembers(case)
if inspect.ismethod(value) and name.startswith("bench_")
}
def run_bench(case, f, sample_size) -> BenchmarkResult:
total = 0
timings = []
is_pure = hasattr(f, "pure") and f.pure
captured_queries = []
for i in range(sample_size):
case.setup()
start = perf_counter()
with CaptureQueriesContext(connection) as queries:
f()
timings.append(perf_counter() - start)
total += len(queries.captured_queries)
if i == sample_size - 1:
captured_queries = list(queries.captured_queries)
elif not is_pure:
flush(connection)
connection.queries_log.clear()
average_time = statistics.mean(timings)
stdev = statistics.stdev(timings)
variance = stdev ** 2
return BenchmarkResult(
queries=Queries(
value=total / sample_size,
captured_queries=captured_queries,
),
timing=Timing(
average=average_time, stdev=stdev, variance=variance, diff=None
),
)
def flush(connection) -> None:
sql_list = sql_flush(
no_style(), connection, reset_sequences=True, allow_cascade=False
)
connection.ops.execute_sql_flush(DEFAULT_DB_ALIAS, sql_list)
def qualified_name(c) -> str:
return f"{c.__module__}.{c.__name__}"
def class_name(x: str) -> str:
return x[x.rfind(".") + 1 :] # noqa: E203
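`class_name` relies on `str.rfind` returning -1 when the string contains no dot, so the slice starts at index 0 and the whole name survives; restated standalone to make that edge case visible:

```python
def class_name(x: str) -> str:
    # str.rfind returns -1 for a dotless name, so -1 + 1 == 0
    # and the slice returns the full string unchanged.
    return x[x.rfind(".") + 1:]

print(class_name("benchmarks.users.UserQueries"))  # UserQueries
print(class_name("NoDots"))                        # NoDots
```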
def load_results(f: TextIO) -> Dict[str, Dict[str, BenchmarkResult]]:
try:
results = json.load(f)
return {
case: {
bench: BenchmarkResult.from_dict(bench_result)
for bench, bench_result in benchmarks.items()
}
for case, benchmarks in results.items()
}
except Exception as error:
print("ERROR: Couldn't load comparison data.")
print(error)
sys.exit(1)
def write_output(
f: TextIO, data: Dict[str, Dict[str, BenchmarkResult]]
) -> None:
try:
json.dump(
{
case: {
bench: dataclasses.asdict(bench_result)
for bench, bench_result in benchmarks.items()
}
for case, benchmarks in data.items()
},
f,
)
except Exception as error:
print("ERROR: Couldn't save benchmark data.")
print(error)
sys.exit(1)
def compare_results(
a: Dict[str, Dict[str, BenchmarkResult]],
b: Dict[str, Dict[str, BenchmarkResult]],
n: int,
) -> Dict[str, Dict[str, BenchmarkResult]]:
comparison = {}
for case, benchmarks in a.items():
if case not in b:
comparison[case] = benchmarks
continue
benchmarks_b = b[case]
comparison[case] = {}
for bench_name, result in benchmarks.items():
if bench_name not in benchmarks_b:
comparison[case][bench_name] = result
continue
result_b = benchmarks_b[bench_name]
queries = Queries(
value=result.queries.value,
diff=result.queries.value - result_b.queries.value,
captured_queries=result.queries.captured_queries,
)
timing_diff = calc_timing_diff(result.timing, result_b.timing, n)
timing = Timing(
average=result.timing.average,
stdev=result.timing.stdev,
variance=result.timing.variance,
diff=timing_diff,
)
comparison[case][bench_name] = BenchmarkResult(
queries=queries, timing=timing
)
return comparison
def print_results(
results: Dict[str, Dict[str, BenchmarkResult]], show_queries=False
) -> None:
print("Results:")
for case, results in results.items():
print(f"- {class_name(case)}")
for bench, result in results.items():
result_type = result.result_type().pretty_print()
print(f" > {bench.ljust(30)}: {result_type}")
queries = result.queries
queries_result = queries.result_type().pretty_print()
print(
" " * 4
+ f"~ {'Number of queries'.ljust(28)}: {queries.value:.1f} ("
+ (
queries_result
if queries.diff is None
else (queries.pretty_diff() + ", " + queries_result)
)
+ ")"
)
if show_queries:
for i, q in enumerate(queries.captured_queries):
print(" " * 6 + f"- Query {i + 1}:")
print(" " * 8 + f"> SQL: {q['sql']}")
print(" " * 8 + f"> Timing: {q['time']}")
timing = result.timing
timing_result = timing.result_type().pretty_print()
print(
" " * 4
+ f"~ {'Timing (seconds)'.ljust(28)}: "
+ f"{timing.average:.4f}±{timing.stdev:.4f} ("
+ (
timing_result
if not timing.diff
else (timing.diff.pretty() + ", " + timing_result)
)
+ ")"
)
def run(
scripts, output=None, compare=None, show_queries=False, sample_size=61
) -> None:
if not scripts:
# TODO: autodiscover
scripts = []
cases = get_cases(scripts)
if not cases:
print("Nothing to do.")
sys.exit(0)
with test_database():
print("Warming up...")
warmup()
print("Running cases...")
results = run_cases(cases, sample_size)
if compare:
with open(compare, "r") as f:
comparison_data = load_results(f)
results = compare_results(results, comparison_data, sample_size)
print_results(results, show_queries=show_queries)
if output:
with open(output, "w") as f:
write_output(f, results)
count = len(results)
print(f"Ran {count} case{'s' if count != 1 else ''}.")
| [
"flxleveille@gmail.com"
] | flxleveille@gmail.com |
8a7f44524ce9a081def3a9a9ada89f66644202d9 | 05e634a232574f676434dfa8e4183f3d0a1a4bc9 | /tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/autoaugment.py | d2317602b1e7662fe828258ffcda461867fe541f | [
"Apache-2.0"
] | permissive | PaddlePaddle/models | 67ac00d93c5255ac64a9d80ae5be2e8927e47cee | 8042c21b690ffc0162095e749a41b94dd38732da | refs/heads/release/2.4 | 2023-09-04T15:23:59.543625 | 2023-07-20T11:54:16 | 2023-07-20T11:54:16 | 88,868,842 | 7,633 | 3,597 | Apache-2.0 | 2023-09-05T23:23:54 | 2017-04-20T13:30:15 | Python | UTF-8 | Python | false | false | 12,501 | py | import math
import torch
from enum import Enum
from torch import Tensor
from typing import List, Tuple, Optional
from . import functional as F, InterpolationMode
__all__ = ["AutoAugmentPolicy", "AutoAugment"]
class AutoAugmentPolicy(Enum):
"""AutoAugment policies learned on different datasets.
Available policies are IMAGENET, CIFAR10 and SVHN.
"""
IMAGENET = "imagenet"
CIFAR10 = "cifar10"
SVHN = "svhn"
def _get_transforms(policy: AutoAugmentPolicy):
if policy == AutoAugmentPolicy.IMAGENET:
return [
(("Posterize", 0.4, 8), ("Rotate", 0.6, 9)),
(("Solarize", 0.6, 5), ("AutoContrast", 0.6, None)),
(("Equalize", 0.8, None), ("Equalize", 0.6, None)),
(("Posterize", 0.6, 7), ("Posterize", 0.6, 6)),
(("Equalize", 0.4, None), ("Solarize", 0.2, 4)),
(("Equalize", 0.4, None), ("Rotate", 0.8, 8)),
(("Solarize", 0.6, 3), ("Equalize", 0.6, None)),
(("Posterize", 0.8, 5), ("Equalize", 1.0, None)),
(("Rotate", 0.2, 3), ("Solarize", 0.6, 8)),
(("Equalize", 0.6, None), ("Posterize", 0.4, 6)),
(("Rotate", 0.8, 8), ("Color", 0.4, 0)),
(("Rotate", 0.4, 9), ("Equalize", 0.6, None)),
(("Equalize", 0.0, None), ("Equalize", 0.8, None)),
(("Invert", 0.6, None), ("Equalize", 1.0, None)),
(("Color", 0.6, 4), ("Contrast", 1.0, 8)),
(("Rotate", 0.8, 8), ("Color", 1.0, 2)),
(("Color", 0.8, 8), ("Solarize", 0.8, 7)),
(("Sharpness", 0.4, 7), ("Invert", 0.6, None)),
(("ShearX", 0.6, 5), ("Equalize", 1.0, None)),
(("Color", 0.4, 0), ("Equalize", 0.6, None)),
(("Equalize", 0.4, None), ("Solarize", 0.2, 4)),
(("Solarize", 0.6, 5), ("AutoContrast", 0.6, None)),
(("Invert", 0.6, None), ("Equalize", 1.0, None)),
(("Color", 0.6, 4), ("Contrast", 1.0, 8)),
(("Equalize", 0.8, None), ("Equalize", 0.6, None)),
]
elif policy == AutoAugmentPolicy.CIFAR10:
return [
(("Invert", 0.1, None), ("Contrast", 0.2, 6)),
(("Rotate", 0.7, 2), ("TranslateX", 0.3, 9)),
(("Sharpness", 0.8, 1), ("Sharpness", 0.9, 3)),
(("ShearY", 0.5, 8), ("TranslateY", 0.7, 9)),
(("AutoContrast", 0.5, None), ("Equalize", 0.9, None)),
(("ShearY", 0.2, 7), ("Posterize", 0.3, 7)),
(("Color", 0.4, 3), ("Brightness", 0.6, 7)),
(("Sharpness", 0.3, 9), ("Brightness", 0.7, 9)),
(("Equalize", 0.6, None), ("Equalize", 0.5, None)),
(("Contrast", 0.6, 7), ("Sharpness", 0.6, 5)),
(("Color", 0.7, 7), ("TranslateX", 0.5, 8)),
(("Equalize", 0.3, None), ("AutoContrast", 0.4, None)),
(("TranslateY", 0.4, 3), ("Sharpness", 0.2, 6)),
(("Brightness", 0.9, 6), ("Color", 0.2, 8)),
(("Solarize", 0.5, 2), ("Invert", 0.0, None)),
(("Equalize", 0.2, None), ("AutoContrast", 0.6, None)),
(("Equalize", 0.2, None), ("Equalize", 0.6, None)),
(("Color", 0.9, 9), ("Equalize", 0.6, None)),
(("AutoContrast", 0.8, None), ("Solarize", 0.2, 8)),
(("Brightness", 0.1, 3), ("Color", 0.7, 0)),
(("Solarize", 0.4, 5), ("AutoContrast", 0.9, None)),
(("TranslateY", 0.9, 9), ("TranslateY", 0.7, 9)),
(("AutoContrast", 0.9, None), ("Solarize", 0.8, 3)),
(("Equalize", 0.8, None), ("Invert", 0.1, None)),
(("TranslateY", 0.7, 9), ("AutoContrast", 0.9, None)),
]
elif policy == AutoAugmentPolicy.SVHN:
return [
(("ShearX", 0.9, 4), ("Invert", 0.2, None)),
(("ShearY", 0.9, 8), ("Invert", 0.7, None)),
(("Equalize", 0.6, None), ("Solarize", 0.6, 6)),
(("Invert", 0.9, None), ("Equalize", 0.6, None)),
(("Equalize", 0.6, None), ("Rotate", 0.9, 3)),
(("ShearX", 0.9, 4), ("AutoContrast", 0.8, None)),
(("ShearY", 0.9, 8), ("Invert", 0.4, None)),
(("ShearY", 0.9, 5), ("Solarize", 0.2, 6)),
(("Invert", 0.9, None), ("AutoContrast", 0.8, None)),
(("Equalize", 0.6, None), ("Rotate", 0.9, 3)),
(("ShearX", 0.9, 4), ("Solarize", 0.3, 3)),
(("ShearY", 0.8, 8), ("Invert", 0.7, None)),
(("Equalize", 0.9, None), ("TranslateY", 0.6, 6)),
(("Invert", 0.9, None), ("Equalize", 0.6, None)),
(("Contrast", 0.3, 3), ("Rotate", 0.8, 4)),
(("Invert", 0.8, None), ("TranslateY", 0.0, 2)),
(("ShearY", 0.7, 6), ("Solarize", 0.4, 8)),
(("Invert", 0.6, None), ("Rotate", 0.8, 4)),
(("ShearY", 0.3, 7), ("TranslateX", 0.9, 3)),
(("ShearX", 0.1, 6), ("Invert", 0.6, None)),
(("Solarize", 0.7, 2), ("TranslateY", 0.6, 7)),
(("ShearY", 0.8, 4), ("Invert", 0.8, None)),
(("ShearX", 0.7, 9), ("TranslateY", 0.8, 3)),
(("ShearY", 0.8, 5), ("AutoContrast", 0.7, None)),
(("ShearX", 0.7, 2), ("Invert", 0.1, None)),
]
def _get_magnitudes():
_BINS = 10
return {
# name: (magnitudes, signed)
"ShearX": (torch.linspace(0.0, 0.3, _BINS), True),
"ShearY": (torch.linspace(0.0, 0.3, _BINS), True),
"TranslateX": (torch.linspace(0.0, 150.0 / 331.0, _BINS), True),
"TranslateY": (torch.linspace(0.0, 150.0 / 331.0, _BINS), True),
"Rotate": (torch.linspace(0.0, 30.0, _BINS), True),
"Brightness": (torch.linspace(0.0, 0.9, _BINS), True),
"Color": (torch.linspace(0.0, 0.9, _BINS), True),
"Contrast": (torch.linspace(0.0, 0.9, _BINS), True),
"Sharpness": (torch.linspace(0.0, 0.9, _BINS), True),
"Posterize": (torch.tensor([8, 8, 7, 7, 6, 6, 5, 5, 4, 4]), False),
"Solarize": (torch.linspace(256.0, 0.0, _BINS), False),
"AutoContrast": (None, None),
"Equalize": (None, None),
"Invert": (None, None),
}
class AutoAugment(torch.nn.Module):
r"""AutoAugment data augmentation method based on
`"AutoAugment: Learning Augmentation Strategies from Data" <https://arxiv.org/pdf/1805.09501.pdf>`_.
If the image is torch Tensor, it should be of type torch.uint8, and it is expected
to have [..., 1 or 3, H, W] shape, where ... means an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
Args:
policy (AutoAugmentPolicy): Desired policy enum defined by
:class:`torchvision.transforms.autoaugment.AutoAugmentPolicy`. Default is ``AutoAugmentPolicy.IMAGENET``.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`torchvision.transforms.InterpolationMode`. Default is ``InterpolationMode.NEAREST``.
If input is Tensor, only ``InterpolationMode.NEAREST``, ``InterpolationMode.BILINEAR`` are supported.
fill (sequence or number, optional): Pixel fill value for the area outside the transformed
image. If given a number, the value is used for all bands respectively.
"""
def __init__(self,
policy: AutoAugmentPolicy=AutoAugmentPolicy.IMAGENET,
interpolation: InterpolationMode=InterpolationMode.NEAREST,
fill: Optional[List[float]]=None):
super().__init__()
self.policy = policy
self.interpolation = interpolation
self.fill = fill
self.transforms = _get_transforms(policy)
if self.transforms is None:
raise ValueError(
"The provided policy {} is not recognized.".format(policy))
self._op_meta = _get_magnitudes()
@staticmethod
def get_params(transform_num: int) -> Tuple[int, Tensor, Tensor]:
"""Get parameters for autoaugment transformation
Returns:
params required by the autoaugment transformation
"""
policy_id = torch.randint(transform_num, (1, )).item()
probs = torch.rand((2, ))
signs = torch.randint(2, (2, ))
return policy_id, probs, signs
def _get_op_meta(self,
name: str) -> Tuple[Optional[Tensor], Optional[bool]]:
return self._op_meta[name]
def forward(self, img: Tensor):
"""
img (PIL Image or Tensor): Image to be transformed.
Returns:
PIL Image or Tensor: AutoAugmented image.
"""
fill = self.fill
if isinstance(img, Tensor):
if isinstance(fill, (int, float)):
fill = [float(fill)] * F._get_image_num_channels(img)
elif fill is not None:
fill = [float(f) for f in fill]
transform_id, probs, signs = self.get_params(len(self.transforms))
for i, (op_name, p,
magnitude_id) in enumerate(self.transforms[transform_id]):
if probs[i] <= p:
magnitudes, signed = self._get_op_meta(op_name)
magnitude = float(magnitudes[magnitude_id].item()) \
if magnitudes is not None and magnitude_id is not None else 0.0
if signed is not None and signed and signs[i] == 0:
magnitude *= -1.0
if op_name == "ShearX":
img = F.affine(
img,
angle=0.0,
translate=[0, 0],
scale=1.0,
shear=[math.degrees(magnitude), 0.0],
interpolation=self.interpolation,
fill=fill)
elif op_name == "ShearY":
img = F.affine(
img,
angle=0.0,
translate=[0, 0],
scale=1.0,
shear=[0.0, math.degrees(magnitude)],
interpolation=self.interpolation,
fill=fill)
elif op_name == "TranslateX":
img = F.affine(
img,
angle=0.0,
translate=[
int(F._get_image_size(img)[0] * magnitude), 0
],
scale=1.0,
interpolation=self.interpolation,
shear=[0.0, 0.0],
fill=fill)
elif op_name == "TranslateY":
img = F.affine(
img,
angle=0.0,
translate=[
0, int(F._get_image_size(img)[1] * magnitude)
],
scale=1.0,
interpolation=self.interpolation,
shear=[0.0, 0.0],
fill=fill)
elif op_name == "Rotate":
img = F.rotate(
img,
magnitude,
interpolation=self.interpolation,
fill=fill)
elif op_name == "Brightness":
img = F.adjust_brightness(img, 1.0 + magnitude)
elif op_name == "Color":
img = F.adjust_saturation(img, 1.0 + magnitude)
elif op_name == "Contrast":
img = F.adjust_contrast(img, 1.0 + magnitude)
elif op_name == "Sharpness":
img = F.adjust_sharpness(img, 1.0 + magnitude)
elif op_name == "Posterize":
img = F.posterize(img, int(magnitude))
elif op_name == "Solarize":
img = F.solarize(img, magnitude)
elif op_name == "AutoContrast":
img = F.autocontrast(img)
elif op_name == "Equalize":
img = F.equalize(img)
elif op_name == "Invert":
img = F.invert(img)
else:
raise ValueError(
"The provided operator {} is not recognized.".format(
op_name))
return img
def __repr__(self):
return self.__class__.__name__ + '(policy={}, fill={})'.format(
self.policy, self.fill)
| [
"noreply@github.com"
] | PaddlePaddle.noreply@github.com |
1226d08a21ab006fe4fe012a053f7e528b8da9ce | 77106da09235373439b7aa7282218ff20b3d2454 | /utility/ReadingStuff.py | 9ce27c2a7e5ba0993d607fa8fc5c5ab6c93b8437 | [
"Apache-2.0"
] | permissive | hkuadithya/general-dynamics-data-analytics | ea6dfefca514d613709c6b1a49738bf36b9fc41e | d3bb2fa8abdc0a8d654941794f65abddc5e85cf1 | refs/heads/master | 2021-09-01T00:23:33.699435 | 2017-12-23T20:10:52 | 2017-12-23T20:10:52 | 115,173,859 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,353 | py | from ast import literal_eval
from pprint import pprint
import pandas as pd
import numpy as np
user_job_url_hits_file = '../url-sentiment-analysis/user_job_url_hits.pickle'
job_flag = {'/Jobs & Education/Jobs', '/Jobs & Education/Jobs/Career Resources & Planning',
'/Jobs & Education/Jobs/Job Listings', '/Jobs & Education/Jobs/Resumes & Portfolios'}
url_category = pd.read_csv('../url-sentiment-analysis/url_google_sentiment_analysis.csv', sep=',', usecols=[0, 5])
user_url_df = pd.read_csv('C:/Users/hkuad/Desktop/Subjects/DA/DataSets/NewData/DataSet2/http_info.csv', sep=',',
usecols=[2, 4])
url_category['category'] = url_category['category'].apply(literal_eval)
url_list = [None] * url_category.shape[0]
for i in range(0, url_category.shape[0]):
for category in url_category['category'][i]:
if category in job_flag:
url_list[i] = url_category['url'][i]
break
url_list = [url for url in url_list if url is not None]
pprint(url_list)
user_url_df = user_url_df[user_url_df['url'].isin(url_list)]
user_url_df = user_url_df.groupby('user').agg({'url': np.size})
user_url_df.rename(columns={'url': 'job_url_hits'}, inplace=True)
user_url_df.sort_values('job_url_hits', ascending=False, inplace=True)
user_url_df.to_pickle(user_job_url_hits_file)
pprint(user_url_df)
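The `literal_eval` applied to the `category` column above converts the stringified Python collection stored in the CSV back into a real object; a small illustration (the sample cell is made up):

```python
from ast import literal_eval

# A CSV cell holding a stringified list of Google category labels.
cell = "['/Jobs & Education/Jobs', '/Travel/Air Travel']"
categories = literal_eval(cell)
print(type(categories).__name__)  # list
print(categories[0])              # /Jobs & Education/Jobs
```

Unlike `eval`, `literal_eval` only accepts Python literal syntax (strings, numbers, tuples, lists, dicts, sets, booleans, None), so it is safe to run on untrusted CSV contents.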
| [
"hkadithya@gmail.com"
] | hkadithya@gmail.com |
a150e8412623cfdbd0e1b3c25285bfff2c65cbd1 | 3a71949bd063224465131a83ee0359d1bf01a25b | /card_identifier.py | 02f0e603c76117bb6938fa9fbdf775c8a3bcbc9c | [] | no_license | mriedman/hanabi | d8a95b746a5a01b7fb0d49bf55d3f7444b511049 | fb3b1cea79b36d7d4e4bce0a666e7106649315d9 | refs/heads/master | 2023-01-15T02:58:42.813203 | 2020-11-21T04:43:38 | 2020-11-21T04:43:38 | 303,199,095 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 18,382 | py | from hanabi_learning_environment.rl_env import Agent
from hanabi_learning_environment import pyhanabi
import numpy as np
from scipy.special import expit
import math
from typing import Tuple, Any, Callable, Dict, List
from collections import defaultdict
import random
from baseline_agent import BaselineAgent
from adv_human import AdvancedHumanAgent
from copy import deepcopy
class CardIdentifierAgent(Agent):
def __init__(self, config):
# Initialize
self.config = config
# Extract max info tokens or set default to 8.
self.max_information_tokens = config.get('information_tokens', 8)
# Set up card identifier
self.card_identifier = HanabiCardIdentifier(.05, self.feature_extractor, config, activator='relu')
# Set up encoder
self.encoder = pyhanabi.ObservationEncoder(pyhanabi.HanabiGame(config))
self.agent = BaselineAgent(config)
if config['print']==1:
self.card_identifier.printt=1
def act(self, observation: pyhanabi.HanabiObservation):
if observation.cur_player_offset() != 0:
return None
player = 0
        # Moves are sorted newest to oldest by default, but we want to update knowledge in chronological order
prior_actions = observation.last_moves()[-1::-1]
cards_remaining = self.cards_remaining(observation)
for i in prior_actions:
move = i.move()
if move.type() == 5:
# MOVE_TYPE = 5 is a dealing move
continue
if player == 0 and move.type() in [1, 2]:
# Current player played or discarded on last turn
self.card_identifier.incorporateCardProbFeedback(observation, move.card_index(), i.color(), i.rank())
for j in range(move.card_index(), 4):
# Shift each card drawn more recently than the discarded card to the left
self.card_identifier.card_priors[j] = self.card_identifier.card_priors[j + 1]
# Add a new card prior for the new card
self.card_identifier.card_priors[4] = HanabiCardIdentifier.normalize(np.array(cards_remaining))
if player == 0:
                # Re-weight card priors to account for any new cards we've seen
                # (e.g. if our cooperator discarded W5, we know our cards aren't W5)
for index, vals in enumerate(zip(cards_remaining, self.card_identifier.card_space)):
if vals[0] < vals[1]:
for card in self.card_identifier.card_priors:
card[index] *= vals[0] / vals[1]
# Updated possible cards (based on what is known about opponents, fireworks, and discards)
self.card_identifier.card_space = cards_remaining
self.card_identifier.card_priors = [HanabiCardIdentifier.normalize(i) for i in
self.card_identifier.card_priors]
# Debugging stuff
if self.config['print'] > 10 or True:
#print(111)
if all(sum(i)>0 for i in self.card_identifier.card_priors):
card_probs = self.card_identifier.getCardProbs(observation)
self.card_identifier.incCardPriorMomentum(card_probs)
card_probs = list(card_probs)
if any(i > .2 for i in card_probs[0]):
self.card_identifier.incResult(True)
print(round(self.card_identifier.iter_size,4),end='')
print('|',end='')
else:
self.card_identifier.incResult(False)
#print(card_probs)
#print(222)
# If another player has hinted us...
if move.type() in [3, 4] and player > 0:
self.card_identifier.cardUpdate(observation, i, move)
player += 1
# print(self.card_identifier.card_priors)
# Play card if we've been hinted number and color
for card_index, hint in enumerate(observation.card_knowledge()[0]):
if hint.color() is not None and hint.rank() is not None:
if observation.card_playable_on_fireworks(hint.color(), hint.rank()):
move = pyhanabi.HanabiMove.get_play_move(card_index)
if self.legal_move(observation.legal_moves(), move):
return move
        # Play a card when our knowledge rules out every value that is not currently playable, even if we don't know exactly which card it is
for card_index in range(5):
playable = True
for i, prob in enumerate(self.card_identifier.card_priors[card_index]):
if prob > 0.02: # Arbitrary threshold- may need to raise or lower
if not observation.card_playable_on_fireworks(i//5, i % 5):
playable = False
break
# Sometimes it doesn't work and this stops it from losing
if playable and observation.life_tokens() > 1:
#print('yay|',end='')
if random.random()<0:
print(self.card_identifier.num_iters)
move = pyhanabi.HanabiMove.get_play_move(card_index)
if self.legal_move(observation.legal_moves(), move):
return move
#return AdvancedHumanAgent.act(self, observation)
# Check if it's possible to hint a card to your colleagues.
fireworks = observation.fireworks()
if observation.information_tokens() > 0:
# Check if there are any playable cards in the hands of the opponents.
for player_offset in range(1, observation.num_players()):
player_hand = observation.observed_hands()[player_offset]
player_hints = observation.card_knowledge()[player_offset]
# Check if the card in the hand of the opponent is playable.
for idx, tpl in enumerate(zip(player_hand, player_hints)):
card, hint = tpl
if BaselineAgent.playable_card(card,
fireworks) and hint.color() is None:
if True or not any(card1.color() == card.color() for card1 in player_hand[idx + 1:]):
move = pyhanabi.HanabiMove.get_reveal_color_move(player_offset, card.color())
if self.legal_move(observation.legal_moves(), move):
return move
# return move
if BaselineAgent.playable_card(card,
fireworks) and hint.rank() is None:
move = pyhanabi.HanabiMove.get_reveal_rank_move(player_offset, card.rank())
if self.legal_move(observation.legal_moves(), move):
return move
# return move.to_dict()
# If no card is hintable then discard or play.
for i in observation.legal_moves():
if i.type() == pyhanabi.HanabiMoveType.DISCARD:
return i
return observation.legal_moves()[-1]
@staticmethod
def legal_move(legal_moves: List[pyhanabi.HanabiMove], move: pyhanabi.HanabiMove):
for pos_move in legal_moves:
if pos_move.type() == move.type():
if move.type() == 1 or move.type() == 2:
if move.card_index() == pos_move.card_index():
return True
if move.type() == 3:
if move.color() == pos_move.color() and move.target_offset() == pos_move.target_offset():
return True
if move.type() == 4:
if move.rank() == pos_move.rank() and move.target_offset() == pos_move.target_offset():
return True
return False
def cards_remaining(self, observation: pyhanabi.HanabiObservation):
# Determine unknown cards from observation
card_list = [3,2,2,2,1] * 5
known_cards = observation.discard_pile()
hands = observation.observed_hands()
for hand in hands:
if str(hand[0]) == 'XX':
continue
known_cards += hand
for card in known_cards:
card_list[card.color() * 5 + card.rank()] -= 1
offset = 0
for firework in observation.fireworks():
for i in range(firework):
card_list[offset + i] -= 1
offset += 5
return card_list
def feature_extractor1(self, observation: pyhanabi.HanabiObservation, card_index: int):
num_cards = self.config['rank'] * self.config['colors']
obs_vector = self.encoder.encode(observation)
# Add prior card knowledge
features = list(self.card_identifier.card_priors[card_index])
offset = num_cards * self.config['hand_size'] + self.config['players'] + 2 * num_cards
# Add fireworks info
features += obs_vector[offset: offset + num_cards]
offset += num_cards + 8 + 3 + 2 * num_cards - self.config['hand_size'] * self.config['players']
# Add most recent hint info
features += obs_vector[offset + 6:offset + 21]
return features
def feature_extractor(self, observation: pyhanabi.HanabiObservation, card_index: int):
# Add prior card knowledge
features = list(self.card_identifier.card_priors[card_index])
# Add fireworks info
fireworks = observation.fireworks()
for color in fireworks:
for rank in range(5):
if rank == color:
features.append(1)
else:
features.append(0)
# Add most recent hint info
last_moves = observation.last_moves()
opp_move = None
for move in last_moves:
if not move.move().type() == 5:
opp_move = move
break
if opp_move is None or opp_move.move().type() < 3:
features += [0] * 15
elif opp_move.move().type() == 3:
features += [1 if i == opp_move.move().color() else 0 for i in range(5)]
features += [0]*5
features += [1 if i in opp_move.card_info_revealed() else 0 for i in range(5)]
elif opp_move.move().type() == 4:
features += [0]*5
features += [1 if i == opp_move.move().rank() else 0 for i in range(5)]
features += [1 if i in opp_move.card_info_revealed() else 0 for i in range(5)]
if card_index == 0:
pass
return features
def reset(self, config):
self.config = config
if config['print']==1:
self.card_identifier.printt=1
self.card_identifier.reset(config)
class HanabiCardIdentifier:
def __init__(self, discount: float, feature_extractor: Callable, config: Dict, exploration_prob=0, activator : str = 'logistic' ):
self.discount = discount
self.featureExtractor = feature_extractor
self.explorationProb = exploration_prob
self.printt=0
rng = np.random.default_rng()
feature_length = config['rank'] * config['colors'] * 2 + config['rank'] + config['colors'] + config['hand_size']
self.index_matrices = [[rng.random((30, feature_length)),
#rng.random((30, 30)),
#rng.random((20, 20)),
rng.random((config['rank'] * config['colors'], 30))]
for _ in range(config['hand_size'])]
if activator == 'relu':
self.activator = lambda x: max(0, x)
self.dact = lambda x: 1 if x >= 0 else 0
elif activator == 'logistic' or True:
self.activator = expit
self.dact = lambda x: x * (1 - x)
#self.card_priors = np.array([1 for _ in range(config['rank'] * config['colors'])])
self.card_priors = [np.array([3,2,2,2,1]*5) for _ in range(config['hand_size'])]
self.card_priors = [self.normalize(i) for i in self.card_priors]
self.card_space = [3,2,2,2,1]*5
self.num_iters = 1
self.iter_size = 0.01
self.cp_momentum = 1
@staticmethod
def normalize(array):
if sum(array)==0:
#print('hi')
return HanabiCardIdentifier.normalize(np.ones(array.shape))
return array / sum(array)
def incCardPriorMomentum(self, new_probs):
momentum_list = []
for i in range(5): # Hand size
probs = self.cp_momentum * self.card_priors[i] + (1 - self.cp_momentum) * new_probs[i]
for j in range(len(self.card_priors[i])):
if self.card_priors[i][j] == 0:
probs[j] = 0
probs = self.normalize(probs)
momentum_list.append(probs)
self.card_priors = momentum_list
def reset(self, config: Dict):
self.activator = expit
# self.card_priors = np.array([1 for _ in range(config['rank'] * config['colors'])])
self.card_priors = [np.array([3, 2, 2, 2, 1] * 5) for _ in range(config['hand_size'])]
self.card_priors = [self.normalize(i) for i in self.card_priors]
self.card_space = [3, 2, 2, 2, 1] * 5
self.cp_momentum = max(0, self.cp_momentum-.005)
def cardUpdate(self, observation: pyhanabi.HanabiObservation, history: pyhanabi.HanabiHistoryItem, move: pyhanabi.HanabiMove):
cp2=deepcopy(self.card_priors)
if move.type() == 3: # Color
pos_cards = [i for i in range(move.color() * 5, (move.color() + 1) * 5)]
elif move.type() == 4: #Rank
pos_cards = [i for i in range(move.rank(), 25, 5)]
else:
print('sadasdad')
return
for card in range(5):
for i in range(25):
if (card in history.card_info_revealed()) ^ (i in pos_cards):
self.card_priors[card][i] = 0
for i,card in enumerate(self.card_priors):
if sum(card) == 0:
pass
#print([i,cp2[i]])
self.card_priors = [self.normalize(i) for i in self.card_priors]
#print('sadksjbadskaksdj')
    # Return the normalized card probability distributions computed from the weights and features
def getCardProbs(self, state: pyhanabi.HanabiObservation) -> List:
prob_list = []
for index in range(5):
scores = self.featureExtractor(state, index)
for layer in self.index_matrices[index]:
scores = layer.dot(scores)
scores = self.activator(scores)
prob_list.append(scores)
return [self.normalize(probs) for probs in prob_list]
def getCardProbLayers(self, state: pyhanabi.HanabiObservation, index: int):
scores = self.featureExtractor(state, index)
yield scores
for layer in self.index_matrices[index]:
scores = layer.dot(scores)
scores = self.activator(scores)
yield scores
def getQ(self, state: pyhanabi.HanabiObservation, action: pyhanabi.HanabiMove) -> float:
pass
# This algorithm will produce an action given a state.
# Here we use the epsilon-greedy algorithm: with probability
# |explorationProb|, take a random action.
'''def getAction(self, state: pyhanabi.HanabiObservation) -> Any:
self.num_iters += 1
if random.random() < self.explorationProb:
return random.choice(self.actions(state))
else:
return max((self.getQ(state, action), action) for action in state.legal_moves())[1]'''
def incResult(self, res: bool):
self.iter_size *= 0.999
self.iter_size += 0.01 if res else 0
# Call this function to get the step size to update the weights.
def getStepSize(self) -> float:
if False and self.cp_momentum > 0.95:
return 0.5 * min(0.1, (self.num_iters) ** (-1/2))
return 0.5 * min(0.5, 0*(self.num_iters) ** (-1/2))
def incorporateCardProbFeedback(self, observation, card, color, rank):
self.num_iters += self.iter_size
index = 5 * color + rank
results = list(self.getCardProbLayers(observation, card))
errors = [np.zeros((i.shape[0],)) for i in self.index_matrices[card]]
matrices = self.index_matrices[card]
for j, col in enumerate(errors[-1]):
target = 1 if j == index else 0
errors[-1][j] = (target - results[-1][j]) * self.dact(results[-1][j])
for l_num, layer in zip(range(len(errors)-2, -1, -1), errors[-2::-1]):
for j, col in enumerate(errors[l_num]):
layer[j] = sum(errors[l_num+1][k] * matrices[l_num+1][k][j] for k in range(len(errors[l_num+1]))) * \
self.dact(results[l_num+1][j])
for l_num, layer in enumerate(matrices):
for i, row in enumerate(layer):
for j, col in enumerate(row):
row[j] *= (1 - self.getStepSize())
row[j] += self.getStepSize() * errors[l_num][i] * results[l_num][j]
# We will call this function with (s, a, r, s'), which you should use to update |weights|.
# Note that if s is a terminal state, then s' will be None. Remember to check for this.
# You should update the weights using self.getStepSize(); use
# self.getQ() to compute the current estimate of the parameters.
'''def incorporateFeedback(self, state: pyhanabi.HanabiObservation, action: Any, reward: int, newState: Tuple) -> None:
# BEGIN_YOUR_CODE (our solution is 9 lines of code, but don't worry if you deviate from this)
maxQ = 'NaN'
for newAction in self.actions(newState):
if maxQ == 'NaN' or self.getQ(newState, newAction) > maxQ:
maxQ = self.getQ(newState, newAction)
delta = self.getStepSize() * (reward + self.discount * maxQ)
for f, v in self.featureExtractor(state, action):
self.weights[f] *= (1 - self.getStepSize())
self.weights[f] += delta
# END_YOUR_CODE'''
| [
"mriedman1@gmail.com"
] | mriedman1@gmail.com |
8ce70e596c11781cfb6e3432a4139cd29152a118 | e7f591e51e72631d37ce0c2a5a54f87dab67398f | /EX18/ex18.py | e84d5d21dbe549dd5fe57c4c1c379bf585dc6bbb | [] | no_license | SimzZafari/LPTHW | fabc6dca6017e8a60699c997fa9e4bd0a1ad6328 | 65765b7b0c4bce112e0bcfa6aa9ef457d6a4ec46 | refs/heads/master | 2021-01-11T05:33:37.966370 | 2016-11-22T09:53:16 | 2016-11-22T09:53:16 | 71,800,320 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 539 | py | # this one is like your scripts with argv
def print_two(*args):
arg1, arg2 = args
print "arg1: %r, arg2: %r" % (arg1, arg2)
#ok, tahat *args is actually pointless, we can just do this
def print_two_again(arg1, arg2):
print "arg1: %r, arg2: %r" % (arg1, arg2)
# this just takes one argument
def print_one(arg1):
print "arg1: %r" % arg1
# this one takes no arguments
def print_none():
print "I got nothin'."
print_two("Zed", "Shaw")
print_two_again("Zed","Shaw")
print_one("First!")
print_none()
| [
"SimonReindl@Simons-MacBook-Pro.local"
] | SimonReindl@Simons-MacBook-Pro.local |
27352aafb693010831b1b2608e3eb84ea9410208 | 6b5db7bb29ef6256233909f1d03059804a0403a5 | /saef_app/modules/admin/__init__.py | e629b7a23d2257b62f8be87ee967abe457322534 | [
"BSD-3-Clause"
] | permissive | echevemaster/saef | 59a883b09e74262d4bf8b1853454b88317b760dd | b38c15180147e6d48f9aa93a653ebc287752776a | refs/heads/master | 2016-09-06T16:32:45.749401 | 2013-12-04T21:56:06 | 2013-12-04T21:56:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 44 | py | from views import bundle
_all__ = [bundle]
| [
"echevemaster@gmail.com"
] | echevemaster@gmail.com |
0a6733160174fa09ea15599be27c9e8428de9f0c | 3db9ed766d2a1c0bcb454c842436f68b74292cb9 | /blockchain/__init__.py | 159d07fcb7f0d29f8894f71c77c2b6348a159e8b | [
"MIT"
] | permissive | EnniOne/minimum_viable_block_chain | 178e21be4f1567f0020a505488b4ebe4b2aa06df | 3e0bb1ea5e63f3d22958806a37ead2ab94b827cd | refs/heads/master | 2020-03-24T00:26:23.010877 | 2018-07-25T11:38:30 | 2018-07-25T11:38:30 | 142,291,350 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 226 | py | from .chain import Blockchain
from .transaction import Transaction
from .block import Block
from .node import Node, WalletNode, MiningNode
__all__ = ["Blockchain", "Node", "WalletNode", "MiningNode", "Transaction", "Block"]
| [
"ringer@hm.edu"
] | ringer@hm.edu |
4baf087e4e4c72d03eb1f4f5b7f52fbbaa305d56 | b71e91d4eb55b6826dbe378180aa7b2b8a717bdf | /Capitulo1/exerc4_3_v5.py | 1058a9e307d85293890de1402b022dd0572ac930 | [] | no_license | gustavopierre/think_python | 49a9ceb50f760b41f6fbac54a07f6b394aa8d637 | a3ad6e660db4e6ce2aa105f5084e585f95936867 | refs/heads/main | 2023-03-24T23:48:29.415573 | 2021-03-15T22:15:30 | 2021-03-15T22:15:30 | 348,137,048 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 336 | py | import turtle
import math
def arc(t, r, angle):
n = int(2*math.pi*r/10)
x = int(n*angle/360)
for count in range(x):
t.fd(10)
t.lt(360/n)
print(f'r = {r}')
print(f'angle = {angle}')
print(f'n = {n}')
print(f'x = {x}')
bob = turtle.Turtle()
print(bob)
arc(bob, 100, 270)
turtle.mainloop()
| [
"gustavopierre@gmail.com"
] | gustavopierre@gmail.com |
aaa900efbd9859b8857e3e31bfdbc19e1336afc7 | d640af54981f4762447239b1530334743437eff7 | /env/lib/python3.7/warnings.py | 3b637fcbed7ab255744d19f207af750e7bb1bd8b | [] | no_license | agnalknagja/Django_REST_API | 7c0ceb88b11bb88d2efa4ecc3d93f5a017bcbf3a | ac5ab0706850d1eaf72527c4305f7dd707109ddd | refs/heads/master | 2020-05-21T02:15:58.057339 | 2019-05-10T15:59:28 | 2019-05-10T15:59:28 | 185,870,697 | 0 | 0 | null | 2019-05-29T14:16:55 | 2019-05-09T20:57:41 | Python | UTF-8 | Python | false | false | 62 | py | /Users/Buffard/.pyenv/versions/3.7.1/lib/python3.7/warnings.py | [
"samuelwebber19@gmail.com"
] | samuelwebber19@gmail.com |
6e02a4cd2c2891c084f93dad75871c179905debf | b54097ce251925a82e591a08ae625fa884500b9c | /tests/test_github.py | e942b6bfaabe6db425870e1377356785c841cac2 | [
"BSD-3-Clause"
] | permissive | johnnoone/aiovault | b45b576cfb30570b1bbe9ab018a3247156dbefea | 03e1bfb6f0404dcf97ce87a98c539027c4e78a37 | refs/heads/master | 2021-01-10T19:56:50.715283 | 2015-07-10T21:15:21 | 2015-07-10T21:15:21 | 35,452,083 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,199 | py | from aiovault import Vault, LoginError
from conftest import async_test
import pytest
@async_test
def test_github_raw_loading(dev_server):
client = Vault(dev_server.addr, token=dev_server.root_token)
response = yield from client.read('/sys/auth/github/login',
params={"help": 1})
data = yield from response.json()
print(data['help'])
# low level create/delete
response = yield from client.write('/sys/auth/github',
json={"type": "github"})
assert response.status == 204, 'Must add github auth backend'
response = yield from client.delete('/sys/auth/github')
assert response.status == 204, 'Must delete github auth backend'
# high level create/delete
response = yield from client.auth.enable('github')
assert response.type == 'github', 'Must add github auth backend'
response = yield from client.auth.disable('github')
assert response is True, 'Must delete github auth backend'
@async_test
def test_help(dev_server):
client = Vault(dev_server.addr, token=dev_server.root_token)
response = yield from client.read('/sys/auth/github',
params={"help": 1})
data = yield from response.json()
assert 'help' in data
@async_test
def test_github_loading(dev_server, env):
try:
github_org = env.GITHUB_ORG
github_token = env.GITHUB_TOKEN
except AttributeError:
return 'GITHUB_ORG or GITHUB_TOKEN missing'
client = Vault(dev_server.addr, token=dev_server.root_token)
backend1 = backend = yield from client.auth.enable('github')
configured = yield from backend.configure(organization=github_org)
assert configured
configured = yield from backend.write_team('test', policies='foo')
assert configured
client = Vault(dev_server.addr)
backend = client.auth.load('github')
dummy_token = '1111111111111111111111111111111111111111'
with pytest.raises(LoginError):
yield from backend.login(github_token=dummy_token)
yield from backend.login(github_token=github_token)
disabled = yield from backend1.disable()
assert disabled
| [
"clint.northwood@gmail.com"
] | clint.northwood@gmail.com |
f048c6bfbace1f000f09ef620851d8e19c9d0381 | 142420e46f9945a3d7d2563490e945265636a426 | /treatment_calculator/features.py | deea016468e340e4758377f963e39010d63ca0bb | [
"MIT"
] | permissive | COVIDAnalytics/website | 2918904b2956f70fd3b3acb45397b34f2a91add9 | 16e12d0e9419138b0b16377622616d12408b8dec | refs/heads/master | 2023-05-27T00:44:52.987520 | 2022-09-15T17:06:52 | 2022-09-15T17:06:52 | 253,613,209 | 13 | 12 | MIT | 2023-05-01T21:23:08 | 2020-04-06T20:54:52 | CSS | UTF-8 | Python | false | false | 16,172 | py | import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from treatment_calculator.utils import langs, get_title_mapping, labs_ques, oxygen, oxygen_vals
def map_feat_vals(x, name, language):
if name == "Gender":
return langs[language].get_gender(x == 1)
else:
return name
def build_dropdown_card(_id, m, content_dict, language, feature_name, readable_name):
"""Makes feature card with dropdown data"""
insert_data = [
dbc.Col(
children=[
html.H5(readable_name, className="input-label"),
html.Div(
id='calc-categorical-{}-wrapper'.format(_id),
children=dcc.Dropdown(
id={
'type': 'treatments',
'index': 'calc-categorical-{}'.format(_id),
'f_idx': content_dict["index"],
'feature': feature_name,
'f_rng': repr((None, content_dict["default"], None))
},
options=[{'label': map_feat_vals(x, readable_name, language), 'value': x}
for x in content_dict['vals']],
value=1,
className="dcc_dropdown feature-dropdown",
clearable=False,
),
),
]
),
]
card = [
dbc.Row(
insert_data,
no_gutters=True,
style={"width": "100%"}
),
dbc.Tooltip(
content_dict['explanation'],
target='calc-categorical-{}-wrapper'.format(_id),
),
]
return card
def build_input_card(_id, m, content_dict, feature_name, readable_name):
is_temp = content_dict["name"] == "Body Temperature"
insert_data = [
dbc.Col([
html.H5(readable_name + " (" + content_dict["units"] + ")", className="input-label"),
html.Div(
id="calc-numeric-{}-wrapper".format(_id),
children=dbc.Input(
id={
'type': 'treatments',
'index': "calc-numeric-{}".format(_id),
'f_idx': content_dict["index"],
'feature': readable_name,
'f_rng': str((content_dict["min_val"], content_dict["default"], content_dict["max_val"])),
},
type="number",
placeholder="e.g. {}".format(int(content_dict['default'])),
className="numeric-input " + "temp-input" if is_temp else "",
bs_size="lg",
min=content_dict["min_val"],
max=content_dict["max_val"],
),
),
], align="stretch"
),
]
if is_temp:
insert_data.append(
dcc.Dropdown(
id={
'type': 'temperature',
'index': "units",
},
options=[{'label': x, 'value': x} for x in ["°F", "°C"]],
value="°F",
className="dcc_dropdown temp-dropdown",
clearable=False
),
)
card = [
dbc.Row(
insert_data,
align="end",
no_gutters=True,
style={"width": "100%"}
),
dbc.Tooltip(
content_dict['explanation'],
target="calc-numeric-{}-wrapper".format(_id),
),
]
return card
def build_checkbox_card(_id, feature_name, feature_index, readable_name, explanation):
item = dbc.Row(
no_gutters=True,
style={"width": "100%"},
children=[
html.H5(readable_name.split("(")[0], className="input-label", style={"max-width": "100%"}),
html.Div(
id='bin-{}-wrapper'.format(feature_index),
style={"width": "100%", "display": "flex", "paddingLeft": "10px"},
children=[
dbc.Checkbox(
id={
'type': 'treatments-checkbox',
'index': 'calc-checkbox-{}'.format(_id),
'f_idx': feature_index,
'feature': feature_name
},
checked=False
),
html.H5(readable_name.split("(")[1][0:-1], className="input-label",
style={"marginBottom": "0px", "marginTop": "0px", "marginLeft": "20px",
"color": "#495057", "fontSize": "15px", "opacity": "1"}),
]
),
dbc.Tooltip(
explanation,
target="bin-{}-wrapper".format(feature_index)
)
])
return item
def build_multidrop_card(_id, show_name, content_dict, language, feature_name):
"""Used to select multiple from chronic diseases at bottom of mortality calculator"""
title_mapping = get_title_mapping()
options = []
for i in range(len(content_dict["index"])):
options.append({'label': title_mapping[language][content_dict['vals'][i]],
'value': content_dict['index'][i]})
return dbc.Col([
html.H5(content_dict["name"], className="input-label",
style={"display": "inline-block" if show_name else "none"}),
dcc.Dropdown(
options=options,
value=[] if feature_name != "Race" else None,
id={
'type': 'treatments-multi',
'index': "calc-multidrop-{}".format(_id),
'feature': feature_name
},
# Classname needed for tooltip target
            className="dcc_dropdown feature-dropdown calc-multidrop-{}".format(_id),
style={"width": "100%"},
multi=True if feature_name != "Race" else False,
placeholder="Default: Other" if feature_name == "Race" else "Select..."
),
dbc.Tooltip(
content_dict['explanation'],
target=".calc-multidrop-{}".format(_id)
),
])
# TODO: Dropdown tooltips are not translated
def build_feature_cards(features, m=True, labs=False, language=0):
"""This builds all the feature cards"""
inputs = features["numeric"]
dropdowns = features["categorical"]
multidrop = features["multidrop"]
checkboxes = features["checkboxes"]
title_mapping = get_title_mapping()
# The scaffold that will hold ordered feature cards
feature_scaffold = [
{
"group": "Demographics",
"features": ["age", "gender", "race", "temperature"],
"mortality": {
"layout": "2x2",
"layout_m": "1x3"
},
},
{
"group": "Metabolic Panel",
"features": ["alanine amino", "aspartate amino", "bilirubin", "calcium",
"creatin", "sodium", "urea nitro", "potas", "glyc"],
"mortality": {
"layout": "3x1",
"layout_m": "4x2",
"expanded": {
"alanine amino": 2,
"glyc": 2
}
},
"infection": {
"expanded": {
"alanine amino": [("lg", 2), ("md", 2)], #scale by 2 for large and medium devices
"urea nitro": [("lg", 2), ("sm", 2)],
}
}
},
{
"group": "Abnormal Labs and Vitals",
"features": [],
"mortality": {
"layout": "2x3",
"vertical_expanded": {
"checkboxes": 0.75,
}
}
},
{
"group": "Blood Counts",
            # Note: red cell does not exist in mortality calculator, that's why the dimensions differ
"features": ["hemoglobin", "lympho", "platelet", "leucocyte"],
"mortality": {
"layout": "2x2",
"layout_m": "2x2",
"expanded": {
"red cell": 2,
}
}
},
{
"group": "Other Lab Values",
"features": ["C-reactive protein", "prothrombin time"],
"mortality": {
"layout": "2x1",
"layout_m": "1x2",
},
"infection": {
"vertical_expanded": {
"C-reactive protein": 1.5,
"prothrombin time": 1.5
}
}
},
{
"group": "Miscellaneous",
"features": ["comorbid", "treatmen"],
"mortality": {
"layout": "2x1",
"layout_m": "1x2",
"expanded": {
"comorbid": 3
},
"vertical_expanded": {
"comorb": 2
}
}
},
{
"group": "Unknown",
"features": [],
"mortality": {
"layout": "3x3",
}
}
]
for group in feature_scaffold:
group["cards"] = [(None, [])] * len(group["features"])
feature_scaffold[-1]["cards"] = []
# Add a card into its right place in the scaffold
def add_feature(feature_name, feature_card):
add_feature.count += 1
# Try to add card to its appropraite group
for grp in enumerate(feature_scaffold):
# Check if name is in this group's features
for fname in enumerate(grp[1]["features"]):
if fname[1].lower() in feature_name.lower():
feature_scaffold[grp[0]]["cards"][fname[0]] = (feature_name, feature_card)
return
if feature_name == "checkboxes":
feature_scaffold[2]["cards"].append((feature_name, feature_card))
return
# Add card to default group
feature_scaffold[-1]["cards"].append((feature_name, feature_card))
add_feature.count = 0
for _id, content_dict in enumerate(dropdowns):
add_feature(
content_dict['name'],
build_dropdown_card(str(_id), m, content_dict, language, content_dict['name'],
title_mapping[language][content_dict['name']])
)
for _id, content_dict in enumerate(checkboxes):
for i in range(len(content_dict["vals"])):
add_feature(
"checkboxes",
build_checkbox_card(str(_id),
title_mapping[language][content_dict["vals"][i]],
content_dict["index"][i],
title_mapping[language][content_dict["vals"][i]],
content_dict["explanation"]
)
)
for _id, content_dict in enumerate(inputs):
add_feature(
content_dict['name'],
            # Give different IDs to fix input box not clearing when changed
build_input_card(str(_id) + str(labs), m, content_dict, content_dict['name'],
title_mapping[language][content_dict['name']])
)
for _id, content_dict in enumerate(multidrop):
add_feature(
content_dict['name'],
build_multidrop_card(str(_id),
True,
content_dict, language, content_dict['name'])
)
# final card layout
feature_content = []
# card number to keep track of increasing delay
card_num = 0
# Loop through all the groups
for grp in feature_scaffold:
# Get the layout dimensions, row x col
r, c = [int(x) for x in grp["mortality"]["layout"].split('x')]
r_m, c_m = r, c
if "layout_m" in grp["mortality"]:
r_m, c_m = [int(x) for x in grp["mortality"]["layout_m"].split('x')]
# If there are no cards, skip this group
if all([x[0] is None for x in grp["cards"]]): continue
group_content = []
w = 12 / c
w_m = 12 / c_m
# Get all the correct horizontal expansion factors from group
expansions = {}
if m and "expanded" in grp["mortality"]:
expansions = grp["mortality"]["expanded"]
elif not m:
if "infection" in grp:
if "expanded" in grp["infection"]:
expansions = grp["infection"]["expanded"]
elif "expanded" in grp["mortality"]:
expansions = grp["mortality"]["expanded"]
# Get all the correct vertical expansion factors from group
v_expansions = {}
if m and "vertical_expanded" in grp["mortality"]:
v_expansions = grp["mortality"]["vertical_expanded"]
elif not m:
if "infection" in grp:
if "vertical_expanded" in grp["infection"]:
v_expansions = grp["infection"]["vertical_expanded"]
elif "vertical_expanded" in grp["mortality"]:
v_expansions = grp["mortality"]["vertical_expanded"]
# Loop throgh all the cards in this group
for name, card in grp["cards"]:
if name is None:
continue
# get expansion factor of this card
f = {"sm": 1, "md": 1, "lg": 1}
for n in [ex for ex in expansions if ex.lower() in name.lower()]:
if type(expansions[n]) == list:
for size, scale in expansions[n]:
f[size] = scale
else:
f["sm"] = expansions[n]
f["md"] = expansions[n]
f["lg"] = expansions[n]
# get vertical expansion factor of this card
v_f = 1
for n in [ex for ex in v_expansions if ex.lower() in name.lower()]:
v_f = v_expansions[n]
# Create card content and add it to the group content
group_content.append(dbc.Col(
xs=12,
sm=w_m * f["sm"],
md=w_m * f["md"],
lg=w * f["lg"],
style={"padding": "0px"},
children=dbc.Card(
style={"borderRadius": "0px",
"height": "{}px".format(str(150 * v_f)),
"borderWidth": "1px",
"background": "rgba(0, 0, 0, 0)"},
children=[
dbc.CardBody(card, className="feat-options-body")
])
))
card_num += 1
# Add the group content to the feature content
feature_content.append(dbc.Col(
style={
'paddingBottom': 30,
'borderColor': 'red',
},
xs=12,
sm=c_m * 6,
md=c_m * 6,
lg=c * 4,
children=[
html.Div(
**{"data-aos": "fade-up", "data-aos-delay": str(card_num % 4 * 150)},
# For overlapping dropdown problem
style={"transformStyle": "flat",
"zIndex": str(add_feature.count - card_num),
"position": "relative"},
className="aos-refresh-onload",
children=dbc.Card(
className="elevation-3",
style={"borderWidth": "0px"},
children=[
dbc.CardHeader(grp["group"],
style={"fontWeight": "bold"}),
dbc.Row(group_content, style={"margin": "0px", "borderWidth": "0px"})
]
)
)
],
))
return feature_content
| [
"travisjayday@gmail.com"
] | travisjayday@gmail.com |
2c51bfb8698673b17b3e9b3e276d6fca4dc6d535 | 2ccf091f3df1a0f2159d3cd5f9bb8069580e1aa3 | /posts/utils.py | dd12e4e135c689b8ac52c7896226d71d08cf661f | [
"MIT"
] | permissive | vodnalasricharan/StudentMithra | 6aaa6bf499193d0e03799f2e852b29c063b82c6e | bfe1f07d7c38a13c3e6690bfbb5be8c26b5b72a4 | refs/heads/main | 2023-06-24T02:33:55.816873 | 2021-07-29T06:14:34 | 2021-07-29T06:14:34 | 378,157,061 | 2 | 3 | MIT | 2021-07-29T06:14:35 | 2021-06-18T13:20:45 | HTML | UTF-8 | Python | false | false | 686 | py | # import datetime
import math
import re
from django.utils.html import strip_tags
def count_words(html_string):
# html_string = """
# <h1>This is a title</h1>
# """
word_string = strip_tags(html_string)
matching_words = re.findall(r'\w+', word_string)
count = len(matching_words) #joincfe.com/projects/
return count
def get_read_time(html_string):
count = count_words(html_string)
read_time_min = math.ceil(count/200.0) #assuming 200wpm reading
# read_time_sec = read_time_min * 60
# read_time = str(datetime.timedelta(seconds=read_time_sec))
# read_time = str(datetime.timedelta(minutes=read_time_min))
return int(read_time_min) | [
"vodnalasricharan@gmail.com"
] | vodnalasricharan@gmail.com |